WO2024108350A1 - Method, apparatus, device and storage medium for generating a spatial structure diagram and a floor plan (空间结构图和户型图生成方法、装置、设备和存储介质) - Google Patents

Method, apparatus, device and storage medium for generating a spatial structure diagram and a floor plan

Info

Publication number
WO2024108350A1
WO2024108350A1 (PCT/CN2022/133313; CN2022133313W)
Authority
WO
WIPO (PCT)
Prior art keywords
target
space
subspace
medium
panoramic image
Prior art date
Application number
PCT/CN2022/133313
Other languages
English (en)
French (fr)
Inventor
关海波
田虎
李海洋
杨毅
朱辰
张�林
吴伟东
段小军
李瑜杰
Original Assignee
北京城市网邻信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京城市网邻信息技术有限公司
Priority to PCT/CN2022/133313
Publication of WO2024108350A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the present invention relates to the technical field of model reconstruction, and in particular to a method, device, equipment and storage medium for generating a spatial structure diagram and a floor plan.
  • the two-dimensional model of the target space is usually used to assist users in understanding the spatial structure information of the space.
  • the two-dimensional model of the target space is used to display the structure of the apartment.
  • the target space is usually first reconstructed in three dimensions, for example by optical imaging or structured-light scanning; the three-dimensional model obtained by the reconstruction is then cut along a horizontal section to obtain the corresponding two-dimensional model.
  • the two-dimensional models obtained by the above methods often lack details and are not accurate enough.
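As an illustration of the prior-art approach just described, a horizontal section of a reconstructed 3D point cloud can be flattened into a 2D outline by keeping only the points near a chosen cutting height. This is a minimal sketch, not the patent's method; the function name and tolerance value are illustrative assumptions.

```python
def horizontal_section(points, height, tolerance=0.05):
    """Slice a 3D point cloud at a given height and project it to 2D.

    points: list of (x, y, z) tuples in meters.
    Keeps points within +/- tolerance of the horizontal cutting plane at
    z == height and drops the z coordinate, which mirrors the prior-art
    'cut into horizontal sections' step the text describes.
    """
    return [(x, y) for (x, y, z) in points if abs(z - height) <= tolerance]

# Toy cloud: three wall points near 1.4 m plus one floor-level point.
cloud = [
    (0.0, 0.0, 1.39),
    (3.0, 0.0, 1.41),
    (3.0, 4.0, 0.10),   # floor-level point, excluded by the slice
    (0.0, 4.0, 1.40),
]
section = horizontal_section(cloud, height=1.4)  # three (x, y) points remain
```

Because the slice keeps only whatever points happen to lie near the cutting plane, any detail missing from the 3D reconstruction is also missing from the 2D model, which is exactly the accuracy limitation noted above.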
  • the present invention provides a method, device, equipment and storage medium for generating a space structure diagram and a floor plan, so as to improve the accuracy of the generated space structure diagram and floor plan.
  • an embodiment of the present invention provides a method for generating a spatial structure diagram, which is applied to a control terminal, and the method includes:
  • the target medium is an image of a physical medium in the target space in the target panoramic image
  • the target panoramic image is the panoramic image, among the panoramic images of the at least one shooting point, that is used to identify the target medium
  • a mapping medium corresponding to the target medium is determined in the spatial profile to obtain a spatial structure diagram of the target space.
  • an embodiment of the present invention provides a spatial structure diagram generating device, which is applied to a control terminal, and the device includes:
  • an acquisition module, used to acquire the point cloud data and panoramic image obtained by the information acquisition terminal at at least one shooting point in the target space;
  • a processing module is used to obtain a spatial contour of the target space based on the point cloud data of the at least one shooting point; obtain a target medium identified in a target panoramic image, wherein the target medium is an image of a physical medium in the target space in the target panoramic image, and the target panoramic image is a panoramic image used to identify the target medium in the panoramic image of the at least one shooting point; determine a mapping medium corresponding to the target medium in the spatial contour to obtain a spatial structure diagram of the target space.
  • an embodiment of the present invention provides an electronic device, comprising: a memory, a processor, and a communication interface; wherein an executable code is stored on the memory, and when the executable code is executed by the processor, the processor executes the spatial structure diagram generation method as described in the first aspect.
  • an embodiment of the present invention provides a non-transitory machine-readable storage medium having executable code stored thereon.
  • when the executable code is executed by a processor of an electronic device, the processor can at least implement the spatial structure diagram generation method as described in the first aspect.
  • an embodiment of the present invention provides a method for generating a floor plan, which is applied to a target control terminal, and the method includes:
  • a target medium identified in a target panoramic image is obtained, wherein the target medium is an image of a physical medium in the target subspace in the target panoramic image, and the target panoramic image is a panoramic image captured by at least one shooting point of the target subspace and used to identify the target medium;
  • a floor plan of the target physical space obtained by splicing the multiple space structure diagrams is acquired.
  • an embodiment of the present invention provides a floor plan generating device, which is applied to a target control terminal, and the device includes:
  • an acquisition module, used to acquire point cloud data and panoramic images corresponding to a plurality of subspaces in the target physical space obtained by the information acquisition terminal, so as to determine a plurality of space contours corresponding to the plurality of subspaces; wherein the plurality of subspaces correspond one-to-one to the plurality of space contours, and the point cloud data and panoramic image of any subspace are acquired at at least one shooting point in that subspace;
  • a display module used for displaying a plurality of space contours corresponding to the plurality of subspaces for editing
  • a processing module is used to obtain, for a target subspace among multiple subspaces, a target medium identified in a target panoramic image, wherein the target medium is an image of a physical medium in the target subspace in the target panoramic image, and the target panoramic image is a panoramic image collected by at least one shooting point of the target subspace and used to identify the target medium; determine a mapping medium corresponding to the target medium in the spatial contour of the target subspace to generate a spatial structure diagram of the target subspace; and in response to completion of an operation to obtain multiple spatial structure diagrams corresponding to the multiple subspaces, obtain a floor plan of the target physical space obtained by splicing the multiple spatial structure diagrams.
  • an embodiment of the present invention provides a method for generating a floor plan, which is applied to a control terminal, and the method includes:
  • the target medium is an image of a physical medium in the target subspace in the target panoramic image
  • the target panoramic image is a panoramic image for identifying the target medium in a panoramic image acquired at at least one shooting point in the target subspace
  • mapping medium for representing the target medium on a target space contour of the target subspace to obtain a spatial structure diagram of the target subspace
  • an embodiment of the present invention provides a floor plan generating device, which is applied to a control terminal, and the device includes:
  • an acquisition module, used to acquire point cloud data and panoramic images respectively corresponding to a plurality of subspaces in the target physical space sent by the information acquisition terminal, wherein the point cloud data and panoramic image of any subspace are acquired at at least one shooting point in that subspace;
  • a processing module is used to obtain a target space outline of the target subspace for the current target subspace to be edited according to point cloud data and/or panoramic images collected at at least one shooting point of the target subspace during the process of sequentially performing spatial structure diagram acquisition processing on the multiple subspaces; obtain a target medium identified in the target panoramic image, wherein the target medium is an image of a physical medium in the target subspace in the target panoramic image, and the target panoramic image is a panoramic image used to identify the target medium in the panoramic image collected at at least one shooting point of the target subspace; determine a mapping medium used to represent the target medium on the target space outline of the target subspace to obtain a spatial structure diagram of the target subspace; in response to the completion of the acquisition operation of the spatial structure diagram of the target subspace, if there is no subspace among the multiple subspaces whose spatial structure diagram has not been acquired, then obtain a floor plan of the target physical space obtained by splicing the spatial structure diagrams of the multiple spaces.
  • an embodiment of the present invention provides a method for generating a floor plan, the method being used to generate a floor plan of a target physical space, the target physical space including at least N spaces, and being applied to a control terminal, the method comprising:
  • Step 1 acquiring point cloud data and panoramic images collected by an information collection terminal in each of the N spaces, wherein the point cloud data and panoramic images are collected at at least one shooting point in each of the spaces;
  • Step 2 obtaining an Mth space outline of an Mth space among the N spaces for displaying for editing, wherein the Mth space outline is obtained based on point cloud data and/or a panoramic image collected from at least one shooting point of the Mth space;
  • Step 3 obtaining a target medium identified in the target panoramic image of the Mth space, so as to obtain a mapping medium of the target medium in the Mth space outline according to the target medium, so as to edit the Mth space outline according to the mapping medium, so as to obtain the floor plan of the Mth space;
  • the target panoramic image is a panoramic image for identifying the target medium in a panoramic image captured by at least one shooting point in the Mth space, and the target medium is an image of a physical medium in the Mth space in the target panoramic image;
  • Step 4: determine whether the Mth space is the last space among the N spaces for which a floor plan structure diagram is generated; if not, execute step 5; if so, execute step 6;
  • Step 5: set M to M+1 and return to step 2;
  • Step 6: obtain the floor plan of the target physical space composed of the N floor plan structure diagrams for display, and the process ends; wherein M and N are natural numbers, and 1 ≤ M ≤ N.
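The steps above describe a per-space loop followed by a final splicing step. They can be sketched as the following control-flow driver; all the callables are hypothetical placeholders for operations the text names, not APIs defined by the patent.

```python
def generate_floor_plan(spaces, acquire, get_outline, get_target_medium,
                        map_medium, edit_outline, splice):
    """Control-flow sketch of steps 1-6 for N spaces."""
    data = [acquire(space) for space in spaces]          # step 1: collect per-space data
    plans = []
    for m in range(len(spaces)):                         # M runs over the N spaces
        outline = get_outline(data[m])                   # step 2: Mth space outline
        medium = get_target_medium(data[m])              # step 3: target medium ...
        mapped = map_medium(medium, outline)             # ... and its mapping medium
        plans.append(edit_outline(outline, mapped))      # floor plan of the Mth space
        # steps 4-5: the loop itself checks whether spaces remain and increments M
    return splice(plans)                                 # step 6: splice the N diagrams
```

With trivial stand-in callables, the driver simply threads each space's data through the per-space steps and joins the results at the end.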
  • an embodiment of the present invention provides a floor plan generating device, the device is used to generate a floor plan of a target physical space, the target physical space includes at least N spaces, and is applied to a control terminal, the device comprising:
  • an acquisition module, used to acquire point cloud data and panoramic images collected by the information collection terminal in each of the N spaces, wherein the point cloud data and panoramic images are collected at at least one shooting point in each of the spaces;
  • a first processing module is used to obtain an Mth space outline of an Mth space among the N spaces for displaying for editing, wherein the Mth space outline is obtained based on point cloud data and/or a panoramic image collected from at least one shooting point of the Mth space; a target medium identified in a target panoramic image of the Mth space is obtained, so that a mapping medium of the target medium in the Mth space outline is obtained based on the target medium, so as to edit the Mth space outline based on the mapping medium, so as to obtain a floor plan of the Mth space; the target panoramic image is a panoramic image for identifying the target medium in a panoramic image collected from at least one shooting point of the Mth space, and the target medium is an image of a physical medium in the Mth space in the target panoramic image;
  • the second processing module is used to determine whether the Mth space is the last space among the N spaces to generate a floor plan diagram; if not, M is assigned a value of M+1 and the process returns to execute the first processing module; if so, the floor plan of the target physical space composed of N floor plan diagrams is obtained for display, and the process ends; wherein M and N are natural numbers, and 1 ≤ M ≤ N.
  • an embodiment of the present invention provides a method for generating a floor plan, the method being used to generate a floor plan of a target physical space, wherein the target physical space includes a plurality of subspaces and is applied to a control terminal, the method comprising:
  • the target medium is an image of a physical medium in the first subspace in the target panoramic image
  • the target panoramic image is a panoramic image captured at at least one shooting point in the first subspace and used to identify the target medium
  • the splicing process result is determined as the floor plan of the target physical space.
  • an embodiment of the present invention provides a floor plan generating device, the device is used to generate a floor plan of a target physical space, wherein the target physical space includes a plurality of subspaces, and is applied to a control terminal, the device comprising:
  • an acquisition module, used for acquiring point cloud data and panoramic images respectively corresponding to the plurality of subspaces obtained by the information acquisition terminal, wherein the point cloud data and panoramic image of any subspace are acquired at at least one shooting point in that subspace;
  • a splicing module, used for, in the process of splicing the floor plan structure diagrams of the plurality of subspaces in sequence: obtaining, for a first subspace to be spliced, a target space contour of the first subspace according to point cloud data and/or panoramic images collected at at least one shooting point of the first subspace; obtaining a target medium identified in the target panoramic image, wherein the target medium is an image of a physical medium in the first subspace in the target panoramic image, and the target panoramic image is a panoramic image used to identify the target medium in the panoramic image collected at at least one shooting point of the first subspace; determining a mapping medium for representing the target medium on the target space contour of the first subspace to obtain a floor plan structure diagram of the first subspace; and splicing the floor plan structure diagram of the first subspace with the floor plan structure diagram of the second subspace, wherein the second subspace is a subspace whose floor plan structure diagram has already been spliced before the first subspace;
  • a processing module is used to determine the splicing result as the floor plan of the target physical space if there is no subspace in the multiple subspaces that has not been spliced with the floor plan structure diagram.
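One simple way such splicing could work geometrically is to translate the floor plan structure diagram of the subspace being added so that its door segment lines up with the matching door segment on the already-spliced result. The sketch below works under that assumption (rotation handling omitted); the patent does not specify the splicing geometry, and the function and argument names are illustrative.

```python
def align_by_door(plan_b, door_b, door_a):
    """Translate polygon plan_b so the midpoint of its door segment door_b
    coincides with the midpoint of door_a on the already-spliced diagram.

    plan_b: list of (x, y) vertices; door_a / door_b: ((x1, y1), (x2, y2)).
    """
    mid = lambda seg: ((seg[0][0] + seg[1][0]) / 2.0,
                       (seg[0][1] + seg[1][1]) / 2.0)
    (ax, ay), (bx, by) = mid(door_a), mid(door_b)
    dx, dy = ax - bx, ay - by
    # Shift every vertex of plan_b by the door-midpoint offset.
    return [(x + dx, y + dy) for (x, y) in plan_b]
```

For example, a 2 m x 2 m room whose door sits on its left wall can be shifted so that door lands on a door segment of the room already in the result.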
  • an embodiment of the present invention provides a method for generating a floor plan, wherein the method is used to generate a floor plan of a target physical space, the target physical space includes at least N spaces, and is applied to a control terminal, the method comprising:
  • Step 1 acquiring point cloud data and panoramic images collected by an information collection terminal in each of the N spaces, wherein the point cloud data and panoramic images are collected at at least one shooting point in each of the spaces;
  • Step 2 Obtain an Mth space outline for displaying for editing; wherein the Mth space outline is a space outline of the Mth space among the N spaces, and the Mth space outline is obtained based on point cloud data and/or a panoramic image collected from at least one shooting point of the Mth space;
  • Step 3 Acquire a first target medium identified in a first target panoramic image, so as to acquire a first mapping medium of the first target medium in the outline of the Mth space according to the first target medium, so as to edit the outline of the Mth space according to the first mapping medium to obtain a floor plan of the Mth space;
  • the first target panoramic image is a panoramic image for identifying the first target medium in a panoramic image acquired by at least one shooting point of the Mth space
  • the first target medium is an image of a physical medium in the Mth space in the first target panoramic image;
  • Step 4 Obtain the M+1th space outline for display for editing; wherein the M+1th space outline is the space outline of the M+1th space among the N spaces, the M+1th space is an adjacent space of the Mth space, and the M+1th space outline is obtained based on point cloud data and/or panoramic images collected from at least one shooting point of the M+1th space;
  • Step 5 Acquire the second target medium identified in the second target panoramic image, so as to acquire the second mapping medium of the second target medium in the M+1th space outline according to the second target medium, so as to edit the M+1th space outline according to the second mapping medium to obtain the floor plan of the M+1th space;
  • the second target panoramic image is a panoramic image for identifying the second target medium in the panoramic image acquired by the at least one shooting point of the M+1th space, and the second target medium is an image of the physical medium in the M+1th space in the second target panoramic image;
  • Step 6: splice the floor plan structure diagram of the M+1th space with the floor plan structure diagram of the Mth space, and determine whether the M+1th space is the last space among the N spaces for which a floor plan structure diagram is generated; if not, execute step 7; if so, execute step 8;
  • Step 7: merge the Mth space and the M+1th space as the Mth space, and return to step 4;
  • Step 8: use the splicing result as the floor plan of the target physical space for display, and the process ends.
  • an embodiment of the present invention provides a floor plan generating device, the device is used to generate a floor plan of a target physical space, wherein the target physical space includes at least N spaces, and is applied to a control terminal, the device comprising:
  • a first acquisition module is used to acquire point cloud data and panoramic images collected by the information collection terminal in each of the N spaces, wherein the point cloud data and panoramic images are collected from at least one shooting point in each of the spaces; acquire the Mth space outline for display for editing; wherein the Mth space outline is the space outline of the Mth space in the N spaces, and the Mth space outline is acquired based on the point cloud data and/or panoramic image collected from at least one shooting point in the Mth space; acquire a first target medium identified in a first target panoramic image, so as to acquire a first mapping medium of the first target medium in the Mth space outline based on the first target medium, so as to edit the Mth space outline based on the first mapping medium to acquire a floor plan of the Mth space; the first target panoramic image is a panoramic image for identifying the first target medium in the panoramic image collected from at least one shooting point in the Mth space, and the first target medium is an image of a physical medium in the Mth space in the first target panoramic image;
  • a second acquisition module, used to acquire the M+1th space outline for display for editing; wherein the M+1th space outline is the space outline of the M+1th space among the N spaces, the M+1th space is an adjacent space of the Mth space, and the M+1th space outline is acquired based on point cloud data and/or panoramic images collected from at least one shooting point of the M+1th space; a second target medium identified in a second target panoramic image is acquired, so that a second mapping medium of the second target medium in the M+1th space outline is acquired based on the second target medium, so as to edit the M+1th space outline based on the second mapping medium to obtain a floor plan of the M+1th space; the second target panoramic image is a panoramic image for identifying the second target medium in the panoramic image collected from at least one shooting point of the M+1th space, and the second target medium is an image of a physical medium in the M+1th space in the second target panoramic image;
  • a processing module is used to splice the floor plan structure diagram of the M+1th space with the floor plan structure diagram of the Mth space, and determine whether the M+1th space is the last space among the N spaces to generate a floor plan structure diagram; if not, merge the Mth space and the M+1th space as the Mth space, and return to execute the second acquisition module; if so, use the splicing result as the floor plan of the target physical space for display, and the process ends.
  • an embodiment of the present invention provides an electronic device, comprising: a memory, a processor, and a communication interface; wherein the memory stores executable code, and when the executable code is executed by the processor, the processor executes the floor plan generation method described in the fifth aspect, the seventh aspect, the ninth aspect, the eleventh aspect and/or the thirteenth aspect.
  • an embodiment of the present invention provides a non-transitory machine-readable storage medium, on which executable code is stored.
  • the processor can at least implement the floor plan generation method described in the fifth aspect, the seventh aspect, the ninth aspect, the eleventh aspect and/or the thirteenth aspect.
  • the mapping medium on the spatial outline in the floor plan is determined based on the panoramic image.
  • the panoramic image can better reflect the actual positions of doors and windows (i.e., the target medium) in the actual space. With the assistance of the panoramic image, the floor plan of each space is therefore marked with more accurate door and window information and better reflects the scene information of the actual space; consequently, the floor plan determined from the multiple floor plan structure diagrams can also accurately reflect the actual spatial structure of the target physical space.
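To illustrate why a panoramic image can pin down door and window positions on the spatial contour: in an equirectangular panorama, a pixel column corresponds to a horizontal viewing angle around the shooting point, so a ray cast at that angle from the shooting point meets the contour exactly where the mapping medium should sit. The following is a geometric sketch under those assumptions; the patent does not fix the exact projection model, and the function names are illustrative.

```python
import math

def azimuth_from_column(u, width):
    """In an equirectangular panorama, pixel column u of an image of the
    given width maps linearly to a horizontal angle in [0, 2*pi)."""
    return 2.0 * math.pi * u / width

def ray_hit_on_contour(origin, theta, contour):
    """Cast a horizontal ray from the shooting point at azimuth theta and
    return the nearest intersection with the contour polygon (a list of
    (x, y) vertices). That intersection is one candidate position for the
    mapping medium on the spatial contour."""
    ox, oy = origin
    dx, dy = math.cos(theta), math.sin(theta)
    best = None
    n = len(contour)
    for i in range(n):
        (x1, y1), (x2, y2) = contour[i], contour[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        denom = dx * ey - dy * ex
        if abs(denom) < 1e-12:          # ray parallel to this wall segment
            continue
        # Solve origin + t*(dx, dy) == (x1, y1) + s*(ex, ey).
        t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom
        s = ((x1 - ox) * dy - (y1 - oy) * dx) / denom
        if t > 1e-9 and 0.0 <= s <= 1.0:
            if best is None or t < best[0]:
                best = (t, (ox + t * dx, oy + t * dy))
    return None if best is None else best[1]
```

For a shooting point at the center of a 4 m x 3 m room, a door seen at panorama column 0 (azimuth 0) maps onto the right-hand wall of the contour.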
  • FIG1 is a schematic diagram of a spatial structure diagram generating system provided by a first embodiment of the present invention
  • FIG2 is a flow chart of a method for generating a spatial structure diagram provided by the first embodiment of the present invention
  • FIG3 is a schematic diagram of a point cloud image provided by the first embodiment of the present invention.
  • FIG4 is a schematic diagram of a spatial structure diagram provided by the first embodiment of the present invention.
  • FIG5 is a flowchart of another method for generating a spatial structure diagram provided by the first embodiment of the present invention.
  • FIG6 is a schematic diagram of the structure of a spatial structure diagram generating device provided by the first embodiment of the present invention.
  • FIG7 is a schematic diagram of a system for generating a floor plan according to a second embodiment of the present invention.
  • FIG8 is a flow chart of a method for generating a floor plan provided by a second embodiment of the present invention.
  • FIG9 is a schematic diagram of a floor plan of a target physical space provided by a second embodiment of the present invention.
  • FIG10 is an interactive flow chart of another method for generating a floor plan provided in the second embodiment of the present invention.
  • FIG11 is a schematic diagram of the structure of a floor plan generating device provided by a second embodiment of the present invention.
  • FIG12 is a flow chart of a method for generating a floor plan provided by a third embodiment of the present invention.
  • FIG13 is a schematic diagram of a space profile provided by a third embodiment of the present invention.
  • FIG14 is a schematic diagram of the structure of a floor plan generating device provided by a third embodiment of the present invention.
  • FIG15 is a flow chart of a method for generating a floor plan provided by a fourth embodiment of the present invention.
  • FIG16 is a schematic diagram of a scenario for generating a floor plan provided by a fourth embodiment of the present invention.
  • FIG17 is a schematic diagram of an M-th space profile provided in the fourth embodiment of the present invention.
  • FIG18 is a schematic diagram of a process for generating a floor plan according to a fourth embodiment of the present invention.
  • FIG19 is a schematic diagram of the structure of a floor plan generating device provided by a fourth embodiment of the present invention.
  • FIG20 is a flow chart of a method for generating a floor plan provided by a fifth embodiment of the present invention.
  • FIG21 is a schematic diagram of a target space profile provided by a fifth embodiment of the present invention.
  • FIG22 is a schematic diagram of an actual spatial structure of a target physical space provided by a fifth embodiment of the present invention.
  • FIG23 is a schematic diagram of an apartment structure diagram provided in a fifth embodiment of the present invention.
  • FIG24 is a schematic diagram of another apartment structure diagram provided in the fifth embodiment of the present invention.
  • FIG25 is a schematic diagram of the structure of a floor plan generating device provided by a fifth embodiment of the present invention.
  • FIG26 is a flow chart of a method for generating a floor plan provided by a sixth embodiment of the present invention.
  • FIG27 is a schematic diagram of a scenario for generating a floor plan provided by a sixth embodiment of the present invention.
  • FIG28 is a schematic diagram of a space profile of a space Z provided in a sixth embodiment of the present invention.
  • FIG29 is a schematic diagram of an apartment structure diagram provided in a sixth embodiment of the present invention.
  • FIG30 is a schematic diagram of a spliced apartment structure diagram provided by the sixth embodiment of the present invention.
  • FIG31 is a schematic diagram of another spliced apartment structure diagram provided by the sixth embodiment of the present invention.
  • FIG32 is a schematic diagram of the structure of a floor plan generating device provided by a sixth embodiment of the present invention.
  • FIG33 is a schematic diagram of the structure of an electronic device provided in an embodiment of the present invention.
  • the following describes, through multiple embodiments, the spatial structure diagram generation method and the floor plan generation method provided by the present invention.
  • a space often contains at least one subspace.
  • a building space used for living includes a living room, a kitchen and two bedrooms.
  • the living room, kitchen and bedroom can all be considered as a subspace, i.e., a unit space.
  • a two-dimensional plan structure diagram of the space, i.e., a floor plan, is generated in advance.
  • the corresponding floor plan diagram is actually obtained by combining the spatial structure diagrams corresponding to at least one subspace.
  • the accuracy of the spatial structure diagram of any subspace will affect the accuracy of the finally generated floor plan diagram.
  • an embodiment of the present invention provides a method for generating a spatial structure diagram, which is used to generate an accurate spatial structure diagram of a target space.
  • the target space in the embodiment of the present invention refers to a unit space, that is, any of the above subspaces.
  • FIG1 is a schematic diagram of a spatial structure diagram generation system provided by the first embodiment of the present invention.
  • the spatial structure diagram generation system includes: an information collection terminal and a control terminal.
  • the information collection terminal can be directly integrated into the control terminal as a single device; alternatively, it can be decoupled from the control terminal and set up separately, communicating with the control terminal through, for example, Bluetooth or a Wireless Fidelity (WiFi) hotspot.
  • the information collection terminal includes: a laser sensor, a camera, a motor and a processor (such as a CPU).
  • the laser sensor and the camera are used as perception devices to collect scene information of the target space, that is, point cloud data and image data of the target space.
  • the target space corresponds to at least one shooting point.
  • in response to the information collection instruction, the information collection terminal drives the laser sensor through the motor to rotate 360 degrees and collect the point cloud data corresponding to the target shooting point, and drives the camera through the motor to rotate 360 degrees and capture images of the target shooting point at multiple preset angles.
  • the above processor can stitch the images shot at multiple preset angles into a panoramic image through a panoramic image stitching algorithm such as a feature matching algorithm.
  • the target shooting point is any one of the at least one shooting point.
  • the information collection instruction is sent by the control terminal, or the information collection instruction is triggered in response to a trigger operation of the user on the information collection terminal.
  • the point cloud data and the image data can be collected either simultaneously or sequentially.
  • the camera can be turned on synchronously during the process of collecting point cloud data to collect scene lighting information of the current shooting point for light measurement and determine the corresponding exposure parameters. Afterwards, the camera collects image data based on the determined exposure parameters.
  • High Dynamic Range (HDR) imaging can also be applied to generate a high-quality panoramic image.
  • multiple preset angles can be customized by the user according to the viewing angle of the camera, and the images taken based on the multiple preset angles contain scene information within a 360-degree range of the current point.
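As a concrete example of choosing preset angles from the camera's viewing angle: with some stitching overlap between adjacent shots, the number of shots needed for full 360-degree coverage follows directly. The overlap value below is an illustrative assumption, not a parameter from the patent.

```python
import math

def preset_angles(horizontal_fov_deg, overlap_deg=15.0):
    """Return evenly spaced preset shooting angles (degrees) so that the
    images cover a full 360-degree sweep with at least overlap_deg of
    shared field of view between adjacent shots, which helps feature
    matching during panorama stitching."""
    step = horizontal_fov_deg - overlap_deg   # usable new coverage per shot
    n = math.ceil(360.0 / step)               # shots needed for full coverage
    return [i * 360.0 / n for i in range(n)]
```

For a camera with a 90-degree horizontal field of view, five shots 72 degrees apart suffice under these assumptions.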
  • the information acquisition terminal also includes an inertial measurement unit (IMU).
  • the IMU is used to correct the posture information corresponding to the collected point cloud data and image data to reduce errors caused by environmental or human factors (for example, the information acquisition terminal is not placed horizontally, etc.).
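A minimal sketch of the kind of correction an IMU enables: undoing the roll and pitch of a terminal that was not placed level by rotating each collected point back. The axis convention (roll about x, pitch about y, applied in that order) is an assumption for illustration; the patent does not specify the correction math.

```python
import math

def level_point(p, roll, pitch):
    """Rotate a measured point (x, y, z) back by the IMU-reported roll and
    pitch (radians), compensating for a tilted sensor."""
    x, y, z = p
    # Undo roll: rotate by -roll about the x axis.
    cr, sr = math.cos(roll), math.sin(roll)
    y, z = cr * y + sr * z, -sr * y + cr * z
    # Undo pitch: rotate by -pitch about the y axis.
    cp, sp = math.cos(pitch), math.sin(pitch)
    x, z = cp * x - sp * z, sp * x + cp * z
    return (x, y, z)
```

For example, a world-vertical direction measured by a terminal pitched forward by 0.3 rad is restored to (0, 0, 1) after correction.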
  • the control terminal can be a terminal device with data processing capabilities such as a smart phone, a tablet computer, a laptop computer, etc. As shown in Figure 1, the control terminal is used to obtain point cloud data and panoramic images obtained by the information acquisition terminal at least one shooting point in the target space to generate a spatial structure diagram of the target space.
  • the spatial structure diagram generation system shown in Figure 1 also includes a cloud server, which can be a physical server or a virtual server in the cloud.
  • the control terminal communicates with the cloud server by accessing a wireless network based on communication standards, such as WiFi, 2G, 3G, 4G/LTE, 5G and other mobile communication networks.
  • the cloud server may receive the point cloud data and the panoramic image of at least one shooting point in the target space forwarded by the control terminal to generate a spatial structure diagram of the target space, and feed the spatial structure diagram back to the control terminal for display.
  • the process of the cloud server generating the spatial structure diagram of the target space is the same as the process of the control terminal generating the spatial structure diagram, but because the cloud server has stronger computing power, it is more efficient in generating the spatial structure diagram of the target space, which can further improve the user experience.
  • the cloud server may also be communicatively connected to the information collection terminal directly, so as to directly obtain the point cloud data and panoramic image acquired by the information collection terminal at at least one shooting point in the target space and generate a spatial structure diagram of the target space.
  • the following takes the control terminal as an example to describe the process of generating a spatial structure diagram of a target space.
  • FIG2 is a flow chart of a method for generating a spatial structure diagram provided by the first embodiment of the present invention, and the method for generating a spatial structure diagram is applied to a control terminal in the spatial structure diagram generating system shown in FIG1.
  • the method for generating a spatial structure diagram includes the following steps:
  • obtain a target medium identified in a target panoramic image, where the target medium is the image, in the target panoramic image, of a physical medium in the target space, and the target panoramic image is a panoramic image, among the panoramic images of the at least one shooting point, that is used to identify the target medium.
  • this embodiment is described by taking the target space as a building space as an example, but the present invention is not limited thereto.
  • the target space may also be a certain spatial structure, container or transportation vehicle, etc.
  • the information collection terminal responds to the information collection instruction, obtains the point cloud data and panoramic image corresponding to each shooting point at at least one shooting point in the target space in turn, and sends the collected point cloud data and panoramic image of at least one shooting point to the control terminal.
  • the at least one shooting point may be selected by the user in the target space according to modeling requirements; or may be a reference shooting point generated for the target space by the control terminal based on description information of the target space input by the user.
  • the process of acquiring the point cloud data and the panoramic image corresponding to any of the at least one shooting point is consistent.
  • the information collection terminal responds to the information collection instruction and acquires the point cloud data and the panoramic image corresponding to shooting point X.
  • the specific process of the information collection terminal acquiring the point cloud data and the panoramic image of shooting point X can be referred to the above embodiment, and will not be repeated here.
  • the information collection terminal can feed back the acquired point cloud data and panoramic image to the control terminal each time it finishes collecting at one shooting point; alternatively, the information collection terminal can send the point cloud data and panoramic images of all shooting points in the target space to the control terminal together after collection at all shooting points is complete.
  • the control terminal and the information collection terminal synchronously acquire the point cloud data and the panoramic image of the target space.
  • control terminal obtains the spatial contour of the target space based on the acquired point cloud data of at least one shooting point in the target space.
  • the point cloud data of at least one shooting point can be fused based on the relative position relationship between at least one shooting point in the target space to determine the target point cloud data of the target space.
  • the target point cloud data obtained after fusion processing contains more data, the point cloud data is denser, and can better reflect the spatial structure information of the target space.
  • the accurate spatial contour of the target space can be obtained based on the target point cloud data.
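Under the assumption that the relative pose (rotation R, translation t) of each shooting point is known, the fusion of the per-point clouds into the target point cloud data can be sketched in a few lines of NumPy; the function and variable names are illustrative only.

```python
import numpy as np

def fuse_point_clouds(clouds, poses):
    """Transform each shooting point's point cloud into a common frame
    using that point's (R, t) pose, then concatenate all clouds into
    the target point cloud data."""
    fused = [pts @ R.T + t for pts, (R, t) in zip(clouds, poses)]
    return np.concatenate(fused, axis=0)

# Shooting point B sits 2 m along x from shooting point A:
cloud_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
cloud_b = np.array([[0.0, 1.0, 0.0]])
poses = [(np.eye(3), np.zeros(3)),
         (np.eye(3), np.array([2.0, 0.0, 0.0]))]
target_cloud = fuse_point_clouds([cloud_a, cloud_b], poses)
```

After fusion, cloud B's point lands at (2, 1, 0) in the common frame, so the merged cloud is denser than either input and covers both viewpoints.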
  • the target point cloud data of the target space is first mapped to a two-dimensional plane to obtain a point cloud image of the target space; then, the spatial contour of the target space is determined based on the point cloud image.
  • the control terminal can identify the spatial contour corresponding to the point cloud image by, for example, an edge detection algorithm.
  • the point cloud data is actually a series of three-dimensional coordinate points, and any such point can be represented by three-dimensional Cartesian coordinates (x, y, z), where x, y and z are the coordinate values along the x-, y- and z-axes, which share a common origin and are mutually orthogonal.
  • the three-dimensional coordinate points (x, y, z) corresponding to the target point cloud data can be converted to two-dimensional coordinate points (x, y), for example: the z-axis coordinate value of the three-dimensional coordinate point is set to 0, and then the plane point cloud image of the target space is obtained based on the converted two-dimensional coordinate point.
  • a three-dimensional spatial structure diagram of the target space can also be generated based on the three-dimensional coordinate points (x, y, z) corresponding to the target point cloud data, and then a top view of the three-dimensional spatial structure diagram is obtained as the two-dimensional point cloud image of the target space.
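The projection step just described (discarding the z-axis value and rasterizing the remaining (x, y) coordinates) can be sketched as follows; this is a NumPy illustration with assumed names and an assumed grid resolution, not the patent's implementation.

```python
import numpy as np

def point_cloud_to_image(points, resolution=0.25):
    """Drop the z coordinate of each 3D point (x, y, z) and rasterize
    the remaining (x, y) values into a 2D grid: the plane point cloud
    image of the space."""
    xy = points[:, :2]                          # z-axis value discarded
    mins = xy.min(axis=0)
    cells = np.floor((xy - mins) / resolution).astype(int)
    rows, cols = cells.max(axis=0) + 1
    img = np.zeros((rows, cols), dtype=np.uint8)
    img[cells[:, 0], cells[:, 1]] = 255         # occupied cell -> white
    return img

# Toy example: points sampled along the walls of a 4 m x 3 m room.
t = np.linspace(0.0, 1.0, 101)
zero, z = np.zeros_like(t), np.zeros_like(t)
walls = np.concatenate([
    np.stack([4 * t, zero, z], axis=1),                  # bottom wall
    np.stack([4 * t, np.full_like(t, 3.0), z], axis=1),  # top wall
    np.stack([zero, 3 * t, z], axis=1),                  # left wall
    np.stack([np.full_like(t, 4.0), 3 * t, z], axis=1),  # right wall
])
img = point_cloud_to_image(walls, resolution=0.25)
```

In the resulting image the wall samples form a bright rectangular outline while the interior of the room stays empty, which is exactly the wall-line structure the later contour step relies on.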
  • after the point cloud image of the target space is obtained, the method further includes: receiving a user's correction operation on the point cloud image, and determining the spatial contour of the target space based on the target point cloud image obtained after the correction operation.
  • the above correction operation includes cropping.
  • the target space often contains objects, such as glass and mirrors, that interfere with point cloud collection and introduce interference data into the acquired point cloud.
  • this interference data shows up in the point cloud image as image content lying outside the regular wall lines (that is, as interference images corresponding to the interference data), or as blurred wall lines.
  • the wall line in the point cloud image corresponds to the wall in the target space.
  • FIG3 is a schematic diagram of a point cloud image provided by the first embodiment of the present invention.
  • the user can modify the point cloud image through the edit button on the point cloud image editing interface.
  • the wall lines in the target point cloud image are clear, and a relatively accurate spatial contour can be identified based on the target point cloud image.
  • the spatial contour of the target space is composed of multiple contour lines, and the contour lines correspond to the wall lines in the point cloud image.
  • the spatial contour may include a target contour line that does not correspond to the wall line in the point cloud image. Therefore, in another optional embodiment, in response to the user's editing operation on the target contour line on the spatial contour of the target space, the shape and/or position of the target contour line may be adjusted so that the target contour line coincides with the wall line in the point cloud image. For example, the length and position of the target contour line l are adjusted so that the target contour line l coincides with the wall line L in the point cloud image, where the target contour line l and the wall line L correspond to the same wall in the target space.
  • the control terminal is also preset with other contour correction options for the spatial contour, such as an option to add a contour line and an option to delete a contour line.
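As a crude stand-in for the edge-detection step mentioned above, the occupied cells of the point cloud image that border free space can be picked out with array slicing; this NumPy sketch is an illustration only, not the patent's edge-detection algorithm.

```python
import numpy as np

def contour_cells(occupancy):
    """Return occupied cells that touch at least one free 4-neighbour:
    these border cells trace the wall lines of the point cloud image."""
    occ = occupancy > 0
    padded = np.pad(occ, 1)                      # free border around the grid
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return occ & ~interior                       # occupied but not interior

# A solid 5x5 block inside a 7x7 grid: its contour is the 16 border cells.
grid = np.zeros((7, 7), dtype=np.uint8)
grid[1:6, 1:6] = 1
contour = contour_cells(grid)
```

A filled block keeps only its one-cell-thick boundary, which is the kind of contour-line candidate the user can then edit, add or delete.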
  • the target medium is first identified based on the panoramic image, and then the mapping medium corresponding to the target medium is determined.
  • the images corresponding to the doors and windows in the target space in the panoramic image are referred to as target media, and the markings corresponding to the doors and windows in the spatial contour are referred to as mapping media.
  • the target medium in the panoramic image can be identified based on a preset image recognition algorithm.
  • the target medium may be included in the panoramic images corresponding to more than one shooting point.
  • in order to speed up the control terminal's recognition of the target medium, before the recognition is performed, a target panoramic image used to identify the target medium can be determined from the panoramic images of the at least one shooting point.
  • the target panoramic image is a panoramic image that meets preset recognition requirements, for example: the panoramic image with the widest field of view and the best lighting, or a panoramic image containing user marking information (for example, one marked by the user as the best panoramic image).
  • the shooting point corresponding to the target panoramic image can be the same as or different from the shooting point corresponding to the point cloud data used to generate the spatial contour.
  • the target space contains two shooting points, namely shooting point A and shooting point B, and a panoramic image A1 and point cloud data A2 are obtained at shooting point A, and a panoramic image B1 and point cloud data B2 are obtained at shooting point B.
  • if the spatial contour is generated based on point cloud data A2, either panoramic image A1 or panoramic image B1 can be determined as the target panoramic image; likewise, if the spatial contour is generated based on point cloud data B2, panoramic image A1 (or panoramic image B1) can be determined as the target panoramic image.
  • the door body and the window have corresponding size information.
  • the mapping medium added to the spatial outline of the target space should at least be able to reflect the position information, size information and type information of the door body and/or window contained in the target space.
  • determining a mapping medium corresponding to a target medium in a spatial profile includes:
  • according to the mapping relationship between the target panoramic image and the spatial contour, the panoramic pixel coordinates corresponding to the target medium in the target panoramic image, and the spatial contour coordinates to which they map, are obtained in order to determine the mapping medium corresponding to the target medium in the spatial contour.
  • the mapping medium is adapted to the target identification and target display size of the target medium.
  • the target identification is used to distinguish target media of different types; for example, a target medium belonging to a door body and a target medium belonging to a window correspond to different target identifications.
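One way to picture the information a mapping medium carries (type identification, position on the contour, display size) is a small record; the field names below are hypothetical and chosen only to mirror the description above.

```python
from dataclasses import dataclass

@dataclass
class MappingMedium:
    """Hypothetical record for one medium marked on the spatial contour."""
    target_id: str            # type identification, e.g. "door" or "window"
    contour_position: tuple   # coordinates of the mark on the spatial contour
    display_width: float      # target display size, in contour units

door_mark = MappingMedium(target_id="door",
                          contour_position=(1.2, 0.0), display_width=0.9)
window_mark = MappingMedium(target_id="window",
                            contour_position=(3.5, 2.0), display_width=1.5)
```

Distinct `target_id` values are what let the rendered floor plan draw doors and windows with different symbols at the mapped positions.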
  • the above-mentioned mapping relationship between the target panoramic image and the space outline is a mapping between the target panoramic image and the space outline established based on the coordinate mapping between the point cloud data and the target panoramic image.
  • the relative position between the laser sensor and the camera has been pre-calibrated before the point cloud data and panoramic image are collected. Based on the pre-calibrated relative position and the relative position relationship between the actual shooting points, the coordinate mapping between the three-dimensional point cloud coordinates corresponding to the collected point cloud data and the panoramic pixel coordinates of the panoramic image can be determined.
  • the specific method of coordinate mapping of panoramic images and point cloud data is not limited.
  • the panoramic pixel coordinates can be directly mapped to three-dimensional point cloud coordinates, and the three-dimensional point cloud coordinates can be mapped to panoramic pixel coordinates according to the relative posture relationship between the devices for acquiring the panoramic image and the point cloud data;
  • the panoramic pixel coordinates can also be first mapped to intermediate coordinates, and then the intermediate coordinates can be mapped to three-dimensional point cloud coordinates, with the help of relative posture relationship and intermediate coordinate system;
  • the three-dimensional point cloud coordinates can be first mapped to intermediate coordinates, and then the intermediate coordinates can be mapped to panoramic pixel coordinates.
  • the specific type of the intermediate coordinate system is not limited, nor is the specific method used in the coordinate mapping process. The mapping method used will be different depending on the different intermediate coordinate systems and the different relative posture relationships.
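For concreteness, if the panoramic image is equirectangular, the forward mapping from a 3D point cloud coordinate (in the camera frame) to panoramic pixel coordinates, via spherical longitude/latitude as the intermediate coordinates, could look like the sketch below; the axis conventions are assumptions, since the patent leaves the mapping method open.

```python
import math

def point_to_panorama_pixel(x, y, z, width, height):
    """Map a 3D point cloud coordinate (camera frame) to equirectangular
    panoramic pixel coordinates via spherical intermediate coordinates."""
    lon = math.atan2(y, x)                  # [-pi, pi], heading around the vertical axis
    lat = math.atan2(z, math.hypot(x, y))   # [-pi/2, pi/2], elevation
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# A point straight ahead of the camera lands at the image centre:
u, v = point_to_panorama_pixel(1.0, 0.0, 0.0, 2048, 1024)
```

Inverting the same formulas turns a panoramic pixel into a viewing ray, which, intersected with the point cloud, gives the reverse mapping from panoramic pixel coordinates to 3D point cloud coordinates.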
  • FIG4 is a schematic diagram of a spatial structure diagram provided by the first embodiment of the present invention. Assume that the distribution of the actual doors and windows of the target space Y is shown in the left figure of FIG4, including door a, window b and window c. Based on the spatial structure diagram generation method provided by the embodiment of the present invention, the generated spatial structure diagram of the target space Y is shown in the right figure of FIG4. Among them, the mapping media corresponding to the target media door a, window b and window c in the target space are marked on the spatial contour j of the target space, and the corresponding position, size and type of the mapping media match the actual situation of the target medium in the target space.
  • the mapping medium corresponding to the door body and the window in the spatial contour of the target space is determined, so that the spatial structure diagram of the target space finally obtained contains an accurate spatial contour, and the mapping medium used to represent the door body and the window is marked at the correct position on the spatial contour, so that the spatial structure diagram can accurately reflect the actual spatial structure of the target space.
  • FIG5 is a flow chart of another method for generating a spatial structure diagram according to the first embodiment of the present invention. As shown in FIG5 , the method for generating a spatial structure diagram includes the following steps:
  • the target medium is an image of a physical medium in a target space in the target panoramic image.
  • the target panoramic image is a panoramic image of at least one shooting point and is used to identify the target medium.
  • steps 501, 503 and 504 may refer to the aforementioned embodiment and will not be described in detail in this embodiment.
  • confirming the validity of the point cloud data and the panoramic image includes: confirming whether the panoramic image received from at least one shooting point meets the shooting requirements, and confirming whether the point cloud data from multiple shooting points currently received can fully represent the target space.
  • if a door or window in the panoramic image is blocked, or there is a stitching error in the panoramic image (for example, image misalignment), the panoramic image of that shooting point is considered not to meet the shooting requirements, that is, the panoramic image is invalid.
  • in that case, the information collection terminal is made to reacquire the point cloud data and panoramic image at that shooting point; if a panoramic image is valid, a validity confirmation operation is performed on it.
  • the validity confirmation operation is a confirmation operation triggered by the user on the control terminal interface for the panoramic image.
  • the purpose of validating the point cloud data is to confirm whether the received point cloud data of at least one shooting point can fully represent the target space, that is, to confirm whether to add a new shooting point to collect point cloud data.
  • the validity confirmation operation is performed on the point cloud data. If the user confirms that the point cloud data of the multiple shooting points currently received cannot fully represent the target space, then a new shooting point is added, and point cloud data is obtained at the new shooting point to make up for the missing point cloud data corresponding to the target space except for the point cloud data of at least one shooting point currently received.
  • the point cloud data of at least one shooting point currently received are fused and mapped to a two-dimensional plane to obtain a point cloud image of at least one shooting point. Afterwards, by judging whether the displayed point cloud image is consistent with the actual spatial structure of the target space, it is determined whether the point cloud data of at least one shooting point currently received can fully represent the target space. If they are consistent, the target space can be fully represented; if they are inconsistent, the target space cannot be fully represented.
  • the validity confirmation operation of the point cloud data and the panoramic image of at least one shooting point is actually confirming the correctness and completeness of the original data used to generate the spatial structure diagram of the target space. Based on the confirmation operation of the correctness and completeness of the original data, an accurate spatial structure diagram can be generated.
  • FIG6 is a schematic diagram of the structure of a spatial structure diagram generating device provided in the first embodiment of the present invention.
  • the device is applied to a control terminal.
  • the device includes: an acquisition module 11 and a processing module 12 .
  • the acquisition module 11 is used to acquire the point cloud data and panoramic image obtained by the information acquisition terminal at at least one shooting point in the target space.
  • the processing module 12 is used to obtain the spatial contour of the target space based on the point cloud data of the at least one shooting point; obtain the target medium identified in the target panoramic image, wherein the target medium is an image of the physical medium in the target space in the target panoramic image, and the target panoramic image is a panoramic image in the panoramic image of the at least one shooting point for identifying the target medium; determine the mapping medium corresponding to the target medium in the spatial contour to obtain the spatial structure diagram of the target space.
  • the processing module 12 is specifically used to fuse the point cloud data of at least one shooting point to determine the target point cloud data of the target space; map the target point cloud data to a two-dimensional plane to obtain a point cloud image of the target space; and determine the spatial contour of the target space based on the point cloud image.
  • the acquisition module 11 is further used to receive a user's correction operation on the point cloud image.
  • the processing module 12 is specifically configured to determine the spatial contour of the target space according to the target point cloud image obtained after the correction operation.
  • the processing module 12 is specifically configured to adjust the shape and/or position of the target contour line in response to an editing operation on the target contour line on the spatial contour of the target space, so that the target contour line coincides with a wall line in the point cloud image.
  • the processing module 12 is specifically used to obtain the panoramic pixel coordinates corresponding to the target medium in the target panoramic image and the mapped spatial contour coordinates according to the mapping relationship between the target panoramic image and the spatial contour, so as to determine the mapping medium corresponding to the target medium in the spatial contour; wherein the mapping medium is adapted to the target identification and the target display size of the target medium, the target identification is used to distinguish target media belonging to different types, and the mapping relationship is a mapping between the target panoramic image and the spatial contour established according to the coordinate mapping between the point cloud data and the target panoramic image.
  • the device shown in FIG6 can execute the steps in the aforementioned embodiments.
  • FIG7 is a schematic diagram of a system for generating a floor plan according to a second embodiment of the present invention.
  • the system for generating a floor plan includes: an information collection terminal and a target control terminal.
  • the information collection terminal can be directly integrated into the target control terminal as a single unit; alternatively, it can be decoupled from the target control terminal and set up separately.
  • the information collection terminal communicates with the target control terminal through, for example, Bluetooth, Wireless Fidelity (WiFi) hotspot, or the like.
  • a physical space usually contains multiple subspaces.
  • for example, a building space may contain a living room, a kitchen and two bedrooms, each of which can be considered a subspace.
  • when generating a floor plan of the target physical space, the information collection terminal is used to collect point cloud data and panoramic images corresponding to multiple subspaces in the target physical space.
  • the point cloud data and panoramic image of any subspace are collected at at least one shooting point in any subspace.
  • the point cloud data and panoramic image of subspace X include: point cloud data a and panoramic image a collected at shooting point a in subspace X, and point cloud data b and panoramic image b collected at shooting point b in subspace X.
  • Shooting point a and shooting point b can be selected by the user in subspace X according to modeling needs; or they can be reference shooting points generated for subspace X by the target control terminal based on the description information of subspace X (such as space size) input by the user.
  • for any shooting point in any subspace, the information collection process performed by the information collection terminal is the same.
  • the data collection process of the information collection terminal is described by taking the collection process of point cloud data and panoramic images at the target shooting point Y in the subspace X as an example.
  • the information collection terminal includes: a laser sensor, a camera, a motor and a processor (such as a CPU).
  • the laser sensor and the camera are used as sensing devices to collect scene information of subspace X, that is, point cloud data and panoramic images of subspace X.
  • the information collection terminal responds to the information collection instruction, drives the laser sensor to rotate 360 degrees through the driving motor to collect the point cloud data corresponding to the target shooting point Y; drives the camera to rotate 360 degrees through the driving motor to collect the panoramic image corresponding to the target shooting point Y.
  • the camera in the information collection terminal may be a panoramic camera or a non-panoramic camera. If it is a non-panoramic camera, then during the 360-degree rotation the camera is controlled to capture images of the target shooting point Y at multiple preset angles, and the processor stitches the images captured at those angles into a panoramic image using a panoramic image stitching algorithm, such as a feature matching algorithm.
  • multiple preset angles can be customized by the user according to the camera's viewing angle.
  • the images taken at the multiple preset angles contain scene information within a 360-degree range of the current point. For example, if the camera's field of view is 180 degrees, a certain reference direction can be taken as 0 degrees, and the angles a degrees and (a+180) degrees relative to the reference direction are determined as the preset angles.
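The preset-angle rule in the example generalizes to any field of view: cover 360 degrees with ceil(360 / FOV) evenly spaced headings starting from the reference direction. A small sketch (the function name is illustrative):

```python
import math

def preset_angles(fov_deg, reference_deg=0):
    """Headings at which a non-panoramic camera must shoot so that the
    captured images together cover 360 degrees, given its field of view."""
    n = math.ceil(360 / fov_deg)                       # number of shots needed
    return [(reference_deg + k * fov_deg) % 360 for k in range(n)]
```

With a 180-degree field of view this reproduces the two angles from the example; a narrower 120-degree camera needs three shots.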
  • the information collection instruction is sent by the target control terminal, or the information collection instruction is triggered in response to a trigger operation of the user on the information collection terminal.
  • the point cloud data and the panoramic image may be collected simultaneously, or may be collected in sequence, which is not limited in this embodiment.
  • the camera can be turned on synchronously to meter the scene lighting at the current shooting point and determine the corresponding exposure parameters; the camera then collects the panoramic image based on the determined exposure parameters.
  • High Dynamic Range Imaging can be combined to generate a high-quality panoramic image.
  • the information acquisition terminal also includes an inertial measurement unit (IMU).
  • the IMU is used to correct the posture information corresponding to the collected point cloud data and image data to reduce errors caused by environmental or human factors (for example, the information acquisition terminal is not placed horizontally, etc.).
  • the floor plan generation system shown in Figure 7 also includes a cloud server, which can be a physical server or a virtual server in the cloud.
  • the target control terminal communicates with the cloud server by accessing a wireless network based on communication standards, such as WiFi, 2G, 3G, 4G/LTE, 5G and other mobile communication networks.
  • the cloud server can receive point cloud data and panoramic images corresponding to multiple subspaces forwarded by the target control terminal to generate a floor plan of the target physical space, and feed the floor plan back to the control terminal for display.
  • the process of the cloud server generating the floor plan of the target physical space is the same as that of the target control terminal, but because the cloud server has stronger computing power, it generates the floor plan of the target physical space more efficiently, which can further improve the user experience.
  • the cloud server may also be directly connected to the information collection terminal to directly obtain the point cloud data and panoramic images corresponding to the multiple subspaces collected by the information collection terminal to generate a floor plan of the target physical space.
  • the target control terminal is taken as an example to illustrate the process of generating a floor plan of the target physical space based on the point cloud data and panoramic images corresponding to multiple subspaces in the target physical space obtained by the information collection terminal.
  • the floor plan can be understood as a two-dimensional plane structure diagram of the target physical space.
  • the target control terminal may be a terminal device with data processing capability, such as a smart phone, a tablet computer, a laptop computer, etc.
  • the following describes the process of generating a floor plan of a target physical space from the perspective of the target control terminal in conjunction with a specific embodiment.
  • FIG8 is a flow chart of a method for generating a floor plan according to a second embodiment of the present invention, which is applied to a target control terminal. As shown in FIG8 , the method for generating a floor plan includes the following steps:
  • for a target subspace among the multiple subspaces, obtain a target medium identified in a target panoramic image, where the target medium is the image, in the target panoramic image, of a physical medium in the target subspace.
  • the target panoramic image is a panoramic image captured by at least one shooting point of the target subspace and is used to identify the target medium.
  • the target physical space may be a building space, a complex connected structure or a container, a vehicle, etc.
  • this embodiment is described by taking the target physical space as a building space (such as an office area, or an indoor area of a residential house, etc.) as an example, but is not limited thereto.
  • step 801 if the information acquisition terminal is integrated into the target control terminal, the target control terminal can directly and synchronously acquire the point cloud data and panoramic images of multiple subspaces obtained by the information acquisition terminal; if the information acquisition terminal is connected to the target control terminal through a communication link, the target control terminal receives the point cloud data and panoramic images of multiple subspaces sent by the information acquisition terminal based on the pre-established communication link.
  • the process of the information acquisition terminal acquiring the point cloud data and panoramic images corresponding to the multiple subspaces in the target physical space can refer to the aforementioned embodiment. In this embodiment, the processing process after the target control terminal acquires the point cloud data and panoramic images corresponding to the multiple subspaces is mainly described.
  • the space contours respectively corresponding to the multiple subspaces are determined.
  • the spatial contour of the target subspace is determined based on the point cloud data and the panoramic image of the target subspace, including: determining the spatial contour of the target subspace based on the point cloud data of at least one shooting point in the target subspace, and/or, the panoramic image of at least one shooting point.
  • a first spatial contour can be obtained based on the point cloud data of the at least one shooting point and used directly as the spatial contour of the target subspace; or a second spatial contour can be obtained based on the panoramic image of the at least one shooting point and used directly as the spatial contour of the target subspace; or the contour with the better contour-line quality is selected from the first spatial contour and the second spatial contour as the spatial contour of the target subspace; or the contour lines of the first spatial contour and the second spatial contour are fused to obtain a contour with better line quality, and the fused spatial contour is used as the spatial contour of the target subspace.
  • determining a spatial contour of the target subspace according to point cloud data of at least one shooting point in the target subspace includes:
  • the point cloud data of the at least one shooting point are fused; the fused point cloud data are determined as the target point cloud data of the target subspace; the target point cloud data are mapped to a two-dimensional plane to obtain an initial point cloud image of the target subspace; a correction operation by the user on the initial point cloud image is received, the corrected point cloud image is determined as the point cloud image of the target subspace, and the spatial contour of the target subspace is determined from it by, for example, an edge detection algorithm.
  • the point cloud data is actually a series of three-dimensional coordinate points, and any three-dimensional coordinate point can be represented by a three-dimensional Cartesian coordinate (x, y, z), where x, y, z are the coordinate values of the x-axis, y-axis, and z-axis, which have a common zero point and are orthogonal to each other.
  • the target point cloud data is mapped to a two-dimensional plane to obtain an initial point cloud image of the target subspace, including: converting the three-dimensional coordinate points (x, y, z) corresponding to the target point cloud data into two-dimensional coordinate points (x, y), for example: setting the z-axis coordinate value of the three-dimensional coordinate points to 0, and then obtaining the initial point cloud image of the target subspace based on the converted two-dimensional coordinate points, and the initial point cloud image is a two-dimensional image.
  • a three-dimensional spatial structure diagram of the target subspace is first generated based on the three-dimensional coordinate points (x, y, z) corresponding to the target point cloud data, and then obtaining a top view of the three-dimensional spatial structure diagram as the initial point cloud image of the target subspace.
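The mapping of the target point cloud data to a two-dimensional plane can be sketched as below: the z coordinate is discarded (equivalent to setting it to 0) and the remaining x-y points are rasterized into a binary image. The cell size `resolution` is an illustrative assumption, not a value from the patent.

```python
import numpy as np

def initial_point_cloud_image(points_3d, resolution=0.05):
    """Project 3D points onto the x-y plane and rasterize them into a
    binary 2D occupancy image (the initial point cloud image)."""
    xy = points_3d[:, :2]                          # discard the z coordinate
    origin = xy.min(axis=0)                        # image origin in space
    idx = np.floor((xy - origin) / resolution).astype(int)
    h, w = idx.max(axis=0) + 1
    img = np.zeros((h, w), dtype=np.uint8)
    img[idx[:, 0], idx[:, 1]] = 1                  # mark occupied cells
    return img

pts = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 2.0], [0.1, 0.1, 0.5]])
img = initial_point_cloud_image(pts, resolution=0.05)
# three occupied cells in a 3x3 image, regardless of the original z values
```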
  • the above correction operation on the initial point cloud image includes cropping.
  • there are often objects in the target subspace, such as glass and mirrors, that affect point cloud data acquisition. These objects introduce interference data into the acquired point cloud data.
  • this interference data is reflected in the point cloud image, specifically as image content lying outside the regular wall lines (that is, the interference data corresponds to an interference image), or as blurred wall lines in the point cloud image.
  • the wall line in the point cloud image corresponds to the wall in the target space.
  • the user can modify the point cloud image through the edit button on the point cloud image editing interface.
  • the interference image outside the wall line is cut off to obtain a target point cloud image.
  • the wall line in the target point cloud image is clear, and a relatively accurate spatial contour can be identified based on the target point cloud image.
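The edge detection mentioned above can be illustrated with a deliberately minimal detector on the binary point cloud image: a cell is treated as part of the contour if it is occupied but has at least one empty 4-neighbour. A production system would more likely use a standard detector such as Canny; this sketch only shows the idea.

```python
import numpy as np

def detect_edges(img):
    """Minimal boundary detector on a binary occupancy image: an occupied
    cell is an edge cell if any of its 4-neighbours is empty."""
    padded = np.pad(img, 1)                 # zero border so edges of the image count
    h, w = img.shape
    edges = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            if img[i, j]:
                neigh = [padded[i, j + 1], padded[i + 2, j + 1],
                         padded[i + 1, j], padded[i + 1, j + 2]]
                if min(neigh) == 0:
                    edges[i, j] = 1
    return edges

# a solid 4x4 block: only the outer ring of 12 cells is detected as contour
block = np.ones((4, 4), dtype=np.uint8)
edges = detect_edges(block)
```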
  • the multiple spatial contours can be displayed simultaneously to edit the multiple spatial contours at the same time, and the spatial structure diagrams corresponding to the multiple subspaces can be generated in parallel; or, the multiple spatial contours can be displayed in sequence to edit the multiple spatial contours one by one, and the spatial structure diagrams corresponding to the multiple subspaces can be generated one by one.
  • the generation process of the spatial structure graph of any subspace (referred to as the target subspace) in the multiple subspaces is the same. Therefore, the generation process of the spatial structure graph of the target subspace is taken as an example for explanation below.
  • the spatial contour of the target subspace is composed of multiple contour lines.
  • the target control terminal is preset with contour line correction options for the spatial contour, such as: adjusting the shape and/or position of the contour line, adding a contour line option, deleting a contour line option, etc.
  • when displaying the spatial contour of the target subspace, the spatial contour is specifically displayed on a point cloud image of the target subspace.
  • the point cloud image is determined based on point cloud data of at least one shooting point of the target subspace, and the point cloud image includes wall lines, which correspond to walls in the target subspace.
  • the shape and/or position of the target contour line is adjusted so that the target contour line coincides with the wall line in the point cloud image.
  • the length and position of the target contour line l are adjusted so that the target contour line l coincides with the wall line L in the point cloud image, where the target contour line l and the wall line L correspond to the same wall in the target space.
  • the target contour line that does not have a corresponding wall line is deleted; or a target contour line corresponding to a certain wall line is added.
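The decision of whether a target contour line has a corresponding wall line (and may therefore be kept, or should be deleted) can be sketched as a proximity-and-parallelism test between line segments. The distance and angle tolerances below are illustrative assumptions; the patent does not specify a matching criterion.

```python
import math

def has_matching_wall_line(contour_seg, wall_segs, dist_tol=0.2, ang_tol=0.1):
    """Return True if some wall line segment is roughly parallel to and
    close to the target contour line segment; a contour line with no match
    is a candidate for deletion."""
    (x1, y1), (x2, y2) = contour_seg
    mid = ((x1 + x2) / 2, (y1 + y2) / 2)
    ang = math.atan2(y2 - y1, x2 - x1) % math.pi   # direction, ignoring orientation
    for (wx1, wy1), (wx2, wy2) in wall_segs:
        wmid = ((wx1 + wx2) / 2, (wy1 + wy2) / 2)
        wang = math.atan2(wy2 - wy1, wx2 - wx1) % math.pi
        d = math.hypot(mid[0] - wmid[0], mid[1] - wmid[1])
        dang = min(abs(ang - wang), math.pi - abs(ang - wang))
        if d <= dist_tol and dang <= ang_tol:
            return True
    return False

walls = [((0.0, 0.0), (4.0, 0.0))]   # one horizontal wall line
# a nearly coincident contour line matches; a distant one does not
```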
  • the target medium is first identified based on the panoramic image, and then the mapping medium corresponding to the target medium is determined.
  • the image corresponding to the door and window in the target space in the panoramic image is called the target medium
  • the mark corresponding to the door and window in the spatial contour is called the mapping medium.
  • the target medium in the panoramic image can be identified based on a preset image recognition algorithm.
  • the target medium may be included in the panoramic images corresponding to more than one shooting point.
  • in order to speed up the recognition of the target medium by the target control terminal, optionally, before the target control terminal recognizes the target medium in the panoramic image, it can determine, from the panoramic images of the at least one shooting point, a target panoramic image to be used for identifying the target medium.
  • the target panoramic image is a panoramic image that meets preset recognition requirements, such as a panoramic image with the widest field of view and the best light, or a panoramic image containing user marking information (such as the best panoramic image).
  • the shooting point corresponding to the target panoramic image may be the same as or different from the shooting point corresponding to the point cloud data used to generate the spatial contour.
  • the target space contains two shooting points, namely shooting point A and shooting point B, and a panoramic image A1 and point cloud data A2 are obtained at shooting point A, and a panoramic image B1 and point cloud data B2 are obtained at shooting point B.
  • the spatial contour is generated based on the point cloud data A2, it can be determined that the panoramic image A1 is the target panoramic image, and it can also be determined that the panoramic image B1 is the target panoramic image.
  • similarly, if the spatial contour is generated based on the point cloud data B2, either the panoramic image A1 or the panoramic image B1 can be determined to be the target panoramic image.
  • the door body and the window have corresponding size information.
  • the mapping medium added to the spatial contour of the target subspace should at least be able to reflect the position information, size information and type information of the door body and/or window contained in the target subspace.
  • determining a mapping medium corresponding to a target medium in a spatial profile of a target subspace includes:
  • the panoramic pixel coordinates corresponding to the target medium in the target panoramic image and the mapped spatial contour coordinates are obtained to determine the mapping medium corresponding to the target medium in the spatial contour of the target subspace.
  • the mapping medium is adapted to the target identification and target display size of the target medium.
  • the target identification is used to distinguish target media of different types. For example, a target medium belonging to a door body and a target medium belonging to a window correspond to different target identifications.
  • the above-mentioned mapping relationship between the target panoramic image and the space contour is a mapping between the target panoramic image and the space contour established based on the coordinate mapping between the point cloud data of the target subspace and the target panoramic image.
  • the relative position between the laser sensor and the camera has been pre-calibrated before the point cloud data and panoramic image are collected. Based on the pre-calibrated relative position and the relative position relationship between the actual shooting points in the target subspace, the coordinate mapping between the three-dimensional point cloud coordinates corresponding to the collected point cloud data and the panoramic pixel coordinates of the panoramic image can be determined.
  • the specific method of mapping the coordinates of the panoramic image and the point cloud data is not limited.
  • the panoramic pixel coordinates can be directly mapped to the three-dimensional point cloud coordinates, and the three-dimensional point cloud coordinates can be mapped to the panoramic pixel coordinates according to the relative posture relationship between the devices for acquiring the panoramic image and the point cloud data;
  • the panoramic pixel coordinates can also be first mapped to the intermediate coordinates, and then mapped to the three-dimensional point cloud coordinates, by means of the relative posture relationship and the intermediate coordinate system;
  • or, the three-dimensional point cloud coordinates can be first mapped to the intermediate coordinates, and then the intermediate coordinates are mapped to the panoramic pixel coordinates.
  • the specific type of the intermediate coordinate system is not limited, nor is the specific method used in the coordinate mapping process. The mapping method used will be different according to the different intermediate coordinate systems and the different relative posture relationships.
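As the text notes, the specific mapping method is not limited. One common choice, shown here purely as an illustration, is the equirectangular panorama model: a 3D point in the shooting point's camera frame is converted to spherical angles and then to panoramic pixel coordinates.

```python
import math

def point_to_panorama_pixel(point, width, height):
    """Map a 3D point (camera frame of the shooting point) to pixel
    coordinates (u, v) of an equirectangular panorama of size width x height.
    The equirectangular model is an assumed example, not mandated by the text."""
    x, y, z = point
    lon = math.atan2(y, x)                      # azimuth in [-pi, pi]
    lat = math.atan2(z, math.hypot(x, y))       # elevation in [-pi/2, pi/2]
    u = (lon / (2 * math.pi) + 0.5) * width     # left-right position
    v = (0.5 - lat / math.pi) * height          # top (v=0) is straight up
    return u, v

# a point straight ahead lands at the panorama centre
u, v = point_to_panorama_pixel((1.0, 0.0, 0.0), 2048, 1024)
```

Inverting this mapping (pixel to viewing direction) recovers only a ray; the point cloud supplies the depth needed to place the mapping medium on the spatial contour.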
  • a floor plan of the target physical space obtained by splicing the multiple space structure diagrams is acquired.
  • the panoramic image 1 of subspace 1 may contain a partial image m of subspace 2. Spatially, the area corresponding to image m is outside subspace 1, but within the field of view of the camera when taking panoramic image 1. Therefore, in practical applications, based on the panoramic image 1 of subspace 1 and the panoramic image 2 of subspace 2, the spatial connection relationship between subspace 1 and subspace 2 can be determined by, for example, feature matching; then, based on the spatial connection relationship, the spatial structure diagram of subspace 1 can be spliced with the spatial structure diagram of subspace 2. Similarly, the spatial structure diagrams of other subspaces are also spliced, and the image after splicing is the floor plan of the target physical space.
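Once the spatial connection relationship is known (for example, that a door in subspace 1 connects to a door in subspace 2), splicing the spatial structure diagrams reduces to aligning the matched mapping media. The sketch below assumes the connection has already been found (e.g. by feature matching) and, for simplicity, applies only a translation; a real splice may also need a rotation.

```python
import numpy as np

def translate_to_connection(contour, door_in_other, door_in_this):
    """Translate one subspace's spatial structure diagram so that its door
    mark coincides with the matched door mark in the adjacent subspace.

    contour:        (N, 2) array of contour vertices of this subspace
    door_in_other:  (x, y) of the door mark in the adjacent, fixed diagram
    door_in_this:   (x, y) of the same door mark in this diagram
    """
    offset = np.asarray(door_in_other, float) - np.asarray(door_in_this, float)
    return np.asarray(contour, float) + offset

# move subspace 2 so its door at (0, 0) lands on subspace 1's door at (5, 5)
moved = translate_to_connection([[0.0, 0.0], [1.0, 0.0]], (5.0, 5.0), (0.0, 0.0))
```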
  • the spatial connection relationship between the subspaces may also be determined based on the point cloud data corresponding to the subspaces respectively.
  • Figure 9 is a schematic diagram of a floor plan of a target physical space provided in the second embodiment of the present invention, wherein a in Figure 9 is the actual spatial structure of the target physical space, b in Figure 9 is a spatial structure diagram of multiple subspaces in the target physical space, and c in Figure 9 is the floor plan of the target physical space.
  • the target physical space contains three subspaces, namely bedroom 1, bedroom 2 and living room 3, where bedroom 1 and living room 3 are connected through door body 1, bedroom 2 and living room 3 are connected through door body 2, and bedroom 1 and bedroom 2 are connected through window 3.
  • the generated spatial structure diagram x of bedroom 1, the spatial structure diagram y of bedroom 2 and the spatial structure diagram z of living room 3 are shown in b of FIG9 , each of which includes a spatial outline, and the spatial outline is marked with a mapping medium representing a door body and a window.
  • the floor plan of the target physical space obtained by splicing the spatial structure diagram x, the spatial structure diagram y and the spatial structure diagram z is shown in c of FIG9 .
  • the spatial structure diagram of the target subspace includes the spatial outline of the subspace and the mapping media corresponding to the windows and doors (i.e., the target media) in the subspace.
  • the accurate spatial outline corresponding to the target subspace can be obtained; based on the panoramic image, the position of the mapping medium corresponding to the target medium in the target subspace in the spatial outline can be determined, and the mapping medium corresponding to the target medium can be identified at the correct position on the spatial outline, so that the determined spatial structure diagram can accurately reflect the actual spatial structure information of the target subspace, and then the floor plan determined according to the multiple spatial structure diagrams can also accurately reflect the actual spatial structure of the target physical space.
  • the spatial outlines of multiple subspaces can be edited separately through multiple control terminals at the same time, and the corresponding spatial structure diagrams can be generated. Through the mutual cooperation of multiple control terminals, the time required to generate the floor plan of the target physical space can be reduced.
  • FIG. 10 is an interactive flow chart of another method for generating a floor plan according to a second embodiment of the present invention.
  • the method for generating a floor plan includes the following steps:
  • the target control terminal acquires point cloud data and panoramic images respectively corresponding to a plurality of subspaces in the target physical space obtained by the information collection terminal, so as to determine a plurality of space contours corresponding to the plurality of subspaces.
  • the target control terminal displays multiple space contours corresponding to the multiple subspaces for synchronous editing with other control terminals (1002-1), and sends the multiple space contours corresponding to the multiple subspaces to other control terminals (1002-2), so that the multiple space contours corresponding to the multiple subspaces are displayed on the other control terminals for synchronous editing with the target control terminal (1002-3).
  • the target control terminal and other control terminals obtain, for a target subspace among the multiple subspaces, a target medium identified in the target panoramic image, where the target medium is an image of a physical medium in the target subspace in the target panoramic image; and determine a mapping medium corresponding to the target medium in the spatial contour of the target subspace to generate a spatial structure diagram of the target subspace.
  • the target panoramic image is a panoramic image captured by at least one shooting point in the target subspace and is used to identify the target medium.
  • the target control terminal obtains the spatial structure diagram of the subspace obtained by other control terminals.
  • the target control terminal obtains a floor plan of the target physical space obtained by splicing the multiple space structure diagrams.
  • the target control terminal and other control terminals can edit the spatial contours of different subspaces respectively, and generate the spatial structure diagram of the corresponding subspaces.
  • the target control terminal is used to edit the spatial contour of subspace 1 and generate the spatial structure diagram of subspace 1
  • the other control terminals are used to edit the spatial contour of subspace 2 and generate the spatial structure diagram of subspace 2.
  • the process of each device editing the spatial contour and generating the spatial structure diagram is the same, and can refer to the aforementioned embodiment, which will not be repeated in this embodiment.
  • the other control terminals synchronize the point cloud data and panoramic images corresponding to multiple subspaces in the target physical space.
  • the target control terminal and the other control terminals further display the terminal device identifiers respectively corresponding to the multiple space outlines.
  • the terminal device identifiers are used to indicate the terminal device currently editing each space outline, and the terminal devices include the target control terminal and other control terminals.
  • the target control terminal is communicatively connected to other control terminals.
  • the target control terminal can synchronously update multiple space outlines on the display interface according to the editing data sent back by other control terminals.
  • the other control terminals may actively feed back the spatial structure diagram to the target control terminal; or, in response to the spatial structure diagram acquisition instruction of the target control terminal, send the generated spatial structure diagram to the target control terminal. If the target control terminal obtains the spatial structure diagrams corresponding to all subspaces, the obtained spatial structure diagrams are spliced to obtain the floor plan of the target physical space.
  • the number of other control terminals may be more than one, for example, it may match the number of subspaces of the target physical space. In this embodiment, the number of other control terminals is not limited.
  • FIG11 is a schematic diagram of the structure of a floor plan generating device provided in the second embodiment of the present invention.
  • the device is applied to a target control terminal.
  • the device includes: an acquisition module 21 , a display module 22 and a processing module 23 .
  • the acquisition module 21 is used to acquire the point cloud data and panoramic images corresponding to multiple subspaces in the target physical space obtained by the information acquisition terminal, so as to determine the multiple space contours corresponding to the multiple subspaces; wherein the multiple subspaces correspond one-to-one to the multiple space contours, and the point cloud data and panoramic image of any subspace are collected from at least one shooting point in any subspace.
  • the display module 22 is used to display a plurality of space contours corresponding to the plurality of subspaces for editing.
  • a processing module 23 is used to obtain, for a target subspace among multiple subspaces, a target medium identified in a target panoramic image, wherein the target medium is an image of a physical medium in the target subspace in the target panoramic image, and the target panoramic image is a panoramic image for identifying the target medium in a panoramic image acquired by at least one shooting point of the target subspace; determine a mapping medium corresponding to the target medium in the spatial contour of the target subspace to generate a spatial structure diagram of the target subspace; and in response to completion of an operation to obtain multiple spatial structure diagrams corresponding to the multiple subspaces, obtain a floor plan of the target physical space obtained by splicing the multiple spatial structure diagrams.
  • the device also includes a sending module for sending the multiple space contours corresponding to the multiple subspaces to other control terminals, so as to display the multiple space contours corresponding to the multiple subspaces on the other control terminals for synchronous editing with the target control terminal.
  • the acquisition module 21 is further used to acquire the spatial structure diagram of the target subspace acquired by the other control terminals.
  • the display module 22 is further used to display terminal device identifiers corresponding to the multiple space contours, respectively, and the terminal device identifier is used to indicate the terminal device currently editing each space contour, and the terminal device includes the target control terminal and the other control terminals.
  • the processing module 23 is specifically used to obtain a first spatial contour based on point cloud data of at least one shooting point of the target subspace; obtain a second spatial contour based on a panoramic image of at least one shooting point of the target subspace; and determine the spatial contour of the target subspace based on the first spatial contour and the second spatial contour.
  • the display module 22 is specifically used to adjust the shape and/or position of the target contour line in response to an editing operation on the target contour line on the spatial contour of the target subspace, so that the target contour line coincides with the wall line in the point cloud image, and the point cloud image is determined based on point cloud data of at least one shooting point of the target subspace, and the spatial contour of the target subspace is composed of multiple contour lines.
  • the processing module 23 is further specifically used to determine the spatial connection relationship between the multiple subspaces based on the point cloud data and/or panoramic images respectively corresponding to the multiple subspaces; and splice the spatial structure diagrams according to the spatial connection relationship to obtain the floor plan of the target physical space.
  • the processing module 23 further obtains the panoramic pixel coordinates corresponding to the target medium in the target panoramic image and the mapped spatial contour coordinates according to the mapping relationship between the target panoramic image and the spatial contour of the target subspace, so as to determine the mapping medium corresponding to the target medium in the spatial contour of the target subspace; wherein the mapping medium is adapted to the target identifier and the target display size of the target medium, the target identifier is used to distinguish target media belonging to different types, and the mapping relationship is a mapping between the target panoramic image and the spatial contour established based on the coordinate mapping between the point cloud data of the target subspace and the target panoramic image.
  • the device shown in FIG. 11 can execute the steps in the aforementioned embodiments.
  • steps in the aforementioned embodiments please refer to the description in the aforementioned embodiments, which will not be repeated here.
  • the floor plan of the target physical space can be understood as a two-dimensional plane structure diagram of the target physical space.
  • the user can obtain the distribution information of each subspace in the target physical space and the connection relationship between the subspaces.
  • the target physical space as a living space as an example, based on the floor plan of a certain living space, the user can understand the location of the subspaces such as the living room and bedroom contained in the living space, as well as the orientation of the windows or doors in the subspaces, and then understand the lighting conditions of the building space.
  • an accurate floor plan for the target physical space helps users better understand the target physical space.
  • an accurate floor plan can better display the structure of the house to be sold, which helps buyers better understand the house to be sold, thereby increasing the transaction rate.
  • the floor plan generation system includes: an information collection terminal and a control terminal (i.e., the target control terminal in FIG. 7 ).
  • the information collection terminal can be directly integrated into the control terminal as a whole with the control terminal; the information collection terminal can also be decoupled from the control terminal and set separately, and the information collection terminal communicates with the control terminal through, for example, Bluetooth, Wireless Fidelity (WiFi) hotspot, etc.
  • the information collection terminal is used to collect point cloud data and panoramic images corresponding to multiple subspaces in the target physical space.
  • any subspace may contain more than one shooting point. Therefore, in this embodiment, the point cloud data and panoramic image of any subspace are collected at at least one shooting point in that subspace.
  • the point cloud data and panoramic image of subspace X include: point cloud data Xa and panoramic image Xa collected at shooting point a in subspace X, and point cloud data Xb and panoramic image Xb collected at shooting point b in subspace X.
  • the shooting points may be selected by the user in the subspace according to modeling needs; or may be reference shooting points automatically generated for the subspace by the control terminal based on description information of the subspace (such as space size, etc.) input by the user.
  • the information collection terminal collects point cloud data and panoramic images at multiple shooting points in the target physical space in turn.
  • the corresponding information collection process is the same.
  • the data collection process of the information collection terminal is described by taking the process of collecting point cloud data and panoramic images at the target shooting point Y in the subspace X as an example.
  • the information collection terminal includes: a laser sensor, a camera, a motor and a processor (such as a CPU).
  • the laser sensor and the camera are used as sensing devices to collect scene information of subspace X, that is, point cloud data and panoramic images of subspace X.
  • the information collection terminal responds to the information collection instruction, drives the motor to drive the laser sensor to rotate 360 degrees to collect the point cloud data corresponding to the target shooting point Y; drives the motor to drive the camera to rotate 360 degrees to collect the panoramic image corresponding to the target shooting point Y.
  • the information collection instruction is sent by the control terminal, or the information collection instruction is triggered in response to a trigger operation of the user on the information collection terminal.
  • the motor can drive the laser sensor and the camera to rotate at the same time to collect point cloud data and panoramic images at the same time, or the motor can drive the laser sensor and the camera to rotate in sequence to collect point cloud data and panoramic images respectively. This embodiment does not limit this.
  • the camera can be turned on synchronously to collect scene lighting information of the current shooting point for light measurement and determine the corresponding exposure parameters. Afterwards, the camera collects the panoramic image based on the determined exposure parameters.
  • the camera in the information collection terminal is a panoramic camera or a non-panoramic camera. If the camera in the information collection terminal is a non-panoramic camera, during the above 360-degree rotation process, the camera is controlled to capture images corresponding to the target shooting point Y at multiple preset angles, and the above processor can stitch the images captured at multiple preset angles into a panoramic image through a panoramic image stitching algorithm such as a feature matching algorithm.
  • multiple preset angles can be customized by the user according to the camera's viewing angle.
  • the images taken at the multiple preset angles together contain the scene information within a 360-degree range of the current point. For example, if the camera's viewing angle is 180 degrees, a certain reference direction can be taken as 0 degrees, and a degrees and (a+180) degrees relative to the reference direction are determined as the preset angles.
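The choice of preset angles described above can be sketched as a simple function of the camera's horizontal field of view. This minimal version covers 360 degrees with no overlap margin; real stitching would typically add some overlap between adjacent shots.

```python
import math

def preset_angles(fov_deg, reference_deg=0.0):
    """Shooting angles for a non-panoramic camera with horizontal field of
    view `fov_deg`, chosen so the images jointly cover 360 degrees around
    the shooting point (no overlap margin in this sketch)."""
    n = math.ceil(360.0 / fov_deg)                 # number of shots needed
    return [(reference_deg + i * fov_deg) % 360.0 for i in range(n)]

# a 180-degree camera needs two shots; a 120-degree camera needs three
```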
  • High Dynamic Range (HDR) imaging can also be combined to generate a high-quality panoramic image.
  • the information acquisition terminal also includes an inertial measurement unit (IMU).
  • the IMU is used to correct the posture information corresponding to the collected point cloud data and image data to reduce errors caused by environmental or human factors (for example, the information acquisition terminal is not placed horizontally, etc.).
  • the control terminal is used to generate a floor plan of the target physical space based on the point cloud data and panoramic images corresponding to the multiple subspaces in the target physical space sent by the information collection terminal.
  • the control terminal can be a terminal device with data processing capabilities such as a smart phone, a tablet computer, and a laptop computer.
  • the floor plan generation system shown in Figure 7 also includes a cloud server, which can be a physical server or a virtual server in the cloud.
  • the control terminal communicates with the cloud server by accessing a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G/LTE, 5G and other mobile communication networks.
  • the cloud server may receive point cloud data and panoramic images corresponding to the multiple subspaces forwarded by the control terminal to generate a floor plan of the target physical space, and feed the floor plan back to the control terminal for display.
  • the process of the cloud server generating the floor plan of the target physical space is the same as the process of the control terminal generating the spatial structure diagram, but because the cloud server has stronger computing power, it is more efficient in generating the floor plan of the target physical space, which can further improve the user experience.
  • the cloud server may also be directly connected to the information collection terminal to directly obtain the point cloud data and panoramic images corresponding to the multiple subspaces collected by the information collection terminal to generate a floor plan of the target physical space.
  • the following takes the control terminal as an example to illustrate the process of generating a floor plan of the target physical space from the perspective of the control terminal.
  • FIG. 12 is a flow chart of a method for generating a floor plan provided by a third embodiment of the present invention, which is applied to a control terminal. As shown in FIG. 12 , the method for generating a floor plan includes the following steps:
  • a target space contour of the target subspace is acquired based on point cloud data and/or panoramic images collected at at least one shooting point of the target subspace.
  • the target medium is an image of a physical medium in a target subspace in the target panoramic image.
  • the target panoramic image is a panoramic image captured by at least one shooting point in the target subspace and is used to identify the target medium.
  • the floor plan of the target physical space obtained by splicing the spatial structure diagrams of the multiple spaces is acquired.
  • the process of the information collection terminal obtaining the point cloud data and panoramic images corresponding to multiple subspaces in the target physical space can be referred to the aforementioned embodiment, and will not be repeated in this embodiment.
  • step 1201 if the information acquisition terminal is integrated into the target control terminal, the control terminal can directly and synchronously acquire the point cloud data and panoramic images of multiple subspaces obtained by the information acquisition terminal; if the information acquisition terminal is connected to the control terminal through a communication link, the control terminal receives the point cloud data and panoramic images of multiple subspaces sent by the information acquisition terminal based on the pre-established communication link.
  • spatial structure diagrams corresponding to multiple subspaces are first obtained in sequence, and then the spatial structure diagrams corresponding to the multiple subspaces are spliced to determine the floor plan of the target physical space.
  • by generating the spatial structure diagram corresponding to each subspace one by one, fewer computing resources are required, which can adapt to the computing processing capabilities of most control devices, thereby increasing the application scenarios of the floor plan generation method of this embodiment.
  • the spatial structure diagram acquisition process corresponding to any subspace is the same.
  • the target subspace to be edited is taken as an example for description.
  • the multiple subspaces in the target physical space can be divided into: subspaces for which the spatial structure diagrams have been acquired, subspaces for which the spatial structure diagrams have not been acquired, and subspaces for which the spatial structure diagrams are being acquired, that is, the target subspace currently to be edited.
  • a target space contour of the target subspace is acquired based on point cloud data and/or a panoramic image collected at at least one shooting point in the target subspace.
  • a first spatial contour can be obtained based on point cloud data of at least one shooting point, and the first spatial contour is directly used as the target spatial contour of the target subspace; or, a second spatial contour can be obtained based on a panoramic image of at least one shooting point, and the second spatial contour is directly used as the target spatial contour of the target subspace; or, a spatial contour with better contour line quality is selected from the above-mentioned first spatial contour and the above-mentioned second spatial contour as the target spatial contour of the target subspace; or, the contour lines of the above-mentioned first spatial contour and the above-mentioned second spatial contour are fused to obtain a spatial contour with better contour line quality, and the fused spatial contour is directly used as the spatial contour of the target subspace.
  • manual or automatic editing may be performed on the first space contour and/or the second space contour to use the edited space contour as the space contour of the target space.
  • the point cloud data collected at at least one shooting point in the target subspace is mapped to a two-dimensional plane to determine the two-dimensional point cloud image of the target subspace. Since the relative position relationship of at least one shooting point in the target subspace is known, the point cloud data collected at at least one shooting point can be fused based on the relative position relationship to obtain dense point cloud data, and then mapped to obtain a two-dimensional plane point cloud image. Afterwards, the spatial contour identified based on the two-dimensional point cloud image is displayed in the two-dimensional point cloud image, wherein the spatial contour is composed of multiple contour lines.
  • the shape and/or position of the target contour line in the spatial contour is adjusted so that the target contour line coincides with the wall line in the two-dimensional point cloud image, and the spatial contour composed of the contour line coincident with the wall line is determined as the target spatial contour of the target subspace.
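  • this snapping of a contour line onto a wall line can be sketched as follows (a minimal illustration, assuming wall lines have already been detected as axis-aligned coordinates in the two-dimensional point cloud image; the function name and tolerance value are assumptions, not part of the embodiment):

```python
def snap(line_pos, wall_positions, tol=0.15):
    """Snap one contour-line coordinate (e.g. the x position of a vertical
    contour line) to the nearest detected wall-line coordinate when it lies
    within `tol` of it; otherwise leave it unchanged for manual editing."""
    nearest = min(wall_positions, key=lambda w: abs(w - line_pos))
    return nearest if abs(nearest - line_pos) <= tol else line_pos

print(snap(2.05, [0.0, 2.0, 4.0]))  # 2.0  (snapped onto the wall line)
print(snap(2.50, [0.0, 2.0, 4.0]))  # 2.5  (too far; left for manual editing)
```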
  • the wall line in the point cloud image corresponds to the wall in the target subspace.
  • the point cloud data is actually a series of three-dimensional coordinate points, and any three-dimensional coordinate point can be represented by three-dimensional Cartesian coordinates (x, y, z), where x, y, z are the coordinate values of the x-axis, y-axis, and z-axis that have a common zero point and are orthogonal to each other.
  • the point cloud data collected at at least one shooting point of the target subspace is mapped to a two-dimensional plane to determine the two-dimensional point cloud image of the target subspace, including: converting the three-dimensional coordinate point (x, y, z) corresponding to the point cloud data collected at at least one shooting point of the target subspace into a two-dimensional coordinate point (x, y), for example: setting the z-axis coordinate value of the three-dimensional coordinate point to 0, and then obtaining the two-dimensional point cloud image of the target subspace based on the converted two-dimensional coordinate point.
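  • the projection described above can be sketched as follows (assuming, for illustration only, that the per-shooting-point clouds are NumPy arrays already registered into a common coordinate frame using the known relative positions):

```python
import numpy as np

def project_to_2d(clouds):
    """Fuse the point clouds collected at each shooting point into one dense
    cloud, then map it onto the horizontal plane by discarding the z
    coordinate: (x, y, z) -> (x, y)."""
    fused = np.vstack(clouds)   # dense cloud from all shooting points
    return fused[:, :2]         # drop the z axis

cloud_a = np.array([[1.0, 2.0, 0.5], [1.1, 2.0, 1.5]])  # shooting point A
cloud_b = np.array([[3.0, 0.0, 0.2]])                   # shooting point B
print(project_to_2d([cloud_a, cloud_b]))
```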
  • FIG13 is a schematic diagram of a space contour provided by the third embodiment of the present invention.
  • the contour line l in the space contour Z does not coincide with the wall line L in the point cloud image, but in fact the contour line l and the wall line L correspond to the same wall in the target subspace.
  • the contour line l can be used as the target contour line, and the length and position of the target contour line l can be adjusted through the preset editing options so that the target contour line l coincides with the wall line L in the point cloud image.
  • the space contour Z' formed by the contour line coinciding with the wall line is the target space contour of the target subspace.
  • the target contour lines in the space contour that do not match the actual wall positions may be deleted; or a target contour line corresponding to a certain wall line may be added.
  • the 2D point cloud image can also be cropped, for example, the area with unclear boundaries in the 2D point cloud image can be cropped.
  • the point cloud data corresponding to this area may be interference data collected by the laser sensor due to interference from glass, mirrors, etc. in the target subspace.
  • the unclear wall lines in the 2D point cloud image can be highlighted to accurately identify the spatial contour.
  • the target medium is first identified based on the panoramic image, and then the mapping medium corresponding to the target medium is determined, so as to obtain the spatial structure diagram of the target subspace.
  • the images corresponding to the doors and windows of the target space in the panoramic image are called the target medium, and the corresponding marks for the doors and windows in the spatial contour are called the mapping medium.
  • the target medium can be identified from the panoramic image by, for example, an image recognition algorithm, and the mapping medium can be determined based on the target medium.
  • the target medium may be included in the panoramic images corresponding to more than one shooting point.
  • a target panoramic image can be determined from the panoramic images of at least one shooting point for identifying the target medium.
  • the target panoramic image is a panoramic image that meets the preset recognition requirements, such as a panoramic image with the widest field of view and the best light, or a panoramic image containing user marking information (such as the best panoramic image).
  • the shooting point corresponding to the target panoramic image can be the same as or different from the shooting point corresponding to the point cloud data used to generate the spatial contour.
  • the target subspace contains two shooting points, namely shooting point A and shooting point B, and a panoramic image A1 and point cloud data A2 are obtained at shooting point A, and a panoramic image B1 and point cloud data B2 are obtained at shooting point B.
  • if the spatial contour is generated based on the point cloud data A2,
  • it can be determined that the panoramic image A1 is the target panoramic image,
  • or it can be determined that the panoramic image B1 is the target panoramic image;
  • similarly, if the spatial contour is generated based on the point cloud data B2,
  • either the panoramic image A1 or the panoramic image B1 can be determined as the target panoramic image.
  • the door body and the window have corresponding size information.
  • the mapping medium added to the spatial contour of the target subspace should at least be able to reflect the position information, size information and type information of the door body and/or window contained in the target subspace.
  • determining a mapping medium corresponding to a target medium in a spatial profile of a target subspace includes:
  • the panoramic pixel coordinates corresponding to the target medium in the target panoramic image are obtained and mapped to spatial contour coordinates, so as to determine the mapping medium corresponding to the target medium in the spatial contour of the target subspace.
  • the mapping medium is adapted to the target identification and target display size of the target medium.
  • the target identification is used to distinguish target media of different types. For example, a target medium belonging to a door body and a target medium belonging to a window correspond to different target identifications.
  • the above-mentioned mapping relationship between the target panoramic image and the space contour is a mapping between the target panoramic image and the space contour established based on the coordinate mapping between the point cloud data of the target subspace and the target panoramic image.
  • the relative position between the laser sensor and the camera has been pre-calibrated before the point cloud data and panoramic image are collected. Based on the pre-calibrated relative position and the relative position relationship between the actual shooting points in the target subspace, the coordinate mapping between the three-dimensional point cloud coordinates corresponding to the collected point cloud data and the panoramic pixel coordinates of the panoramic image can be determined.
  • the specific method of coordinate mapping of panoramic images and point cloud data is not limited.
  • the panoramic pixel coordinates can be directly mapped to three-dimensional point cloud coordinates, and the three-dimensional point cloud coordinates can be mapped to panoramic pixel coordinates according to the relative posture relationship between the devices for acquiring the panoramic image and the point cloud data;
  • the panoramic pixel coordinates can also be first mapped to intermediate coordinates, and then the intermediate coordinates can be mapped to three-dimensional point cloud coordinates, with the help of relative posture relationship and intermediate coordinate system;
  • the three-dimensional point cloud coordinates can be first mapped to intermediate coordinates, and then the intermediate coordinates can be mapped to panoramic pixel coordinates.
  • the specific type of the intermediate coordinate system is not limited, nor is the specific method used in the coordinate mapping process. The mapping method used will be different depending on the different intermediate coordinate systems and the different relative posture relationships.
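  • one concrete way to realize such a mapping — assuming, purely for illustration, an ideal equirectangular panorama centred on the sensor origin with no pose offset between the laser sensor and the camera — is to use spherical coordinates as the intermediate coordinate system:

```python
import math

def point_to_panorama_pixel(x, y, z, width, height):
    """Map a 3-D point cloud coordinate to equirectangular panoramic pixel
    coordinates. The azimuth/elevation pair acts as the intermediate
    coordinate system; a real system would first apply the calibrated
    relative pose between the laser sensor and the camera."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.atan2(y, x)      # azimuth in [-pi, pi]
    phi = math.asin(z / r)        # elevation in [-pi/2, pi/2]
    u = (theta / (2 * math.pi) + 0.5) * width   # column
    v = (0.5 - phi / math.pi) * height          # row (0 = top of image)
    return u, v

# A point straight ahead on the x axis lands in the image centre:
print(point_to_panorama_pixel(1.0, 0.0, 0.0, 2048, 1024))  # (1024.0, 512.0)
```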
  • a subspace to be edited is determined from the at least one subspace to acquire the spatial structure diagram of the subspace to be edited.
  • determining a subspace to be edited from at least one subspace includes: randomly determining a subspace from at least one subspace as the subspace to be edited.
  • determining a subspace to be edited from at least one subspace further includes: determining a subspace to be edited from at least one subspace according to the acquisition time point of the point cloud data and the panoramic image corresponding to the at least one subspace, wherein the difference between the acquisition time point corresponding to the subspace to be edited and the current moment is greater than the difference between the acquisition time point corresponding to other subspaces in the at least one subspace and the current moment.
  • the subspaces for which the spatial structure diagram is not obtained include: subspace a, subspace b and subspace c, and the acquisition time of the point cloud data corresponding to them is t1, t2 and t3, respectively, wherein t1 is earlier than t2, and t2 is earlier than t3.
  • subspace a is determined to be the subspace to be edited.
  • subspace b is determined to be the subspace to be edited, and so on, until the spatial structure diagrams of all subspaces are obtained.
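  • this "earliest acquisition first" selection can be sketched as follows (the dict-of-timestamps representation is an assumption for illustration):

```python
def next_subspace_to_edit(pending):
    """pending: subspace name -> acquisition timestamp of its point cloud
    data. The subspace to edit is the one whose data was acquired earliest,
    i.e. whose timestamp differs most from the current moment."""
    return min(pending, key=pending.get)

pending = {"a": 1.0, "b": 2.0, "c": 3.0}   # t1 earlier than t2 earlier than t3
order = []
while pending:
    s = next_subspace_to_edit(pending)
    order.append(s)
    del pending[s]                          # its structure diagram is acquired
print(order)  # ['a', 'b', 'c']
```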
  • determining a subspace to be edited from at least one subspace further includes: determining a connection relationship between the multiple subspaces according to the point cloud data corresponding to the multiple subspaces; determining a subspace to be edited from at least one subspace according to the connection relationship, wherein the subspace to be edited is connected to the target subspace.
  • assume subspace 1 and subspace 2 are connected through the same target door, and the target door is in an open state when the point cloud data and panoramic images are collected; in that case,
  • point cloud data 1 of subspace 1 and point cloud data 2 of subspace 2 may contain feature points m corresponding to the same object
  • panoramic image 3 of subspace 1 and panoramic image 4 of subspace 2 may contain image n corresponding to the same object.
  • the area corresponding to image n is outside subspace 1, but within the field of view of the camera when taking panoramic image 3.
  • the spatial connection relationship between subspace 1 and subspace 2 can be determined based on point cloud data 1 of subspace 1 and point cloud data 2 of subspace 2, and/or panoramic image 3 of subspace 1 and panoramic image 4 of subspace 2, by methods such as feature matching.
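  • a toy version of this connectivity test is shown below, with matched feature identifiers standing in for a real descriptor-matching step (e.g. matching local point or image features); the threshold value is an illustrative assumption:

```python
def connected(features_1, features_2, threshold=3):
    """Declare two subspaces connected when their scans share at least
    `threshold` matched features (e.g. points seen through an open door)."""
    return len(set(features_1) & set(features_2)) >= threshold

print(connected({"m1", "m2", "m3", "k1"}, {"m1", "m2", "m3"}))  # True
print(connected({"m1"}, {"k2"}))                                # False
```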
  • connection relationship may also be used to stitch together spatial structure diagrams of multiple subspaces.
  • the spatial structure diagrams of the multiple subspaces can be spliced according to the connection relationship of the multiple subspaces to obtain the floor plan of the target physical space.
  • the target physical space contains three subspaces, namely bedroom 1, bedroom 2 and living room 3, wherein bedroom 1 and living room 3 are connected through door body 1, bedroom 2 and living room 3 are connected through door body 2, and bedroom 1 and bedroom 2 are connected through window body 3.
  • each spatial structure diagram includes a spatial outline, and the spatial outline is marked with a mapping medium representing a door body and a window.
  • the spatial structure diagram x, the spatial structure diagram y and the spatial structure diagram z are spliced to obtain the floor plan of the target physical space, as shown in (c) of FIG. 9.
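  • the splicing step can be sketched as a rigid alignment of each outline onto the shared media (here reduced to a pure translation for brevity; the names and the door-anchor representation are illustrative assumptions):

```python
def splice(contours, door_local, door_global):
    """Translate every subspace outline so that its shared-door anchor
    (door_local[name], in the subspace's local frame) lands on that door's
    position in the global floor-plan frame (door_global[name]); the union
    of the translated outlines forms the floor plan. A full implementation
    would also resolve rotations from the connection relationship."""
    plan = {}
    for name, points in contours.items():
        lx, ly = door_local[name]
        gx, gy = door_global[name]
        dx, dy = gx - lx, gy - ly
        plan[name] = [(x + dx, y + dy) for (x, y) in points]
    return plan

plan = splice(
    {"bedroom 1": [(0, 0), (4, 0), (4, 3), (0, 3)]},
    door_local={"bedroom 1": (4, 1)},      # door body 1 in the local frame
    door_global={"bedroom 1": (10, 1)},    # the same door in the floor plan
)
print(plan["bedroom 1"][0])  # (6, 0)
```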
  • the spatial structure diagrams corresponding to the multiple subspaces are first obtained in sequence, and then the spatial structure diagrams corresponding to the multiple subspaces are spliced to determine the floor plan of the target physical space.
  • because the spatial structure diagram corresponding to each subspace is generated one by one, fewer computing resources are required, and the computing processing capabilities of most control devices can be accommodated, thereby increasing the application scenarios of the floor plan generation method of this embodiment.
  • the position of the mapping medium corresponding to the target medium in the target subspace in the spatial contour is determined, and the mapping medium corresponding to the target medium can be identified at the correct position on the spatial contour, so that the determined spatial structure diagram can accurately reflect the actual spatial structure information of the target subspace, and then the floor plan determined based on the multiple spatial structure diagrams can also accurately reflect the actual spatial structure of the target physical space.
  • FIG14 is a schematic diagram of the structure of a floor plan generating device provided in the third embodiment of the present invention.
  • the device is applied to a control terminal.
  • the device includes: an acquisition module 31 and a processing module 32 .
  • the acquisition module 31 is used to acquire point cloud data and panoramic images corresponding to multiple subspaces in the target physical space obtained by the information acquisition terminal, wherein the point cloud data and panoramic image of any subspace are collected at at least one shooting point in that subspace.
  • the processing module 32 is used to obtain a target space outline of the target subspace for the current target subspace to be edited according to the point cloud data and/or panoramic image collected at at least one shooting point of the target subspace during the process of sequentially performing spatial structure diagram acquisition processing on the multiple subspaces; obtain a target medium identified in the target panoramic image, wherein the target medium is an image of a physical medium in the target subspace in the target panoramic image, and the target panoramic image is a panoramic image used to identify the target medium in the panoramic image collected at at least one shooting point of the target subspace; determine a mapping medium used to represent the target medium on the target space outline of the target subspace to obtain a spatial structure diagram of the target subspace; in response to the completion of the acquisition of the spatial structure diagram of the target subspace, if there is no subspace among the multiple subspaces for which the spatial structure diagram has not been acquired, then obtain a floor plan of the target physical space obtained by splicing the spatial structure diagrams of the multiple spaces.
  • the processing module 32 is also used to respond to the completion operation of acquiring the spatial structure diagram of the target subspace. If there is at least one subspace among the multiple subspaces whose spatial structure diagram has not been acquired, a subspace to be edited is determined from the at least one subspace to acquire the spatial structure diagram of the subspace to be edited.
  • the processing module 32 is specifically used to determine a subspace to be edited from the at least one subspace based on the point cloud data corresponding to the at least one subspace and the acquisition time point of the panoramic image, wherein the difference between the acquisition time point corresponding to the subspace to be edited and the current moment is greater than the difference between the acquisition time points corresponding to other subspaces in the at least one subspace and the current moment.
  • the processing module 32 is further specifically used to determine the connection relationship between multiple subspaces based on the point cloud data corresponding to the multiple subspaces respectively; based on the connection relationship, determine a subspace to be edited from the at least one subspace, wherein the subspace to be edited is connected to the target subspace.
  • the processing module 32 is further specifically used to map the point cloud data collected at at least one shooting point in the target subspace to a two-dimensional plane to determine the two-dimensional point cloud image of the target subspace; display a spatial contour identified based on the two-dimensional point cloud image, wherein the spatial contour is composed of multiple contour lines; in response to a user's correction operation on the spatial contour, adjust the shape and/or position of the target contour line in the spatial contour so that the target contour line coincides with the wall line in the two-dimensional point cloud image; and determine that the spatial contour composed of the contour lines coinciding with the wall line is the target spatial contour of the target subspace.
  • the processing module 32 is further specifically used to obtain a mapping medium that matches the target identification and the target display size of the target medium, wherein the target identification is used to distinguish target media belonging to different types; determine the three-dimensional point cloud coordinates corresponding to the mapping medium according to the panoramic pixel coordinates corresponding to the target medium in the target panoramic image, and the relative posture between the device for obtaining the target subspace point cloud data and the panoramic image; and add the mapping medium to the target space contour of the target subspace according to the three-dimensional point cloud coordinates corresponding to the mapping medium.
  • the device shown in FIG. 14 can execute the steps in the aforementioned embodiments; for details of these steps, please refer to the description in the aforementioned embodiments, which will not be repeated here.
  • FIG. 15 is a flow chart of a method for generating a floor plan provided by a fourth embodiment of the present invention, which is applied to a control terminal. As shown in FIG. 15 , the method for generating a floor plan includes the following steps:
  • Step S151: acquire point cloud data and panoramic images collected by the information collection terminal in each of N spaces in the target physical space, wherein the point cloud data and panoramic image are collected at at least one shooting point in each space.
  • Step S152: obtain an Mth space outline of an Mth space among the N spaces and display it for editing, wherein the Mth space outline is obtained based on point cloud data and/or a panoramic image collected at at least one shooting point of the Mth space.
  • Step S153: obtain the target medium identified in the target panoramic image of the Mth space, determine the mapping medium of the target medium in the Mth space outline according to the target medium, and edit the Mth space outline according to the mapping medium to obtain the floor plan structure diagram of the Mth space.
  • the target panoramic image is a panoramic image, among the panoramic images captured at at least one shooting point in the Mth space, that is used to identify the target medium, and the target medium is the image of a physical medium in the Mth space in the target panoramic image.
  • Step S154: determine whether the Mth space is the last space among the N spaces for generating a floor plan; if not, execute step S155; if so, execute step S156.
  • Step S155: increment M (that is, assign M+1 to M) and return to step S152.
  • Step S156: obtain a floor plan of the target physical space consisting of the N floor plan structure diagrams for display, and end the process.
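  • steps S152 to S156 amount to a loop over M = 1..N; a schematic version follows (the two callables are placeholders for the contour/medium editing and splicing processes described in this embodiment):

```python
def generate_floor_plan(spaces, build_structure_diagram, splice):
    """Build each space's floor plan structure diagram in turn
    (steps S152-S155), then splice all N diagrams into the floor plan
    of the target physical space (step S156)."""
    diagrams = [build_structure_diagram(space) for space in spaces]  # M = 1..N
    return splice(diagrams)

plan = generate_floor_plan(
    ["bedroom 1", "bedroom 2", "living room 3"],
    build_structure_diagram=lambda s: f"diagram({s})",
    splice=lambda ds: " + ".join(ds),
)
print(plan)  # diagram(bedroom 1) + diagram(bedroom 2) + diagram(living room 3)
```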
  • the target physical space includes at least N spaces, M and N are natural numbers, and 1 ≤ M ≤ N.
  • the control terminal can be a terminal device with data processing capabilities, such as a smart phone, a tablet computer, or a laptop computer.
  • when generating a floor plan of the target physical space, the floor plan structure diagram of each space is first generated one by one in the order from 1 to N, and then the floor plan structure diagrams of the N spaces are spliced to generate the floor plan of the target physical space.
  • the floor plan structure diagram of each space is generated one by one.
  • on the one hand, this requires fewer computing resources and can adapt to the computing processing capabilities of most control devices; on the other hand, it is convenient for users to independently edit the floor plan structure diagram of each space so as to generate accurate floor plan structure diagrams for the N spaces respectively, thereby obtaining an accurate floor plan of the target physical space.
  • the mapping medium on the space outline in the floor plan structure diagram of each space is determined based on the panoramic image. Because the panoramic image reflects the actual positions of the door bodies and window bodies (i.e., the target medium) in the actual space better than the point cloud data does, with the assistance of the panoramic image, the floor plan structure diagram of each space is marked with more accurate door and window information and can better reflect the scene information in the actual space.
  • the following describes the method for generating a floor plan shown in FIG. 15 in conjunction with the schematic diagram of a scene for generating a floor plan shown in FIG. 16 .
  • FIG16 is a schematic diagram of a scenario for generating a floor plan provided by the fourth embodiment of the present invention.
  • the information collection terminal and the control terminal are decoupled from each other.
  • the information collection terminal collects point cloud data and panoramic images in each of the N spaces and sends them to the control terminal, so that the control terminal generates a floor plan of the target physical space based on the received point cloud data and panoramic images.
  • the information collection terminal can communicate with the control terminal through methods such as Bluetooth, Wireless Fidelity (WiFi) hotspots, etc.
  • the information collection terminal can also be directly integrated into the control terminal as a whole, and the control terminal can directly and synchronously obtain the point cloud data and panoramic images collected by the information collection terminal in each of the N spaces, without the need to transmit the point cloud data and panoramic images based on the established communication connection. In this way, the efficiency of the control terminal in generating the floor plan of the target physical space can be improved.
  • a floor plan of the target physical space can also be generated through a cloud server, where the cloud server can be a physical server or a virtual server in the cloud, and the control terminal communicates with the cloud server by accessing a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G/LTE, 5G and other mobile communication networks.
  • the cloud server can receive the point cloud data and panoramic images corresponding to the N spaces forwarded by the control terminal to generate a floor plan of the target physical space, and feed the floor plan back to the control terminal for display.
  • the cloud server can also communicate directly with the information collection terminal to directly obtain the point cloud data and panoramic images corresponding to the multiple subspaces collected by the information collection terminal to generate a floor plan of the target physical space.
  • the process of the cloud server generating the floor plan of the target physical space is the same as the process of the control terminal generating the spatial structure diagram, but because the cloud server has stronger computing power, it is more efficient in generating the floor plan of the target physical space, which can further enhance the user experience.
  • the point cloud data and panoramic image of each space are collected by the information collection terminal at at least one shooting point in the space.
  • the point cloud data and panoramic image of the Xth space include: the point cloud data Xa and panoramic image Xa collected at the shooting point a in the Xth space, and the point cloud data Xb and panoramic image Xb collected at the shooting point b in the Xth space.
  • when the information collection terminal collects point cloud data and panoramic images at each shooting point, the corresponding information collection process is the same.
  • a certain shooting point Y is taken as an example for explanation.
  • the information collection terminal includes: laser sensor, camera, motor and processor (such as CPU), etc.
  • the laser sensor and camera are used as sensing devices to collect scene information of each space at each shooting point, that is, point cloud data and panoramic images;
  • the motor is used to drive the laser sensor and camera to rotate so as to collect point cloud data and panoramic images from various angles.
  • the information collection terminal also includes an inertial measurement unit (IMU for short).
  • the IMU is used to correct the posture information corresponding to the collected point cloud data and image data to reduce errors caused by environmental or human factors (such as: the information collection terminal is not placed horizontally, etc.).
  • the information collection terminal responds to the information collection instruction, drives the motor to drive the laser sensor to rotate 360 degrees to collect the point cloud data corresponding to the target shooting point Y; drives the motor to drive the camera to rotate 360 degrees to collect the panoramic image corresponding to the target shooting point Y.
  • the information collection instruction is sent by the user through the control terminal, or is triggered in response to the user's trigger operation on the information collection terminal.
  • the motor can drive the laser sensor and the camera to rotate at the same time to collect point cloud data and panoramic images at the same time, or it can drive the laser sensor and the camera to rotate in sequence to collect point cloud data and panoramic images respectively. This embodiment does not limit this.
  • the camera can be turned on synchronously to collect scene lighting information of the current shooting point for light measurement and determine the corresponding exposure parameters. Afterwards, the camera collects the panoramic image based on the determined exposure parameters.
  • the camera in the information collection terminal is a panoramic camera or a non-panoramic camera. If the camera in the information collection terminal is a non-panoramic camera, then during the above-mentioned 360-degree rotation process, the camera is controlled to capture images corresponding to the target shooting point Y at multiple preset angles, and the above-mentioned processor can stitch the images captured at multiple preset angles into a panoramic image through a panoramic image stitching algorithm such as a feature matching algorithm.
  • for example, a certain reference direction can be taken as 0 degrees, and a degrees, (a+120) degrees and (a+240) degrees relative to the reference direction are determined as the preset angles; the camera is controlled to capture images at these three preset angles to obtain image 1, image 2 and image 3, and images 1 to 3 are then stitched to obtain a panoramic image.
  • the number of preset angles can be customized by the user according to the viewing angle of the camera, and the images taken based on multiple preset angles contain scene information within a 360-degree range of the current point.
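  • the relationship between the camera's viewing angle and the number of preset angles can be sketched as follows (assuming evenly spaced shots with no stitching overlap margin; a practical system would add some overlap between adjacent shots):

```python
import math

def preset_angles(fov_degrees, reference=0.0):
    """Smallest number of evenly spaced shots whose fields of view cover the
    full 360-degree range, and the capture angles relative to `reference`.
    A 120-degree field of view reproduces the a / a+120 / a+240 example."""
    n = math.ceil(360 / fov_degrees)
    step = 360 / n
    return [(reference + i * step) % 360 for i in range(n)]

print(preset_angles(120))      # [0.0, 120.0, 240.0]
print(len(preset_angles(90)))  # 4
```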
  • High Dynamic Range Imaging can be combined to generate a high-quality panoramic image.
  • after the information collection terminal has collected the point cloud data and panoramic image at a shooting point, it can directly send them to the control terminal; or it can first store them and, after completing the collection at all shooting points in the current space, send the point cloud data and panoramic images of all shooting points in that space to the control terminal.
  • This embodiment does not impose any restrictions on this.
  • after receiving the point cloud data and panoramic images of each of the N spaces in the target physical space, the control terminal generates the floor plan structure diagrams of the N spaces one by one, as shown in FIG. 16. For example, the floor plan structure diagram of the first space is generated first, then that of the second space, and so on, until the floor plan structure diagram of the Nth space is generated.
  • the order of obtaining the floor plan diagrams of the N spaces can be determined according to the acquisition time points of the point cloud data and/or panoramic images corresponding to the N spaces, that is, the N spaces can be sorted.
  • the value of N is 3, that is, the target physical space contains 3 spaces, namely, space a, space b and space c, wherein the acquisition time point of the point cloud data corresponding to space a is t1, the acquisition time point of the point cloud data corresponding to space b is t2, and the acquisition time point of the point cloud data corresponding to space c is t3, wherein t1 is earlier than t2, and t2 is earlier than t3.
  • The above three spaces can be sorted according to the order of t1, t2 and t3, for example: space a is the first space, space b is the second space, and space c is the third space. Then, the floor plan diagrams of each space are generated one by one in the order from 1 to 3. Optionally, the order of generating the floor plan diagrams corresponding to the N spaces can also be randomly determined.
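As an illustration, sorting the spaces by acquisition time is a standard key-based sort; the space names and timestamps below are invented to mirror the t1/t2/t3 example above.

```python
from datetime import datetime

# Hypothetical records: each space pairs an identifier with the acquisition
# time of its point cloud data (t1 < t2 < t3, as in the example).
spaces = [
    {"name": "space b", "acquired_at": datetime(2022, 11, 1, 10, 5)},  # t2
    {"name": "space c", "acquired_at": datetime(2022, 11, 1, 10, 9)},  # t3
    {"name": "space a", "acquired_at": datetime(2022, 11, 1, 10, 1)},  # t1
]

# Sort by acquisition time so floor plan diagrams are generated in
# collection order: space a first, then space b, then space c.
ordered = sorted(spaces, key=lambda s: s["acquired_at"])
print([s["name"] for s in ordered])
```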
  • the floor plan of the target physical space is composed of floor plan diagrams of N spaces.
  • the floor plan of the target physical space and the floor plan diagram of each space can be understood as a two-dimensional plan of the space.
  • the difference between the two is that the floor plan of the target physical space corresponds to more spaces, and the two-dimensional plan is "larger", while the floor plan diagram of each space is only a two-dimensional plan of the current space, and the two-dimensional plan is "smaller".
  • the two-dimensional plan is more commonly understood as a top view of the physical space.
  • each floor plan includes a space outline and a mapping medium.
  • the space outline is used to represent the walls in the physical space
  • the mapping medium is used to represent the windows and doors in the physical space. Therefore, the process of obtaining the floor plan of each space can be further divided into the process of determining the space outline and the process of determining the mapping medium, which are described below.
  • each space corresponds to a space profile.
  • the space profile of the Mth space is recorded as the Mth space profile.
  • the second space profile is the space profile of the second space
  • the third space profile is the space profile of the third space.
  • the Mth space outline of the Mth space can be determined based on the point cloud data and/or panoramic image collected from at least one shooting point of the Mth space, and the Mth space outline can be displayed for editing.
  • A first spatial contour can be obtained based on the point cloud data of at least one shooting point in the Mth space and used directly as the Mth spatial contour; or a second spatial contour can be obtained based on the panoramic image of at least one shooting point in the Mth space and used directly as the Mth spatial contour; or the spatial contour with the better contour line quality can be selected from the first spatial contour and the second spatial contour as the Mth spatial contour; or the contour lines of the first spatial contour and the second spatial contour can be fused to obtain a spatial contour with better contour line quality, and the fused spatial contour used as the Mth spatial contour.
  • manual or automatic editing may be performed on the first spatial contour and/or the second spatial contour, so that the edited spatial contour is used as the Mth spatial contour.
  • the Mth space outline includes multiple contour lines, some of which may not match the actual positions of the walls in the Mth space.
  • To correct this, the Mth space outline can be edited.
  • the Mth space outline may be displayed in the two-dimensional point cloud image of the Mth space, and then, in response to the user's editing operation on the Mth space outline, the outline of the Mth space outline is adjusted so that the outline of the Mth space outline coincides with the wall line in the two-dimensional point cloud image.
  • the wall line in the two-dimensional point cloud image corresponds to the wall in the Mth space.
  • the two-dimensional point cloud image of the Mth space is obtained by plane mapping the point cloud data of at least one shooting point in the Mth space. Since the relative position relationship of at least one shooting point in the Mth space is known, the point cloud data collected at at least one shooting point can be fused based on the relative position relationship to obtain dense point cloud data, and then mapped to obtain a two-dimensional point cloud image.
  • the point cloud data of at least one shooting point in the Mth space among the N spaces are first fused to determine the target point cloud data of the Mth space; then, the target point cloud data is mapped to a two-dimensional plane to obtain the initial two-dimensional point cloud image of the Mth space; thereafter, according to the user's correction operation on the initial two-dimensional point cloud image (for example: cropping out the area with unclear boundaries in the two-dimensional point cloud image, highlighting the unclear wall lines in the two-dimensional point cloud image, etc.), it is determined that the target two-dimensional point cloud image obtained after the correction operation is the two-dimensional point cloud image of the Mth space.
  • the point cloud data is actually a series of three-dimensional coordinate points, and any three-dimensional coordinate point can be represented by three-dimensional Cartesian coordinates (x, y, z), where x, y, z are the coordinate values of the x-axis, y-axis, and z-axis that have a common zero point and are orthogonal to each other.
  • the target point cloud data of the Mth space is mapped to a two-dimensional plane to obtain an initial two-dimensional point cloud image of the Mth space, including: converting the three-dimensional coordinate points (x, y, z) corresponding to the target point cloud data of the Mth space into two-dimensional coordinate points (x, y), for example: setting the z-axis coordinate value of the three-dimensional coordinate point to 0, and then obtaining the initial two-dimensional point cloud image of the Mth space based on the converted two-dimensional coordinate point.
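The fusion and plane-mapping steps above can be sketched roughly as follows, assuming the relative positions of the shooting points are known as simple translation offsets (a simplification for illustration; a full implementation would use complete poses):

```python
import numpy as np

def fuse_and_project(clouds, offsets):
    """Fuse the point clouds of several shooting points using their known
    relative positions (simplified here to translation offsets), then map
    the fused cloud to a two-dimensional plane by discarding the z value."""
    fused = np.vstack([cloud + offset for cloud, offset in zip(clouds, offsets)])
    return fused[:, :2]  # two-dimensional coordinate points (x, y)

cloud_p1 = np.array([[0.0, 0.0, 1.2], [1.0, 0.0, 1.2]])  # shooting point 1
cloud_p2 = np.array([[0.0, 0.0, 1.2]])                   # shooting point 2
# Assumed relative position: shooting point 2 is 2 m along x from point 1.
points_2d = fuse_and_project([cloud_p1, cloud_p2],
                             [np.zeros(3), np.array([2.0, 0.0, 0.0])])
```

The resulting two-dimensional coordinate points can then be rasterized into the initial two-dimensional point cloud image.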
  • the two-dimensional point cloud image of the Mth space may also be used to obtain the Mth space contour of the Mth space, for example, by performing edge detection on the two-dimensional point cloud image of the Mth space to obtain the Mth space contour.
  • the shape and/or position of the contour lines in the Mth space contour can be adjusted based on the preset contour line editing options, or the contour lines for which there are no corresponding wall lines can be deleted, or the contour lines corresponding to a certain wall line can be added.
  • FIG17 is a schematic diagram of the Mth space contour provided by the fourth embodiment of the present invention.
  • the contour line l in the Mth space contour does not coincide with the wall line L in the two-dimensional point cloud image of the Mth space, but in fact, the contour line l and the wall line L correspond to the same wall in the Mth space.
  • the length and position of the contour line l can be adjusted through the preset contour line editing options so that the contour line l coincides with the wall line L in the two-dimensional point cloud image, and the edited Mth space contour is obtained, as shown in the right figure of FIG17.
  • the contour line h can be added through the preset contour line editing option, and the contour line h coincides with the wall line H in the two-dimensional point cloud image, as shown in the right figure of FIG17.
  • the contour lines of the finally determined Mth spatial contour are connected to each other.
  • the above is the process of determining the Mth space profile.
  • the following describes the process of determining the mapping medium in the Mth space profile.
  • the corresponding mapping medium in the floor plan can be obtained in the following way: first, the corresponding target medium is identified from the panoramic image through methods such as image recognition; the target medium is the image, in the panoramic image, of a physical medium (a door or window) in the Mth space. Then, the corresponding mapping medium is determined based on the target medium.
  • the target medium may be included in the panoramic images corresponding to more than one shooting point in the Mth space.
  • a target panoramic image can be determined from the panoramic images of at least one shooting point for identifying the target medium.
  • the target panoramic image is a panoramic image that meets the preset recognition requirements, such as a panoramic image with the widest field of view and the best light, or a panoramic image containing user marking information (such as the best panoramic image).
  • the shooting point corresponding to the target panoramic image may be the same as or different from the shooting point corresponding to the point cloud data used to generate the space contour.
  • the Mth space contains two shooting points, namely shooting point A and shooting point B, and a panoramic image A1 and point cloud data A2 are obtained at shooting point A, and a panoramic image B1 and point cloud data B2 are obtained at shooting point B.
  • if the Mth space contour is generated based on the point cloud data A2, the panoramic image A1 can be determined to be the target panoramic image, in which case the two correspond to the same shooting point;
  • if the Mth space contour is generated based on the point cloud data B2, the panoramic image A1 can still be determined to be the target panoramic image, in which case the two correspond to different shooting points.
  • the coordinate mapping between the three-dimensional point cloud coordinates corresponding to the collected point cloud data and the panoramic pixel coordinates of the panoramic image can be determined.
  • mapping between the target panoramic image and the Mth space outline can be established based on the coordinate mapping between the point cloud data of the Mth space and the target panoramic image, that is, the mapping relationship between the target panoramic image of the Mth space and the Mth space outline is predetermined.
  • Obtaining the mapping medium of the target medium in the Mth space contour according to the target medium includes: according to the mapping relationship between the target panoramic image of the Mth space and the Mth space contour, mapping the panoramic pixel coordinates corresponding to the target medium in the target panoramic image to space contour coordinates, so as to determine the mapping medium corresponding to the target medium in the Mth space contour and thereby obtain the floor plan of the Mth space.
  • the mapping medium is adapted to the target identifier and the target display size of the target medium; the target identifier is used to distinguish target media of different types (door bodies or windows).
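A minimal sketch of how a detected target medium might be projected onto the space contour, assuming a pixel-to-contour mapping has already been derived from the coordinate mapping described above; the function names and the toy pixel-to-metre mapping are hypothetical.

```python
def place_mapping_medium(detection, pixel_to_contour):
    """detection: {'type': 'door'|'window', 'pixels': [(u, v), (u, v)]},
    the two panoramic pixels bounding the detected medium.
    pixel_to_contour: callable mapping a panoramic pixel to (x, y) on the
    space contour. Returns the mapping medium as a typed 2-D segment whose
    length gives the target display size."""
    p0 = pixel_to_contour(*detection["pixels"][0])
    p1 = pixel_to_contour(*detection["pixels"][1])
    size = ((p1[0] - p0[0]) ** 2 + (p1[1] - p0[1]) ** 2) ** 0.5
    return {"id": detection["type"], "endpoints": (p0, p1), "size": size}

# Toy mapping that scales pixel u to metres (assumption for illustration).
medium = place_mapping_medium(
    {"type": "door", "pixels": [(100, 240), (140, 240)]},
    lambda u, v: (u * 0.01, 0.0),
)
```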
  • the specific method of coordinate mapping of panoramic images and point cloud data is not limited.
  • the panoramic pixel coordinates can be mapped directly to three-dimensional point cloud coordinates, and the three-dimensional point cloud coordinates mapped to panoramic pixel coordinates, according to the relative pose relationship between the devices that acquire the panoramic image and the point cloud data;
  • the panoramic pixel coordinates can also first be mapped to intermediate coordinates, and the intermediate coordinates then mapped to three-dimensional point cloud coordinates, with the help of the relative pose relationship and an intermediate coordinate system;
  • likewise, the three-dimensional point cloud coordinates can first be mapped to intermediate coordinates, and the intermediate coordinates then mapped to panoramic pixel coordinates.
  • Neither the specific type of the intermediate coordinate system nor the specific method used in the coordinate mapping process is limited; the mapping method used will differ depending on the intermediate coordinate system and the relative pose relationship.
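As one concrete example of the direct mapping, if the panoramic image is assumed to use an equirectangular projection, a pixel can be converted to a viewing ray; combined with a depth from the laser sensor and the relative pose, the ray yields the corresponding three-dimensional point cloud coordinate. The geometry below is a common convention, not necessarily the one used in this embodiment.

```python
import math

def panorama_pixel_to_ray(u, v, width, height):
    """Map an equirectangular panoramic pixel (u, v) to a unit direction
    vector. Multiplying the ray by a measured depth, then applying the
    camera-to-sensor relative pose, gives a 3-D point cloud coordinate."""
    lon = (u / width) * 2.0 * math.pi - math.pi    # longitude in [-pi, pi)
    lat = math.pi / 2.0 - (v / height) * math.pi   # latitude in [-pi/2, pi/2]
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The centre pixel of a 2000x1000 panorama looks straight ahead (+z).
ray = panorama_pixel_to_ray(1000, 500, 2000, 1000)
```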
  • After determining the floor plan of the Mth space, determine whether the Mth space is the last of the N spaces for which a floor plan diagram is generated. If M is less than N, there are still spaces for which floor plan diagrams have not been obtained, so M is assigned M+1 to obtain the floor plan diagram of the (M+1)th space; if M is equal to N, there are no such spaces left, and the first floor plan diagram, the second floor plan diagram, ..., and the Nth floor plan diagram are spliced in the same spatial coordinate system to form the floor plan of the target physical space.
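The loop just described (generate the Mth diagram, increment M until M equals N, then splice all N diagrams in one coordinate system) can be sketched as follows; the function names are hypothetical placeholders for the actual generation and splicing steps.

```python
def generate_floor_plan(spaces, make_diagram, splice):
    """Driver for the M = 1..N loop: generate the floor plan diagram of
    each space in turn, then splice all N diagrams in the same spatial
    coordinate system once M reaches N."""
    diagrams = []
    for m, space in enumerate(spaces, start=1):
        diagrams.append(make_diagram(space))
        if m == len(spaces):           # M equals N: no space is left
            return splice(diagrams)    # floor plan of the target space
        # M < N: continue with space M+1

plan = generate_floor_plan(
    ["space a", "space b", "space c"],
    make_diagram=lambda s: f"diagram({s})",
    splice=lambda ds: " + ".join(ds),
)
```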
  • For the N subspaces in the target physical space, the connection relationship between the N spaces can be determined based on the point cloud data of each space, or based on the panoramic image of each space. Then, based on the connection relationship, the N floor plan diagrams are converted to the same spatial coordinate system and spliced in that coordinate system.
  • the N spaces in the target physical space are connected to each other through doors or windows.
  • Suppose the Eth space and the Fth space are connected through the same target door, and the target door is in an open state when the point cloud data and panoramic images are collected. Then the point cloud data 1 of the Eth space and the point cloud data 2 of the Fth space may contain feature points m corresponding to the same object, and the panoramic image 3 of the Eth space and the panoramic image 4 of the Fth space may contain images n corresponding to the same object.
  • connection relationship between the Eth space and the Fth space can be determined by, for example, feature matching, based on the point cloud data 1 of the Eth space and the point cloud data 2 of the Fth space, and/or, the panoramic image 3 of the Eth space and the panoramic image 4 of the Fth space.
  • the aforementioned same object is within the collection range of the camera or laser sensor during both the data collection in the Eth space and the data collection in the Fth space.
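Once matched feature points m are available in both spaces, a 2-D rigid transform between the Eth and Fth spaces can be estimated, for example with the Kabsch/Procrustes method, which fixes the connection relationship for splicing. This is an illustrative sketch with toy data, not the patented procedure.

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Estimate rotation R and translation t aligning matched 2-D feature
    points src (Fth space frame) to dst (Eth space frame) via the
    Kabsch/Procrustes method."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:          # guard against a reflection solution
        vt[-1] *= -1
        r = vt.T @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t

# Three points on the shared target door seen from both spaces (toy data:
# the Fth-space points rotated 90 degrees and shifted by (2, 1)).
f_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
e_pts = np.array([[2.0, 1.0], [2.0, 2.0], [1.0, 1.0]])
R, t = rigid_transform_2d(f_pts, e_pts)
```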
  • FIG18 is a schematic diagram of a floor plan generation process provided by the fourth embodiment of the present invention, wherein a in FIG18 is the actual spatial structure of the target physical space, b in FIG18 is the floor plan structure diagram of three spaces in the target physical space, and c in FIG18 is the floor plan of the target physical space.
  • the target physical space contains three spaces, namely bedroom 1, bedroom 2 and living room 3, wherein bedroom 1 and living room 3 are connected through door body 1, bedroom 2 and living room 3 are connected through door body 2, and bedroom 1 and bedroom 2 are connected through window body 3. It is assumed that bedroom 1 is determined as the first space, bedroom 2 is determined as the second space, and living room 3 is determined as the third space according to the corresponding point cloud data collection time.
  • the floor plan of the first space, the floor plan of the second space and the floor plan of the third space are generated in sequence from 1 to 3, as shown in b in FIG18.
  • Each floor plan includes a space outline, and the space outline is marked with a mapping medium representing a door body and a window. Since there is no space for which the floor plan is not obtained after the floor plan of the third space is determined, the floor plan of the target physical space is obtained by splicing the floor plan of the first space, the floor plan of the second space and the floor plan of the third space in the same spatial coordinate system according to the connection relationship between bedroom 1, bedroom 2 and living room 3, as shown in c in FIG18.
  • Figure 19 is a structural schematic diagram of a floor plan generating device provided in the fourth embodiment of the present invention. The device is applied to a control terminal and is used to generate a floor plan of a target physical space, wherein the target physical space includes at least N spaces.
  • the device includes: an acquisition module 41, a first processing module 42 and a second processing module 43.
  • the acquisition module 41 is used to acquire the point cloud data and the panoramic image collected by the information collection terminal in each of the N spaces, wherein the point cloud data and the panoramic image are collected from at least one shooting point in each of the spaces.
  • the first processing module 42 is used to obtain the Mth space outline of the Mth space in the N spaces for display for editing, and the Mth space outline is obtained based on point cloud data and/or panoramic images collected from at least one shooting point of the Mth space; obtain a target medium identified in a target panoramic image of the Mth space, so as to obtain a mapping medium of the target medium in the Mth space outline based on the target medium, so as to edit the Mth space outline based on the mapping medium, so as to obtain a floor plan of the Mth space; the target panoramic image is a panoramic image for identifying the target medium in a panoramic image collected from at least one shooting point of the Mth space, and the target medium is an image of a physical medium in the Mth space in the target panoramic image.
  • the second processing module 43 is used to determine whether the Mth space is the last space among the N spaces for which a floor plan diagram is generated; if not, M is assigned the value M+1 and the process returns to the first processing module; if so, the floor plan of the target physical space composed of the N floor plan diagrams is obtained for display, and the process ends; wherein M and N are natural numbers, and 1 ≤ M ≤ N.
  • the first processing module 42 is specifically used to display the Mth space contour of the Mth space in the two-dimensional point cloud image of the Mth space in the N spaces; wherein the two-dimensional point cloud image is obtained after plane mapping of the point cloud data of at least one shooting point in the Mth space; in response to an editing operation on the Mth space contour, adjust the contour line of the Mth space contour so that the contour line coincides with the wall line in the two-dimensional point cloud image.
  • the first processing module 42 is specifically used to fuse the point cloud data of at least one shooting point of the Mth space among the N spaces to determine the target point cloud data of the Mth space; map the target point cloud data to a two-dimensional plane to obtain an initial two-dimensional point cloud image of the Mth space; and determine the two-dimensional point cloud image of the Mth space based on the user's correction operation on the initial two-dimensional point cloud image.
  • the first processing module 42 is specifically used to obtain the panoramic pixel coordinates corresponding to the target medium in the target panoramic image and the mapped spatial contour coordinates according to the mapping relationship between the target panoramic image of the Mth space and the Mth spatial contour, so as to determine the mapping medium corresponding to the target medium in the Mth spatial contour; wherein the mapping medium is adapted to the target identifier and the target display size of the target medium, the target identifier is used to distinguish target media belonging to different types, and the mapping relationship is a mapping between the target panoramic image and the Mth spatial contour established based on the coordinate mapping between the point cloud data of the Mth space and the target panoramic image.
  • the first processing module 42 is specifically configured to sort the N spaces according to acquisition time points of the point cloud data and/or panoramic images respectively corresponding to the N spaces.
  • the device shown in FIG. 19 can execute the steps in the aforementioned embodiments.
  • the floor plan generation system includes: an information collection terminal and a control terminal.
  • the information collection terminal can be directly integrated with the control terminal as a whole; or the information collection terminal can be decoupled from the control terminal and set up separately, in which case the information collection terminal communicates with the control terminal through, for example, Bluetooth or a Wireless Fidelity (WiFi) hotspot.
  • the information collection terminal includes: a laser sensor, a camera, a motor and a processor (such as a CPU).
  • the laser sensor and the camera are used as perception devices to collect point cloud data and panoramic images corresponding to multiple subspaces in the target physical space, that is, scene information of multiple subspaces.
  • more than one shooting point may be set in any subspace, for example: subspace X corresponds to shooting point 1, shooting point 2 and shooting point 3. Therefore, in this embodiment, the point cloud data and panoramic image of any subspace refer to the point cloud data and panoramic image collected at at least one shooting point in any subspace.
  • the setting of shooting points can be customized by the user based on the current collection situation when the user collects the scene information of each subspace through the information collection terminal; or the information collection terminal or the control terminal can automatically generate reference shooting points for the subspace based on the description information of the subspace (such as the size of the space, etc.) input by the user.
  • In response to an information collection instruction, the information collection terminal drives the motor to rotate the laser sensor 360 degrees to collect the point cloud data corresponding to the target shooting point Y, and drives the motor to rotate the camera 360 degrees to collect the panoramic image corresponding to the target shooting point Y.
  • the information collection instruction is sent by the control terminal, or the information collection instruction is automatically triggered in response to the user's operation on the information collection terminal.
  • the motor can drive the laser sensor and the camera to rotate at the same time, so as to collect the point cloud data and panoramic image simultaneously; or it can drive the laser sensor and the camera to rotate in sequence, for example: first drive the laser sensor to rotate and then the camera, or first drive the camera to rotate and then the laser sensor, so as to collect the point cloud data and panoramic image in sequence.
  • This embodiment does not impose any restrictions on this.
  • the camera can be turned on synchronously during the process of collecting point cloud data to collect scene lighting information of the current shooting point for light measurement and determine the corresponding exposure parameters. Afterwards, the camera collects the panoramic image based on the determined exposure parameters.
  • the camera in the information collection terminal is a panoramic camera or a non-panoramic camera. If the camera in the information collection terminal is a non-panoramic camera, then during the above-mentioned 360-degree rotation process, the camera is controlled to capture images corresponding to the target shooting point Y at multiple preset angles, and the above-mentioned processor can stitch the images captured at multiple preset angles into a panoramic image through a panoramic image stitching algorithm such as a feature matching algorithm. Among them, multiple preset angles can be customized by the user according to the viewing angle of the camera.
  • For example, a certain reference direction can be taken as 0 degrees, and the angles a degrees and (a+180) degrees relative to the reference direction can be determined as preset angles.
  • the images captured based on multiple preset angles contain scene information within a 360-degree range of the current shooting point.
  • High Dynamic Range Imaging can be combined to generate a high-quality panoramic image.
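One plausible way to derive the preset angles from the camera's viewing angle, as described above, is to step around 360 degrees leaving some overlap between adjacent shots for feature-based stitching; the 10-degree overlap margin below is an assumed illustration, not a value from this embodiment.

```python
import math

def preset_angles(horizontal_fov_deg, overlap_deg=10.0):
    """Compute evenly spaced capture angles for a non-panoramic camera so
    that the shots cover 360 degrees with overlap for stitching.
    horizontal_fov_deg is the camera's horizontal viewing angle."""
    step = horizontal_fov_deg - overlap_deg   # usable angular advance per shot
    n = math.ceil(360.0 / step)               # number of preset angles needed
    return [round(i * 360.0 / n, 1) for i in range(n)]

# e.g. a camera with a 100-degree viewing angle needs 4 shots
angles = preset_angles(100.0)
```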
  • the information acquisition terminal also includes an inertial measurement unit (IMU).
  • the IMU is used to correct the pose information corresponding to the collected point cloud data and image data, to reduce errors caused by environmental or human factors (for example, the information collection terminal not being placed horizontally).
  • the control terminal is used to generate, based on the point cloud data and panoramic images corresponding to the multiple subspaces in the target physical space sent by the information collection terminal, a floor plan diagram for each subspace in turn; each newly generated floor plan diagram is spliced with the previously generated ones, finally yielding the floor plan of the target physical space.
  • the control terminal can be a terminal device with data processing capabilities, such as a smart phone, a tablet computer, or a laptop computer.
  • the floor plan generation system shown in Figure 7 may also include a cloud server, which may be a physical server or a virtual server in the cloud.
  • the control terminal communicates with the cloud server by accessing a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G/LTE, 5G and other mobile communication networks.
  • the cloud server may receive point cloud data and panoramic images corresponding to the multiple subspaces forwarded by the control terminal to generate a floor plan of the target physical space, and feed the floor plan back to the control terminal for display.
  • the process by which the cloud server generates the floor plan of the target physical space is the same as the process by which the control terminal generates it, but because the cloud server has stronger computing power, it generates the floor plan of the target physical space more efficiently, which can further improve the user experience.
  • the cloud server may also be directly connected to the information collection terminal to directly obtain the point cloud data and panoramic images corresponding to the multiple subspaces collected by the information collection terminal to generate a floor plan of the target physical space.
  • the following is an explanation of the process of generating a floor plan of a target physical space based on a control terminal.
  • FIG. 20 is a flow chart of a method for generating a floor plan according to a fifth embodiment of the present invention, which is applied to a control terminal. As shown in FIG. 20 , the method for generating a floor plan includes the following steps:
  • a target space outline of the first subspace is obtained based on point cloud data and/or panoramic images collected at at least one shooting point of the first subspace.
  • the target panoramic image is a panoramic image captured at at least one shooting point in the first subspace and is used to identify the target medium.
  • the floor plan diagram of the first subspace is spliced with the floor plan diagram of the second subspace, where the second subspace is a subspace whose floor plan diagram has already been generated and spliced before the first subspace.
  • the splicing result is determined as the floor plan of the target physical space.
  • In step 2001, the specific process by which the information collection terminal collects the point cloud data and panoramic images of the multiple subspaces in the target physical space can be referred to in the aforementioned embodiments and will not be described in detail here.
  • If the information collection terminal is integrated in the control terminal, the control terminal can directly and synchronously acquire the point cloud data and panoramic images of the multiple subspaces obtained by the information collection terminal; if the information collection terminal is communicatively connected to the control terminal through a communication link, the control terminal receives the point cloud data and panoramic images of the multiple subspaces sent by the information collection terminal over the pre-established communication link.
  • The floor plan diagrams of the multiple subspaces are generated one by one; each time the floor plan diagram of a subspace is generated, it is spliced with the previously generated floor plan diagrams, until the floor plan diagram of the last subspace is generated and spliced, and the final splicing result is determined to be the floor plan of the target physical space. Because generating the floor plan diagram of a single subspace requires fewer computing resources, this adapts to the processing capabilities of most control devices; and because splicing is performed while the diagrams are generated, the user can confirm each splicing result immediately, ensuring the accuracy of the generated floor plan of the target physical space.
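The splice-while-generating workflow just described might be sketched as follows, with `make_diagram`, `splice_pair` and `confirm` as hypothetical callbacks standing in for the actual diagram generation, splicing, and immediate user confirmation.

```python
def incremental_floor_plan(subspaces, make_diagram, splice_pair, confirm):
    """Generate the floor plan diagram of each subspace in turn; each new
    diagram is immediately spliced with the result so far, and the user
    confirms each splicing result before the next subspace is processed."""
    plan = None
    for sub in subspaces:
        diagram = make_diagram(sub)
        plan = diagram if plan is None else splice_pair(plan, diagram)
        confirm(plan)   # immediate confirmation of the splicing result
    return plan         # floor plan of the target physical space

plan = incremental_floor_plan(
    ["bedroom 1", "bedroom 2", "living room 3"],
    make_diagram=lambda s: [s],
    splice_pair=lambda a, b: a + b,
    confirm=lambda p: None,
)
```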
  • the generation process and splicing process of the floor plan diagram of each subspace are similar.
  • the subspace whose floor plan diagram is to be generated and spliced next is called the first subspace;
  • a subspace whose floor plan diagram has already been generated and spliced before the first subspace is called the second subspace. It can be understood that the subspaces corresponding to the first subspace and the second subspace are constantly updated.
  • The generation and splicing process of the floor plan diagram of the first subspace is described first; the updating of the first subspace and the second subspace is described in subsequent embodiments.
  • the floor plan of the target physical space is composed of floor plan diagrams of multiple subspaces.
  • the floor plan of the target physical space and the floor plan diagram of each subspace can be understood as the two-dimensional plan of the space.
  • the difference between the two is that the floor plan of the target physical space corresponds to more subspaces, and the two-dimensional plan is "larger", while the floor plan diagram of each subspace is only the two-dimensional plan of the current subspace, and the two-dimensional plan is "smaller".
  • the two-dimensional plan is more commonly understood as a bird's-eye view of the physical space.
  • the floor plan of a subspace includes a space outline for representing the walls in the subspace and a mapping medium for representing the doors and windows in the subspace. Therefore, when determining the floor plan of a subspace, the space outline and mapping medium of the subspace must be obtained first.
  • the target space contour of the first subspace can be obtained based on the point cloud data and/or panoramic image collected at at least one shooting point of the first subspace.
  • the first space contour of the first subspace can be determined based on the point cloud data collected at at least one shooting point of the first subspace; the second space contour of the first subspace can be determined based on the panoramic image collected at at least one shooting point of the first subspace.
  • the target space contour of the first subspace is determined based on the first space contour and/or the second space contour.
  • the target space contour contains multiple contour lines. Among them, there are some contour lines that do not match the actual position of the wall in the first subspace. In order to ensure that the target space contour can accurately reflect the first subspace, the contour lines in the above target space contour can be edited. Therefore, after determining the target space contour of the first subspace according to the first space contour and/or the second space contour, manual or automatic editing processing can also be performed on the target space contour.
  • the target space contour of the first subspace may be displayed in the two-dimensional point cloud image of the first subspace, and then, in response to the user's editing operation on the target space contour, the contour line of the target space contour is adjusted so that the contour line of the target space contour coincides with the wall line in the two-dimensional point cloud image.
  • the wall line in the two-dimensional point cloud image corresponds to the wall in the first subspace.
  • the two-dimensional point cloud image of the first subspace is obtained by plane mapping the point cloud data of at least one shooting point in the first subspace.
  • the point cloud data is actually a series of three-dimensional coordinate points, and any three-dimensional coordinate point can be represented by a three-dimensional Cartesian coordinate (x, y, z), where x, y, z are the coordinate values of the x-axis, y-axis, and z-axis, which have a common zero point and are orthogonal to each other.
  • the point cloud data collected at at least one shooting point in the first subspace is mapped to a two-dimensional plane to obtain a two-dimensional point cloud image of the first subspace, including: converting the three-dimensional coordinate point (x, y, z) corresponding to the point cloud data collected at at least one shooting point in the first subspace into a two-dimensional coordinate point (x, y), for example: setting the z-axis coordinate value of the three-dimensional coordinate point to 0, and then obtaining the two-dimensional point cloud image of the first subspace based on the converted two-dimensional coordinate point.
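The plane-mapping step described above can be sketched as follows. The function names and the grid resolution are illustrative assumptions, not from the patent: the z-axis value of each point is discarded, and the remaining (x, y) points are quantized onto a pixel grid to form the two-dimensional point cloud image.

```python
def project_to_plane(points_3d):
    """Drop the z-axis value of each (x, y, z) point, i.e. set z to 0."""
    return [(x, y) for (x, y, z) in points_3d]

def rasterize(points_2d, resolution=0.05):
    """Quantize 2D points onto a pixel grid so they form an image.

    resolution is the side length of one pixel in world units; the
    returned set contains the occupied pixel coordinates.
    """
    xs = [p[0] for p in points_2d]
    ys = [p[1] for p in points_2d]
    min_x, min_y = min(xs), min(ys)
    return {
        (int((x - min_x) / resolution), int((y - min_y) / resolution))
        for x, y in points_2d
    }
```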
  • the point cloud data collected at at least one shooting point can be fused based on the relative position relationship to obtain dense point cloud data, and then mapped to obtain a two-dimensional point cloud image.
  • the point cloud data of at least one shooting point in the first subspace are first fused to determine the target point cloud data of the first subspace; then, the target point cloud data are mapped to a two-dimensional plane to obtain an initial two-dimensional point cloud image of the first subspace. Thereafter, according to the user's correction operations on the initial two-dimensional point cloud image (for example, cropping out areas with unclear boundaries, or highlighting unclear wall lines), the target two-dimensional point cloud image obtained after the correction operations is determined to be the two-dimensional point cloud image of the first subspace.
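The fusion step can be sketched as follows, assuming (as an illustration, since the patent does not fix the representation) that each shooting point's 2D pose, translation plus heading, in a shared frame is already known from the relative position relationship between shooting points:

```python
import math

def fuse_point_clouds(scans):
    """Merge per-shooting-point scans into one dense point cloud.

    Each scan is (points, pose), where points are (x, y) in the
    scanner's local frame and pose = (tx, ty, theta) places that
    shooting point in a shared frame. Applying each pose expresses
    all points in the same frame, yielding the fused cloud.
    """
    fused = []
    for points, (tx, ty, theta) in scans:
        c, s = math.cos(theta), math.sin(theta)
        for x, y in points:
            fused.append((c * x - s * y + tx, s * x + c * y + ty))
    return fused
```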
  • the two-dimensional point cloud image of the first subspace may also be used to obtain a first space contour of the first subspace, for example, by performing edge detection on the two-dimensional point cloud image of the first subspace to obtain the first space contour.
  • the shape and/or position of the contour lines in the target space contour can be adjusted based on preset contour line editing options, or contour lines for which there are no corresponding wall lines can be deleted, or contour lines corresponding to a certain wall line can be added.
  • FIG21 is a schematic diagram of the target space contour provided by the fifth embodiment of the present invention.
  • the contour line l in the target space contour does not coincide with the wall line L in the two-dimensional point cloud image of the first subspace, but in fact the contour line l and the wall line L correspond to the same wall in the first subspace.
  • in addition, before the contour lines of the target space contour are edited, there is no contour line in the target space contour corresponding to the wall line H in the two-dimensional point cloud image.
  • the size and position of the contour line l can be adjusted through the preset contour line editing option so that the contour line l coincides with the wall line L in the two-dimensional point cloud image; the contour line h is added through the preset contour line editing option, and the contour line h coincides with the wall line H in the two-dimensional point cloud image.
  • the edited target space contour is shown in the right figure in FIG21. In this embodiment, the contour lines of the target space contour finally determined are connected to each other.
  • the process of determining the mapping medium in the floor plan diagram corresponding to the first subspace is as follows: first, identify the corresponding target medium from the panoramic image through methods such as image recognition, that is, the image in the panoramic image of the physical medium (door body or window body) in the first subspace; then, determine the mapping medium corresponding to the target medium, that is, the corresponding identification of the door body or window body in the space outline.
  • the target medium may be identifiable in the panoramic images corresponding to more than one shooting point in the first subspace.
  • the three panoramic images corresponding to shooting point 1, shooting point 2 and shooting point 3 in the first subspace all contain images corresponding to the door and window in the first subspace.
  • the purpose of acquiring panoramic images at at least one shooting point in the same subspace is to ensure the integrity of the scene information of each subspace, and the panoramic images acquired are usually redundant for determining the target medium. Therefore, when determining the target medium of the first subspace, it is not necessary to identify the target medium in all panoramic images of the first subspace.
  • a target panoramic image can be determined from the panoramic image of at least one shooting point for identifying the target medium.
  • the target panoramic image is a panoramic image that meets the preset recognition requirements, such as: a panoramic image with the widest field of view and the best light, or a panoramic image containing user marking information (such as: the best panoramic image).
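One way to implement this selection is sketched below. The `user_marked` and `field_of_view` attributes are hypothetical stand-ins for the user marking information and recognition requirements mentioned above:

```python
def select_target_panorama(panoramas):
    """Pick the panoramic image used to identify the target medium.

    Prefer an image the user has marked (e.g. as the best panorama);
    otherwise fall back to the one with the widest field of view.
    """
    marked = [p for p in panoramas if p.get("user_marked")]
    if marked:
        return marked[0]
    return max(panoramas, key=lambda p: p["field_of_view"])
```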
  • the shooting point corresponding to the target panoramic image may be the same as or different from the shooting point corresponding to the point cloud data used to generate the target space contour of the first subspace.
  • for example, the first subspace contains two shooting points, namely shooting point A and shooting point B; a panoramic image A1 and point cloud data A2 are obtained at shooting point A, and a panoramic image B1 and point cloud data B2 are obtained at shooting point B.
  • if the target space contour is generated based on point cloud data A2, the panoramic image A1 may be determined as the target panoramic image (the same shooting point).
  • if the target space contour is generated based on point cloud data B2, the panoramic image A1 may still be determined as the target panoramic image (a different shooting point).
  • the coordinate mapping between the three-dimensional point cloud coordinates of the collected point cloud data and the panoramic pixel coordinates of the panoramic image can be determined for the first subspace based on the pre-calibrated relative pose of the acquisition devices and the relative position relationship between the actual shooting points in the first subspace.
  • a mapping between the target panoramic image and the target space contour of the first subspace can be established based on the coordinate mapping between the point cloud data of the first subspace and the target panoramic image, that is, the mapping relationship between the target panoramic image and the target space contour of the first subspace is predetermined.
  • a mapping medium for representing the target medium is determined on the target space contour of the first subspace, including: according to the mapping relationship between the target panoramic image of the first subspace and the target space contour, the panoramic pixel coordinates corresponding to the target medium in the target panoramic image and the mapped space contour coordinates are obtained to determine the mapping medium corresponding to the target medium in the target space contour of the first subspace.
  • the mapping medium is adapted to the target identification and the target display size of the target medium, and the target identification is used to distinguish target media of different types (door bodies or windows).
  • the specific method of coordinate mapping of panoramic images and point cloud data is not limited.
  • the panoramic pixel coordinates can be directly mapped to three-dimensional point cloud coordinates, and the three-dimensional point cloud coordinates mapped to panoramic pixel coordinates, according to the relative pose relationship between the devices that acquire the panoramic image and the point cloud data;
  • the panoramic pixel coordinates can also first be mapped to intermediate coordinates, and the intermediate coordinates then mapped to three-dimensional point cloud coordinates, with the help of the relative pose relationship and an intermediate coordinate system;
  • alternatively, the three-dimensional point cloud coordinates can first be mapped to intermediate coordinates, and the intermediate coordinates then mapped to panoramic pixel coordinates.
  • the specific type of the intermediate coordinate system is not limited, nor is the specific method used in the coordinate mapping process; the mapping method used will differ depending on the intermediate coordinate system and the relative pose relationship.
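As a concrete (though hypothetical, since the patent does not fix a projection model) example of such a mapping, an equirectangular panorama maps each pixel to a viewing direction; the unit direction vector can serve as the intermediate coordinate between panoramic pixel coordinates and three-dimensional point cloud coordinates:

```python
import math

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit direction vector."""
    lon = (u / width) * 2.0 * math.pi - math.pi      # longitude
    lat = math.pi / 2.0 - (v / height) * math.pi     # latitude
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

def ray_to_pixel(x, y, z, width, height):
    """Inverse mapping: project a 3D direction back to pixel coords."""
    lon = math.atan2(x, z)
    lat = math.asin(y / math.sqrt(x * x + y * y + z * z))
    u = (lon + math.pi) / (2.0 * math.pi) * width
    v = (math.pi / 2.0 - lat) / math.pi * height
    return (u, v)
```

In practice the direction would additionally be transformed by the relative pose between camera and laser sensor before being intersected with the point cloud.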
  • the floor plan diagram of the first subspace is spliced with the floor plan diagram of the second subspace, wherein the second subspace is a subspace whose floor plan diagram was spliced before the first subspace.
  • a panoramic image captured in one subspace may contain an object located in an adjacent subspace. Therefore, in practical applications, the adjacent relationship between multiple subspaces can be determined based on the panoramic images of at least one shooting point of the multiple subspaces, for example, by feature matching; then, based on the adjacent relationship between the multiple subspaces, the floor plan diagram of the first subspace is spliced with the floor plan diagram of the second subspace.
  • the above is the process of generating and splicing the floor plan diagram of the first subspace to be spliced.
  • the following describes, in conjunction with FIG. 22 to FIG. 24, the updating of the first subspace and the second subspace and the process of acquiring the floor plan of the target physical space.
  • Figure 22 is a schematic diagram of the actual spatial structure of a target physical space provided by the fifth embodiment of the present invention.
  • the target physical space contains three subspaces, namely bedroom 1, bedroom 2 and living room 3, wherein bedroom 1 and living room 3 are connected through door body 1, bedroom 2 and living room 3 are connected through door body 2, and bedroom 1 and bedroom 2 are connected through window 3.
  • the subspace corresponding to the point cloud data or panoramic image with the earliest acquisition time point can be determined as the first subspace for generating the floor plan according to the acquisition time point of the point cloud data or panoramic image of each subspace; or, based on the number of adjacent subspaces corresponding to each subspace, the subspace with the largest number of adjacent subspaces can be determined as the first subspace for generating the floor plan; or, a subspace can be randomly selected from multiple subspaces as the first subspace for generating the floor plan.
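The three selection strategies just described can be sketched as follows; the data structures and function name are illustrative:

```python
import random

def choose_first_subspace(acq_times, adjacency, strategy="earliest"):
    """Pick the first subspace for floor plan generation.

    acq_times: subspace name -> acquisition time point of its data.
    adjacency: subspace name -> list of adjacent subspace names.
    Strategies mirror the text: earliest acquisition time, largest
    number of adjacent subspaces, or random choice.
    """
    if strategy == "earliest":
        return min(acq_times, key=acq_times.get)
    if strategy == "most_neighbours":
        return max(acq_times, key=lambda s: len(adjacency.get(s, [])))
    return random.choice(list(acq_times))
```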
  • bedroom 1 is taken as the first subspace to generate floor plan diagram 1 of bedroom 1.
  • Figure 23 is a schematic diagram of a floor plan provided in the fifth embodiment of the present invention.
  • the floor plan 1 includes a target space outline of the bedroom 1, and the target space outline is marked with mapping media representing the door body 1 and the window 3 in the bedroom 1.
  • floor plan 1 is the splicing result 1 of the floor plan of the first subspace and the floor plan of the second subspace.
  • the floor plan diagram of the second subspace is updated to the splicing result, that is, the floor plan diagram of the second subspace is determined to be the splicing result 1.
  • a first subspace to be spliced is re-determined from the subspaces whose floor plan diagrams have not yet been spliced, that is, a subspace is re-determined from bedroom 2 and living room 3 as the first subspace, so that the floor plan diagram of the re-determined first subspace can be spliced with the updated floor plan diagram of the second subspace.
  • the adjacent relationship between the multiple subspaces can be determined based on the panoramic images of at least one shooting point corresponding to each of the multiple subspaces; based on the adjacent relationship, a first subspace to be spliced is re-determined from the subspaces whose floor plan diagrams have not yet been spliced, wherein the re-determined first subspace is adjacent to the second subspace.
  • alternatively, a first subspace to be spliced is re-determined from the subspaces whose floor plan diagrams have not yet been spliced according to the acquisition time points of their point cloud data and/or panoramic images, wherein the difference between the acquisition time point corresponding to the re-determined first subspace and the current moment is greater than the difference between the acquisition time points corresponding to the other un-spliced subspaces and the current moment; that is, the subspace whose data were acquired earliest is selected.
  • alternatively, a subspace is randomly selected from the subspaces whose floor plan diagrams have not yet been spliced as the new first subspace to be spliced.
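The re-determination strategies above can likewise be sketched. Here adjacency to the already-spliced second subspace is tried first, with the earliest acquisition time as a tie-breaker; the names and data structures are illustrative:

```python
def next_first_subspace(unspliced, spliced, adjacency, acq_times):
    """Re-determine the first subspace to be spliced.

    unspliced: subspaces whose floor plan diagrams are not yet spliced.
    spliced:   set of subspaces already merged into the second subspace.
    Prefer a subspace adjacent to the spliced set; among candidates,
    pick the earliest acquisition time (largest difference from now).
    """
    adjacent = [s for s in unspliced
                if any(n in spliced for n in adjacency.get(s, []))]
    candidates = adjacent or list(unspliced)
    return min(candidates, key=acq_times.get)
```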
  • bedroom 2 is determined as the new first subspace from bedroom 2 and living room 3 according to the adjacent relationship
  • a floor plan diagram 2 of bedroom 2 is generated, and the floor plan diagram 2 of bedroom 2 is spliced with the floor plan diagram of the second subspace (i.e., splicing result 1); the splicing result 2 is shown in the right figure of FIG. 23.
  • here, bedroom 2 is the first subspace and bedroom 1 is the second subspace.
  • the floor plan of the second subspace is updated to the splicing result 2. Since there is only one subspace left that has not been spliced with the floor plan, the living room 3 is directly determined as the new first subspace, and the floor plan 3 of the living room 3 is generated, and the floor plan 3 of the living room 3 is spliced with the updated floor plan of the second subspace (i.e., the splicing result 2).
  • the splicing result 3 is shown in FIG24, which is a schematic diagram of another floor plan provided in the fifth embodiment of the present invention, wherein the living room 3 is the first subspace, and the second subspace includes the bedroom 1 and the bedroom 2.
  • the splicing result 3 is determined as the floor plan of the target physical space, that is, Figure 24 is determined as the floor plan of the target physical space.
  • the splicing result 3 in Figure 24 contains the floor plan diagrams of the three subspaces in the target physical space.
  • the floor plan diagrams of the multiple subspaces are generated one by one, and each time the floor plan diagram of a subspace is generated, it is spliced with the previously generated floor plan diagrams, until the floor plan diagram of the last subspace is generated and spliced. Finally, the final splicing result is determined to be the floor plan of the target physical space. Because generating the floor plan diagram of a single subspace requires fewer computing resources, this approach can adapt to the processing capabilities of most control devices; and because splicing is performed while the floor plan diagrams are generated, the user can confirm each splicing result immediately, which helps ensure the accuracy of the generated floor plan of the target physical space.
  • the mapping medium on the spatial outline in the floor plan diagram of each subspace is determined based on the panoramic image.
  • the panoramic image can better reflect the actual position of doors and windows (i.e., the target medium) in the actual space. Therefore, based on the assistance of the panoramic image, the floor plan diagram of each subspace is marked with more accurate door and window information, which can better reflect the scene information in the actual space. Therefore, the floor plan determined according to the floor plan diagrams of multiple subspaces can also accurately reflect the actual spatial structure of the target physical space.
  • Figure 25 is a structural schematic diagram of a floor plan generating device provided in the fifth embodiment of the present invention. The device is used to generate a floor plan of a target physical space that includes multiple subspaces, and is applied to a control terminal.
  • the device includes: an acquisition module 51, a splicing module 52 and a processing module 53.
  • the acquisition module 51 is used to acquire the point cloud data and panoramic images respectively corresponding to the multiple subspaces obtained by the information acquisition terminal, wherein the point cloud data and panoramic image of any subspace are acquired from at least one shooting point in any subspace.
  • the stitching module 52 is used to obtain, for a first subspace to be stitched, a target space contour of the first subspace according to point cloud data and/or panoramic images collected at at least one shooting point of the first subspace during the process of stitching the floor plan diagrams of the multiple subspaces in sequence; obtain a target medium identified in the target panoramic image, wherein the target medium is an image, in the target panoramic image, of a physical medium in the first subspace, and the target panoramic image is a panoramic image used to identify the target medium among the panoramic images collected at at least one shooting point of the first subspace; determine a mapping medium for representing the target medium on the target space contour of the first subspace to obtain the floor plan diagram of the first subspace; and stitch the floor plan diagram of the first subspace with the floor plan diagram of the second subspace, wherein the second subspace is a subspace whose floor plan diagram was stitched before the first subspace.
  • the processing module 53 is configured to determine the splicing result as the floor plan of the target physical space if there is no subspace among the multiple subspaces whose floor plan diagram has not been spliced.
  • the processing module 53 is also used to update the floor plan diagram of the second subspace to the splicing result if there is a subspace among the multiple subspaces whose floor plan diagram has not been spliced; and to re-determine a first subspace to be spliced from the un-spliced subspaces, so as to splice the floor plan diagram of the re-determined first subspace with the updated floor plan diagram of the second subspace.
  • the processing module 53 is specifically used to determine the adjacent relationship between the multiple subspaces based on the panoramic images of at least one shooting point corresponding to each of the multiple subspaces; and, based on the adjacent relationship, re-determine a first subspace to be spliced from the subspaces whose floor plan diagrams have not yet been spliced, wherein the re-determined first subspace is adjacent to the second subspace.
  • the processing module 53 is further specifically used to re-determine a first subspace to be spliced from the subspaces whose floor plan diagrams have not yet been spliced, according to the acquisition time points of the point cloud data and/or panoramic images corresponding to those subspaces, wherein the difference between the acquisition time point corresponding to the re-determined first subspace and the current moment is greater than the difference between the acquisition time points corresponding to the other un-spliced subspaces and the current moment.
  • the stitching module 52 is specifically used to determine a first spatial contour of the first subspace based on point cloud data collected at at least one shooting point in the first subspace; determine a second spatial contour of the first subspace based on a panoramic image collected at at least one shooting point in the first subspace; and determine a target spatial contour of the first subspace based on the first spatial contour and/or the second spatial contour.
  • the stitching module 52 is further specifically used to determine the two-dimensional point cloud image of the first subspace based on the point cloud data collected at at least one shooting point in the first subspace; and determine the first spatial contour of the first subspace based on the two-dimensional point cloud image.
  • the stitching module 52 is further specifically used to obtain the panoramic pixel coordinates corresponding to the target medium in the target panoramic image and the mapped spatial contour coordinates according to the mapping relationship between the target panoramic image and the target spatial contour of the first subspace, so as to determine the mapping medium corresponding to the target medium in the target spatial contour of the first subspace; wherein the mapping medium is adapted to the target identifier and the target display size of the target medium, the target identifier is used to distinguish target media belonging to different types, and the mapping relationship is a mapping between the target panoramic image and the spatial contour established based on the coordinate mapping between the point cloud data of the first subspace and the target panoramic image.
  • the device shown in FIG. 25 can execute the steps in the aforementioned embodiments.
  • FIG. 26 is a flow chart of a method for generating a floor plan according to a sixth embodiment of the present invention. The method is used to generate a floor plan of a target physical space that includes at least N spaces, and is applied to a control terminal.
  • the method for generating a floor plan includes the following steps:
  • Step S261 Acquire point cloud data and panoramic images collected by the information collection terminal in each of N spaces in the target physical space, wherein the point cloud data and panoramic image are collected at at least one shooting point in each space.
  • Step S262 obtaining the Mth space outline for display for editing; wherein the Mth space outline is the space outline of the Mth space among the N spaces; the Mth space outline is obtained based on point cloud data and/or panoramic images collected from at least one shooting point of the Mth space.
  • Step S263 acquiring the first target medium identified in the first target panoramic image, so as to acquire the first mapping medium of the first target medium in the Mth space outline according to the first target medium, so as to edit the Mth space outline according to the first mapping medium to obtain the floor plan of the Mth space.
  • Step S264 obtaining the M+1th space outline for display for editing; wherein the M+1th space outline is the space outline of the M+1th space in the N spaces, the M+1th space is an adjacent space to the Mth space, and the M+1th space outline is obtained based on point cloud data and/or panoramic images collected from at least one shooting point of the M+1th space.
  • Step S265 acquiring the second target medium identified in the second target panoramic image, so as to acquire the second mapping medium of the second target medium in the M+1th space outline according to the second target medium, so as to edit the M+1th space outline according to the second mapping medium to obtain the floor plan of the M+1th space.
  • Step S266 splice the floor plan diagram of the M+1th space with the floor plan diagram of the Mth space, and determine whether the M+1th space is the last space among the N spaces for which a floor plan diagram is generated; if not, execute step S267; if so, execute step S268.
  • Step S267 merge the Mth space and the M+1th space as the Mth space, and return to execute step S264.
  • Step S268 Use the splicing result as the floor plan of the target physical space for display, and the process ends.
  • the first target panoramic image in step S263 is a panoramic image collected at at least one shooting point in the Mth space, and is used to identify the first target medium; the first target medium is an image of the physical medium in the Mth space in the first target panoramic image.
  • the second target panoramic image in step S265 is a panoramic image collected at at least one shooting point in the M+1th space, and is used to identify the second target medium; the second target medium is an image of the physical medium in the M+1th space in the second target panoramic image.
  • the control terminal can be a terminal device with data processing capabilities, such as a smart phone, a tablet computer, or a laptop computer.
  • the control terminal can communicate with the information collection terminal through methods such as Bluetooth, Wireless Fidelity (WiFi) hotspot, etc.
  • the floor plan diagrams of the N spaces are generated one by one, and each time the floor plan diagram of a space is generated, it is spliced with the previously generated floor plan diagrams, until the floor plan diagram of the last space is generated and spliced. Finally, the final splicing result is determined to be the floor plan of the target physical space. Because generating the floor plan diagram of a single space requires fewer computing resources, this approach can adapt to the computing and processing capabilities of most control devices; and because splicing is performed while the floor plan diagrams are generated, the user can confirm each splicing result immediately, which helps ensure the accuracy of the generated floor plan of the target physical space.
  • N is used to represent the number of spaces included in the target physical space, and the value of N is an integer greater than or equal to 1.
  • for example, if the target physical space is a house that contains only one space, the value of N is 1; if the house contains 3 spaces (for example, 1 living room, 1 bedroom and 1 bathroom), the value of N is 3. It can be understood that when the value of N is greater than or equal to 2, any space in the target physical space must have a space adjacent to it.
  • when the value of N is 1, the Mth space refers to the only space contained in the target physical space. In this case, the M+1th space does not exist; therefore, when generating the floor plan of the target physical space, the floor plan of the Mth space is the floor plan of the target physical space.
  • the space corresponding to the Mth space and the M+1th space is actually continuously updated with the splicing process and does not specifically refer to a certain space among the N spaces.
  • the target physical space includes space a, space b and space c, where space a is adjacent to space b, and space b is adjacent to space c.
  • assume space a is the first space for which a floor plan diagram is generated, that is, space a is first determined to be the Mth space and its floor plan diagram A is generated.
  • then, the M+1th space is determined: since space b is adjacent to space a, space b is determined to be the M+1th space, and the floor plan diagram B of space b is generated; the floor plan diagram A and the floor plan diagram B are then spliced to obtain the splicing result AB.
  • the Mth space and the M+1th space are merged as the Mth space, that is, the merged Mth space includes space a and space b
  • the floor plan of the merged Mth space is the above-mentioned splicing result AB.
  • space c is also adjacent to the merged Mth space. Then, space c is determined to be the M+1th space, and the floor plan diagram C of space c is generated; the floor plan diagram C is then spliced with the floor plan diagram of the Mth space (i.e., the splicing result AB) to obtain the splicing result ABC. Since no space remains for which a floor plan diagram has not been generated, the splicing result ABC is the floor plan of the target physical space.
  • the space corresponding to the Mth space is updated from space a to space a and space b, and the M+1th space is updated from space b to space c.
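The merge-and-splice loop described for spaces a, b and c can be simulated as follows. Floor plan "generation" is mocked by using the space name itself, and the adjacency data are illustrative:

```python
def splice_floor_plans(spaces, adjacency):
    """Generate and splice floor plan diagrams one space at a time.

    The first space is the initial Mth space; each iteration picks an
    M+1th space adjacent to the merged Mth space, splices its diagram
    onto the running result, then merges it into the Mth space.
    """
    merged = {spaces[0]}            # the (merged) Mth space
    result = spaces[0]              # running splicing result
    remaining = list(spaces[1:])
    while remaining:
        nxt = next(s for s in remaining
                   if any(n in merged for n in adjacency.get(s, [])))
        remaining.remove(nxt)
        result += "+" + nxt         # splice the M+1th space's diagram
        merged.add(nxt)             # merge M and M+1 as the new Mth space
    return result
```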
  • the following describes the method for generating the floor plan shown in FIG. 26 in conjunction with the scenario schematic diagram of generating the floor plan shown in FIG. 27 .
  • FIG27 is a schematic diagram of a scenario for generating a floor plan provided by the sixth embodiment of the present invention.
  • the information collection terminal and the control terminal are decoupled from each other, and the information collection terminal collects point cloud data and panoramic images in each of the N spaces and sends them to the control terminal.
  • the control terminal generates floor plan diagrams of the N spaces one by one, for example: generates floor plan diagram a of space 1, floor plan diagram b of space 2, etc. in sequence; and while generating the floor plan diagram, splices the generated floor plan diagram to finally obtain the floor plan of the target physical space.
  • the information collection terminal can be connected to the control terminal in communication with each other through methods such as Bluetooth, Wireless Fidelity (WiFi) hotspot, etc.
  • the information collection terminal can also be directly integrated into the control terminal as a whole, and the control terminal can directly and synchronously obtain the point cloud data and panoramic images collected by the information collection terminal in each of the N spaces, without the need to transmit the point cloud data and panoramic images based on the established communication connection. In this way, the efficiency of the control terminal in generating the floor plan of the target physical space can be improved.
  • a floor plan of the target physical space can also be generated through a cloud server.
  • the cloud server can be a physical server or a virtual server in the cloud, and the control terminal communicates with the cloud server by accessing a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G/LTE, 5G and other mobile communication networks.
  • the cloud server can receive the point cloud data and panoramic images corresponding to the N spaces forwarded by the control terminal to generate a floor plan of the target physical space, and feed the floor plan back to the control terminal for display.
  • the cloud server can also be directly connected to the information collection terminal to directly obtain the point cloud data and panoramic images corresponding to the multiple subspaces collected by the information collection terminal to generate a floor plan of the target physical space.
  • the process of the cloud server generating the floor plan of the target physical space is the same as the process of the control terminal generating the floor plan; however, because the cloud server has stronger computing power, it is more efficient in generating the floor plan of the target physical space, which can further enhance the user experience.
  • in this embodiment, the case where the control terminal generates the floor plan of the target physical space is taken as an example for explanation, but the invention is not limited to this.
  • the point cloud data and panoramic image of any space refer to the point cloud data and panoramic image collected at at least one shooting point in any space.
  • the shooting points can be customized by the user based on the current collection situation when collecting scene information of each space through the information collection terminal; alternatively, the information collection terminal or the control terminal can automatically generate reference shooting points for a space based on space description information (such as space size) input by the user.
  • when the information collection terminal collects point cloud data and panoramic images at different shooting points, the corresponding information collection process is the same.
  • a certain shooting point Y is taken as an example for explanation.
  • the information collection terminal includes: laser sensor, camera, motor and processor (such as CPU).
  • the laser sensor and camera are used as sensing devices to collect scene information of each space at each shooting point, that is, point cloud data and panoramic images; the motor is used to drive the laser sensor and camera to rotate so as to collect point cloud data and panoramic images from various angles.
  • the information collection terminal also includes an inertial measurement unit (IMU for short).
  • the IMU is used to correct the posture information corresponding to the collected point cloud data and image data to reduce errors caused by environmental or human factors (such as: the information collection terminal is not placed horizontally, etc.).
  • in response to an information collection instruction, the information collection terminal drives the motor to rotate the laser sensor 360 degrees to collect the point cloud data corresponding to the target shooting point Y, and drives the motor to rotate the camera 360 degrees to collect the panoramic image corresponding to the target shooting point Y.
  • the information collection instruction is sent by the user through the control terminal, or is triggered in response to the user's trigger operation on the information collection terminal.
  • the motor can drive the laser sensor and the camera to rotate at the same time so as to collect point cloud data and panoramic images simultaneously; or it can drive the laser sensor and the camera to rotate in sequence, for example, first driving the laser sensor and then the camera, or first driving the camera and then the laser sensor, so as to collect the point cloud data and panoramic images in sequence.
  • This embodiment does not impose any restrictions on this.
  • the camera can be turned on synchronously to collect scene lighting information of the current shooting point for light measurement and determine the corresponding exposure parameters. Afterwards, the camera collects the panoramic image based on the determined exposure parameters.
  • the camera in the information collection terminal is a panoramic camera or a non-panoramic camera. If the camera in the information collection terminal is a non-panoramic camera, then during the above-mentioned 360-degree rotation process, the camera is controlled to capture images corresponding to the target shooting point Y at multiple preset angles, and the above-mentioned processor can stitch the images captured at multiple preset angles into a panoramic image through a panoramic image stitching algorithm such as a feature matching algorithm.
  • for example, a certain reference direction can be taken as 0 degrees; a degrees, (a+120) degrees and (a+240) degrees relative to the reference direction are determined as the preset angles, and the camera is controlled to capture at these three preset angles to obtain image 1, image 2 and image 3; then, images 1 to 3 are stitched to obtain a panoramic image.
  • the number of preset angles can be customized by the user according to the viewing angle of the camera, and the images taken based on multiple preset angles contain scene information within a 360-degree range of the current point.
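  • the relationship between the camera's viewing angle and the number of preset angles described above can be sketched as follows. This is only an illustrative calculation, not part of the embodiment: the function name `preset_angles` and the overlap margin used for stitching are assumptions.

```python
import math

def preset_angles(fov_deg, overlap_deg=30.0, start_deg=0.0):
    """Divide the 360-degree range into evenly spaced shots so that the
    captured images jointly cover the full circle, with adjacent images
    overlapping by roughly overlap_deg to support feature-based stitching."""
    effective = fov_deg - overlap_deg          # new scene coverage per shot
    n = math.ceil(360.0 / effective)           # shots needed for full coverage
    return [(start_deg + i * 360.0 / n) % 360.0 for i in range(n)]
```

For a camera whose horizontal field of view is 150 degrees, this yields three preset angles, 0, 120 and 240 degrees, matching the three-image example given above; a narrower field of view simply yields more preset angles.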
  • high dynamic range (HDR) imaging can also be combined to generate a high-quality panoramic image.
  • after the information collection terminal has collected the point cloud data and panoramic image at a shooting point, it can directly send them to the control terminal; or it can first store them and, after completing the collection at all shooting points in the current space, send the point cloud data and panoramic images of all shooting points in the space to the control terminal.
  • This embodiment does not impose any restrictions on this.
  • for any one of the N spaces, the process by which the control terminal generates the floor plan of that space based on its point cloud data and panoramic image is the same.
  • taking space Z as an example, the process of generating the floor plan of space Z is first illustrated.
  • Space Z can be used as the Mth space or the M+1th space.
  • the floor plan of space Z includes the space outline and mapping medium of space Z.
  • the space outline is used to represent the walls in the physical space, and the mapping medium is used to represent the windows and doors in the physical space. Therefore, the process of obtaining the floor plan of space Z can be further divided into the process of determining the space outline of space Z and the process of determining the mapping medium of space Z, which are described below respectively.
  • the spatial contour of space Z may be obtained based on point cloud data and/or panoramic images collected at at least one shooting point in space Z. Specifically, the first spatial contour of space Z may be determined based on point cloud data collected at at least one shooting point in space Z; and the second spatial contour of space Z may be determined based on the panoramic image collected at at least one shooting point in space Z. Afterwards, the spatial contour of space Z is determined based on the first spatial contour and/or the second spatial contour.
  • the first spatial contour is determined to be the spatial contour of space Z; or, the second spatial contour is determined to be the spatial contour of space Z; or, a spatial contour with better contour line quality is selected from the above-mentioned first spatial contour and the above-mentioned second spatial contour as the spatial contour of space Z; or, the contour lines of the above-mentioned first spatial contour and the above-mentioned second spatial contour are fused to obtain a spatial contour with better contour line quality, and the fused spatial contour is determined to be the spatial contour of space Z.
  • the spatial contour of space Z includes multiple contour lines, some of which may not match the actual positions of the walls of space Z. To ensure that the spatial contour accurately reflects space Z, the contour lines in the spatial contour can be edited. Therefore, after determining the spatial contour of space Z according to the first spatial contour and/or the second spatial contour, the spatial contour of space Z can also be edited manually or automatically.
  • the spatial outline of space Z may be displayed in the two-dimensional point cloud image of space Z; then, in response to a user's editing operation on the spatial outline of space Z, the contour line of the spatial outline of space Z is adjusted so that the contour line of the spatial outline of space Z coincides with the wall line in the two-dimensional point cloud image.
  • the wall line in the two-dimensional point cloud image corresponds to the wall in space Z.
  • the two-dimensional point cloud image of the space Z is obtained by plane mapping the point cloud data of at least one shooting point in the space Z.
  • the point cloud data collected at at least one shooting point can be fused based on the relative position relationship to obtain dense point cloud data, and then mapped to obtain a two-dimensional point cloud image.
  • the point cloud data of at least one shooting point in space Z are first fused to determine the target point cloud data of space Z; then, the target point cloud data is mapped to a two-dimensional plane to obtain an initial two-dimensional point cloud image of space Z; thereafter, based on the user's correction operation on the initial two-dimensional point cloud image (for example: cropping out areas with unclear boundaries in the two-dimensional point cloud image, highlighting unclear wall lines in the two-dimensional point cloud image, etc.), the target two-dimensional point cloud image obtained after the correction operation is determined to be the two-dimensional point cloud image of space Z.
  • the point cloud data is actually a series of three-dimensional coordinate points, and any three-dimensional coordinate point can be represented by three-dimensional Cartesian coordinates (x, y, z), where x, y, z are the coordinate values of the x-axis, y-axis, and z-axis that have a common zero point and are orthogonal to each other.
  • the target point cloud data of space Z is mapped to a two-dimensional plane to obtain an initial two-dimensional point cloud image of space Z, including: converting the three-dimensional coordinate point (x, y, z) corresponding to the target point cloud data of space Z into a two-dimensional coordinate point (x, y), for example: setting the z-axis coordinate value of the three-dimensional coordinate point to 0, and then obtaining the initial two-dimensional point cloud image of space Z based on the converted two-dimensional coordinate point.
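  • the plane mapping described above, from three-dimensional coordinate points to a two-dimensional point cloud image, can be sketched as follows. This is a minimal pure-Python illustration; the function name `rasterize_point_cloud` and the grid resolution are assumptions, and a real implementation would operate on dense fused point clouds.

```python
def rasterize_point_cloud(points_3d, resolution=0.05):
    """Project (x, y, z) points onto the horizontal plane by dropping z,
    then rasterize the resulting 2D points into a binary occupancy grid,
    i.e. a simple two-dimensional point cloud image."""
    xy = [(x, y) for x, y, _z in points_3d]            # (x, y, z) -> (x, y)
    min_x = min(p[0] for p in xy)
    min_y = min(p[1] for p in xy)
    cells = {(int((x - min_x) / resolution),
              int((y - min_y) / resolution)) for x, y in xy}
    width = max(c[0] for c in cells) + 1
    height = max(c[1] for c in cells) + 1
    grid = [[0] * width for _ in range(height)]
    for cx, cy in cells:
        grid[cy][cx] = 1                               # mark occupied cell
    return grid
```

Cells containing at least one projected point are marked occupied; wall lines then appear as dense runs of occupied cells, which is what the user aligns contour lines against when editing.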
  • the two-dimensional point cloud image of space Z may also be used to obtain a first spatial contour of space Z, for example, by performing edge detection on the two-dimensional point cloud image of space Z to obtain the first spatial contour of space Z.
  • the shape and/or position of the contour lines in the space outline of space Z can be adjusted based on the preset contour line editing options, or the contour lines for which there are no corresponding wall lines can be deleted, or the contour lines corresponding to a certain wall line can be added.
  • FIG. 28 is a schematic diagram of the spatial contour of space Z provided in the sixth embodiment of the present invention.
  • the contour line l in the spatial contour of space Z does not coincide with the wall line L in the two-dimensional point cloud image of space Z, but in fact the contour line l and the wall line L correspond to the same wall in space Z.
  • in addition, before the contour lines of the spatial contour of space Z are edited, there is no contour line in the spatial contour corresponding to the wall line H in the two-dimensional point cloud image.
  • the size and position of the contour line l can be adjusted through the preset contour line editing option so that the contour line l coincides with the wall line L in the two-dimensional point cloud image; the contour line h is added through the preset contour line editing option, and the contour line h coincides with the wall line H in the two-dimensional point cloud image.
  • the spatial contour of space Z after editing is shown in the right figure in FIG. 28. In this embodiment, the contour lines of the finally determined spatial contour of space Z are connected to each other.
  • to determine the mapping medium of space Z, the corresponding target medium must first be identified from the panoramic image through methods such as image recognition, that is, the image in the panoramic image of the physical medium (door and window) in space Z; then, the mapping medium corresponding to the target medium is determined.
  • more than one of the panoramic images corresponding to the shooting points in space Z may contain the target medium.
  • the three panoramic images corresponding to shooting point 1, shooting point 2 and shooting point 3 of space Z all contain images corresponding to the door and window in space Z.
  • the purpose of acquiring panoramic images at at least one shooting point in the same subspace is to ensure the integrity of the scene information of each subspace.
  • the acquired panoramic images are redundant for determining the target medium. Therefore, when determining the target medium of space Z, it is not necessary to identify the target medium in all panoramic images of space Z.
  • a target panoramic image can be determined from the panoramic image of at least one shooting point in the space Z for identifying the target medium.
  • the target panoramic image is a panoramic image that meets preset recognition requirements, for example, the panoramic image with the widest field of view and the best lighting, or a panoramic image containing user marking information (for example, marked by the user as the best panoramic image).
  • the shooting point corresponding to the target panoramic image may be the same as or different from the shooting point corresponding to the point cloud data used to generate the spatial contour.
  • space Z contains two shooting points, namely shooting point A and shooting point B, and panoramic image A1 and point cloud data A2 are obtained at shooting point A, and panoramic image B1 and point cloud data B2 are obtained at shooting point B.
  • if the spatial contour of space Z is generated based on point cloud data A2, either panoramic image A1 or panoramic image B1 can be determined as the target panoramic image.
  • likewise, if the spatial contour of space Z is generated based on point cloud data B2, either panoramic image A1 or panoramic image B1 can be determined as the target panoramic image.
  • the coordinate mapping between the three-dimensional point cloud coordinates of the collected point cloud data of space Z and the panoramic pixel coordinates of the panoramic image can be determined based on the pre-calibrated relative pose of the acquisition devices and the relative position relationship between the actual shooting points in space Z.
  • a mapping between the target panoramic image of space Z and the spatial contour of space Z can be established, that is, the mapping relationship between the target panoramic image of space Z and the spatial contour is predetermined.
  • the panoramic pixel coordinates corresponding to the target medium in the target panoramic image and the mapped spatial contour coordinates can be obtained, so as to determine the mapping medium corresponding to the target medium in the spatial contour of space Z, so as to obtain the floor plan of space Z.
  • the mapping medium is adapted to the target identification and target display size of the target medium, and the target identification is used to distinguish target media of different types (door bodies or windows).
  • the specific method of coordinate mapping of panoramic images and point cloud data is not limited.
  • the panoramic pixel coordinates can be directly mapped to three-dimensional point cloud coordinates, and the three-dimensional point cloud coordinates can be mapped to panoramic pixel coordinates according to the relative posture relationship between the devices for acquiring the panoramic image and the point cloud data;
  • the panoramic pixel coordinates can also be first mapped to intermediate coordinates, and then the intermediate coordinates can be mapped to three-dimensional point cloud coordinates, with the help of relative posture relationship and intermediate coordinate system;
  • the three-dimensional point cloud coordinates can be first mapped to intermediate coordinates, and then the intermediate coordinates can be mapped to panoramic pixel coordinates.
  • the specific type of the intermediate coordinate system is not limited, nor is the specific method used in the coordinate mapping process. The mapping method used will be different depending on the different intermediate coordinate systems and the different relative posture relationships.
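  • as one concrete instance of the direct mapping described above, the following sketch maps a three-dimensional point cloud coordinate to panoramic pixel coordinates, assuming an equirectangular panorama and a level camera whose position in the point cloud frame is known from the calibrated relative pose. The function name and the zero-yaw alignment between camera and point cloud axes are simplifying assumptions.

```python
import math

def point_to_panorama_pixel(point, cam_pos, img_w, img_h):
    """Map a 3D point (point cloud frame) to equirectangular panorama
    pixel coordinates for a level camera located at cam_pos."""
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dz = point[2] - cam_pos[2]
    yaw = math.atan2(dy, dx)                       # azimuth in (-pi, pi]
    pitch = math.atan2(dz, math.hypot(dx, dy))     # elevation in (-pi/2, pi/2)
    u = (yaw + math.pi) / (2 * math.pi) * img_w    # column
    v = (math.pi / 2 - pitch) / math.pi * img_h    # row (image top = up)
    return u, v
```

With such a mapping (and its inverse), the panoramic pixel coordinates of an identified door or window can be transferred onto the spatial contour to place the corresponding mapping medium.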
  • the above is a process of obtaining the floor plan of any space Z in the target physical space.
  • the following describes the process of obtaining a floor plan of a target physical space including at least N spaces in combination with FIG. 22 in the fifth embodiment, and FIG. 29 to FIG. 31 .
  • the target physical space includes three spaces, namely, bedroom 1, bedroom 2 and living room 3, wherein bedroom 1 and living room 3 are connected through door body 1, bedroom 2 and living room 3 are connected through door body 2, and bedroom 1 and bedroom 2 are connected through window body 3.
  • the space corresponding to the point cloud data or panoramic image with the earliest collection time point can be determined as the first space to generate the floor plan based on the collection time point of the point cloud data or panoramic image of each space; or, based on the number of adjacent spaces corresponding to each space, the space with the largest number of adjacent spaces can be determined as the first space to generate the floor plan; or, a space can be randomly selected from multiple spaces as the first space to generate the floor plan.
  • FIG. 29 is a schematic diagram of a floor plan diagram provided by the sixth embodiment of the present invention.
  • the space adjacent to the Mth space is determined as the M+1th space, and a floor plan diagram of the M+1th space is generated.
  • multiple spaces in the target physical space are connected to each other through doors or windows.
  • for example, space E and space F are connected through the same target door.
  • when the target door is in an open state, the panoramic image collected in space E also contains objects located in space F. Therefore, in practical applications, the adjacent relationship between multiple spaces can be determined based on the panoramic images of at least one shooting point of the multiple spaces, for example, by feature matching.
  • the M+1th space can be determined from the space where the floor plan diagram has not been generated, based on the panoramic image of at least one shooting point of the Mth space and the panoramic image of at least one shooting point of each space in the remaining N spaces where the floor plan diagram has not been generated.
  • the total number of adjacent spaces corresponding to each space in the space where the floor plan diagram has not been generated is determined based on the panoramic image of at least one shooting point of the Mth space and the panoramic image of at least one shooting point of each space in the remaining N spaces where the floor plan diagram has not been generated; and the space whose total number of adjacent spaces is greater than or equal to the set threshold is determined to be the M+1th space.
  • if multiple spaces meet the condition, a space can be randomly selected from them as the M+1th space.
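  • the selection of the M+1th space described above can be sketched as follows. The adjacency sets are assumed to have been precomputed (for example, by feature matching between panoramic images); the function name and data layout are illustrative assumptions.

```python
def pick_next_space(remaining, adjacency, merged):
    """Among spaces without a floor plan structure diagram yet, pick one
    that is adjacent to the already merged region, preferring the space
    with the largest number of adjacent spaces."""
    candidates = [s for s in remaining if adjacency[s] & merged]
    if not candidates:
        return None                                # no adjacent space left
    return max(candidates, key=lambda s: len(adjacency[s]))
```

In the three-space example, with bedroom 1 already processed, both bedroom 2 and living room 3 are adjacent candidates with two neighbors each, so either may be selected, matching the random choice described above.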
  • in this example, the remaining spaces for which the floor plan diagram has not been generated are bedroom 2 and living room 3. Therefore, the M+1th space needs to be determined from bedroom 2 and living room 3.
  • the number of adjacent spaces corresponding to each of bedroom 2 and living room 3 is 2. Assuming that bedroom 2 is randomly selected as the M+1th space, the floor plan structure diagram S of the M+1th space (i.e., bedroom 2) is generated, as shown in the right figure in FIG. 29.
  • FIG. 30 is a schematic diagram of a spliced apartment structure diagram provided by the sixth embodiment of the present invention.
  • the apartment structure diagram of the M+1th space and the apartment structure diagram of the Mth space may be spliced in the same spatial coordinate system.
  • in this example, the M+1th space (i.e., bedroom 2) is not the last space among the three spaces for which a floor plan structure diagram is generated.
  • the Mth space and the M+1th space are combined as the Mth space, that is, bedroom 1 and bedroom 2 are considered as a whole, and the whole is taken as the Mth space.
  • the Mth space includes bedroom 1 and bedroom 2, and the floor plan structure diagram of the Mth space is the above-mentioned splicing result RS, as shown in FIG. 30.
  • the M+1th space adjacent to the Mth space (including bedroom 1 and bedroom 2) is re-determined. Since living room 3 is adjacent to both bedroom 1 and bedroom 2, living room 3 is also adjacent to the Mth space. Then, living room 3 is determined to be the M+1th space, and the floor plan structure diagram T of the M+1th space (living room 3) is generated, as shown in the left figure in FIG. 31, which is a schematic diagram of another spliced floor plan structure diagram provided by the sixth embodiment of the present invention.
  • the floor plan structure diagram T of the M+1th space is spliced with the floor plan structure diagram of the Mth space (i.e., the splicing result RS) to obtain the splicing result RST, as shown in the right figure in Figure 31; and it is determined whether the M+1th space is the last space among the three spaces to generate a floor plan structure diagram.
  • the stitching result RST is the floor plan diagram of the target physical space.
  • the process of obtaining the floor plan diagrams of bedroom 1, bedroom 2 and living room 3 can refer to the aforementioned embodiment and will not be repeated here.
  • to sum up, the floor plan structure diagrams of the N spaces are generated one by one, and each newly generated structure diagram is spliced with the previously generated result, until the structure diagram of the last space is generated and spliced. The final splicing result is determined to be the floor plan of the target physical space. Since generating the floor plan structure diagram of a single space requires less computing resources, the process can adapt to the computing and processing capabilities of most control devices; and because splicing is performed as each structure diagram is generated, the user can confirm each splicing result immediately, which helps ensure the accuracy of the generated floor plan of the target physical space.
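  • the generate-and-splice loop summarized above could be sketched as follows, where `make_structure_diagram` and `stitch` stand in for the per-space generation and coordinate-system splicing steps described in this embodiment (both are hypothetical callables, not APIs defined by the invention):

```python
def build_floor_plan(spaces, adjacency, make_structure_diagram, stitch):
    """Generate per-space floor plan structure diagrams one by one,
    splicing each new diagram into the running result, and treating the
    already spliced spaces as a single merged Mth space."""
    remaining = list(spaces)
    current = remaining.pop(0)                 # first space to process
    result = make_structure_diagram(current)
    merged = {current}
    while remaining:
        # next space must be adjacent to the merged region
        nxt = next(s for s in remaining if adjacency[s] & merged)
        result = stitch(result, make_structure_diagram(nxt))
        merged.add(nxt)
        remaining.remove(nxt)
    return result                              # floor plan of the whole space
```

With the three-space example (bedroom 1, bedroom 2, living room 3), the loop reproduces the RS-then-RST splicing order described above.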
  • the mapping medium on the space outline in the floor plan structure diagram of each space is determined based on the panoramic image.
  • the panoramic image can better reflect the actual position of the door body and window body (i.e., the target medium) in the actual space. Therefore, based on the assistance of the panoramic image, the floor plan structure diagram of each space is marked with more accurate door body and window information, which can better reflect the scene information in the actual space.
  • Figure 32 is a structural schematic diagram of a floor plan generating device provided in the sixth embodiment of the present invention, which is used to generate a floor plan of a target physical space including N spaces, and which is applied to a control terminal.
  • the device includes: a first acquisition module 61, a second acquisition module 62 and a processing module 63.
  • the first acquisition module 61 is used to acquire the point cloud data and panoramic images collected by the information collection terminal in each of the N spaces, wherein the point cloud data and panoramic images are collected from at least one shooting point in each of the spaces; acquire the Mth space outline for display for editing; wherein the Mth space outline is the space outline of the Mth space in the N spaces, and the Mth space outline is acquired based on the point cloud data and/or panoramic image collected from at least one shooting point in the Mth space; acquire the first target medium identified in the first target panoramic image, so as to acquire the first mapping medium of the first target medium in the Mth space outline based on the first target medium, so as to edit the Mth space outline based on the first mapping medium to acquire the apartment structure diagram of the Mth space; the first target panoramic image is a panoramic image for identifying the first target medium in the panoramic image collected from at least one shooting point in the Mth space, and the first target medium is an image of the physical medium in the Mth space in the first target panoramic image.
  • the second acquisition module 62 is used to acquire the M+1th space outline for display for editing; wherein the M+1th space outline is the space outline of the M+1th space among the N spaces, the M+1th space is an adjacent space of the Mth space, and the M+1th space outline is acquired based on point cloud data and/or panoramic images collected from at least one shooting point of the M+1th space; a second target medium identified in a second target panoramic image is acquired, so that a second mapping medium of the second target medium in the M+1th space outline is acquired based on the second target medium, so as to edit the M+1th space outline based on the second mapping medium to acquire the floor plan of the M+1th space; the second target panoramic image is a panoramic image for identifying the second target medium in the panoramic image collected from at least one shooting point of the M+1th space, and the second target medium is an image of a physical medium in the M+1th space in the second target panoramic image.
  • the processing module 63 is used to splice the floor plan structure diagram of the M+1th space with the floor plan structure diagram of the Mth space, and determine whether the M+1th space is the last space among the N spaces to generate a floor plan structure diagram; if not, the Mth space and the M+1th space are merged as the Mth space, and the second acquisition module 62 is returned to execute; if so, the splicing result is used as the floor plan of the target physical space for display, and the process ends.
  • the second acquisition module 62 is also used to determine the M+1th space from the spaces for which the floor plan diagram has not been generated based on a panoramic image of at least one shooting point of the Mth space and a panoramic image of at least one shooting point of each of the remaining spaces among the N spaces for which the floor plan diagram has not been generated.
  • the second acquisition module 62 is specifically used to determine the total number of adjacent spaces corresponding to each space in the space where the floor plan diagram has not been generated based on a panoramic image of at least one shooting point in the Mth space and a panoramic image of at least one shooting point in each of the remaining N spaces where the floor plan diagram has not been generated; and determine that the space whose total number of adjacent spaces is greater than or equal to a set threshold is the M+1th space.
  • the first acquisition module 61 is specifically used to display the outline of the Mth space in the two-dimensional point cloud image of the Mth space, and in response to the user's editing operation on the Mth space outline, adjust the outline line of the Mth space outline so that the outline line coincides with the wall line in the two-dimensional point cloud image; wherein, the two-dimensional point cloud image of the Mth space is obtained after planar mapping of point cloud data of at least one shooting point in the Mth space.
  • the second acquisition module 62 is specifically used to display the outline of the M+1th space in the two-dimensional point cloud image of the M+1th space, and in response to the user's editing operation on the outline of the M+1th space, adjust the outline line of the M+1th space outline so that the outline line coincides with the wall line in the two-dimensional point cloud image of the M+1th space; wherein the two-dimensional point cloud image of the M+1th space is obtained after plane mapping of the point cloud data of at least one shooting point in the M+1th space.
  • the first acquisition module 61 is specifically used to obtain the panoramic pixel coordinates corresponding to the first target medium in the first target panoramic image and the mapped spatial contour coordinates according to the mapping relationship between the first target panoramic image and the Mth spatial contour, so as to determine the first mapping medium corresponding to the first target medium in the Mth spatial contour, so as to obtain the floor plan of the Mth space; wherein the first mapping medium is adapted to the target identifier and the target display size of the first target medium, the target identifier is used to distinguish target media belonging to different types, and the mapping relationship is a mapping between the first target panoramic image and the Mth spatial contour established based on the coordinate mapping between the point cloud data of the Mth space and the first target panoramic image.
  • the second acquisition module 62 is specifically used to obtain the panoramic pixel coordinates corresponding to the second target medium in the second target panoramic image and the mapped spatial contour coordinates according to the mapping relationship between the second target panoramic image and the M+1th spatial contour, so as to determine the second mapping medium corresponding to the second target medium in the M+1th spatial contour, so as to obtain the floor plan of the M+1th space; wherein the second mapping medium is adapted to the target identification and target display size of the second target medium, the target identification is used to distinguish target media belonging to different types, and the mapping relationship is a mapping between the second target panoramic image and the M+1th spatial contour established based on the coordinate mapping between the point cloud data of the M+1th space and the second target panoramic image.
  • the processing module 63 is specifically used to splice the apartment structure diagram of the M+1th space with the apartment structure diagram of the Mth space in the same spatial coordinate system according to the adjacent relationship between the M+1th space and the Mth space.
  • the device shown in Figure 32 can execute the steps in the aforementioned embodiments.
  • the detailed execution process and technical effects can be found in the description of the aforementioned embodiments, which will not be repeated here.
  • the structure of the spatial structure diagram generating device shown in FIG. 6 and the floor plan generating device shown in FIG. 11, FIG. 14, FIG. 19, FIG. 25 and/or FIG. 32 can be respectively implemented as an electronic device.
  • the electronic device may include: a memory 71, a processor 72, and a communication interface 73.
  • the memory 71 stores executable code, and when the executable code is executed by the processor 72, the processor 72 can at least implement the spatial structure diagram generating method and/or floor plan generating method provided in the aforementioned embodiments.
  • an embodiment of the present invention further provides a non-transitory machine-readable storage medium on which executable code is stored.
  • when the executable code is executed by a processor of an electronic device, the processor can at least implement the spatial structure diagram generation method provided in the aforementioned embodiments.
  • from the description of the above embodiments, it can be clearly understood that each embodiment can be implemented by means of software plus a necessary general hardware platform, and of course can also be implemented by combining hardware and software.
  • based on this understanding, the essence of the above technical solutions, or the part that contributes to the prior art, can be embodied in the form of a computer program product. The present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.


Abstract

The present invention provides a spatial structure diagram and floor plan generation method, apparatus, device, and storage medium, including: acquiring point cloud data and panoramic images obtained by an information collection terminal at at least one shooting point in a target space; acquiring a spatial contour of the target space according to the point cloud data of the at least one shooting point; acquiring a target medium identified in a target panoramic image, the target medium being an image, in the target panoramic image, of a physical medium in the target space, and the target panoramic image being the panoramic image, among the panoramic images of the at least one shooting point, that is used to identify the target medium; and determining, in the spatial contour, a mapping medium corresponding to the target medium, so as to obtain a spatial structure diagram of the target space. In this solution, panoramic images assist in determining the positions, in the spatial contour of the target space, of target media such as doors and windows in the target space, improving the accuracy of the generated spatial structure diagram.

Description

空间结构图和户型图生成方法、装置、设备和存储介质 技术领域
本发明涉及模型重建技术领域,尤其涉及一种空间结构图和户型图生成方法、装置、设备和存储介质。
背景技术
目标空间二维模型通常用于辅助用户了解该空间的空间结构信息,例如,在房产服务场景中,目标空间的二维模型用于展示户型结构等。
为了获取目标空间的二维模型,通常会先对目标空间进行三维重建,比如:通过光学图像或结构光扫描对目标空间进行三维重建;之后,对三维重建得到的三维模型做横向截面切割,以获取对应的二维模型。但是,通过上述方式获得的二维模型往往缺失细节,且不够准确。
发明内容
本发明提供了一种空间结构图和户型图生成方法、装置、设备和存储介质,用以提高生成的空间结构图和户型图的准确性。
第一方面,本发明实施例提供一种空间结构图生成方法,应用于控制终端,所述方法包括:
获取信息采集终端在目标空间中至少一个拍摄点位得到的点云数据和全景图像;
根据所述至少一个拍摄点位的点云数据,获取所述目标空间的空间轮廓;
获取在目标全景图像中识别的目标介质,所述目标介质为所述目标空间中的实体介质在所述目标全景图像中的图像,所述目标全景图像是所述至少一个拍摄点位的全景图像中,用于识别所述目标介质的全景图像;
在所述空间轮廓中确定所述目标介质对应的映射介质,以获取所述目标空间的空间结构图。
第二方面,本发明实施例提供一种空间结构图生成装置,应用于控制终端,所述装置包括:
获取模块,用于获取信息采集终端在目标空间中至少一个拍摄点位得到的点云数据和全景图像;
处理模块，用于根据所述至少一个拍摄点位的点云数据，获取所述目标空间的空间轮廓；获取在目标全景图像中识别的目标介质，所述目标介质为所述目标空间中的实体介质在所述目标全景图像中的图像，所述目标全景图像是所述至少一个拍摄点位的全景图像中，用于识别所述目标介质的全景图像；在所述空间轮廓中确定所述目标介质对应的映射介质，以获取所述目标空间的空间结构图。
第三方面，本发明实施例提供一种电子设备，包括：存储器、处理器、通信接口；其中，所述存储器上存储有可执行代码，当所述可执行代码被所述处理器执行时，使所述处理器执行如第一方面所述的空间结构图生成方法。
第四方面,本发明实施例提供了一种非暂时性机器可读存储介质,所述非暂时性机器可读存储介质上存储有可执行代码,当所述可执行代码被电子设备的处理器执行时,使所述处理器至少可以实现如第一方面所述的空间结构图生成方法。
第五方面,本发明实施例提供一种户型图生成方法,应用于目标控制终端,所述方法包括:
获取信息采集终端得到的目标物理空间中多个子空间分别对应的点云数据和全景图像,以确定所述多个子空间对应的多个空间轮廓;其中,所述多个子空间与所述多个空间轮廓一一对应,任一子空间的点云数据和全景图像是在所述任一子空间中的至少一个拍摄点位采集的;
显示所述多个子空间对应的多个空间轮廓以用于编辑;
针对多个子空间中的目标子空间,获取在目标全景图像中识别的目标介质,所述目标介质为所述目标子空间中的实体介质在所述目标全景图像中的图像,所述目标全景图像是所述目标子空间的至少一个拍摄点位采集的全景图像中,用于识别所述目标介质的全景图像;
在所述目标子空间的空间轮廓中确定所述目标介质对应的映射介质,以生成所述目标子空间的空间结构图;
响应于对所述多个子空间对应的多个空间结构图的获取完成操作,获取所述多个空间结构图拼接得到的所述目标物理空间的户型图。
第六方面,本发明实施例提供一种户型图生成装置,应用于目标控制终端,所述装置包括:
获取模块,用于获取信息采集终端得到的目标物理空间中多个子空间分别对应的点云数据和全景图像,以确定所述多个子空间对应的多个空间轮廓;其中,所述多个子空间与所述多个空间轮廓一一对应,任一子空间的点云数据和全景图像是在所述任一子空间中的至少一个拍摄点位采集的;
显示模块,用于显示所述多个子空间对应的多个空间轮廓以用于编辑;
处理模块,用于针对多个子空间中的目标子空间,获取在目标全景图像中识别的目标介质,所述目标介质为所述目标子空间中的实体介质在所述目标全景图像中的图像,所述目标全景图像是所述目标子空间的至少一个拍摄点位采集的全景图像中,用于识别所述目标介质的全景图像;在所述目标子空间的空间轮廓中确定所述目标介质对应的映射介质,以生成所述目标子空间的空间结构图;响应于对所述多个子空间对应的多个空间结构图的获取完成操作,获取所述多个空间结构图拼接得到的所述目标物理空间的户型图。
第七方面,本发明实施例提供一种户型图生成方法,应用于控制终端,所述方法包括:
获取信息采集终端发送的目标物理空间中多个子空间分别对应的点云数据和全景图像,其中,任一子空间的点云数据和全景图像是在所述任一子空间中的至少一个拍摄点位采集的;
在依次对所述多个子空间进行空间结构图获取处理的过程中,针对当前待编辑的目标子空间,根据在所述目标子空间的至少一个拍摄点位上采集的点云数据和/或全景图像,获取所述目标子空间的目标空间轮廓;
获取在目标全景图像中识别的目标介质,所述目标介质为所述目标子空间中的实体介质在所述目标全景图像中的图像,所述目标全景图像是在所述目标子空间的至少一个拍摄点位采集的全景图像中,用于识别所述目标介质的全景图像;
在所述目标子空间的目标空间轮廓上确定用于表示所述目标介质的映射介质,以获取所述目标子空间的空间结构图;
响应于所述目标子空间的空间结构图的获取完成操作，若所述多个子空间中不存在未获取到空间结构图的子空间，则获取所述多个子空间的空间结构图拼接得到的所述目标物理空间的户型图。
第八方面,本发明实施例提供一种户型图生成装置,应用于控制终端,所述装置包括:
获取模块,用于获取信息采集终端发送的目标物理空间中多个子空间分别对应的点云数据和全景图像,其中,任一子空间的点云数据和全景图像是在所述任一子空间中的至少一个拍摄点位采集的;
处理模块，用于在依次对所述多个子空间进行空间结构图获取处理的过程中，针对当前待编辑的目标子空间，根据在所述目标子空间的至少一个拍摄点位上采集的点云数据和/或全景图像，获取所述目标子空间的目标空间轮廓；获取在目标全景图像中识别的目标介质，所述目标介质为所述目标子空间中的实体介质在所述目标全景图像中的图像，所述目标全景图像是在所述目标子空间的至少一个拍摄点位采集的全景图像中，用于识别所述目标介质的全景图像；在所述目标子空间的目标空间轮廓上确定用于表示所述目标介质的映射介质，以获取所述目标子空间的空间结构图；响应于所述目标子空间的空间结构图的获取完成操作，若所述多个子空间中不存在未获取到空间结构图的子空间，则获取所述多个子空间的空间结构图拼接得到的所述目标物理空间的户型图。
第九方面,本发明实施例提供一种户型图生成方法,所述方法用于生成目标物理空间的户型图,所述目标物理空间至少包括N个空间,应用于控制终端,所述方法包括:
步骤1、获取信息采集终端在所述N个空间的每一空间所采集的点云数据和全景图像,其中,所述点云数据和全景图像是在所述每一空间中的至少一个拍摄点位采集的;
步骤2、获取所述N个空间中第M个空间的第M个空间轮廓进行显示以用于编辑,所述第M个空间轮廓是根据第M个空间的至少一个拍摄点位采集的点云数据和/或全景图像获取的;
步骤3、获取在所述第M个空间的目标全景图像中识别的目标介质,以使得根据所述目标介质获取所述目标介质在所述第M个空间轮廓中的映射介质,以用于根据所述映射介质编辑所述第M个空间轮廓,以获取所述第M个空间的户型结构图;所述目标全景图像是在第M个空间的至少一个拍摄点位采集的全景图像中,用于识别所述目标介质的全景图像,所述目标介质为所述第M个空间中的实体介质在所述目标全景图像中的图像;
步骤4、判断所述第M个空间是否是所述N个空间中最后一个生成户型结构图的空间;
若否,执行步骤5、将M赋值为M+1并返回步骤2;
若是,执行步骤6、获取由N个户型结构图组成的所述目标物理空间的户型图,以用于展示,流程结束;其中,M、N为自然数,且1≤M≤N。
第十方面,本发明实施例提供一种户型图生成装置,所述装置用于生成目标物理空间的户型图,所述目标物理空间至少包括N个空间,应用于控制终端,所述装置包括:
获取模块,用于获取信息采集终端在所述N个空间的每一空间所采集的点云数据和全景图像,其中,所述点云数据和全景图像是在所述每一空间中的至少一个拍摄点位采集的;
第一处理模块,用于获取所述N个空间中第M个空间的第M个空间轮廓进行显示以用于编辑,所述第M个空间轮廓是根据第M个空间的至少一个拍摄点位采集的点云数据和/或全景图像获取的;获取在所述第M个空间的目标全景图像中识别的目标介质,以使得根据所述目标介质获取所述目标介质在所述第M个空间轮廓中的映射介质,以用于根据所述映射介质编辑所述第M个空间轮廓,以获取所述第M个空间的户型结构图;所述目标全景图像是在第M个空间的至少一个拍摄点位采集的全景图像中,用于识别所述目标介质的全景图像,所述目标介质为所述第M个空间中的实体介质在所述目标全景图像中的图像;
第二处理模块，用于判断所述第M个空间是否是所述N个空间中最后一个生成户型结构图的空间；若否，则将M赋值为M+1并返回执行第一处理模块；若是，则获取由N个户型结构图组成的所述目标物理空间的户型图，以用于展示，流程结束；其中，M、N为自然数，且1≤M≤N。
第十一方面,本发明实施例提供一种户型图生成方法,所述方法用于生成目标物理空间的户型图,其中所述目标物理空间包括多个子空间,应用于控制终端,所述方法包括:
获取信息采集终端得到的所述多个子空间分别对应的点云数据和全景图像,其中,任一子空间的点云数据和全景图像是在所述任一子空间中的至少一个拍摄点位采集的;
在依次对所述多个子空间进行的户型结构图拼接处理的过程中,针对当前待拼接的第一子空间,根据在所述第一子空间的至少一个拍摄点位上采集的点云数据和/或全景图像,获取所述第一子空间的目标空间轮廓;
获取在目标全景图像中识别的目标介质,所述目标介质为所述第一子空间中的实体介质在所述目标全景图像中的图像,所述目标全景图像是在所述第一子空间的至少一个拍摄点位上采集的全景图像中,用于识别所述目标介质的全景图像;
在所述第一子空间的目标空间轮廓上确定用于表示所述目标介质的映射介质,以获取所述第一子空间的户型结构图;
将所述第一子空间的户型结构图与第二子空间的户型结构图进行拼接处理,所述第二子空间为所述第一子空间之前已进行户型结构图拼接的子空间;
若所述多个子空间中不存在未进行户型结构图拼接处理的子空间,则将所述拼接处理结果确定为所述目标物理空间的户型图。
第十二方面,本发明实施例提供一种户型图生成装置,所述装置用于生成目标物理空间的户型图,其中所述目标物理空间包括多个子空间,应用于控制终端,所述装置包括:
获取模块,用于获取信息采集终端得到的所述多个子空间分别对应的点云数据和全景图像,其中,任一子空间的点云数据和全景图像是在所述任一子空间中的至少一个拍摄点位采集的;
拼接模块，用于在依次对所述多个子空间进行的户型结构图拼接处理的过程中，针对当前待拼接的第一子空间，根据在所述第一子空间的至少一个拍摄点位上采集的点云数据和/或全景图像，获取所述第一子空间的目标空间轮廓；获取在目标全景图像中识别的目标介质，所述目标介质为所述第一子空间中的实体介质在所述目标全景图像中的图像，所述目标全景图像是在所述第一子空间的至少一个拍摄点位上采集的全景图像中，用于识别所述目标介质的全景图像；在所述第一子空间的目标空间轮廓上确定用于表示所述目标介质的映射介质，以获取所述第一子空间的户型结构图；将所述第一子空间的户型结构图与第二子空间的户型结构图进行拼接处理，所述第二子空间为所述第一子空间之前已进行户型结构图拼接的子空间；
处理模块,用于若所述多个子空间中不存在未进行户型结构图拼接处理的子空间,则将所述拼接处理结果确定为所述目标物理空间的户型图。
第十三方面,本发明实施例提供一种户型图生成方法,其中所述方法用于生成目标物理空间的户型图,所述目标物理空间至少包括N个空间,应用于控制终端,所述方法包括:
步骤1、获取信息采集终端在所述N个空间的每一空间所采集的点云数据和全景图像,其中,所述点云数据和全景图像是在所述每一空间中的至少一个拍摄点位采集的;
步骤2、获取第M空间轮廓进行显示以用于编辑;其中,所述第M空间轮廓是所述N个空间中第M空间的空间轮廓,所述第M空间轮廓是根据所述第M空间的至少一个拍摄点位采集的点云数据和/或全景图像获取的;
步骤3、获取在第一目标全景图像中识别的第一目标介质,以使得根据所述第一目标介质获取所述第一目标介质在所述第M空间轮廓中的第一映射介质,以用于根据所述第一映射介质编辑所述第M空间轮廓以获取所述第M空间的户型结构图;所述第一目标全景图像是第M空间的所述至少一个拍摄点位采集的全景图像中,用于识别所述第一目标介质的全景图像,所述第一目标介质为所述第M空间中的实体介质在所述第一目标全景图像中的图像;
步骤4、获取第M+1空间轮廓进行显示以用于编辑;其中,所述第M+1空间轮廓是所述N个空间中第M+1空间的空间轮廓,所述第M+1空间为所述第M空间的相邻空间,所述第M+1空间轮廓是根据所述第M+1空间的至少一个拍摄点采集的点云数据和/或全景图像获取的;
步骤5、获取在第二目标全景图像中识别的第二目标介质，以使得根据所述第二目标介质获取所述第二目标介质在所述第M+1空间轮廓中的第二映射介质，以用于根据所述第二映射介质编辑所述第M+1空间轮廓以获取所述第M+1空间的户型结构图；所述第二目标全景图像是第M+1空间的所述至少一个拍摄点位采集的全景图像中，用于识别所述第二目标介质的全景图像，所述第二目标介质为所述第M+1空间中的实体介质在所述第二目标全景图像中的图像；
步骤6、将所述第M+1空间的户型结构图与所述第M空间的户型结构图进行拼接,并判断所述第M+1空间是否是所述N个空间中最后一个生成户型结构图的空间;
若否,执行步骤7、将所述第M空间和所述第M+1空间合并作为第M空间,并返回执行步骤4;
若是，执行步骤8、将所述拼接结果作为所述目标物理空间的户型图，以用于展示，流程结束。
第十四方面,本发明实施例提供一种户型图生成装置,所述装置用于生成目标物理空间的户型图,其中所述目标物理空间至少包括N个空间,应用于控制终端,所述装置包括:
第一获取模块,用于获取信息采集终端在所述N个空间的每一空间所采集的点云数据和全景图像,其中,所述点云数据和全景图像是在所述每一空间中的至少一个拍摄点位采集的;获取第M空间轮廓进行显示以用于编辑;其中,所述第M空间轮廓是所述N个空间中第M空间的空间轮廓,所述第M空间轮廓是根据所述第M空间的至少一个拍摄点位采集的点云数据和/或全景图像获取的;获取在第一目标全景图像中识别的第一目标介质,以使得根据所述第一目标介质获取所述第一目标介质在所述第M空间轮廓中的第一映射介质,以用于根据所述第一映射介质编辑所述第M空间轮廓以获取所述第M空间的户型结构图;所述第一目标全景图像是第M空间的所述至少一个拍摄点位采集的全景图像中,用于识别所述第一目标介质的全景图像,所述第一目标介质为所述第M空间中的实体介质在所述第一目标全景图像中的图像;
第二获取模块，用于获取第M+1空间轮廓进行显示以用于编辑；其中，所述第M+1空间轮廓是所述N个空间中第M+1空间的空间轮廓，所述第M+1空间为所述第M空间的相邻空间，所述第M+1空间轮廓是根据所述第M+1空间的至少一个拍摄点采集的点云数据和/或全景图像获取的；获取在第二目标全景图像中识别的第二目标介质，以使得根据所述第二目标介质获取所述第二目标介质在所述第M+1空间轮廓中的第二映射介质，以用于根据所述第二映射介质编辑所述第M+1空间轮廓以获取所述第M+1空间的户型结构图；所述第二目标全景图像是第M+1空间的所述至少一个拍摄点位采集的全景图像中，用于识别所述第二目标介质的全景图像，所述第二目标介质为所述第M+1空间中的实体介质在所述第二目标全景图像中的图像；
处理模块,用于将所述第M+1空间的户型结构图与所述第M空间的户型结构图进行拼接,并判断所述第M+1空间是否是所述N个空间中最后一个生成户型结构图的空间;若否,则将所述第M空间和所述第M+1空间合并作为第M空间,并返回执行第二获取模块;若是,则将所述拼接结果作为所述目标物理空间的户型图,以用于展示,流程结束。
第十五方面,本发明实施例提供一种电子设备,包括:存储器、处理器、通信接口;其中,所述存储器上存储有可执行代码,当所述可执行代码被所述处理器执行时,使所述处理器执行如第五方面、第七方面、第九方面、第十一方面和/或第十三方面所述的户型图生成方法。
第十六方面，本发明实施例提供了一种非暂时性机器可读存储介质，所述非暂时性机器可读存储介质上存储有可执行代码，当所述可执行代码被电子设备的处理器执行时，使所述处理器至少可以实现如第五方面、第七方面、第九方面、第十一方面和/或第十三方面所述的户型图生成方法。
本方案中,户型结构图中空间轮廓上的映射介质是基于全景图像确定的,由于相较于点云数据而言,全景图像更能反映实际空间中的门体和窗体等(即目标介质)的实际位置,因此,基于全景图像的辅助,各空间的户型结构图中标识有较为准确的门体和窗体信息,能够更好的反映实际空间中的场景信息,进而根据多个户型结构图确定的户型图也能够准确反映目标物理空间的实际空间结构。
附图说明
此处所说明的附图用来提供对本发明的进一步理解,构成本发明的一部分,本发明的示意性实施例及其说明用于解释本发明,并不构成对本发明的不当限定。在附图中:
图1为本发明第一实施例提供的一种空间结构图生成系统的示意图;
图2为本发明第一实施例提供的一种空间结构图生成方法的流程图;
图3为本发明第一实施例提供的一种点云图像的示意图;
图4为本发明第一实施例提供的一种空间结构图的示意图;
图5为本发明第一实施例提供的另一种空间结构图生成方法的流程图;
图6为本发明第一实施例提供的一种空间结构图生成装置的结构示意图;
图7为本发明第二实施例提供的一种户型图生成系统的示意图;
图8为本发明第二实施例提供的一种户型图生成方法的流程图;
图9为本发明第二实施例提供的一种目标物理空间的户型图的示意图;
图10为本发明第二实施例提供的另一种户型图生成方法的交互流程图;
图11为本发明第二实施例提供的一种户型图生成装置的结构示意图;
图12为本发明第三实施例提供的一种户型图生成方法的流程图;
图13为本发明第三实施例提供的一种空间轮廓的示意图;
图14为本发明第三实施例提供的一种户型图生成装置的结构示意图;
图15为本发明第四实施例提供的一种户型图生成方法的流程图;
图16为本发明第四实施例提供的一种生成户型图的场景示意图;
图17为本发明第四实施例提供的第M个空间轮廓的示意图;
图18为本发明第四实施例提供的一种户型图生成过程的示意图;
图19为本发明第四实施例提供的一种户型图生成装置的结构示意图;
图20为本发明第五实施例提供的一种户型图生成方法的流程图;
图21为本发明第五实施例提供的目标空间轮廓的示意图;
图22为本发明第五实施例提供的一种目标物理空间的实际空间结构的示意图;
图23为本发明第五实施例提供的一种户型结构图的示意图;
图24为本发明第五实施例提供的另一种户型结构图的示意图;
图25为本发明第五实施例提供的一种户型图生成装置的结构示意图;
图26为本发明第六实施例提供的一种户型图生成方法的流程图;
图27为本发明第六实施例提供的一种生成户型图的场景示意图;
图28为本发明第六实施例提供的空间Z的空间轮廓的示意图;
图29为本发明第六实施例提供的一种户型结构图的示意图;
图30为本发明第六实施例提供的一种拼接后的户型结构图的示意图;
图31为本发明第六实施例提供的另一种拼接后的户型结构图的示意图;
图32为本发明第六实施例提供的一种户型图生成装置的结构示意图;
图33为本发明实施例提供的一种电子设备的结构示意图。
具体实施方式
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
除非另有定义,本文所使用的所有的技术和科学术语与属于本发明的技术领域的技术人员通常理解的含义相同。本文中在本发明的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本发明。
以下通过多个实施例说明本发明提供的户型结构图生成方法和户型图生成方法。
第一实施例
一般地,一个空间中往往包含有至少一个子空间,比如:某一用于居住的建筑空间内包括一个客厅,一个厨房和两个卧室,其中,客厅、厨房和卧室均可以认为是一个子空间,也即单位空间。通常为了方便用户快速了解某一空间的空间结构,会预先生成该空间的二维平面结构图,也即户型结构图。
可以理解的是，针对包含有至少一个子空间的某一空间，其对应的户型结构图实际上是由至少一个子空间分别对应的空间结构图拼合得到的。其中，任一子空间的空间结构图的准确性都会对最终生成的户型结构图的准确性产生影响。
为此,本发明实施例提供一种空间结构图生成方法,用于生成准确的目标空间的空间结构图。需要说明的是,本发明实施例中的目标空间指的是单位空间,也即上述任一子空间。
图1为本发明第一实施例提供的一种空间结构图生成系统的示意图,如图1所示,该空间结构图生成系统包括:信息采集终端、控制终端。其中,可选地,信息采集终端可直接集成于控制终端,与控制终端作为一个整体;信息采集终端也可以与控制终端相互解耦,分离设置,通过例如蓝牙、无线保真(Wireless Fidelity,简称WiFi)热点等方式与控制终端通信连接。
信息采集终端包括:激光传感器、相机、电机和处理器(比如CPU)。其中,激光传感器和相机作为感知设备,用于采集目标空间的场景信息,即目标空间的点云数据和图像数据。
实际应用中,目标空间对应有至少一个拍摄点位,针对目标拍摄点位,信息采集终端响应于信息采集指令,通过驱动电机带动激光传感器360度旋转,以采集目标拍摄点位对应的点云数据;通过驱动电机带动相机360度旋转,以在多个预设角度拍摄目标点位对应的图像。上述处理器可通过例如特征匹配算法等全景图像拼接算法,将在多个预设角度拍摄的图像缝合为全景图像。其中,目标拍摄点位为至少一个拍摄点位中的任一拍摄点位。
可选地,信息采集指令由控制终端发送,或者,响应于用户在信息采集终端上的触发操作,触发信息采集指令。
本实施例中,在某一拍摄点位上,对点云数据和图像数据的采集顺序不做限制,两者可同时进行采集,也可按先后顺序依次进行采集。在一可选实施例中,若先采集点云数据,再采集图像数据,为了提高采集的图像质量,可选地,在采集点云数据的过程中,可同步开启相机,以收集当前拍摄点位的场景光照信息进行测光,确定对应的曝光参数。之后,相机基于确定的曝光参数,采集图像数据。
可选地,在生成全景图像时,可结合高动态范围成像(High Dynamic Range Imaging,简称HDRI),生成高质量的全景图像。
可选地,多个预设角度可由用户根据相机的视角进行自定义设置,基于多个预设角度拍摄的图像包含有当前点位360度范围内的场景信息。
可选地，信息采集终端还包括惯性测量单元（Inertial Measurement Unit，简称IMU）。IMU用于对采集的点云数据和图像数据对应的位姿信息进行修正，减小由于环境或人为因素（比如：信息采集终端未水平放置等）导致的误差。
控制终端可以是智能手机、平板电脑、笔记本电脑等具有数据处理能力的终端设备。如图1所示,控制终端用于获取信息采集终端在目标空间中至少一个拍摄点位得到的点云数据和全景图像,以生成目标空间的空间结构图。
可选地,图1所示的空间结构图生成系统中还包括云端服务器,云端服务器可以为云端的物理服务器或虚拟服务器,控制终端通过接入基于通信标准的无线网络,如WiFi,2G、3G、4G/LTE、5G等移动通信网络与云端服务器通信连接。
在一可选实施例中,云端服务器可接收由控制终端转发的目标空间中至少一个拍摄点位的点云数据和全景图像,以生成目标空间的空间结构图,并将空间结构图反馈给控制终端,以用于显示。实际上,云端服务器生成目标空间的空间结构图的过程与控制终端生成空间结构图的处理过程相同,但由于云端服务器的计算能力更强,其生成目标空间的空间结构图的效率更高,能够进一步提升用户的使用体验。
在另一可选实施例中，云端服务器也可直接与信息采集终端通信连接，以直接获取信息采集终端在目标空间中至少一个拍摄点位得到的点云数据和全景图像，生成目标空间的空间结构图。
以下以控制终端生成目标空间的空间结构图为例进行说明。
图2为本发明第一实施例提供的一种空间结构图生成方法的流程图,该空间结构图生成方法应用于图1所示的空间结构图生成系统中的控制终端。如图2所示,该空间结构图生成方法包括如下步骤:
201、获取信息采集终端在目标空间中至少一个拍摄点位得到的点云数据和全景图像。
202、根据至少一个拍摄点位的点云数据,获取目标空间的空间轮廓。
203、获取在目标全景图像中识别的目标介质,目标介质为目标空间中的实体介质在目标全景图像中的图像,目标全景图像是至少一个拍摄点位的全景图像中,用于识别目标介质的全景图像。
204、在空间轮廓中确定目标介质对应的映射介质,以获取目标空间的空间结构图。
为了便于理解,本实施例以目标空间为建筑空间为例进行说明,但不以此为限。实际上目标空间也可以是某一空间结构、容器或交通工具等。
本实施例中,控制终端和信息采集终端建立通信连接之后,信息采集终端响应于信息采集指令,依次在目标空间的至少一个拍摄点位上获取每个拍摄点位对应的点云数据和全景图像,并将采集到的至少一个拍摄点位的点云数据和全景图像发送给控制终端。
其中,至少一个拍摄点位可以是用户根据建模需要,自行在目标空间中选定的;也可以是控制终端基于用户输入的目标空间的描述信息,为目标空间生成的参考拍摄点位。
在具体实施过程中,至少一个拍摄点位中任一拍摄点位对应的点云数据和全景图像的获取过程是一致的。以拍摄点位X为例,假设信息采集终端当前所处的拍摄点位为拍摄点位X,信息采集终端响应于信息采集指令,获取拍摄点位X对应的点云数据和全景图像。其中,信息采集终端获取拍摄点位X的点云数据和全景图像的具体过程,可参考前述实施例,在此不再赘述。
信息采集终端在向控制终端发送获取到的点云数据和全景图像时，可选地，信息采集终端可以每获取一个拍摄点位的点云数据和全景图像，就向控制终端反馈获取到的点云数据和全景图像；信息采集终端也可以在获取目标空间中全部拍摄点位的点云数据和全景图像之后，将全部拍摄点位的点云数据和全景图像统一发送给控制终端。
需要说明的是,若信息采集终端集成于控制终端,则控制终端与信息采集终端同步获取目标空间的点云数据和全景图像。
之后,控制终端基于获取到的目标空间中至少一个拍摄点位的点云数据,获取目标空间的空间轮廓。
可以理解的是,由于至少一个拍摄点位的点云数据均是在目标空间中采集的,因此,基于目标空间中至少一个拍摄点位之间的相对位置关系,可以将至少一个拍摄点位的点云数据进行融合处理,以确定目标空间的目标点云数据。相较于单个拍摄点位的点云数据而言,融合处理后得到的目标点云数据包含的数据量更多,点云数据更加稠密,更能够反映目标空间的空间结构信息。进而,基于目标点云数据能够获取准确的目标空间的空间轮廓。
具体实施过程中，在根据目标点云数据确定目标空间的空间轮廓时，先将目标空间的目标点云数据映射到二维平面，以得到目标空间的点云图像；然后，再根据点云图像，确定目标空间的空间轮廓。作为一种可选的实现方式，控制终端可通过例如边缘检测算法，识别点云图像对应的空间轮廓。
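作为对上述边缘检测思路的一个极简示意（仅为说明用的草图，并非本发明限定的实现方式，栅格表示与函数名均为假设），可以把二维点云图像抽象为占据栅格，将与空白区域相邻的占据单元视为轮廓单元：

```python
def extract_contour(grid):
    # grid：二维占据栅格，1 表示该单元内有点云投影，0 表示空白。
    # 与空白单元（或栅格边界）相邻的占据单元视为轮廓单元，
    # 这是边缘检测最简化的一种形式。
    h, w = len(grid), len(grid[0])
    contour = set()
    for i in range(h):
        for j in range(w):
            if grid[i][j] != 1:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                # 邻居越界或为空白，则当前占据单元位于轮廓上
                if not (0 <= ni < h and 0 <= nj < w) or grid[ni][nj] == 0:
                    contour.add((i, j))
                    break
    return contour
```

实际工程中通常会采用更鲁棒的边缘检测算法（如基于梯度的方法），此处仅用于说明"从点云图像中识别轮廓"这一步骤的基本含义。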
可以理解的是,点云数据实际上是一系列的三维坐标点,任一三维坐标点可用三维笛卡尔坐标(x,y,z)表示,其中,x,y,z分别是拥有共同的零点且彼此相互正交的x轴,y轴,z轴的坐标值。
在将目标空间的目标点云数据映射到二维平面,以得到目标空间的点云图像时,在一可选实施例中,可将目标点云数据对应的三维坐标点(x,y,z)转换为二维坐标点(x,y),比如:将三维坐标点的z轴坐标值设置为0,进而,基于转换得到的二维坐标点得到目标空间的平面点云图像。在另一可选实施例中,也可先基于目标点云数据对应的三维坐标点(x,y,z)生成目标空间的三维空间结构图,然后获取三维空间结构图的俯视图,以作为目标空间的二维点云图像。
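上述将三维坐标点(x,y,z)转换为二维坐标点(x,y)的映射过程，可用如下示意代码表达（仅为说明性质的草图，函数名为假设）：

```python
def project_to_plane(points_3d):
    # 将三维点云投影到二维平面：舍弃 z 轴坐标值，
    # 等价于先将 z 置 0、再取俯视图
    return [(x, y) for x, y, z in points_3d]
```

基于投影得到的二维坐标点集合，即可渲染出目标空间的平面点云图像。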
为了获取到准确的目标空间的空间轮廓,作为一种可选地实现方式,在得到目标空间的点云图像之后,所述方法还包括:接收用户对点云图像的修正操作,根据修正操作后得到的目标点云图像,确定目标空间的空间轮廓。
其中,上述修正操作包括裁剪处理。实际应用中,目标空间中往往存在会对点云数据采集产生影响的物体,比如:玻璃,镜子等,这些物体会导致获取到的点云数据中存在一些干扰数据,这些干扰数据反映在点云图像上,具体表现为点云图像中的规则墙线之外仍存在部分图像(即干扰数据对应干扰图像),或者点云图像的墙线模糊不清。其中,点云图像中的墙线对应于目标空间中的墙体。
图3为本发明第一实施例提供的一种点云图像的示意图,实际应用中,可选地,用户可通过点云图像编辑界面上的编辑按钮,对点云图像进行修正操作,例如,如图3所示,通过添加墙线的方式,裁减掉墙线之外的干扰图像,以得到目标点云图像。其中,目标点云图像中墙线清晰,基于目标点云图像能够识别到较为准确的空间轮廓。可以理解的是,目标空间的空间轮廓由多条轮廓线构成,轮廓线与点云图像中的墙线相对应。
但是,实际应用中,空间轮廓中可能包括与点云图像中的墙线不对应的目标轮廓线。因此,在另一可选实施例中,可响应于用户在目标空间的空间轮廓上对目标轮廓线的编辑操作,调整目标轮廓线的形态和/或位置,以使目标轮廓线与点云图像中的墙线重合。比如,调整目标轮廓线l的长短和位置,使目标轮廓线l与点云图像中的墙线L重合,其中目标轮廓线l和墙线L对应于目标空间中的同一墙体。
可选地,控制终端预设有其他针对空间轮廓的轮廓线的修正选项,比如:添加轮廓线选项、删除轮廓线选项等。
在获取到目标空间的空间轮廓之后,需要在空间轮廓中标注目标空间中的实体介质,比如:门体和窗体。本实施例中,先基于全景图像识别目标介质,进而确定目标介质对应的映射介质。为了便于区分,将目标空间中的门体和窗体在全景图像中对应的图像称为目标介质,将门体和窗体在空间轮廓中对应的标识称为映射介质。可选地,可基于预设的图像识别算法,识别全景图像中的目标介质。
实际应用中,可能不止一个拍摄点位对应的全景图像中包含有目标介质。在一可选实施例中,为了加快控制终端对目标介质的识别效率,在控制终端识别全景图像中的目标介质之前,还可以从至少一个拍摄点位的全景图像中,确定出目标全景图像以用于识别目标介质。其中,目标全景图像为符合预设识别要求的全景图像,比如:视野最广、光线最佳的全景图像,或者包含有用户标记信息(比如:最佳全景图像)的全景图像。
其中，目标全景图像对应的拍摄点位可以与用于生成空间轮廓的点云数据对应的拍摄点位相同或不同。假设目标空间中包含有两个拍摄点位，分别为拍摄点位A和拍摄点位B，在拍摄点位A上获取了全景图像A1和点云数据A2，在拍摄点位B上获取了全景图像B1和点云数据B2。若基于点云数据A2生成了空间轮廓，则既可以确定全景图像A1为目标全景图像，也可以确定全景图像B1为目标全景图像。类似地，若基于点云数据B2生成了空间轮廓，则既可以确定全景图像A1为目标全景图像，也可以确定全景图像B1为目标全景图像。
可以理解的是,门体和窗体对应的有相应的尺寸信息,为了便于用户了解目标空间的空间结构,在目标空间的空间轮廓中添加的映射介质至少应该能够反映出目标空间中包含的门体和/或窗体的位置信息、大小信息和类型信息。
作为一种可选的实现方式，在空间轮廓中确定目标介质对应的映射介质，包括：
根据目标全景图像和空间轮廓之间的映射关系,获取目标介质在目标全景图像中对应的全景像素坐标,以及所映射的空间轮廓坐标,以在空间轮廓中确定目标介质对应的映射介质。
其中，映射介质与目标介质的目标标识以及目标显示尺寸相适配，目标标识用于区分属于不同类型的目标介质，比如：属于门体的目标介质或者属于窗体的目标介质对应于不同的目标标识。
上述目标全景图像与空间轮廓之间的映射关系,是根据点云数据和目标全景图像之间的坐标映射,所建立的目标全景图和空间轮廓之间的映射。
可以理解的是,激光传感器和相机之间的相对位姿在进行点云数据和全景图像采集之前,已预先进行标定。基于预先标定的相对位姿,以及实际拍摄点位之间的相对位置关系能够确定采集到的点云数据对应的三维点云坐标和全景图像的全景像素坐标之间的坐标映射。
在本发明实施例中,不限定对全景图像和点云数据坐标映射的具体方式,可选地,可以直接根据获取全景图像和点云数据的设备之间的相对位姿关系,将全景像素坐标映射为三维点云坐标,以及将三维点云坐标映射为全景像素坐标;也可以借助相对位姿关系和中间坐标系,先将全景像素坐标映射为中间坐标,再将中间坐标映射为三维点云坐标;以及先将三维点云坐标映射为中间坐标,再将中间坐标映射为全景像素坐标。在此,不限定中间坐标系的具体类型,也不限定在坐标映射过程中使用的具体方式,根据中间坐标系的不同,以及相对位姿关系的不同,所采用的映射方式也会不同。
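以等距柱状投影（equirectangular）的全景图为例，全景像素坐标与三维方向之间的一种常见互逆映射可示意如下（本发明并不限定具体映射方式，此处仅为假设性的实现草图）：

```python
import math

def pixel_to_direction(u, v, width, height):
    # 全景像素坐标 -> 单位方向向量：
    # 经度 lon 取 [-pi, pi)，纬度 lat 取 [-pi/2, pi/2]
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    return (x, y, z)

def direction_to_pixel(x, y, z, width, height):
    # 三维方向向量 -> 全景像素坐标（与上面的映射互逆）
    lon = math.atan2(y, x)
    lat = math.asin(z / math.sqrt(x * x + y * y + z * z))
    u = (lon + math.pi) / (2.0 * math.pi) * width
    v = (math.pi / 2.0 - lat) / math.pi * height
    return (u, v)
```

在已知激光传感器与相机之间预先标定的相对位姿后，三维点云坐标可先变换到相机坐标系，再经类似 direction_to_pixel 的映射得到全景像素坐标，反之亦然。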
图4为本发明第一实施例提供的一种空间结构图的示意图。假设目标空间Y的实际门体和窗体的分布情况如图4中的左图所示，包含有门体a，窗体b和窗体c。基于本发明实施例提供的空间结构图生成方法，生成的目标空间Y的空间结构图如图4中的右图所示。其中，在目标空间的空间轮廓j上标识有目标空间中的目标介质门体a，窗体b和窗体c对应的映射介质，且映射介质对应的位置、大小和类型与目标介质在目标空间中的实际情况匹配。
图5为本发明第一实施例提供的另一种空间结构图生成方法的流程图,如图5所示,该空间结构图生成方法包括如下步骤:
501、获取信息采集终端在目标空间中至少一个拍摄点位得到的点云数据和全景图像。
502、接收用户对接收到的至少一个拍摄点位的点云数据和全景图像的有效性确认操作,以根据至少一个拍摄点位的点云数据,获取目标空间的空间轮廓。
503、获取在目标全景图像中识别的目标介质,目标介质为目标空间中的实体介质在目标全景图像中的图像,目标全景图像是至少一个拍摄点位的全景图像中,用于识别目标介质的全景图像。
504、在空间轮廓中确定目标介质对应的映射介质,以获取目标空间的空间结构图。
其中,步骤501、503和504中的具体实施过程可参考前述实施例,本实施例中不再赘述。
本实施例中，对点云数据和全景图像的有效性进行确认包括：确认接收到的至少一个拍摄点位的全景图像是否符合拍摄要求，以及确认目前接收到的多个拍摄点位的点云数据能否完整表示目标空间。
在具体实施过程中,针对某一拍摄点位的全景图像,若全景图像中的门体或窗体被遮挡,或者全景图像存在拼接错误(比如:图像错位),则认为该拍摄点位的全景图像不符合拍摄要求,即该全景图像无效。
若某一全景图像无效,则使信息采集终端重新在该拍摄点位上获取点云数据和全景图像。若某一全景图像有效,则对该全景图像执行有效性确认操作。可选地,所述有效性确认操作为用户在控制终端界面上针对该全景图像触发的确认操作。
点云数据的有效性确认目的在于,确认接收到的至少一个拍摄点位的点云数据是否能够完整表示目标空间,即确认是否要增加新的拍摄点位进行点云数据的采集。
具体实施过程中，若用户确认目前接收到的多个拍摄点位的点云数据能够完整表示目标空间，即当前接收到的多个点云数据包含有目标空间的所有场景信息，则对点云数据执行有效性确认操作。若用户确认目前接收到的多个拍摄点位的点云数据不能完整表示目标空间，则增加新的拍摄点位，并在新的拍摄点位上获取点云数据，以弥补除目前接收到的至少一个拍摄点位的点云数据外，目标空间对应缺少的点云数据。
在一可选实施例中,为便于用户确认目前接收到的多个拍摄点位的点云数据能否完整表示目标空间,在获取当前拍摄点位的点云数据后,将目前接收到的至少一个拍摄点位的点云数据进行融合,并映射到二维平面,以得到至少一个拍摄点位的点云图像。之后,通过判断显示的点云图像是否与目标空间的实际空间结构一致,确定目前接收到的至少一个拍摄点位的点云数据是否能够完整表示目标空间。若一致,则能够完整表示目标空间;若不一致,则不能够完整表示目标空间。
上述实施例中针对至少一个拍摄点位的点云数据和全景图像的有效性确认操作,实际上是在确认用于生成目标空间的空间结构图的原始数据的正确性以及完整性,基于对原始数据的正确性和完整性的确认操作,能够生成准确的空间结构图。
以下将详细描述本发明的一个或多个实施例的空间结构图生成装置。本领域技术人员可以理解,这些装置均可使用市售的硬件组件通过本方案所教导的步骤进行配置来构成。
图6为本发明第一实施例提供的一种空间结构图生成装置的结构示意图,该装置应用于控制终端,如图6所示,该装置包括:获取模块11和处理模块12。
获取模块11,用于获取信息采集终端在目标空间中至少一个拍摄点位得到的点云数据和全景图像。
处理模块12,用于根据所述至少一个拍摄点位的点云数据,获取所述目标空间的空间轮廓;获取在目标全景图像中识别的目标介质,所述目标介质为所述目标空间中的实体介质在所述目标全景图像中的图像,所述目标全景图像是所述至少一个拍摄点位的全景图像中,用于识别所述目标介质的全景图像;在所述空间轮廓中确定所述目标介质对应的映射介质,以获取所述目标空间的空间结构图。
可选地,处理模块12,具体用于融合所述至少一个拍摄点位的点云数据,以确定所述目标空间的目标点云数据;将所述目标点云数据映射到二维平面,以得到所述目标空间的点云图像;根据所述点云图像,确定所述目标空间的空间轮廓。
可选地,获取模块11,还用于接收用户对所述点云图像的修正操作。
相应地,处理模块12,具体用于根据所述修正操作后得到的目标点云图像,确定所述目标空间的空间轮廓。
可选地,处理模块12,具体用于响应于在所述目标空间的空间轮廓上对所述目标轮廓线的编辑操作,调整所述目标轮廓线的形态和/或位置,以使所述目标轮廓线与所述点云图像中的墙线重合。
可选地，处理模块12，具体用于根据所述目标全景图像和所述空间轮廓之间的映射关系，获取所述目标介质在所述目标全景图像中对应的全景像素坐标，以及所映射的空间轮廓坐标，以在所述空间轮廓中确定所述目标介质对应的映射介质；其中，所述映射介质与所述目标介质的目标标识以及目标显示尺寸相适配，所述目标标识用于区分属于不同类型的目标介质，所述映射关系为根据所述点云数据和所述目标全景图像之间的坐标映射，所建立的所述目标全景图像和所述空间轮廓之间的映射。
图6所示装置可以执行前述实施例中的步骤,详细的执行过程和技术效果参见前述实施例中的描述,在此不再赘述。
第二实施例
图7为本发明第二实施例提供的一种户型图生成系统的示意图,如图7所示,该户型图生成系统包括:信息采集终端和目标控制终端。其中,可选地,信息采集终端可直接集成于目标控制终端,与目标控制终端作为一个整体;信息采集终端也可以与目标控制终端相互解耦,分离设置,信息采集终端通过例如蓝牙、无线保真(Wireless Fidelity,简称WiFi)热点等方式与目标控制终端通信连接。
可以理解的是,一个物理空间通常包含有多个子空间,以目标物理空间为建筑空间为例,一个建筑空间内可能包含有一个客厅,一个厨房和两个卧室,其中,客厅、厨房和卧室均可以认为是一个子空间。
在生成目标物理空间的户型图时,信息采集终端用于采集目标物理空间中的多个子空间分别对应的点云数据和全景图像。其中,任一子空间的点云数据和全景图像是在任一子空间中的至少一个拍摄点位采集的。比如:子空间X的点云数据和全景图像包括:在子空间X中的拍摄点位a上采集的点云数据a和全景图像a,以及在子空间X中的拍摄点位b上采集的点云数据b和全景图像b。其中,拍摄点位a和拍摄点位b可以是用户根据建模需要,自行在子空间X中选定的;也可以是目标控制终端基于用户输入的子空间X的描述信息(比如空间大小),为子空间X生成的参考拍摄点位。
实际上,在任一子空间中的任一拍摄点位上,信息采集终端对应的信息采集过程是相同的。为便于理解,以在子空间X中的目标拍摄点位Y上的点云数据和全景图像的采集过程为例,对信息采集终端的数据采集过程进行说明。
如图7所示,信息采集终端包括:激光传感器、相机、电机和处理器(比如CPU)。其中,激光传感器和相机作为感知设备,用于采集子空间X的场景信息,即子空间X的点云数据和全景图像。
针对目标拍摄点位Y，信息采集终端响应于信息采集指令，通过驱动电机带动激光传感器360度旋转，以采集目标拍摄点位Y对应的点云数据；通过驱动电机带动相机360度旋转，以采集目标拍摄点位Y对应的全景图像。
可选地,信息采集终端中的相机为全景相机或非全景相机。若信息采集终端中的相机为非全景相机,则在上述360度旋转过程中,控制该相机在多个预设角度拍摄目标拍摄点位Y对应的图像,上述处理器可通过例如特征匹配算法等全景图像拼接算法,将在多个预设角度拍摄的图像缝合为全景图像。
可选地,多个预设角度可由用户根据相机的视角进行自定义设置,基于多个预设角度拍摄的图像包含有当前点位360度范围内的场景信息,比如:若相机的视角为180度,则可以某一基准方向为0度,将基于该基准方向的a度和(a+180)度确定为预设角度。
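上述根据相机视角确定预设拍摄角度的规则，可用如下示意函数表达（函数名与取整策略均为说明用的假设）：

```python
import math

def preset_angles(fov_deg, base_deg=0.0):
    # 根据相机水平视角 fov_deg 计算覆盖 360 度所需的预设拍摄角度：
    # 以 base_deg 为基准方向（0 度），按所需张数等间隔取角度
    count = math.ceil(360.0 / fov_deg)
    step = 360.0 / count
    return [(base_deg + i * step) % 360.0 for i in range(count)]
```

例如视角为180度时，得到基准方向的0度和180度两个预设角度，与正文中的例子一致。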
可选地,信息采集指令由目标控制终端发送,或者,响应于用户在信息采集终端上的触发操作,触发信息采集指令。
在目标拍摄点位Y上,点云数据和全景图像可同时进行采集,也可按先后顺序依次进行采集,本实施例对此不作限制。
在一可选实施例中,若先采集点云数据,再采集全景图像,为了提高采集的全景图像质量,可选地,在采集点云数据的过程中,可同步开启相机,以收集当前拍摄点位的场景光照信息进行测光,确定对应的曝光参数。之后,相机基于确定的曝光参数,采集全景图像。
可选地,在生成全景图像时,可结合高动态范围成像(High Dynamic Range Imaging,简称HDRI),生成高质量的全景图像。
可选地，信息采集终端还包括惯性测量单元（Inertial Measurement Unit，简称IMU）。IMU用于对采集的点云数据和图像数据对应的位姿信息进行修正，减小由于环境或人为因素（比如：信息采集终端未水平放置等）导致的误差。
可选地,图7所示的户型图生成系统中还包括云端服务器,云端服务器可以为云端的物理服务器或虚拟服务器,目标控制终端通过接入基于通信标准的无线网络,如WiFi,2G、3G、4G/LTE、5G等移动通信网络与云端服务器通信连接。
在一可选实施例中，云端服务器可接收由目标控制终端转发的多个子空间分别对应的点云数据和全景图像，以生成目标物理空间的户型图，并将户型图反馈给控制终端，以用于显示。实际上，云端服务器生成目标物理空间的户型图的过程与目标控制终端生成空间结构图的处理过程相同，但由于云端服务器的计算能力更强，其生成目标物理空间的户型图的效率更高，能够进一步提升用户的使用体验。
在另一可选实施例中，云端服务器也可直接与信息采集终端通信连接，以直接获取信息采集终端采集的多个子空间分别对应的点云数据和全景图像，以生成目标物理空间的户型图。
本实施例中,以目标控制终端为例,说明基于信息采集终端得到的目标物理空间中多个子空间分别对应的点云数据和全景图像,生成目标物理空间的户型图的过程。其中,户型图可以理解为目标物理空间的二维平面结构图。
目标控制终端可以是智能手机、平板电脑、笔记本电脑等具有数据处理能力的终端设备。以下结合具体实施例,从目标控制终端的角度,对目标物理空间的户型图生成过程进行说明。
图8为本发明第二实施例提供的一种户型图生成方法的流程图,应用于目标控制终端,如图8所示,该户型图生成方法包括如下步骤:
801、获取信息采集终端得到的目标物理空间中多个子空间分别对应的点云数据和全景图像,以确定多个子空间对应的多个空间轮廓。
其中,多个子空间与多个空间轮廓一一对应,任一子空间的点云数据和全景图像是在任一子空间中的至少一个拍摄点位采集的。
802、显示多个子空间对应的多个空间轮廓以用于编辑。
803、针对多个子空间中的目标子空间,获取在目标全景图像中识别的目标介质,目标介质为目标子空间中的实体介质在目标全景图像中的图像,目标全景图像是目标子空间的至少一个拍摄点位采集的全景图像中,用于识别目标介质的全景图像。
804、在目标子空间的空间轮廓中确定目标介质对应的映射介质,以生成目标子空间的空间结构图。
805、响应于对多个子空间对应的多个空间结构图的获取完成操作,获取多个空间结构图拼接得到的目标物理空间的户型图。
本实施例中,目标物理空间可以是建筑空间、复杂连通结构或容器、交通工具等。为便于理解,本实施例以目标物理空间为建筑空间(比如:某一办公区域,或某一居住房屋的室内区域等)为例进行说明,但不以此为限。
步骤801中,若信息采集终端集成于目标控制终端,则目标控制终端可直接同步获取信息采集终端得到的多个子空间的点云数据和全景图像;若信息采集终端通过通信链路与目标控制终端通信连接,则目标控制终端基于预先建立的通信链路,接收信息采集终端发送的多个子空间的点云数据和全景图像。其中,信息采集终端获取目标物理空间中多个子空间分别对应的点云数据和全景图像的过程可参考前述实施例,本实施例中主要对目标控制终端获取多个子空间分别对应的点云数据和全景图像后的处理过程进行说明。
首先,根据获取到的多个子空间分别对应的点云数据和全景图像,确定多个子空间分别对应的空间轮廓。
可选地,针对多个子空间中的任一个子空间(称为目标子空间),根据目标子空间的点云数据和全景图像,确定目标子空间的空间轮廓,包括:根据目标子空间中至少一个拍摄点位的点云数据,和/或,至少一个拍摄点位的全景图像,确定目标子空间的空间轮廓。
在具体实施过程中，可根据至少一个拍摄点位的点云数据，获取第一空间轮廓，可以将第一空间轮廓直接作为目标子空间的空间轮廓；或者，根据至少一个拍摄点位的全景图像，获取第二空间轮廓，可以将第二空间轮廓直接作为目标子空间的空间轮廓；或者，在上述第一空间轮廓和上述第二空间轮廓中选择轮廓线质量较好的空间轮廓作为目标子空间的空间轮廓，也可以对上述第一空间轮廓和上述第二空间轮廓的轮廓线做融合处理，得到轮廓线更优质的空间轮廓，可以直接将融合处理后的空间轮廓作为目标子空间的空间轮廓。
可选地,根据目标子空间中至少一个拍摄点位的点云数据,确定目标子空间的空间轮廓,包括:
基于目标子空间中至少一个拍摄点位之间的相对位置关系,将至少一个拍摄点位的点云数据进行融合处理;确定融合处理后的点云数据为目标子空间的目标点云数据;将目标点云数据映射到二维平面,以得到目标子空间的初始点云图像;接收用户对初始点云图像的修正操作,确定修正操作后得到的点云图像为所述目标子空间的点云图像,通过例如边缘检测算法,确定目标子空间的空间轮廓。
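上述基于拍摄点位相对位置关系的点云融合步骤，可用如下平移合并的示意代码说明（仅考虑平移、忽略旋转的简化草图，数据结构为说明用的假设）：

```python
def fuse_point_clouds(clouds):
    # clouds：[(拍摄点位在统一坐标系下的位置 (ox, oy, oz),
    #          该点位采集的局部点云 [(x, y, z), ...]), ...]
    # 按拍摄点位之间的相对位置关系，将各点位的局部点云
    # 平移到同一坐标系后合并，得到更稠密的目标点云数据
    fused = []
    for (ox, oy, oz), points in clouds:
        for x, y, z in points:
            fused.append((x + ox, y + oy, z + oz))
    return fused
```

实际系统中通常还需结合 IMU 修正后的位姿对各点位点云做旋转配准，此处仅示意"融合为同一坐标系下的稠密点云"这一含义。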
可以理解的是,点云数据实际上是一系列的三维坐标点,任一三维坐标点可用三维笛卡尔坐标(x,y,z)表示,其中,x,y,z分别是拥有共同的零点且彼此相互正交的x轴,y轴,z轴的坐标值。
在一可选实施例中,将目标点云数据映射到二维平面,以得到目标子空间的初始点云图像,包括:将目标点云数据对应的三维坐标点(x,y,z)转换为二维坐标点(x,y),比如:将三维坐标点的z轴坐标值设置为0,进而,基于转换得到的二维坐标点得到目标子空间的初始点云图像,该初始点云图像为二维图像。或者,先基于目标点云数据对应的三维坐标点(x,y,z)生成目标子空间的三维空间结构图,然后获取三维空间结构图的俯视图,以作为目标子空间的初始点云图像。
其中，上述对初始点云图像的修正操作包括裁剪处理。实际应用中，目标子空间中往往存在会对点云数据采集产生影响的物体，比如：玻璃，镜子等，这些物体会导致获取到的点云数据中存在一些干扰数据，这些干扰数据反映在点云图像上，具体表现为点云图像中的规则墙线之外仍存在部分图像（即干扰数据对应干扰图像），或者点云图像的墙线模糊不清。其中，点云图像中的墙线对应于目标空间中的墙体。
参照第一实施例中的图3,实际应用中,可选地,用户可通过点云图像编辑界面上的编辑按钮,对点云图像进行修正操作。例如,如图3所示,通过添加墙线的方式,裁减掉墙线之外的干扰图像,以得到目标点云图像。其中,目标点云图像中墙线清晰,基于目标点云图像能够识别到较为准确的空间轮廓。
在确定多个子空间分别对应的空间轮廓后,可将多个空间轮廓同时显示以对多个空间轮廓同时进行编辑,并行生成多个子空间分别对应的空间结构图;或者,依次显示多个空间轮廓以对多个空间轮廓进行逐个编辑,逐个生成多个子空间分别对应的空间结构图。
可以理解的是,无论是并行生成多个子空间分别对应的空间结构图,还是串行生成多个子空间分别对应的空间结构图,多个子空间中的任一个子空间(称为目标子空间)的空间结构图的生成过程是相同的。因此,下文以目标子空间的空间结构图的生成过程为例,进行说明。
目标子空间的空间轮廓由多条轮廓线构成,实际应用中,自动识别的空间轮廓中可能存在与实际墙体位置不符的目标轮廓线。可选地,目标控制终端预设有针对空间轮廓的轮廓线的修正选项,比如:调整轮廓线的形态和/或位置选项、添加轮廓线选项、删除轮廓线选项等。
为保证空间轮廓能够准确反映目标子空间的空间结构,在显示目标子空间的空间轮廓时,具体地,在目标子空间的点云图像上显示空间轮廓。其中,点云图像是根据目标子空间的至少一个拍摄点位的点云数据确定的,其中,点云图像中包含有墙线,墙线对应于目标子空间中的墙体。
针对空间轮廓中与实际墙体位置不符的目标轮廓线,可选地,响应于用户在目标子空间的空间轮廓上对目标轮廓线的编辑操作,调整目标轮廓线的形态和/或位置,以使目标轮廓线与点云图像中的墙线重合。比如,调整目标轮廓线l的长短和位置,使目标轮廓线l与点云图像中的墙线L重合,其中目标轮廓线l和墙线L对应于目标空间中的同一墙体。或者,删除不存在对应墙线的目标轮廓线;或者,添加某一墙线对应的目标轮廓线。
在编辑完目标子空间的空间轮廓之后,需要在得到的空间轮廓中标注目标子空间中的实体介质,即门体和窗体。具体地,先基于全景图像识别目标介质,进而确定目标介质对应的映射介质。为了便于区分,将目标空间中的门体和窗体在全景图像中对应的图像称为目标介质,将门体和窗体在空间轮廓中对应的标识称为映射介质。可选地,可基于预设的图像识别算法,识别全景图像中的目标介质。
实际应用中,可能不止一个拍摄点位对应的全景图像中包含有目标介质。为了加快目标控制终端对目标介质的识别效率,可选地,在目标控制终端识别全景图像中的目标介质之前,还可以从至少一个拍摄点位的全景图像中,确定出目标全景图像以用于识别目标介质。
其中,目标全景图像为符合预设识别要求的全景图像,比如:视野最广、光线最佳的全景图像,或者包含有用户标记信息(比如:最佳全景图像)的全景图像。
其中,针对目标子空间,目标全景图像对应的拍摄点位可以与用于生成空间轮廓的点云数据对应的拍摄点位相同或不同。假设目标空间中包含有两个拍摄点位,分别为拍摄点位A和拍摄点位B,在拍摄点位A上获取了全景图像A1和点云数据A2,在拍摄点位B上获取了全景图像B1和点云数据B2。若基于点云数据A2生成了空间轮廓,则既可以确定全景图像A1为目标全景图像,也可以确定全景图像B1为目标全景图像。类似地,若基于点云数据B2生成了空间轮廓,则既可以确定全景图像A1为目标全景图像,也可以确定全景图像B1为目标全景图像。
可以理解的是,门体和窗体对应的有相应的尺寸信息,为了便于用户了解目标子空间的空间结构,在目标子空间的空间轮廓中添加的映射介质至少应该能够反映出目标子空间中包含的门体和/或窗体的位置信息、大小信息和类型信息。
作为一种可选的实现方式，在目标子空间的空间轮廓中确定目标介质对应的映射介质，包括：
根据目标子空间的目标全景图像和空间轮廓之间的映射关系,获取目标介质在目标全景图像中对应的全景像素坐标以及所映射的空间轮廓坐标,以在目标子空间的空间轮廓中确定目标介质对应的映射介质。
其中，映射介质与目标介质的目标标识以及目标显示尺寸相适配，目标标识用于区分属于不同类型的目标介质，比如：属于门体的目标介质或者属于窗体的目标介质对应于不同的目标标识。
上述目标全景图像与空间轮廓之间的映射关系,是根据目标子空间的点云数据和目标全景图像之间的坐标映射,所建立的目标全景图像和空间轮廓之间的映射。
可以理解的是,激光传感器和相机之间的相对位姿在进行点云数据和全景图像采集之前,已预先标定。基于预先标定的相对位姿,以及目标子空间中实际拍摄点位之间的相对位置关系能够确定采集到的点云数据对应的三维点云坐标和全景图像的全景像素坐标之间的坐标映射。
在本发明实施例中，不限定对全景图像和点云数据坐标映射的具体方式，可选地，可以直接根据获取全景图像和点云数据的设备之间的相对位姿关系，将全景像素坐标映射为三维点云坐标，以及将三维点云坐标映射为全景像素坐标；也可以借助相对位姿关系和中间坐标系，先将全景像素坐标映射为中间坐标，再将中间坐标映射为三维点云坐标；以及先将三维点云坐标映射为中间坐标，再将中间坐标映射为全景像素坐标。在此，不限定中间坐标系的具体类型，也不限定在坐标映射过程中使用的具体方式，根据中间坐标系的不同，以及相对位姿关系的不同，所采用的映射方式也会不同。
基于上述方法,可获取多个子空间分别对应的空间结构图。
响应于对多个子空间对应的多个空间结构图的获取完成操作,获取多个空间结构图拼接得到的目标物理空间的户型图。
可以理解的是,目标物理空间中的多个子空间相互之间通过门体或窗体连通。比如子空间1和子空间2通过同一目标门体连通,由于拍摄全景图像时,目标门体处于打开状态,基于此,子空间1的全景图像1中可能包含有子空间2的部分图像m。从空间上来说,图像m对应的区域在子空间1之外,但处于拍摄全景图像1时相机的视野范围之内。因此,实际应用中,根据子空间1的全景图像1和子空间2的全景图像2,可以通过例如特征匹配等方式,确定子空间1和子空间2之间的空间连接关系;之后,基于该空间连接关系,可将子空间1的空间结构图与子空间2的空间结构图进行拼接。类似地,对其他子空间的空间结构图也进行拼接,拼接完成后的图像即为目标物理空间的户型图。
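上述基于空间连接关系的拼接，在最简化的情形下（两个子空间通过同一目标门体连通，且只考虑平移对齐、忽略旋转）可示意如下（函数与输入形式均为说明用的假设）：

```python
def stitch_by_shared_door(contour_b, door_in_a, door_in_b):
    # contour_b：子空间2的空间结构图轮廓点 [(x, y), ...]
    # door_in_a / door_in_b：同一目标门体分别在子空间1、
    # 子空间2局部坐标系下的二维位置
    # 求出使两处门体位置重合的平移量，将子空间2的轮廓
    # 平移到子空间1的坐标系下，即完成一次简化的拼接
    dx = door_in_a[0] - door_in_b[0]
    dy = door_in_a[1] - door_in_b[1]
    return [(x + dx, y + dy) for x, y in contour_b]
```

实际拼接中，共享门体的位置通常由特征匹配等方式从两个子空间的全景图像或点云数据中估计得到，且一般还需考虑旋转分量。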
可选地,还可根据子空间分别对应的点云数据,确定子空间之间的空间连接关系。
为便于理解,举例来说,图9为本发明第二实施例提供的一种目标物理空间的户型图的示意图,其中,图9中的a为目标物理空间的实际空间结构,图9中的b为目标物理空间中多个子空间的空间结构图,图9中的c为目标物理空间的户型图。
如图9中的a所示,假设目标物理空间中包含有3个子空间,分别为卧室1、卧室2和客厅3,其中卧室1和客厅3通过门体1连通,卧室2和客厅3通过门体2连通,卧室1和卧室2通过窗体3连通。
基于本发明实施例提供的方法,生成的卧室1的空间结构图x、卧室2的空间结构图y和客厅3的空间结构图z,如图9中的b所示,各空间结构图中包含有空间轮廓,空间轮廓上标识有表示门体和窗体的映射介质。将空间结构图x、空间结构图y和空间结构图z进行拼接得到的目标物理空间的户型图,如图9中的c所示。
本方案中，针对具有多个子空间的目标物理空间，在生成目标物理空间的户型图时，先生成每个子空间的空间结构图，然后再将多个子空间对应的多个空间结构图进行拼接，以得到目标物理空间的户型图。其中，目标子空间的空间结构图包含有该子空间的空间轮廓以及该子空间中的窗体和门体（即目标介质）对应的映射介质。基于对空间轮廓的编辑，能够获得目标子空间对应的准确的空间轮廓；基于全景图像确定目标子空间中目标介质对应的映射介质在空间轮廓中的位置，能够在空间轮廓上的正确位置标识目标介质对应的映射介质，从而确定的空间结构图能够准确反映目标子空间的实际空间结构信息，进而根据多个空间结构图确定的户型图也能够准确反映目标物理空间的实际空间结构。
为提高目标物理空间的户型图的生成效率,还可同时通过多个控制终端对多个子空间的空间轮廓分别进行编辑,并生成相应的空间结构图。通过多个控制终端的相互协作,减少生成目标物理空间的户型图所需的时间。
图10为本发明第二实施例提供的另一种户型图生成方法的交互流程图,该户型图生成方法包括如下步骤:
1001、目标控制终端获取信息采集终端得到的目标物理空间中多个子空间分别对应的点云数据和全景图像,以确定多个子空间对应的多个空间轮廓。
其中,多个子空间与多个空间轮廓一一对应,任一子空间的点云数据和全景图像是在任一子空间中的至少一个拍摄点位采集的。
1002、目标控制终端显示多个子空间对应的多个空间轮廓以用于与其他控制终端同步编辑(1002-1),并将多个子空间对应的多个空间轮廓发送给其他控制终端(1002-2),以在其他控制终端上显示多个子空间对应的多个空间轮廓以用于与目标控制终端同步编辑(1002-3)。
1003、目标控制终端和其他控制终端针对多个子空间中的目标子空间,获取在目标全景图像中识别的目标介质,目标介质为目标子空间中的实体介质在目标全景图像中的图像;在目标子空间的空间轮廓中确定目标介质对应的映射介质,以生成目标子空间的空间结构图。
其中,目标全景图像是目标子空间的至少一个拍摄点位采集的全景图像中,用于识别目标介质的全景图像。
1004、目标控制终端获取其他控制终端获取的子空间的空间结构图。
1005、目标控制终端响应于对多个子空间对应的多个空间结构图的获取完成操作,获取多个空间结构图拼接得到的目标物理空间的户型图。
本实施例中，目标控制终端和其他控制终端可分别对不同的子空间的空间轮廓进行编辑，并生成相应子空间的空间结构图。比如：目标控制终端用于对子空间1的空间轮廓进行编辑，并生成子空间1的空间结构图；其他控制终端用于对子空间2的空间轮廓进行编辑，并生成子空间2的空间结构图。其中，各设备进行空间轮廓编辑和生成空间结构图的过程相同，可参考前述实施例，本实施例中不进行赘述。需要说明的是，其他控制终端同步有目标物理空间中多个子空间分别对应的点云数据和全景图像。
为了便于区分不同设备当前编辑的空间轮廓,可选地,目标控制终端和其他控制终端在显示多个空间轮廓时,还包括:显示多个空间轮廓分别对应的终端设备标识。其中,终端设备标识用于指示当前编辑各空间轮廓的终端设备,终端设备包括目标控制终端和其他控制终端。
目标控制终端与其他控制终端通信连接,可选地,在生成目标物理空间的户型图的过程中,目标控制终端可根据其他控制终端回传的编辑数据,同步更新显示界面上的多个空间轮廓。
可选地,其他控制终端可在生成空间结构图后,主动将空间结构图反馈给目标控制终端;或者,响应于目标控制终端的空间结构图获取指令,发送生成的空间结构图给目标控制终端。若目标控制终端获取到所有子空间对应的空间结构图,则对获取到的空间结构图进行拼接,以得到的目标物理空间的户型图。实际应用中,其他控制终端的数量可以不止一个,比如可以与目标物理空间的子空间的数量相匹配,本实施例中不对其他控制终端的数量进行限制。
以下将详细描述本发明的一个或多个实施例的户型图生成装置。本领域技术人员可以理解,这些装置均可使用市售的硬件组件通过本方案所教导的步骤进行配置来构成。
图11为本发明第二实施例提供的一种户型图生成装置的结构示意图,该装置应用于目标控制终端,如图11所示,该装置包括:获取模块21、显示模块22和处理模块23。
获取模块21,用于获取信息采集终端得到的目标物理空间中多个子空间分别对应的点云数据和全景图像,以确定所述多个子空间对应的多个空间轮廓;其中,所述多个子空间与所述多个空间轮廓一一对应,任一子空间的点云数据和全景图像是在所述任一子空间中的至少一个拍摄点位采集的。
显示模块22,用于显示所述多个子空间对应的多个空间轮廓以用于编辑。
处理模块23,用于针对多个子空间中的目标子空间,获取在目标全景图像中识别的目标介质,所述目标介质为所述目标子空间中的实体介质在所述目标全景图像中的图像,所述目标全景图像是所述目标子空间的至少一个拍摄点位采集的全景图像中,用于识别所述目标介质的全景图像;在所述目标子空间的空间轮廓中确定所述目标介质对应的映射介质,以生成所述目标子空间的空间结构图;响应于对所述多个子空间对应的多个空间结构图的获取完成操作,获取所述多个空间结构图拼接得到的所述目标物理空间的户型图。
可选地，所述装置还包括发送模块，用于将所述多个子空间对应的多个空间轮廓发送给其他控制终端，以在所述其他控制终端上显示所述多个子空间对应的多个空间轮廓以用于与所述目标控制终端同步编辑。
对应地,所述获取模块21,还用于获取所述其他控制终端获取的所述目标子空间的空间结构图。
可选地,所述显示模块22,还用于显示所述多个空间轮廓分别对应的终端设备标识,所述终端设备标识用于指示当前编辑各空间轮廓的终端设备,所述终端设备包括所述目标控制终端和所述其他控制终端。
可选地,所述处理模块23,具体用于根据所述目标子空间的至少一个拍摄点位的点云数据,获取第一空间轮廓;根据所述目标子空间的至少一个拍摄点位的全景图像,获取第二空间轮廓;根据所述第一空间轮廓和所述第二空间轮廓,确定所述目标子空间的空间轮廓。
可选地,所述显示模块22,具体用于响应于在所述目标子空间的空间轮廓上对目标轮廓线的编辑操作,调整所述目标轮廓线的形态和/或位置,以使所述目标轮廓线与点云图像中的墙线重合,所述点云图像是根据所述目标子空间的至少一个拍摄点位的点云数据确定的,所述目标子空间的空间轮廓由多条轮廓线构成。
可选地，所述处理模块23，还具体用于根据所述多个子空间分别对应的点云数据和/或全景图像，确定所述多个子空间之间的空间连接关系；根据所述空间连接关系，拼接所述多个空间结构图，以得到所述目标物理空间的户型图。
可选地，所述处理模块23，还具体用于根据所述目标子空间的目标全景图像和空间轮廓之间的映射关系，获取所述目标介质在所述目标全景图像中对应的全景像素坐标以及所映射的空间轮廓坐标，以在所述目标子空间的空间轮廓中确定所述目标介质对应的映射介质；其中，所述映射介质与所述目标介质的目标标识以及目标显示尺寸相适配，所述目标标识用于区分属于不同类型的目标介质，所述映射关系为根据所述目标子空间的点云数据和所述目标全景图像之间的坐标映射，所建立的所述目标全景图像和所述空间轮廓之间的映射。
图11所示装置可以执行前述实施例中的步骤,详细的执行过程和技术效果参见前述实施例中的描述,在此不再赘述。
第三实施例
本实施例中，目标物理空间的户型图，可以理解为目标物理空间的二维平面结构图。基于目标物理空间的户型图，用户能够获取到目标物理空间中各子空间的分布信息以及子空间相互之间的连接关系。以目标物理空间为居住空间为例，基于某一居住空间的户型图，用户能够了解该居住空间包含的客厅、卧室等子空间的位置，以及子空间中窗体或门体的朝向，进而了解该建筑空间的采光情况。
因此，实际应用中，为目标物理空间生成准确的户型图，有利于用户更好地了解该目标物理空间。以二手房买卖场景为例，准确的户型图能够更好地展示待出售的房屋结构，有利于买方更好地了解该待出售房屋，从而提高成交率。
参照第二实施例中的图7,如图7所示,该户型图生成系统包括:信息采集终端和控制终端(即图7中的目标控制终端)。其中,可选地,信息采集终端可直接集成于控制终端,与控制终端作为一个整体;信息采集终端也可以与控制终端相互解耦,分离设置,信息采集终端通过例如蓝牙、无线保真(Wireless Fidelity,简称WiFi)热点等方式与控制终端通信连接。
信息采集终端用于采集目标物理空间中的多个子空间分别对应的点云数据和全景图像。其中，任一子空间中可能包含有不止一个拍摄点位，因此，本实施例中，任一子空间的点云数据和全景图像是在任一子空间中的至少一个拍摄点位采集的。比如：子空间X的点云数据和全景图像包括：在子空间X中的拍摄点位a上采集的点云数据Xa和全景图像Xa，以及在子空间X中的拍摄点位b上采集的点云数据Xb和全景图像Xb。
可选地,拍摄点位可以是用户根据建模需要,自行在子空间中选定的;也可以是控制终端基于用户输入的子空间的描述信息(比如空间大小等),为子空间自动生成的参考拍摄点位。
实际应用中,信息采集终端依次在目标物理空间的多个拍摄点位上采集点云数据和全景图像,针对任一子空间中的任一拍摄点位,其对应的信息采集过程是相同的。为便于理解,以在子空间X中的目标拍摄点位Y上采集点云数据和全景图像的过程为例,对信息采集终端的数据采集过程进行说明。
如图7所示,信息采集终端包括:激光传感器、相机、电机和处理器(比如CPU)。其中,激光传感器和相机作为感知设备,用于采集子空间X的场景信息,即子空间X的点云数据和全景图像。
针对目标拍摄点位Y,信息采集终端响应于信息采集指令,驱动电机带动激光传感器360度旋转,以采集目标拍摄点位Y对应的点云数据;驱动电机带动相机360度旋转,以采集目标拍摄点位Y对应的全景图像。
可选地,信息采集指令由控制终端发送,或者,响应于用户在信息采集终端上的触发操作,触发信息采集指令。
可选地，在同一拍摄点位上，电机可同时带动激光传感器和相机旋转，以同时采集点云数据和全景图像，也可按先后顺序依次分别带动激光传感器和相机旋转，先后分别采集点云数据和全景图像，本实施例对此不作限制。
在一可选实施例中,若先采集点云数据,再采集全景图像,为了提高采集的全景图像质量,可选地,在采集点云数据的过程中,可同步开启相机,以收集当前拍摄点位的场景光照信息进行测光,确定对应的曝光参数。之后,相机基于确定的曝光参数,采集全景图像。
可选地,信息采集终端中的相机为全景相机或非全景相机。若信息采集终端中的相机为非全景相机,则在上述360度旋转过程中,控制该相机在多个预设角度拍摄目标拍摄点位Y对应的图像,上述处理器可通过例如特征匹配算法等全景图像拼接算法,将在多个预设角度拍摄的图像缝合为全景图像。
可选地,多个预设角度可由用户根据相机的视角进行自定义设置,基于多个预设角度拍摄的图像包含有当前点位360度范围内的场景信息,比如:若相机的视角为180度,则可以某一基准方向为0度,将基于该基准方向的a度和(a+180)度确定为预设角度。
可选地,在生成全景图像时,可结合高动态范围成像(High Dynamic Range Imaging,简称HDRI),生成高质量的全景图像。
可选地，信息采集终端还包括惯性测量单元（Inertial Measurement Unit，简称IMU）。IMU用于对采集的点云数据和图像数据对应的位姿信息进行修正，减小由于环境或人为因素（比如：信息采集终端未水平放置等）导致的误差。
控制终端用于基于信息采集终端发送的目标物理空间中多个子空间分别对应的点云数据和全景图像,生成目标物理空间的户型图。控制终端可以是智能手机、平板电脑、笔记本电脑等具有数据处理能力的终端设备。
可选地,图7所示的户型图生成系统中还包括云端服务器,云端服务器可以为云端的物理服务器或虚拟服务器,控制终端通过接入基于通信标准的无线网络,如WiFi,2G、3G、4G/LTE、5G等移动通信网络与云端服务器通信连接。
在一可选实施例中,云端服务器可接收由控制终端转发的多个子空间分别对应的点云数据和全景图像,以生成目标物理空间的户型图,并将户型图反馈给控制终端,以用于显示。实际上,云端服务器生成目标物理空间的户型图的过程与控制终端生成空间结构图的处理过程相同,但由于云端服务器的计算能力更强,其生成目标物理空间的户型图的效率更高,能够进一步提升用户的使用体验。
在另一可选实施例中，云端服务器也可直接与信息采集终端通信连接，以直接获取信息采集终端采集的多个子空间分别对应的点云数据和全景图像，以生成目标物理空间的户型图。
以下结合具体实施例,以控制终端为例,从控制终端的角度,对目标物理空间的户型图生成过程进行说明。
图12为本发明第三实施例提供的一种户型图生成方法的流程图,应用于控制终端,如图12所示,该户型图生成方法包括如下步骤:
1201、获取信息采集终端得到的目标物理空间中多个子空间分别对应的点云数据和全景图像,其中,任一子空间的点云数据和全景图像是在任一子空间中的至少一个拍摄点位采集的。
1202、在依次对多个子空间进行空间结构图获取处理的过程中,针对当前待编辑的目标子空间,根据在目标子空间的至少一个拍摄点位上采集的点云数据和/或全景图像,获取目标子空间的目标空间轮廓。
1203、获取在目标全景图像中识别的目标介质,目标介质为目标子空间中的实体介质在目标全景图像中的图像,目标全景图像是在目标子空间的至少一个拍摄点位采集的全景图像中,用于识别目标介质的全景图像。
1204、在目标子空间的目标空间轮廓上确定用于表示目标介质的映射介质,以获取目标子空间的空间结构图。
1205、响应于目标子空间的空间结构图的获取完成操作,若多个子空间中不存在未获取到空间结构图的子空间,则获取多个空间的空间结构图拼接得到的目标物理空间的户型图。
其中,信息采集终端获取目标物理空间中多个子空间分别对应的点云数据和全景图像的过程可参考前述实施例,本实施例中不再赘述。
步骤1201中,若信息采集终端集成于目标控制终端,则控制终端可直接同步获取信息采集终端得到的多个子空间的点云数据和全景图像;若信息采集终端通过通信链路与控制终端通信连接,则控制终端基于预先建立的通信链路,接收信息采集终端发送的多个子空间的点云数据和全景图像。
本实施例中,在生成目标物理空间的户型图时,先依次获取多个子空间分别对应的空间结构图,然后,再将多个子空间对应的空间结构图进行拼接,从而确定目标物理空间的户型图。逐个生成每个子空间对应的空间结构图时,需要的计算资源较少,能够适应多数控制设备的计算处理能力,从而能够增加本实施例的户型图生成方法的应用场景。
在依次对多个子空间进行空间结构图获取处理的过程中,任一个子空间对应的空间结构图获取过程是相同的。本实施例中,以当前待编辑的目标子空间为例进行说明。
需要说明的是,根据多个子空间对应的空间结构图获取情况,可将目标物理空间中的多个子空间划分为:已获取到空间结构图的子空间,未获取到空间结构图的子空间,以及正在获取空间结构图的子空间也即当前待编辑的目标子空间。
首先,根据在目标子空间的至少一个拍摄点位上采集的点云数据和/或全景图像,获取目标子空间的目标空间轮廓。
可选地，可根据至少一个拍摄点位的点云数据，获取第一空间轮廓，将第一空间轮廓直接作为目标子空间的目标空间轮廓；或者，根据至少一个拍摄点位的全景图像，获取第二空间轮廓，将第二空间轮廓直接作为目标子空间的目标空间轮廓；或者，在上述第一空间轮廓和上述第二空间轮廓中选择轮廓线质量较好的空间轮廓作为目标子空间的目标空间轮廓；或者，对上述第一空间轮廓和上述第二空间轮廓的轮廓线做融合处理，得到轮廓线更优质的空间轮廓，直接将融合处理后的空间轮廓作为目标子空间的空间轮廓。
可选地,在将第一空间轮廓和/或第二空间轮廓确定为目标空间轮廓之前,还可以对第一空间轮廓和/或第二空间轮廓执行人工或自动化编辑处理,以将编辑处理后的空间轮廓作为目标空间的空间轮廓。
以根据至少一个拍摄点位的点云数据,获取空间轮廓,并对该空间轮廓进行人工编辑处理的过程为例,举例说明目标空间轮廓的确定过程。
具体实施过程中，首先，将在目标子空间的至少一个拍摄点位上采集的点云数据映射到二维平面，以确定目标子空间的二维点云图像。由于目标子空间中的至少一个拍摄点位的相对位置关系已知，因此，可基于该相对位置关系，将在至少一个拍摄点位上采集的点云数据进行融合得到稠密的点云数据，进而映射得到二维平面点云图像。之后，在二维点云图像中显示基于二维点云图像识别的空间轮廓，其中，空间轮廓由多条轮廓线构成。最后，响应于用户对空间轮廓的修正操作，调整空间轮廓中目标轮廓线的形态和/或位置，以使目标轮廓线与二维点云图像中的墙线重合，并确定与墙线重合的轮廓线构成的空间轮廓为目标子空间的目标空间轮廓。其中，点云图像中的墙线对应于目标子空间中的墙体。
其中,可以理解的是,点云数据实际上是一系列的三维坐标点,任一三维坐标点可用三维笛卡尔坐标(x,y,z)表示,其中,x,y,z分别是拥有共同的零点且彼此相互正交的x轴,y轴,z轴的坐标值。
在一可选实施例中，将在目标子空间的至少一个拍摄点位上采集的点云数据映射到二维平面，确定目标子空间的二维点云图像，包括：将在目标子空间的至少一个拍摄点位上采集的点云数据对应的三维坐标点(x,y,z)转换为二维坐标点(x,y)，比如：将三维坐标点的z轴坐标值设置为0，进而，基于转换得到的二维坐标点得到目标子空间的二维点云图像。或者，先基于在目标子空间的至少一个拍摄点位上采集的点云数据对应的三维坐标点(x,y,z)生成目标子空间的三维空间结构图，然后获取三维空间结构图的俯视图，以作为目标子空间的二维点云图像。
图13为本发明第三实施例提供的一种空间轮廓的示意图,如图13中的左图所示,空间轮廓Z中的轮廓线l与点云图像中的墙线L并不重合,但实际上轮廓线l和墙线L对应于目标子空间中的同一墙体。基于此,可将轮廓线l作为目标轮廓线,通过预设的编辑选项调整目标轮廓线l的长短和位置,使目标轮廓线l与点云图像中的墙线L重合,如图13中的右图所示,将与墙线重合的轮廓线构成的空间轮廓Z’为目标子空间的目标空间轮廓。
可选地,针对空间轮廓中与实际墙体位置不符的目标轮廓线,还可以删除不存在对应墙线的目标轮廓线;或者,添加某一墙线对应的目标轮廓线。
可选地,在得到目标子空间的二维点云图像之后,为了获取准确的空间轮廓,还可以对二维点云图像进行裁剪处理,比如,裁剪掉二维点云图像中边界模糊不清的区域,这部分区域对应的点云数据,可能是激光传感器受目标子空间中的玻璃、镜子等干扰而采集到的干扰数据。或者,对二维点云图像中不清晰的墙线进行突出显示,以便进行准确的空间轮廓识别。
在编辑完目标子空间的空间轮廓之后,需要在得到的空间轮廓中标注目标子空间中的实体介质,即门体和窗体。具体地,先基于全景图像识别目标介质,进而确定目标介质对应的映射介质,从而获取目标子空间的空间结构图。为了便于区分,将目标空间中的门体和窗体在全景图像中对应的图像称为目标介质,将门体和窗体在空间轮廓中对应的标识称为映射介质。可选地,可通过例如图像识别算法,从全景图像中识别出目标介质,基于目标介质确定映射介质。
实际应用中,可能不止一个拍摄点位对应的全景图像中包含有目标介质。为了加快控制终端对目标介质的识别效率,可选地,在控制终端识别全景图像中的目标介质之前,还可以从至少一个拍摄点位的全景图像中,确定出目标全景图像以用于识别目标介质。其中,目标全景图像为符合预设识别要求的全景图像,比如:视野最广、光线最佳的全景图像,或者包含有用户标记信息(比如:最佳全景图像)的全景图像。
其中，针对目标子空间，目标全景图像对应的拍摄点位可以与用于生成空间轮廓的点云数据对应的拍摄点位相同或不同。假设目标子空间中包含有两个拍摄点位，分别为拍摄点位A和拍摄点位B，在拍摄点位A上获取了全景图像A1和点云数据A2，在拍摄点位B上获取了全景图像B1和点云数据B2。若基于点云数据A2生成了空间轮廓，则既可以确定全景图像A1为目标全景图像，也可以确定全景图像B1为目标全景图像。类似地，若基于点云数据B2生成了空间轮廓，则既可以确定全景图像A1为目标全景图像，也可以确定全景图像B1为目标全景图像。
可以理解的是,门体和窗体对应有相应的尺寸信息,为了便于用户了解目标子空间的空间结构,在目标子空间的空间轮廓中添加的映射介质至少应该能够反映出目标子空间中包含的门体和/或窗体的位置信息、大小信息和类型信息。
作为一种可选的实现方式,在目标子空间的空间轮廓中确定目标介质对应的映射介质,包括:
根据目标子空间的目标全景图像和空间轮廓之间的映射关系,获取目标介质在目标全景图像中对应的全景像素坐标以及所映射的空间轮廓坐标,以在目标子空间的空间轮廓中确定目标介质对应的映射介质。
其中,映射介质与目标介质的目标标识以及目标显示尺寸相适配,目标标识用于区分属于不同类型的目标介质,比如:属于门体的目标介质或者属于窗体的目标介质对应于不同的目标标识。
上述目标全景图像与空间轮廓之间的映射关系,是根据目标子空间的点云数据和目标全景图像之间的坐标映射,所建立的目标全景图像和空间轮廓之间的映射。
其中,激光传感器和相机之间的相对位姿在进行点云数据和全景图像采集之前,已预先标定。基于预先标定的相对位姿,以及目标子空间中实际拍摄点位之间的相对位置关系能够确定采集到的点云数据对应的三维点云坐标和全景图像的全景像素坐标之间的坐标映射。
在本发明实施例中,不限定对全景图像和点云数据坐标映射的具体方式,可选地,可以直接根据获取全景图像和点云数据的设备之间的相对位姿关系,将全景像素坐标映射为三维点云坐标,以及将三维点云坐标映射为全景像素坐标;也可以借助相对位姿关系和中间坐标系,先将全景像素坐标映射为中间坐标,再将中间坐标映射为三维点云坐标;以及先将三维点云坐标映射为中间坐标,再将中间坐标映射为全景像素坐标。在此,不限定中间坐标系的具体类型,也不限定在坐标映射过程中使用的具体方式,根据中间坐标系的不同,以及相对位姿关系的不同,所采用的映射方式也会不同。
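全景像素坐标到三维方向的换算可作如下示意。这里假设全景图像采用等距柱状投影(equirectangular,一种常见的全景表示方式,本文并未明确限定),像素列坐标对应方位角、行坐标对应俯仰角;得到相机坐标系下的方向向量后,再结合预先标定的相机与激光传感器之间的相对位姿,即可换算到点云坐标系。函数名为假设的示意:

```python
import math

def pixel_to_ray(u, v, width, height):
    # 列坐标 u 映射为方位角 theta ∈ [-pi, pi),
    # 行坐标 v 映射为俯仰角 phi ∈ [pi/2, -pi/2]
    theta = (u / width) * 2.0 * math.pi - math.pi
    phi = math.pi / 2.0 - (v / height) * math.pi
    # 返回相机坐标系下的单位方向向量;后续可经预先标定的
    # 相对位姿(旋转 + 平移)变换到激光传感器/点云坐标系
    return (math.cos(phi) * math.sin(theta),
            math.sin(phi),
            math.cos(phi) * math.cos(theta))

# 图像中心像素对应相机正前方方向 (0, 0, 1)
ray = pixel_to_ray(512, 256, 1024, 512)
```

反向的"三维点云坐标映射为全景像素坐标"即为上述换算的逆过程。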
响应于目标子空间的空间结构图的获取完成操作,确定是否还存在未获取到空间结构图的子空间,若多个子空间中存在未获取到空间结构图的至少一个子空间,则从至少一个子空间中确定出一个待编辑子空间,以获取该待编辑子空间的空间结构图。
可选地,从至少一个子空间中确定出一个待编辑子空间,包括:从至少一个子空间中随机确定一个子空间作为待编辑子空间。
可选地,从至少一个子空间中确定出一个待编辑子空间,还包括:根据至少一个子空间对应的点云数据和全景图像的采集时间点,从至少一个子空间中确定出一个待编辑子空间,其中,待编辑子空间对应的采集时间点与当前时刻的差值大于至少一个子空间中其他子空间对应的所述采集时间点与当前时刻的差值。比如,未获取到空间结构图的子空间包括:子空间a、子空间b和子空间c,其对应的点云数据的采集时间,分别为t1、t2和t3,其中,t1早于t2,t2早于t3。根据t1、t2和t3与当前时刻t的差值,确定子空间a为待编辑子空间,当获取到子空间a的空间结构图后,确定子空间b为待编辑子空间,以此类推,直至获取到所有子空间的空间结构图。
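上述"采集时间点与当前时刻差值最大者优先"的选择规则,可用如下Python示意(数据结构与函数名均为假设):

```python
def pick_subspace_to_edit(pending):
    # pending: {子空间标识: 点云/全景图像的采集时间戳}
    # 与当前时刻差值最大,即采集时间戳最早的子空间优先作为待编辑子空间
    return min(pending, key=pending.get)

pending = {"子空间a": 1.0, "子空间b": 2.0, "子空间c": 3.0}
print(pick_subspace_to_edit(pending))  # 子空间a
```

每获取完一个子空间的空间结构图,将其从 pending 中移除并重复上述选择,即可依次遍历所有子空间。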
可选地,从至少一个子空间中确定出一个待编辑子空间,还包括:根据多个子空间分别对应的点云数据,确定多个子空间之间的连接关系;根据该连接关系,从至少一个子空间中确定出一个待编辑子空间,其中,待编辑子空间与目标子空间相连接。或者,根据多个子空间分别对应的全景图像,确定多个子空间之间的连接关系;根据该连接关系,从至少一个子空间中确定出一个待编辑子空间,其中,待编辑子空间与目标子空间相连接。
可以理解的是,目标物理空间中的多个子空间相互之间通过门体或窗体连通。比如子空间1和子空间2通过同一目标门体连通,由于采集点云数据和全景图像时,目标门体处于打开状态。基于此,子空间1的点云数据1与子空间2的点云数据2中可能包含有同一物体对应的特征点m,子空间1的全景图像3和子空间2的全景图像4中可能包含有同一物体对应的图像n。从空间上来说,图像n对应的区域在子空间1之外,但处于拍摄全景图像3时相机的视野范围之内。
因此,实际应用中,可根据子空间1的点云数据1和子空间2的点云数据2,和/或,子空间1的全景图像3和子空间2的全景图像4,通过例如特征匹配等方式,确定子空间1和子空间2之间的空间连接关系。
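基于共同特征判断连接关系的思路,可以简化为如下示意:将每个子空间观测到的特征抽象为一个可哈希的标识集合,共同特征数量达到阈值即认为两个子空间连通(实际的特征匹配要比集合求交复杂得多,此处仅说明判定逻辑,函数名与阈值均为假设):

```python
def are_connected(features_1, features_2, min_shared=1):
    # 两个子空间观测到的特征集合存在足够多的共同元素时,
    # 认为二者通过打开的门体/窗体互相可见,即相互连接
    return len(set(features_1) & set(features_2)) >= min_shared

print(are_connected({"特征m", "特征p"}, {"特征m", "特征q"}))  # True
```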
可选地,该连接关系还可用于拼接多个子空间的空间结构图。
实际应用中,若多个子空间中不存在未获取到空间结构图的子空间,则表明已获取到所有子空间对应的空间结构图。之后,可根据多个子空间的连接关系,将多个子空间的空间结构图进行拼接,以得到目标物理空间的户型图。
为便于理解上述户型图生成方法,结合第二实施例中的图9举例来说。如图9中的a所示,假设目标物理空间中包含有3个子空间,分别为卧室1、卧室2和客厅3,其中卧室1和客厅3通过门体1连通,卧室2和客厅3通过门体2连通,卧室1和卧室2通过窗体3连通。
基于本发明实施例提供的方法,假设先生成了卧室1的空间结构图x;之后,从卧室2和客厅3中确定卧室2为待编辑子空间,并确定卧室2的空间结构图y;之后,确定客厅3为待编辑子空间,并确定客厅3的空间结构图z,如图9中的b所示。其中,各空间结构图中包含有空间轮廓,空间轮廓上标识有表示门体和窗体的映射介质。由于在确定空间结构图z之后,不存在未获取到空间结构图的子空间,因此,根据卧室1、卧室2和客厅3的连接关系,将空间结构图x、空间结构图y和空间结构图z进行拼接得到目标物理空间的户型图,如图9中的c所示。
本实施例中,针对具有多个子空间的目标物理空间,在生成目标物理空间的户型图时,先依次获取多个子空间分别对应的空间结构图,然后,再将多个子空间对应的空间结构图进行拼接,从而确定目标物理空间的户型图。逐个生成每个子空间对应的空间结构图时,需要的计算资源较少,能够适应多数控制设备的计算处理能力,从而能够增加本实施例的户型图生成方法的应用场景。并且,基于全景图像确定目标子空间中目标介质对应的映射介质在空间轮廓中的位置,能够在空间轮廓上的正确位置标识目标介质对应的映射介质,从而确定的空间结构图能够准确反映目标子空间的实际空间结构信息,进而根据多个空间结构图确定的户型图也能够准确反映目标物理空间的实际空间结构。
以下将详细描述本发明的一个或多个实施例的户型图生成装置。本领域技术人员可以理解,这些装置均可使用市售的硬件组件通过本方案所教导的步骤进行配置来构成。
图14为本发明第三实施例提供的一种户型图生成装置的结构示意图,该装置应用于控制终端,如图14所示,该装置包括:获取模块31和处理模块32。
获取模块31,用于获取信息采集终端得到的目标物理空间中多个子空间分别对应的点云数据和全景图像,其中,任一子空间的点云数据和全景图像是在所述任一子空间中的至少一个拍摄点位采集的。
处理模块32,用于在依次对所述多个子空间进行空间结构图获取处理的过程中,针对当前待编辑的目标子空间,根据在所述目标子空间的至少一个拍摄点位上采集的点云数据和/或全景图像,获取所述目标子空间的目标空间轮廓;获取在目标全景图像中识别的目标介质,所述目标介质为所述目标子空间中的实体介质在所述目标全景图像中的图像,所述目标全景图像是在所述目标子空间的至少一个拍摄点位采集的全景图像中,用于识别所述目标介质的全景图像;在所述目标子空间的目标空间轮廓上确定用于表示所述目标介质的映射介质,以获取所述目标子空间的空间结构图;响应于所述目标子空间的空间结构图的获取完成操作,若所述多个子空间中不存在未获取到空间结构图的子空间,则获取所述多个空间的空间结构图拼接得到的所述目标物理空间的户型图。
可选地,处理模块32,还用于响应于所述目标子空间的空间结构图的获取完成操作,若所述多个子空间中存在未获取到空间结构图的至少一个子空间,则从所述至少一个子空间中确定出一个待编辑子空间,以获取所述待编辑子空间的空间结构图。
可选地,处理模块32,具体用于根据所述至少一个子空间对应的点云数据和全景图像的采集时间点,从所述至少一个子空间中确定出一个待编辑子空间,其中,所述待编辑子空间对应的所述采集时间点与当前时刻的差值大于所述至少一个子空间中其他子空间对应的所述采集时间点与当前时刻的差值。
可选地,处理模块32,还具体用于根据所述多个子空间分别对应的点云数据,确定多个子空间之间的连接关系;根据所述连接关系,从所述至少一个子空间中确定出一个待编辑子空间,其中,所述待编辑子空间与所述目标子空间相连接。
可选地,处理模块32,还具体用于将在所述目标子空间的至少一个拍摄点位上采集的点云数据映射到二维平面,以确定所述目标子空间的二维点云图像;显示基于所述二维点云图像识别的空间轮廓,所述空间轮廓由多条轮廓线构成;响应于用户对所述空间轮廓的修正操作,调整所述空间轮廓中目标轮廓线的形态和/或位置,以使所述目标轮廓线与所述二维点云图像中的墙线重合;确定与所述墙线重合的轮廓线构成的空间轮廓为所述目标子空间的目标空间轮廓。
可选地,处理模块32,还具体用于获取与所述目标介质的目标标识以及目标显示尺寸匹配的映射介质,所述目标标识用于区分属于不同类型的目标介质;根据所述目标介质在所述目标全景图像中对应的全景像素坐标,以及获取所述目标子空间点云数据和全景图像的设备之间的相对位姿,确定所述映射介质对应的三维点云坐标;根据所述映射介质对应的三维点云坐标,将所述映射介质添加到所述目标子空间的目标空间轮廓上。
图14所示装置可以执行前述实施例中的步骤,详细的执行过程和技术效果参见前述实施例中的描述,在此不再赘述。
第四实施例
图15为本发明第四实施例提供的一种户型图生成方法的流程图,应用于控制终端,如图15所示,该户型图生成方法包括如下步骤:
步骤S151、获取信息采集终端在目标物理空间中N个空间的每一空间所采集的点云数据和全景图像,其中,点云数据和全景图像是在每一空间中的至少一个拍摄点位采集的。
步骤S152、获取N个空间中第M个空间的第M个空间轮廓进行显示以用于编辑,第M个空间轮廓是根据第M个空间的至少一个拍摄点位采集的点云数据和/或全景图像获取的。
步骤S153、获取在第M个空间的目标全景图像中识别的目标介质,以使得根据目标介质获取目标介质在第M个空间轮廓中的映射介质,以用于根据映射介质编辑第M个空间轮廓,以获取第M个空间的户型结构图。
其中,目标全景图像是在第M个空间的至少一个拍摄点位采集的全景图像中,用于识别目标介质的全景图像,目标介质为第M个空间中的实体介质在目标全景图像中的图像。
步骤S154、判断第M个空间是否是N个空间中最后一个生成户型结构图的空间;若否,则执行步骤S155;若是,则执行步骤S156。
步骤S155、将M赋值为M+1并返回步骤S152。
步骤S156、获取由N个户型结构图组成的目标物理空间的户型图,以用于展示,流程结束。
其中,目标物理空间中至少包括N个空间,M、N为自然数,且1≤M≤N。
控制终端可以是智能手机、平板电脑、笔记本电脑等具有数据处理能力的终端设备。
本实施例中,针对至少包括N个空间的目标物理空间,在生成目标物理空间的户型图时,按照从1至N的顺序,先逐个生成每个空间的户型结构图,然后,再将N个空间的户型结构图进行拼接,以生成目标物理空间的户型图。本实施例中,逐个生成每个空间的户型结构图,一方面需要的计算资源较少,能够适应多数控制设备的计算处理能力;另一方面,便于用户对每个空间的户型结构图进行独立编辑,以便为N个空间分别生成准确的户型结构图,从而得到准确的目标物理空间的户型图。除此之外,本实施例中,各空间户型结构图中空间轮廓上的映射介质是基于全景图像确定的,由于相较于点云数据而言,全景图像更能反映实际空间中的门体和窗体等(即目标介质)的实际位置,因此,基于全景图像的辅助,各空间的户型结构图中标识有较为准确的门体和窗体信息,能够更好地反映实际空间中的场景信息。
以下结合图16所示的生成户型图的场景示意图,对图15所示的户型图生成方法进行说明。
图16为本发明第四实施例提供的一种生成户型图的场景示意图。如图16所示,信息采集终端与控制终端相互解耦,信息采集终端在N个空间中每一空间采集点云数据和全景图像,并将其发送给控制终端,以使控制终端基于接收到的点云数据和全景图像生成目标物理空间的户型图。实际应用中,信息采集终端可通过例如蓝牙、无线保真(Wireless Fidelity,简称WiFi)热点等方式与控制终端通信连接。
可选地,信息采集终端也可直接集成于控制终端,与控制终端作为一个整体,控制终端可直接同步获取信息采集终端在N个空间的每一空间所采集的点云数据和全景图像,而不需要基于建立的通信连接传输点云数据和全景图像。由此,可以提高控制终端生成目标物理空间的户型图的效率。
可选地,还可以通过云端服务器生成目标物理空间的户型图。其中,云端服务器可以 为云端的物理服务器或虚拟服务器,控制终端通过接入基于通信标准的无线网络,如WiFi,2G、3G、4G/LTE、5G等移动通信网络与云端服务器通信连接。
云端服务器可接收由控制终端转发的N个空间分别对应的点云数据和全景图像,以生成目标物理空间的户型图,并将户型图反馈给控制终端,以用于显示。或者,云端服务器也可直接与信息采集终端通信连接,以直接获取信息采集终端采集的多个子空间分别对应的点云数据和全景图像,以生成目标物理空间的户型图。实际上,云端服务器生成目标物理空间的户型图的过程与控制终端生成空间结构图的处理过程相同,但由于云端服务器的计算能力更强,其生成目标物理空间的户型图的效率更高,能够进一步提升用户的使用体验。
本实施例中,以信息采集终端与控制终端相互解耦,并由控制终端生成目标物理空间的户型图为例,进行说明。
由于每个空间中包含有至少一个拍摄点位,因此,每个空间的点云数据和全景图像是信息采集终端在该空间中的至少一个拍摄点位上采集的。比如:第X个空间的点云数据和全景图像包括:在第X个空间中的拍摄点位a上采集的点云数据Xa和全景图像Xa,以及在第X个空间中的拍摄点位b上采集的点云数据Xb和全景图像Xb。
信息采集终端在各拍摄点位上采集点云数据和全景图像时,对应的信息采集过程是相同的,本实施例中,以某一拍摄点位Y为例,举例说明。
首先,对信息采集终端的基本构成做简要说明。信息采集终端包括:激光传感器、相机、电机和处理器(比如CPU)等。其中,激光传感器和相机作为感知设备,用于在各拍摄点位上采集每一空间的场景信息,即点云数据和全景图像;电机用于带动激光传感器和相机旋转,以便从各个角度采集点云数据和全景图像。可选地,信息采集终端还包括惯性测量单元(Inertial measurement unit,简称IMU)。IMU用于对采集的点云数据和图像数据对应的位姿信息进行修正,减小由于环境或人为因素(比如:信息采集终端未水平放置等)导致的误差。
接着,对信息采集终端在拍摄点位Y上的数据采集过程进行说明。
针对拍摄点位Y,信息采集终端响应于信息采集指令,驱动电机带动激光传感器360度旋转,以采集目标拍摄点位Y对应的点云数据;驱动电机带动相机360度旋转,以采集目标拍摄点位Y对应的全景图像。其中,信息采集指令由用户通过控制终端发送,或者,响应于用户在信息采集终端上的触发操作,触发信息采集指令。
可选地,在拍摄点位Y上,电机可同时带动激光传感器和相机旋转,以同时采集点云数据和全景图像,也可按先后顺序依次带动激光传感器和相机旋转,先后分别采集点云数据和全景图像,本实施例对此不作限制。
在一可选实施例中,若先采集点云数据,再采集全景图像,为了提高采集的全景图像质量,可选地,在采集点云数据的过程中,可同步开启相机,以收集当前拍摄点位的场景光照信息进行测光,确定对应的曝光参数。之后,相机基于确定的曝光参数,采集全景图像。
可选地,信息采集终端中的相机为全景相机或非全景相机。若信息采集终端中的相机为非全景相机,则在上述360度旋转过程中,控制该相机在多个预设角度拍摄目标拍摄点位Y对应的图像,上述处理器可通过例如特征匹配算法等全景图像拼接算法,将在多个预设角度拍摄的图像缝合为全景图像。比如:若相机的视角为120度,则可以某一基准方向为0度,将基于该基准方向的a度、(a+120)度和(a+240)度确定为预设角度,控制相机在这3个预设角度拍摄得到图像1、图像2和图像3;之后,缝合图像1至图像3,以得到全景图像。
可以理解的是,预设角度之间的间隔越小,获取到的图像数量就越多,进而缝合后得到的全景图像就越准确。具体实施过程中,预设角度的数量可由用户根据相机的视角进行自定义设置,基于多个预设角度拍摄的图像包含有当前点位360度范围内的场景信息。
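根据相机视角计算覆盖360度所需预设角度的过程,可用如下Python示意(函数名为假设;此处按视角无重叠、恰好覆盖的最简情形计算,实际拼接通常会在相邻图像间保留一定重叠):

```python
import math

def preset_angles(fov_deg, base_deg=0.0):
    # 覆盖 360 度所需的最少拍摄次数:向上取整(360 / 相机视角)
    count = math.ceil(360.0 / fov_deg)
    # 以基准方向 base_deg 为 0 度,按视角步进得到各预设角度
    return [(base_deg + i * fov_deg) % 360.0 for i in range(count)]

print(preset_angles(120.0))  # [0.0, 120.0, 240.0]
```

例如视角为120度时得到3个预设角度,与上文"a度、(a+120)度和(a+240)度"的例子一致。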
可选地,在生成全景图像时,可结合高动态范围成像(High Dynamic Range Imaging,简称HDRI),生成高质量的全景图像。
信息采集终端在一个拍摄点位采集完点云数据和全景图像后,可直接将该拍摄点位的点云数据和全景图像发送给控制终端;或者,先进行存储,之后在完成对当前所在空间的所有拍摄点位上点云数据和全景图像的采集后,将该空间的所有拍摄点位的点云数据和全景图像发送给控制终端,本实施例对此不做限制。
控制终端在接收到目标物理空间中N个空间的每一空间的点云数据和全景图像后,如图16所示,逐个生成N个空间的户型结构图,比如:先生成第1个空间的户型结构图,再生成第2个空间的户型结构图,依次类推,直至生成第N个空间的户型结构图。
实际应用中,可根据N个空间分别对应的点云数据和/或全景图像的采集时间点,确定N个空间获取户型结构图的先后顺序,即对N个空间进行排序。举例来说,假设N的取值为3,即目标物理空间中包含3个空间,分别为空间a,空间b和空间c,其中,空间a对应的点云数据的采集时间点为t1,空间b对应的点云数据的采集时间点为t2,空间c对应的点云数据的采集时间点为t3,其中,t1早于t2,t2早于t3。可根据t1,t2和t3的先后顺序,对上述3个空间进行排序,例如:空间a为第1个空间,空间b为第2个空间,空间c为第3个空间。然后,按照从1到3的顺序,逐个生成每个空间的户型结构图。可选地,也可随机确定N个空间对应的生成户型结构图的先后顺序。
接下来,以当前要生成户型结构图的空间为N个空间中的第M个空间为例,对生成户型结构图的过程进行举例说明。可以理解的是,在按照一定的顺序,逐个生成每一空间的户型结构图时,针对任一个空间,其对应的户型结构图生成过程是相同的。
本实施例中,目标物理空间的户型图由N个空间的户型结构图组成,目标物理空间的户型图和每个空间的户型结构图均可以理解为该空间的二维平面图。两者的区别在于目标物理空间的户型图对应的空间更多,二维平面图更“大”,而每一空间的户型结构图仅为当前空间的二维平面图,二维平面图更“小”。实际应用中,例如AR看房等场景,二维平面图更通俗的理解为物理空间的俯视图。
为表示物理空间,每一空间的户型结构图包含有空间轮廓和映射介质。其中,空间轮廓用于表示物理空间中的墙体,映射介质用于表示物理空间中的窗体和门体。因此,每一空间的户型结构图的获取过程,可以进一步划分为空间轮廓的确定过程,以及映射介质的确定过程,以下分别进行说明。
另外,需要说明的是,每一空间对应于一个空间轮廓,本实施例中,为便于表述,将第M个空间的空间轮廓记为第M个空间轮廓。举例来说,第2个空间轮廓为第2个空间的空间轮廓,第3个空间轮廓为第3个空间的空间轮廓。
针对第M个空间,在确定第M个空间轮廓时,可根据第M个空间的至少一个拍摄点位采集的点云数据和/或全景图像,确定第M个空间的第M个空间轮廓,并显示第M个空间轮廓以对该第M个空间轮廓进行编辑。
具体地,可根据第M个空间的至少一个拍摄点位的点云数据,获取第一空间轮廓,将第一空间轮廓直接作为第M个空间轮廓;或者,根据第M个空间的至少一个拍摄点位的全景图像,获取第二空间轮廓,将第二空间轮廓直接作为第M个空间轮廓;或者,在上述第一空间轮廓和上述第二空间轮廓中选择轮廓线质量较好的空间轮廓作为第M个空间轮廓;或者,对上述第一空间轮廓和上述第二空间轮廓的轮廓线做融合处理,得到轮廓线更优质的空间轮廓,直接将融合处理后的空间轮廓作为第M个空间轮廓。
可选地,还可以对第一空间轮廓和/或第二空间轮廓执行人工或自动化编辑处理,以将编辑处理后的空间轮廓作为第M个空间轮廓。
实际应用中,第M个空间轮廓中包含有多条轮廓线。其中,存在一些与第M个空间中的墙体的实际位置不匹配的轮廓线,为了保证第M个空间轮廓能够准确的反映第M个空间,第M个空间轮廓可被编辑。
为便于编辑第M个空间轮廓,可选地,可在第M个空间的二维点云图像中显示第M个空间轮廓;之后,响应于用户对第M个空间轮廓的编辑操作,调整第M个空间轮廓的轮廓线,以使第M个空间轮廓的轮廓线与二维点云图像中的墙线重合。二维点云图像中的墙线对应于第M个空间中的墙体。
其中,第M个空间的二维点云图像是将第M个空间中至少一个拍摄点位的点云数据进行平面映射后得到的。由于第M个空间中的至少一个拍摄点位的相对位置关系已知,因此,可基于该相对位置关系,将在至少一个拍摄点位上采集的点云数据进行融合得到稠密的点云数据,进而映射得到二维点云图像。具体地,先融合N个空间中第M个空间的至少一个拍摄点位的点云数据,以确定第M个空间的目标点云数据;然后,将目标点云数据映射到二维平面,以得到第M个空间的初始二维点云图像;之后,根据用户对初始二维点云图像的修正操作(比如:裁剪掉二维点云图像中边界模糊不清的区域,对二维点云图像中不清晰的墙线进行突出显示等),确定修正操作后得到的目标二维点云图像为第M个空间的二维点云图像。
其中,可以理解的是,点云数据实际上是一系列的三维坐标点,任一三维坐标点可用三维笛卡尔坐标(x,y,z)表示,其中,x,y,z分别是拥有共同的零点且彼此相互正交的x轴,y轴,z轴的坐标值。
在一可选实施例中,将第M个空间的目标点云数据映射到二维平面,以得到第M个空间的初始二维点云图像,包括:将第M个空间的目标点云数据对应的三维坐标点(x,y,z)转换为二维坐标点(x,y),比如:将三维坐标点的z轴坐标值设置为0,进而,基于转换得到的二维坐标点得到第M个空间的初始二维点云图像。或者,先基于第M个空间的目标点云数据对应的三维坐标点(x,y,z)生成第M个空间的三维空间结构图,然后获取三维空间结构图的俯视图,以作为第M个空间的初始二维点云图像。
其中,第M个空间的二维点云图像还可用于获取第M个空间的第M个空间轮廓,比如:通过对第M个空间的二维点云图像进行边缘检测,获取第M个空间轮廓。
针对第M个空间轮廓中与实际墙体位置不匹配的轮廓线,可以基于预设的轮廓线编辑选项,调整第M个空间轮廓中轮廓线的形态和/或位置,或者删除不存在对应墙线的轮廓线,或者添加某一墙线对应的轮廓线。
举例来说,图17为本发明第四实施例提供的第M个空间轮廓的示意图,如图17中的左图所示,在对第M个空间轮廓的轮廓线编辑前,第M个空间轮廓中的轮廓线l与第M个空间的二维点云图像中的墙线L并不重合,但实际上轮廓线l和墙线L对应于第M个空间中的同一墙体。基于此,可通过预设的轮廓线编辑选项,调整轮廓线l的长短和位置,使轮廓线l与二维点云图像中的墙线L重合,得到编辑后的第M个空间轮廓,如图17中的右图所示。
另外,如图17中的左图所示,在对第M个空间轮廓的轮廓线编辑前,第M个空间轮廓中并不存在与二维点云图像中的墙线H对应的轮廓线,基于此,可通过预设的轮廓线编辑选项,添加轮廓线h,且轮廓线h与二维点云图像中的墙线H重合,如图17中的右图所示。本实施例中,最终确定的第M个空间轮廓的轮廓线之间相互连接。
以上是第M个空间轮廓的确定过程,以下介绍第M个空间轮廓中映射介质的确定过程。
针对第M个空间,其对应的户型结构图中的映射介质,可通过如下方式获得:先通过例如图像识别等方式,从全景图像中识别对应的目标介质,即第M个空间中的实体介质(门体和窗体)在全景图像中的图像;然后,根据目标介质确定对应的映射介质。
实际应用中,第M个空间中可能不止一个拍摄点位对应的全景图像中包含有目标介质。为了加快控制终端对目标介质的识别效率,可选地,在控制终端识别全景图像中的目标介质之前,还可以从至少一个拍摄点位的全景图像中,确定出目标全景图像以用于识别目标介质。其中,目标全景图像为符合预设识别要求的全景图像,比如:视野最广、光线最佳的全景图像,或者包含有用户标记信息(比如:最佳全景图像)的全景图像。
其中,针对第M个空间,目标全景图像对应的拍摄点位可以与用于生成空间轮廓的点云数据对应的拍摄点位相同或不同。假设第M个空间中包含有两个拍摄点位,分别为拍摄点位A和拍摄点位B,在拍摄点位A上获取了全景图像A1和点云数据A2,在拍摄点位B上获取了全景图像B1和点云数据B2。若基于点云数据A2生成了第M个空间轮廓,则既可以确定全景图像A1为目标全景图像,也可以确定全景图像B1为目标全景图像。类似地,若基于点云数据B2生成了第M个空间轮廓,则既可以确定全景图像A1为目标全景图像,也可以确定全景图像B1为目标全景图像。
由于在进行点云数据和全景图像采集之前,激光传感器和相机之间的相对位姿已预先标定。因此,针对第M个空间,基于预先标定的相对位姿,以及第M个空间中实际拍摄点位之间的相对位置关系,能够确定采集到的点云数据对应的三维点云坐标和全景图像的全景像素坐标之间的坐标映射。
进一步地,可根据第M个空间的点云数据和目标全景图像之间的坐标映射,建立的目标全景图像和第M个空间轮廓之间的映射,即预先确定第M个空间的目标全景图像和第M个空间轮廓之间的映射关系。
从而,根据目标介质获取目标介质在第M个空间轮廓中的映射介质,包括:根据第M个空间的目标全景图像和第M个空间轮廓之间的映射关系,获取目标介质在目标全景图像中对应的全景像素坐标以及所映射的空间轮廓坐标,以在第M个空间轮廓中确定目标介质对应的映射介质,以获取第M个空间的户型结构图。
其中,映射介质与目标介质的目标标识以及目标显示尺寸相适配,目标标识用于区分属于不同类型(门体或窗体)的目标介质。
在本发明实施例中,不限定对全景图像和点云数据坐标映射的具体方式,可选地,可以直接根据获取全景图像和点云数据的设备之间的相对位姿关系,将全景像素坐标映射为三维点云坐标,以及将三维点云坐标映射为全景像素坐标;也可以借助相对位姿关系和中间坐标系,先将全景像素坐标映射为中间坐标,再将中间坐标映射为三维点云坐标;以及先将三维点云坐标映射为中间坐标,再将中间坐标映射为全景像素坐标。在此,不限定中间坐标系的具体类型,也不限定在坐标映射过程中使用的具体方式,根据中间坐标系的不同,以及相对位姿关系的不同,所采用的映射方式也会不同。
在确定第M个空间的户型结构图之后,判断第M个空间是否是N个空间中最后一个生成户型结构图的空间。若M小于N,则表示还存在未获取到户型结构图的空间,将M赋值为M+1,获取第M+1个空间的户型结构图;若M等于N,则表示不存在未获取到户型结构图的空间,将第1个户型结构图,第2个户型结构图,…,和第N个户型结构图在同一空间坐标系下进行拼接,以组成目标物理空间的户型图。
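上述按M从1递增到N、逐个生成并在最后统一拼接的控制流程,可概括为如下Python示意(generate_structure 与 stitch 均为假设的占位接口,分别代表单空间户型结构图生成与同一坐标系下的拼接):

```python
def build_floorplan(spaces, generate_structure, stitch):
    # 按 M 从 1 到 N 的顺序逐个生成每个空间的户型结构图;
    # 当 M 等于 N(最后一个空间)时,将全部结构图拼接为户型图
    diagrams = []
    for m, space in enumerate(spaces, start=1):
        diagrams.append(generate_structure(space))
        if m == len(spaces):
            return stitch(diagrams)

plan = build_floorplan(["卧室1", "卧室2", "客厅3"],
                       lambda s: s + "结构图", "+".join)
print(plan)  # 卧室1结构图+卧室2结构图+客厅3结构图
```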
本实施例中,目标物理空间中的N个子空间,可根据每一空间的点云数据,确定N个空间之间的连接关系;或者,根据每一空间的全景图像,确定N个空间之间的连接关系。之后,根据该连接关系,将N个户型结构图转换于同一空间坐标系下,并在该空间坐标系下进行拼接。
可以理解的是,目标物理空间中的N个空间相互之间通过门体或窗体连通。比如第E个空间和第F个空间通过同一目标门体连通,由于采集点云数据和全景图像时,目标门体处于打开状态。基于此,第E个空间的点云数据1与第F个空间的点云数据2中可能包含有同一物体对应的特征点m,第E个空间的全景图像3和第F个空间的全景图像4中可能包含有同一物体对应的图像n。由此,可根据第E个空间的点云数据1和第F个空间的点云数据2,和/或,第E个空间的全景图像3和第F个空间的全景图像4,通过例如特征匹配等方式,确定第E个空间和第F个空间之间的连接关系。其中,前述同一物体,无论是在第E个空间进行数据采集,还是在第F个空间进行信息采集,都处于相机或者激光传感器的采集范围之内。
为便于理解本实施例的户型图生成方法,结合图18举例来说。图18为本发明第四实施例提供的一种户型图生成过程的示意图,其中,图18中的a为目标物理空间的实际空间结构,图18中的b为目标物理空间中3个空间的户型结构图,图18中的c为目标物理空间的户型图。
如图18中的a所示,假设目标物理空间中包含有3个空间,分别为卧室1、卧室2和客厅 3,其中卧室1和客厅3通过门体1连通,卧室2和客厅3通过门体2连通,卧室1和卧室2通过窗体3连通。假设根据其分别对应的点云数据采集时间,确定卧室1为第1个空间,卧室2为第2个空间,客厅3为第3个空间。
基于本发明实施例提供的方法,按照从1到3的顺序,依次生成第1个空间的户型结构图,第2个空间的户型结构图和第3个空间的户型结构图,如图18中的b所示。其中,各户型结构图中包含有空间轮廓,空间轮廓上标识有表示门体和窗体的映射介质。由于在确定第3个空间的户型结构图之后,不存在未获取到户型结构图的空间,因此,根据卧室1、卧室2和客厅3的连接关系,将第1个空间的户型结构图,第2个空间的户型结构图和第3个空间的户型结构图在同一空间坐标系下进行拼接得到的目标物理空间的户型图,如图18中的c所示。
以下将详细描述本发明的一个或多个实施例的户型图生成装置。本领域技术人员可以理解,这些装置均可使用市售的硬件组件通过本方案所教导的步骤进行配置来构成。
图19为本发明第四实施例提供的一种户型图生成装置的结构示意图,该装置用于生成目标物理空间的户型图,所述目标物理空间至少包括N个空间,应用于控制终端,如图19所示,该装置包括:获取模块41,第一处理模块42和第二处理模块43。
获取模块41,用于获取信息采集终端在所述N个空间的每一空间所采集的点云数据和全景图像,其中,所述点云数据和全景图像是在所述每一空间中的至少一个拍摄点位采集的。
第一处理模块42,用于获取所述N个空间中第M个空间的第M个空间轮廓进行显示以用于编辑,所述第M个空间轮廓是根据第M个空间的至少一个拍摄点位采集的点云数据和/或全景图像获取的;获取在所述第M个空间的目标全景图像中识别的目标介质,以使得根据所述目标介质获取所述目标介质在所述第M个空间轮廓中的映射介质,以用于根据所述映射介质编辑所述第M个空间轮廓,以获取所述第M个空间的户型结构图;所述目标全景图像是在第M个空间的至少一个拍摄点位采集的全景图像中,用于识别所述目标介质的全景图像,所述目标介质为所述第M个空间中的实体介质在所述目标全景图像中的图像。
第二处理模块43,用于判断所述第M个空间是否是所述N个空间中最后一个生成户型结构图的空间;若否,则将M赋值为M+1并返回执行第一处理模块;若是,则获取由N个户型结构图组成的所述目标物理空间的户型图,以用于展示,流程结束;其中,M、N为自然数,且1≤M≤N。
可选地,第一处理模块42,具体用于在所述N个空间中第M个空间的二维点云图像中显示所述第M个空间的第M个空间轮廓;其中,所述二维点云图像是将所述第M个空间中至少 一个拍摄点位的点云数据进行平面映射后得到的;响应于对所述第M个空间轮廓的编辑操作,调整所述第M个空间轮廓的轮廓线,以使所述轮廓线与所述二维点云图像中的墙线重合。
可选地,第一处理模块42,具体用于融合所述N个空间中第M个空间的至少一个拍摄点位的点云数据,以确定所述第M个空间的目标点云数据;将所述目标点云数据映射到二维平面,以得到所述第M个空间的初始二维点云图像;根据用户对所述初始二维点云图像的修正操作,确定所述第M个空间的二维点云图像。
可选地,第一处理模块42,具体用于根据所述第M个空间的目标全景图像和所述第M个空间轮廓之间的映射关系,获取所述目标介质在所述目标全景图像中对应的全景像素坐标以及所映射的空间轮廓坐标,以在所述第M个空间轮廓中确定所述目标介质对应的映射介质;其中,所述映射介质与所述目标介质的目标标识以及目标显示尺寸相适配,所述目标标识用于区分属于不同类型的目标介质,所述映射关系为根据所述第M个空间的点云数据和所述目标全景图像之间的坐标映射,所建立的所述目标全景图像和所述第M个空间轮廓之间的映射。
可选地,第一处理模块42,具体用于根据所述N个空间分别对应的点云数据和/或全景图像的采集时间点,对所述N个空间进行排序。
图19所示装置可以执行前述实施例中的步骤,详细的执行过程和技术效果参见前述实施例中的描述,在此不再赘述。
第五实施例
参照第二实施例中图7所示的户型图生成系统,如图7所示,该户型图生成系统包括:信息采集终端和控制终端。其中,信息采集终端可直接集成于控制终端,与控制终端作为一个整体;信息采集终端也可以与控制终端相互解耦,分离设置,信息采集终端通过例如蓝牙、无线保真(Wireless Fidelity,简称WiFi)热点等方式与控制终端通信连接。
信息采集终端包括:激光传感器、相机、电机和处理器(比如CPU)。其中,激光传感器和相机作为感知设备,用于采集目标物理空间中的多个子空间分别对应的点云数据和全景图像,即多个子空间的场景信息。为获取每个子空间中完整的空间信息,任一子空间中可能设置有不止一个拍摄点位,比如:子空间X中对应有拍摄点位1、拍摄点位2和拍摄点位3。因此,本实施例中,任一子空间的点云数据和全景图像是指在任一子空间中的至少一个拍摄点位上采集的点云数据和全景图像。
实际应用中,拍摄点位的设置,可以是用户在通过信息采集终端采集每一子空间的场景信息时,基于当前的采集情况进行自定义的;也可以是信息采集终端或控制终端基于用 户输入的子空间的描述信息(比如空间大小等),为子空间自动生成的参考拍摄点位。
信息采集终端在任一子空间的任一个拍摄点位上采集点云数据和全景图像的过程,实际上是相同的。为便于理解,以某一拍摄点位Y为例,举例说明。
针对拍摄点位Y,信息采集终端响应于信息采集指令,驱动电机带动激光传感器360度旋转,以采集目标拍摄点位Y对应的点云数据;驱动电机带动相机360度旋转,以采集目标拍摄点位Y对应的全景图像。其中,信息采集指令由控制终端发送,或者,响应于用户在信息采集终端上的操作,自动触发信息采集指令。
在拍摄点位Y上,电机可同时带动激光传感器和相机旋转,以同时采集点云数据和全景图像,也可按先后顺序依次分别带动激光传感器和相机旋转,比如:先带动激光传感器旋转后带动相机旋转,或者先带动相机旋转后带动激光传感器旋转,以先后分别采集点云数据和全景图像,本实施例对此不作限制。
可选地,若先采集点云数据,再采集全景图像,为了提高采集的全景图像质量,在采集点云数据的过程中,可同步开启相机,以收集当前拍摄点位的场景光照信息进行测光,确定对应的曝光参数。之后,相机基于确定的曝光参数,采集全景图像。
可选地,信息采集终端中的相机为全景相机或非全景相机。若信息采集终端中的相机为非全景相机,则在上述360度旋转过程中,控制该相机在多个预设角度拍摄目标拍摄点位Y对应的图像,上述处理器可通过例如特征匹配算法等全景图像拼接算法,将在多个预设角度拍摄的图像缝合为全景图像。其中,多个预设角度可由用户根据相机的视角进行自定义设置,比如:若相机的视角为180度,则可以某一基准方向为0度,将基于该基准方向的a度和(a+180)度确定为预设角度。本实施例中,基于多个预设角度拍摄的图像包含有当前拍摄点位360度范围内的场景信息。
可选地,在生成全景图像时,可结合高动态范围成像(High Dynamic Range Imaging,简称HDRI),生成高质量的全景图像。
可选地,信息采集终端还包括惯性测量单元(Inertial measurement unit,简称IMU)。IMU用于对采集的点云数据和图像数据对应的位姿信息进行修正,减小由于环境或人为因素(比如:信息采集终端未水平放置等)导致的误差。
控制终端用于基于信息采集终端发送的目标物理空间中多个子空间分别对应的点云数据和全景图像,依次生成每一子空间的户型结构图,并在生成户型结构图时,与之前生成的户型结构图进行拼接,以最终生成目标物理空间的户型图。
控制终端可以是智能手机、平板电脑、笔记本电脑等具有数据处理能力的终端设备。
可选地,图7所示的户型图生成系统中还可包括云端服务器,云端服务器可以为云端 的物理服务器或虚拟服务器,控制终端通过接入基于通信标准的无线网络,如WiFi,2G、3G、4G/LTE、5G等移动通信网络与云端服务器通信连接。
在一可选实施例中,云端服务器可接收由控制终端转发的多个子空间分别对应的点云数据和全景图像,以生成目标物理空间的户型图,并将户型图反馈给控制终端,以用于显示。实际上,云端服务器生成目标物理空间的户型图的过程与控制终端生成空间结构图的处理过程相同,但由于云端服务器的计算能力更强,其生成目标物理空间的户型图的效率更高,能够进一步提升用户的使用体验。
在另一可选实施例中,云端服务器也可直接与信息采集终端通信连接,以直接获取信息采集终端采集的多个子空间分别对应的点云数据和全景图像,以生成目标物理空间的户型图。
以下以基于控制终端生成目标物理空间的户型图的过程进行说明。
图20为本发明第五实施例提供的一种户型图生成方法的流程图,应用于控制终端,如图20所示,该户型图生成方法包括如下步骤:
2001、获取信息采集终端得到的目标物理空间中多个子空间分别对应的点云数据和全景图像,其中,任一子空间的点云数据和全景图像是在任一子空间中的至少一个拍摄点位采集的。
2002、在依次对多个子空间进行的户型结构图拼接处理的过程中,针对当前待拼接的第一子空间,根据在第一子空间的至少一个拍摄点位上采集的点云数据和/或全景图像,获取第一子空间的目标空间轮廓。
2003、获取在目标全景图像中识别的目标介质,目标介质为第一子空间中的实体介质在目标全景图像中的图像,目标全景图像是在第一子空间的至少一个拍摄点位上采集的全景图像中,用于识别目标介质的全景图像。
2004、在第一子空间的目标空间轮廓上确定用于表示目标介质的映射介质,以获取第一子空间的户型结构图。
2005、将第一子空间的户型结构图与第二子空间的户型结构图进行拼接处理,第二子空间为第一子空间之前已进行户型结构图拼接的子空间。
2006、若多个子空间中不存在未进行户型结构图拼接处理的子空间,则将拼接处理结果确定为目标物理空间的户型图。
步骤2001中,信息采集终端采集目标物理空间中多个子空间的点云数据和全景图像的具体过程,可参考前述实施例,本实施例中不进行赘述。
其中,若信息采集终端集成于目标控制终端,则控制终端可直接同步获取信息采集终端得到的多个子空间的点云数据和全景图像;若信息采集终端通过通信链路与控制终端通信连接,则控制终端基于预先建立的通信链路,接收信息采集终端发送的多个子空间的点云数据和全景图像。
本实施例中,在生成目标物理空间的户型图时,逐个生成多个子空间的户型结构图,并且,每生成一个子空间的户型结构图,就与之前生成的户型结构图进行拼接,直至生成最后一个子空间的户型结构图并完成拼接。最后,确定最终的拼接结果为目标物理空间的户型图。由于生成单个子空间的户型结构图,需要的计算资源较少,能够适应多数控制设备的计算处理能力,并且边生成户型结构图边进行拼接,有利于用户即时的对拼接结果进行确认,保证生成的目标物理空间的户型图的准确性。
实际上,每个子空间的户型结构图的生成过程和拼接过程是相似的。本实施例中,将当前待拼接的子空间称为第一子空间,将在第一子空间之前已进行户型结构图拼接的子空间称为第二子空间。可以理解的是,第一子空间和第二子空间对应的子空间是在不断更新的。
在此,先对第一子空间的户型结构图的生成过程和拼接过程进行说明,第一子空间和第二子空间的更新情况在后续实施例中进行说明。
目标物理空间的户型图由多个子空间的户型结构图组成,目标物理空间的户型图和每个子空间的户型结构图均可以理解为该空间的二维平面图。两者的区别在于目标物理空间的户型图对应的子空间更多,二维平面图更“大”,而每一子空间的户型结构图仅为当前子空间的二维平面图,二维平面图更“小”。在例如AR看房等场景中,二维平面图更通俗的理解为物理空间的俯视图。
某一子空间的户型结构图中,包含有用于表示该子空间中的墙体的空间轮廓,以及用于表示该子空间中的门体和窗体的映射介质。由此,在确定某一子空间的户型结构图时,要先获取该子空间的空间轮廓和映射介质。
在一可选的实施例中,可根据在第一子空间的至少一个拍摄点位上采集的点云数据和/或全景图像,获取第一子空间的目标空间轮廓。具体地,可根据在第一子空间的至少一个拍摄点位上采集的点云数据,确定第一子空间的第一空间轮廓;根据在第一子空间的至少一个拍摄点位上采集的全景图像,确定第一子空间的第二空间轮廓。之后,根据第一空间轮廓和/或第二空间轮廓,确定第一子空间的目标空间轮廓。比如:确定第一空间轮廓为目标空间轮廓;或者,确定第二空间轮廓为目标空间轮廓;或者在上述第一空间轮廓和上述第二空间轮廓中选择轮廓线质量较好的空间轮廓作为第一子空间的目标空间轮廓;或者,对上述第一空间轮廓和上述第二空间轮廓的轮廓线做融合处理,得到轮廓线更优质的空间轮廓,确定融合处理后的空间轮廓为目标空间轮廓。
实际应用中,目标空间轮廓中包含有多条轮廓线。其中,存在一些与第一子空间中的墙体的实际位置不匹配的轮廓线,为保证目标空间轮廓能够准确的反映第一子空间,上述目标空间轮廓中的轮廓线可被编辑。由此,在根据第一空间轮廓和/或第二空间轮廓,确定第一子空间的目标空间轮廓之后,还可以对目标空间轮廓执行人工或自动化编辑处理。
为便于编辑目标空间轮廓,可选地,可在第一子空间的二维点云图像中显示第一子空间的目标空间轮廓;之后,响应于用户对目标空间轮廓的编辑操作,调整目标空间轮廓的轮廓线,以使目标空间轮廓的轮廓线与二维点云图像中的墙线重合。二维点云图像中的墙线对应于第一子空间中的墙体。
其中,第一子空间的二维点云图像是将第一子空间中至少一个拍摄点位的点云数据进行平面映射后得到的。
可以理解的是,点云数据实际上是一系列的三维坐标点,任一三维坐标点可用三维笛卡尔坐标(x,y,z)表示,其中,x,y,z分别是拥有共同的零点且彼此相互正交的x轴,y轴,z轴的坐标值。
在一可选实施例中,将在第一子空间的至少一个拍摄点位上采集的点云数据映射到二维平面,得到第一子空间的二维点云图像,包括:将在第一子空间的至少一个拍摄点位上采集的点云数据对应的三维坐标点(x,y,z)转换为二维坐标点(x,y),比如:将三维坐标点的z轴坐标值设置为0,进而,基于转换得到的二维坐标点得到第一子空间的二维点云图像。或者,先基于在第一子空间的至少一个拍摄点位上采集的点云数据对应的三维坐标点(x,y,z)生成第一子空间的三维空间结构图,然后获取三维空间结构图的俯视图,以作为第一子空间的二维点云图像。
由于第一子空间中的至少一个拍摄点位的相对位置关系已知,因此,可基于该相对位置关系,将在至少一个拍摄点位上采集的点云数据进行融合得到稠密的点云数据,进而映射得到二维点云图像。具体地,先融合第一子空间的至少一个拍摄点位的点云数据,以确定第一子空间的目标点云数据;然后,将目标点云数据映射到二维平面,以得到第一子空间的初始二维点云图像;之后,根据用户对初始二维点云图像的修正操作(比如:裁剪掉二维点云图像中边界模糊不清的区域,对二维点云图像中不清晰的墙线进行突出显示等),确定修正操作后得到的目标二维点云图像为第一子空间的二维点云图像。
其中,第一子空间的二维点云图像还可用于获取第一子空间的第一空间轮廓,比如:通过对第一子空间的二维点云图像进行边缘检测,获取第一空间轮廓。
针对目标空间轮廓中与实际墙体位置不匹配的轮廓线,可以基于预设的轮廓线编辑选项,调整目标空间轮廓中轮廓线的形态和/或位置,或者删除不存在对应墙线的轮廓线,或者添加某一墙线对应的轮廓线。
图21为本发明第五实施例提供的目标空间轮廓的示意图。举例来说,如图21中的左图所示,在对目标空间轮廓的轮廓线编辑前,目标空间轮廓中的轮廓线l与第一子空间的二维点云图像中的墙线L并不重合,但实际上轮廓线l和墙线L对应于第一子空间中的同一墙体。再例如,在对目标空间轮廓的轮廓线编辑前,目标空间轮廓中并不存在与二维点云图像中的墙线H对应的轮廓线。基于此,可通过预设的轮廓线编辑选项,调整轮廓线l的尺寸和位置,使轮廓线l与二维点云图像中的墙线L重合;通过预设的轮廓线编辑选项,添加轮廓线h,且轮廓线h与二维点云图像中的墙线H重合。编辑后的目标空间轮廓如图21中的右图所示。本实施例中,最终确定的目标空间轮廓的轮廓线之间相互连接。
第一子空间对应的户型结构图中的映射介质的确定过程如下:先通过例如图像识别等方式,从全景图像中识别对应的目标介质,即第一子空间中的实体介质(门体和窗体)在全景图像中的图像;然后,确定目标介质对应的映射介质,即门体和窗体在空间轮廓中对应的标识。
实际应用中,第一子空间中可能不止一个拍摄点位对应的全景图像中能够识别到目标介质,比如:第一子空间的拍摄点位1、拍摄点位2和拍摄点位3对应的3个全景图像中均包含有第一子空间中的门体和窗体对应的图像。
可以理解的是,在同一子空间中的至少一个拍摄点位上分别获取全景图像是为了保证各子空间的场景信息的完整性,通常获取到的全景图像对于确定目标介质来说是富余的。因此,在确定第一子空间的目标介质时,并不需要在第一子空间的所有全景图像中识别目标介质。
为了加快控制终端对目标介质的识别效率,可选地,在控制终端识别全景图像中的目标介质之前,可以从至少一个拍摄点位的全景图像中,确定出目标全景图像以用于识别目标介质。其中,目标全景图像为符合预设识别要求的全景图像,比如:视野最广、光线最佳的全景图像,或者包含有用户标记信息(比如:最佳全景图像)的全景图像。
其中,针对第一子空间,目标全景图像对应的拍摄点位可以与用于生成第一子空间的目标空间轮廓的点云数据对应的拍摄点位相同或不同。假设第一子空间中包含有两个拍摄点位,分别为拍摄点位A和拍摄点位B,在拍摄点位A上获取了全景图像A1和点云数据A2,在拍摄点位B上获取了全景图像B1和点云数据B2。若基于点云数据A2生成了目标空间轮廓,则既可以确定全景图像A1为目标全景图像,也可以确定全景图像B1为目标全景图像。类似 地,若基于点云数据B2生成了目标空间轮廓,则既可以确定全景图像A1为目标全景图像,也可以确定全景图像B1为目标全景图像。
由于在进行点云数据和全景图像采集之前,激光传感器和相机之间的相对位姿已预先标定。因此,针对第一子空间,基于预先标定的相对位姿,以及第一子空间中实际拍摄点位之间的相对位置关系,能够确定采集到的点云数据对应的三维点云坐标和全景图像的全景像素坐标之间的坐标映射。
进一步地,可根据第一子空间的点云数据和目标全景图像之间的坐标映射,建立的目标全景图像和第一子空间的目标空间轮廓之间的映射,即预先确定第一子空间的目标全景图像和目标空间轮廓之间的映射关系。
从而,在第一子空间的目标空间轮廓上确定用于表示目标介质的映射介质,包括:根据第一子空间的目标全景图像和目标空间轮廓之间的映射关系,获取目标介质在目标全景图像中对应的全景像素坐标以及所映射的空间轮廓坐标,以在第一子空间的目标空间轮廓中确定目标介质对应的映射介质。
其中,映射介质与目标介质的目标标识以及目标显示尺寸相适配,目标标识用于区分属于不同类型(门体或窗体)的目标介质。
在本发明实施例中,不限定对全景图像和点云数据坐标映射的具体方式,可选地,可以直接根据获取全景图像和点云数据的设备之间的相对位姿关系,将全景像素坐标映射为三维点云坐标,以及将三维点云坐标映射为全景像素坐标;也可以借助相对位姿关系和中间坐标系,先将全景像素坐标映射为中间坐标,再将中间坐标映射为三维点云坐标;以及先将三维点云坐标映射为中间坐标,再将中间坐标映射为全景像素坐标。在此,不限定中间坐标系的具体类型,也不限定在坐标映射过程中使用的具体方式,根据中间坐标系的不同,以及相对位姿关系的不同,所采用的映射方式也会不同。
在生成第一子空间的户型结构图之后,将第一子空间的户型结构图与第二子空间的户型结构图进行拼接处理,其中,第二子空间为在第一子空间之前已进行户型结构图拼接的子空间。
由于目标物理空间中的多个空间相互之间通过门体或窗体连通,比如:子空间E和子空间F通过同一目标门体连通,在采集点云数据和全景图像时,目标门体处于打开状态,因此,子空间E的全景图像中包含有子空间E中的某一物体。从而,实际应用中,可根据多个子空间的至少一个拍摄点位的全景图像,通过例如特征匹配等方式,确定多个子空间之间的相邻关系;之后,基于多个子空间之间的相邻关系,将第一子空间的户型结构图与第二子空间的户型结构图进行拼接处理。
以上为当前待拼接的第一子空间的户型结构图的生成过程和拼接过程。
以下结合图22至图24对第一子空间和第二子空间的更新情况,以及目标物理空间的户型图的获取过程进行说明。
图22为本发明第五实施例提供的一种目标物理空间的实际空间结构的示意图,如图22所示,目标物理空间中包含有3个子空间,分别为卧室1、卧室2和客厅3,其中卧室1和客厅3通过门体1连通,卧室2和客厅3通过门体2连通,卧室1和卧室2通过窗体3连通。
在开始生成目标物理空间的户型图时,可根据每一子空间的点云数据或全景图像的采集时间点,确定采集时间点最早的点云数据或全景图像对应的子空间为第一个生成户型结构图的子空间;或者,根据每个子空间对应的相邻的子空间的数量,确定相邻子空间数量最多的子空间为第一个生成户型结构图的子空间;或者,从多个子空间中随机选择一个子空间,作为第一个生成户型结构图的子空间。
本实施例中,假设第一个生成户型结构图的子空间为卧室1,则将卧室1作为第一子空间,生成卧室1的户型结构图1。
图23为本发明第五实施例提供的一种户型结构图的示意图,如图23中的左图所示,户型结构图1中包含有卧室1的目标空间轮廓,目标空间轮廓上标识有表示卧室1中的门体1和窗体3的映射介质。
由于在卧室1之前并不存在已进行户型结构图拼接的第二子空间,因此户型结构图1即为第一子空间的户型结构图与第二子空间的户型结构图的拼接处理结果1。
之后,判断3个子空间中是否存在未进行户型结构图拼接处理的子空间。
基于本实施例中的假设,3个子空间中,还存在2个未进行户型结构图拼接处理的子空间,分别为卧室2和客厅3。
针对多个子空间中存在未进行户型结构图拼接处理的子空间的情形,首先,将第二子空间的户型结构图更新为拼接处理结果,即确定第二子空间的户型结构图为拼接处理结果1。之后,从未进行户型结构图拼接处理的子空间中重新确定出一个待拼接的第一子空间,即从卧室2和客厅3中重新确定一个子空间作为第一子空间,以将重新确定出的第一子空间的户型结构图与更新后的第二子空间的户型结构图进行拼接。
具体实施过程中,可选地,可根据多个子空间分别对应的至少一个拍摄点位的全景图像,确定多个子空间之间的相邻关系;根据该相邻关系,从未进行户型结构图拼接处理的子空间中重新确定出一个待拼接的第一子空间,其中,重新确定出的第一子空间与第二子空间相邻。或者,根据未进行户型结构图拼接处理的子空间中每一子空间对应的点云数据和/或全景图像的采集时间点,从未进行户型结构图拼接处理的子空间中重新确定出一个待拼接的第一子空间,其中,重新确定出的第一子空间对应的采集时间点与当前时刻的差值大于未进行户型结构图拼接处理的子空间中其他子空间对应的采集时间点与当前时刻的差值。或者,从未进行户型结构图拼接处理的子空间中随机选择一个子空间作为新的待拼接的第一子空间。
本实施例中,假设根据相邻关系,从卧室2和客厅3中确定卧室2为新的第一子空间,则生成卧室2的户型结构图2,并将卧室2的户型结构图2与第二子空间的户型结构图(即拼接处理结果1)进行拼接处理,其拼接处理结果2如图23中的右图所示。其中,卧室2为第一子空间,卧室1为第二子空间。
之后,判断3个子空间中是否存在未进行户型结构图拼接处理的子空间。
基于本实施例中的假设,3个子空间中,还存在1个未进行户型结构图拼接处理的子空间,即客厅3。
由此,将第二子空间的户型结构图更新为拼接处理结果2。由于仅剩一个未进行户型结构图拼接处理的子空间,则直接确定客厅3为新的第一子空间,生成客厅3的户型结构图3,并将客厅3的户型结构图3与更新后的第二子空间的户型结构图(即拼接处理结果2)进行拼接处理,其拼接处理结果3如图24中所示,图24为本发明第五实施例提供的另一种户型结构图的示意图,其中,客厅3为第一子空间,第二子空间包括卧室1和卧室2。
之后,判断3个子空间中是否存在未进行户型结构图拼接处理的子空间。
基于本实施例中的假设,多个子空间中不存在未进行户型结构图拼接处理的子空间,因此,将拼接处理结果3确定为目标物理空间的户型图,即将图24确定为目标物理空间的户型图。图24中的拼接处理结果3中包含有目标物理空间中的3个子空间的户型结构图。
本实施例中,在生成目标物理空间的户型图时,逐个生成多个子空间的户型结构图,并且,每生成一个子空间的户型结构图,就与之前生成的户型结构图进行拼接处理,直至生成最后一个子空间的户型结构图并完成拼接。最后,确定最终的拼接结果为目标物理空间的户型图。由于生成单个子空间的户型结构图,需要的计算资源较少,能够适应多数控制设备的计算处理能力,并且边生成户型结构图边进行拼接,有利于用户即时的对拼接结果进行确认,保证生成的目标物理空间的户型图的准确性。另外,本实施例中,各子空间的户型结构图中空间轮廓上的映射介质是基于全景图像确定的,由于相较于点云数据而言,全景图像更能反映实际空间中的门体和窗体等(即目标介质)的实际位置,因此,基于全景图像的辅助,各子空间的户型结构图中标识有较为准确的门体和窗体信息,能够更好的反映实际空间中的场景信息,进而根据多个子空间的户型结构图确定的户型图也能够准确反映目标物理空间的实际空间结构。
以下将详细描述本发明的一个或多个实施例的户型图生成装置。本领域技术人员可以理解,这些装置均可使用市售的硬件组件通过本方案所教导的步骤进行配置来构成。
图25为本发明第五实施例提供的一种户型图生成装置的结构示意图,该装置用于生成目标物理空间的户型图,其中所述目标物理空间包括多个子空间,应用于控制终端,如图25所示,该装置包括:获取模块51,拼接模块52和处理模块53。
获取模块51,用于获取信息采集终端得到的所述多个子空间分别对应的点云数据和全景图像,其中,任一子空间的点云数据和全景图像是在所述任一子空间中的至少一个拍摄点位采集的。
拼接模块52,用于在依次对所述多个子空间进行的户型结构图拼接处理的过程中,针对当前待拼接的第一子空间,根据在所述第一子空间的至少一个拍摄点位上采集的点云数据和/或全景图像,获取所述第一子空间的目标空间轮廓;获取在目标全景图像中识别的目标介质,所述目标介质为所述第一子空间中的实体介质在所述目标全景图像中的图像,所述目标全景图像是在所述第一子空间的至少一个拍摄点位上采集的全景图像中,用于识别所述目标介质的全景图像;在所述第一子空间的目标空间轮廓上确定用于表示所述目标介质的映射介质,以获取所述第一子空间的户型结构图;将所述第一子空间的户型结构图与第二子空间的户型结构图进行拼接处理,所述第二子空间为所述第一子空间之前已进行户型结构图拼接的子空间。
处理模块53,用于若所述多个子空间中不存在未进行户型结构图拼接处理的子空间,则将所述拼接处理结果确定为所述目标物理空间的户型图。
可选地,所述处理模块53,还用于若所述多个子空间中存在未进行户型结构图拼接处理的子空间,则将所述第二子空间的户型结构图更新为所述拼接处理结果;从所述未进行户型结构图拼接处理的子空间中重新确定出一个待拼接的第一子空间,以将重新确定出的第一子空间的户型结构图与更新后的第二子空间的户型结构图进行拼接。
可选地,所述处理模块53,具体用于根据所述多个子空间分别对应的至少一个拍摄点位的全景图像,确定多个子空间之间的相邻关系;根据所述相邻关系,从所述未进行户型结构图拼接处理的子空间中重新确定出一个待拼接的第一子空间,其中,重新确定出的第一子空间与所述第二子空间相邻。
可选地,所述处理模块53,还具体用于根据所述未进行户型结构图拼接处理的子空间中每一子空间对应的点云数据和/或全景图像的采集时间点,从所述未进行户型结构图拼接处理的子空间中重新确定出一个待拼接的第一子空间,其中,重新确定出的第一子空间对应的所述采集时间点与当前时刻的差值大于所述未进行户型结构图拼接处理的子空间 中其他子空间对应的所述采集时间点与当前时刻的差值。
可选地,所述拼接模块52,具体用于根据在所述第一子空间的至少一个拍摄点位上采集的点云数据,确定所述第一子空间的第一空间轮廓;根据在所述第一子空间的至少一个拍摄点位上采集的全景图像,确定所述第一子空间的第二空间轮廓;根据所述第一空间轮廓和/或所述第二空间轮廓,确定所述第一子空间的目标空间轮廓。
可选地,所述拼接模块52,还具体用于根据在所述第一子空间的至少一个拍摄点位上采集的点云数据,确定所述第一子空间的二维点云图像;根据所述二维点云图像,确定所述第一子空间的第一空间轮廓。
可选地,所述拼接模块52,还具体用于根据所述第一子空间的目标全景图像和目标空间轮廓之间的映射关系,获取所述目标介质在所述目标全景图像中对应的全景像素坐标以及所映射的空间轮廓坐标,以在所述第一子空间的目标空间轮廓中确定所述目标介质对应的映射介质;其中,所述映射介质与所述目标介质的目标标识以及目标显示尺寸相适配,所述目标标识用于区分属于不同类型的目标介质,所述映射关系为根据所述第一子空间的点云数据和所述目标全景图像之间的坐标映射,所建立的所述目标全景图像和所述空间轮廓之间的映射。
图25所示装置可以执行前述实施例中的步骤,详细的执行过程和技术效果参见前述实施例中的描述,在此不再赘述。
第六实施例
图26为本发明第六实施例提供的一种户型图生成方法的流程图,用于生成目标物理空间的户型图,该目标物理空间至少包括N个空间,应用于控制终端,如图26所示,该户型图生成方法包括如下步骤:
步骤S261、获取信息采集终端在目标物理空间中N个空间的每一空间所采集的点云数据和全景图像,其中,点云数据和全景图像是在每一空间中的至少一个拍摄点位采集的。
步骤S262、获取第M空间轮廓进行显示以用于编辑;其中,第M空间轮廓是N个空间中第M空间的空间轮廓;第M空间轮廓是根据第M空间的至少一个拍摄点位采集的点云数据和/或全景图像获取的。
步骤S263、获取在第一目标全景图像中识别的第一目标介质,以使得根据第一目标介质获取第一目标介质在第M空间轮廓中的第一映射介质,以用于根据第一映射介质编辑第M空间轮廓以获取第M空间的户型结构图。
步骤S264、获取第M+1空间轮廓进行显示以用于编辑;其中,第M+1空间轮廓是N个空间中第M+1空间的空间轮廓,第M+1空间为第M空间的相邻空间,第M+1空间轮廓是根据第M+1 空间的至少一个拍摄点位采集的点云数据和/或全景图像获取的。
步骤S265、获取在第二目标全景图像中识别的第二目标介质,以使得根据第二目标介质获取第二目标介质在第M+1空间轮廓中的第二映射介质,以用于根据第二映射介质编辑第M+1空间轮廓以获取第M+1空间的户型结构图。
步骤S266、将第M+1空间的户型结构图与第M空间的户型结构图进行拼接,并判断第M+1空间是否是N个空间中最后一个生成户型结构图的空间;若否,则执行步骤S267;若是,则执行步骤S268。
步骤S267、将第M空间和第M+1空间合并作为第M个空间,并返回执行步骤S264。
步骤S268、将拼接结果作为目标物理空间的户型图,以用于展示,流程结束。
其中,步骤S263中的第一目标全景图像是在第M空间的至少一个拍摄点位上采集的全景图像中,用于识别第一目标介质的全景图像;第一目标介质为第M空间中的实体介质在第一目标全景图像中的图像。步骤S265中的第二目标全景图像是第M+1空间的至少一个拍摄点位采集的全景图像中,用于识别第二目标介质的全景图像;第二目标介质为第M+1空间中的实体介质在第二目标全景图像中的图像。
控制终端可以是智能手机、平板电脑、笔记本电脑等具有数据处理能力的终端设备。控制终端可通过例如蓝牙、无线保真(Wireless Fidelity,简称WiFi)热点等方式与信息采集终端通信连接。
本实施例中,在生成目标物理空间的户型图时,逐个生成N个空间的户型结构图,并且,每生成一个空间的户型结构图,就与之前生成的户型结构图进行拼接,直至生成最后一个空间的户型结构图并完成拼接。最后,确定最终的拼接结果为目标物理空间的户型图。由于生成单个空间的户型结构图,需要的计算资源较少,能够适应多数控制设备的计算处理能力,并且边生成户型结构图边进行拼接,有利于用户即时的对拼接结果进行确认,保证生成的目标物理空间的户型图的准确性。除此之外,本实施例中,各空间户型结构图中空间轮廓上的映射介质是基于全景图像确定的,由于相较于点云数据而言,全景图像更能反映实际空间中的门体和窗体等(即目标介质)的实际位置,因此,基于全景图像的辅助,各空间的户型结构图中标识有较为准确的门体和窗体信息,能够更好的反映实际空间中的场景信息。
在具体介绍本发明实施例提供的户型图生成方法之前,先对本发明实施例中的第M空间和第M+1空间等做简要说明。
本实施例中,N用于表示目标物理空间包括的空间数量,N的取值为大于等于1的整数。例如,假设目标物理空间为一房屋,若该房屋中仅包含一个空间(比如:1个客厅),则N 的取值为1;若该房屋中包含3个空间(比如:1个客厅、1个卧室和1个卫生间)则N的取值为3。可以理解的是,当N的取值大于等于2时,目标物理空间中的任一个空间必然存在与其相邻的空间。
当N的取值为1时,第M空间实际上就是指目标物理空间包含的唯一一个空间,在这种情况下,并不存在第M+1空间。因此,在生成目标物理空间的户型图时,第M空间的户型结构图就是目标物理空间的户型图。
当N的取值为2时,假设目标物理空间中包括空间a和空间b,若第M空间为空间a,则第M+1空间为空间b;反之,若第M空间为空间b,则第M+1空间为空间a。因此,在生成目标物理空间的户型图时,第M空间的户型结构图和第M+1空间的户型结构图的拼接结果,即为目标物理空间的户型图。
当N的取值大于等于3时,第M空间和第M+1空间对应的空间实际上是随着拼接过程不断更新的,并不特指N个空间中的某个空间。
以N的取值为3,举例来说:假设目标物理空间中包括空间a、空间b和空间c,其中,空间a与空间b相邻,空间b与空间c相邻。
在开始生成目标物理空间的户型图时,假设空间a为第一个要生成户型结构图的空间,即先确定空间a为第M空间。在生成空间a的户型结构图A之后,确定第M+1空间,由于空间b与空间a相邻,则确定空间b为第M+1空间,并生成空间b的户型结构图B;接着,将户型结构图A与户型结构图B进行拼接,得到拼接结果AB。
由于还未生成空间c的户型结构图,即空间b(也即第M+1空间)不是3个空间中最后一个生成户型结构图的空间。因此,将第M空间和第M+1空间进行合并作为第M空间,即合并后的第M空间包括空间a和空间b,合并后的第M空间的户型结构图为上述拼接结果AB。
在根据合并后的第M空间重新确定第M+1空间时,由于空间c与空间b相邻,从而空间c与合并后的第M空间也相邻。进而,确定空间c为第M+1空间,并生成空间c的户型结构图C;接着,将户型结构图C与第M空间的户型结构图(即拼接结果AB)进行拼接,得到拼接结果ABC。由于不存在未生成户型结构图的空间,因此,拼接结果ABC即为目标物理空间的户型图。
在上述过程中,第M空间对应的空间从空间a更新为了空间a和空间b,第M+1空间从空间b更新为了空间c。
以下结合图27所示的生成户型图的场景示意图,对图26所示的户型图生成方法进行说明。
图27为本发明第六实施例提供的一种生成户型图的场景示意图。如图27所示,信息采集终端与控制终端相互解耦,信息采集终端在N个空间中每一空间采集点云数据和全景图像,并发送给控制终端,以使控制终端逐个生成N个空间的户型结构图,比如:依次生成空间1的户型结构图a,空间2的户型结构图b等;并边生成户型结构图,边对生成的户型结构图进行拼接处理,以最终得到目标物理空间的户型图。其中,信息采集终端可通过例如蓝牙、无线保真(Wireless Fidelity,简称WiFi)热点等方式与控制终端通信连接。
可选地,信息采集终端也可直接集成于控制终端,与控制终端作为一个整体,控制终端可直接同步获取信息采集终端在N个空间的每一空间所采集的点云数据和全景图像,而不需要基于建立的通信连接传输点云数据和全景图像。由此,可以提高控制终端生成目标物理空间的户型图的效率。
可选地,还可以通过云端服务器生成目标物理空间的户型图。其中,云端服务器可以为云端的物理服务器或虚拟服务器,控制终端通过接入基于通信标准的无线网络,如WiFi,2G、3G、4G/LTE、5G等移动通信网络与云端服务器通信连接。
云端服务器可接收由控制终端转发的N个空间分别对应的点云数据和全景图像,以生成目标物理空间的户型图,并将户型图反馈给控制终端,以用于显示。或者,云端服务器也可直接与信息采集终端通信连接,以直接获取信息采集终端采集的多个子空间分别对应的点云数据和全景图像,以生成目标物理空间的户型图。实际上,云端服务器生成目标物理空间的户型图的过程与控制终端生成空间结构图的处理过程相同,但由于云端服务器的计算能力更强,其生成目标物理空间的户型图的效率更高,能够进一步提升用户的使用体验。本实施例中,以控制终端生成目标物理空间的户型图为例,进行说明,但并不局限于此。
实际应用中,为获取每个空间中完整的空间信息,任一空间中可能设置有不止一个拍摄点位,比如:空间X中对应有拍摄点位1、拍摄点位2和拍摄点位3。因此,本实施例中,任一空间的点云数据和全景图像是指在任一空间中的至少一个拍摄点位上采集的点云数据和全景图像。
实际应用中,拍摄点位的设置,可以是用户在通过信息采集终端采集每一空间的场景信息时,基于当前的采集情况进行自定义的;也可以是信息采集终端或控制终端基于用户输入的空间描述信息(比如空间大小等),为空间自动生成的参考拍摄点位。
信息采集终端在各拍摄点位上采集点云数据和全景图像时,对应的信息采集过程是相同的,本实施例中,以某一拍摄点位Y为例,举例说明。
首先,对信息采集终端的基本构成做简要说明。信息采集终端包括:激光传感器、相机、电机和处理器(比如CPU)等。其中,激光传感器和相机作为感知设备,用于在各拍摄点位上采集每一空间的场景信息,即点云数据和全景图像;电机用于带动激光传感器和相机旋转,以便从各个角度采集点云数据和全景图像。可选地,信息采集终端还包括惯性测量单元(Inertial measurement unit,简称IMU)。IMU用于对采集的点云数据和图像数据对应的位姿信息进行修正,减小由于环境或人为因素(比如:信息采集终端未水平放置等)导致的误差。
接着,对信息采集终端在拍摄点位Y上的数据采集过程进行说明。
针对拍摄点位Y,信息采集终端响应于信息采集指令,驱动电机带动激光传感器360度旋转,以采集目标拍摄点位Y对应的点云数据;驱动电机带动相机360度旋转,以采集目标拍摄点位Y对应的全景图像。其中,信息采集指令由用户通过控制终端发送,或者,响应于用户在信息采集终端上的触发操作,触发信息采集指令。
在拍摄点位Y上,电机可同时带动激光传感器和相机旋转,以同时采集点云数据和全景图像,也可按先后顺序依次分别带动激光传感器和相机旋转,比如:先带动激光传感器旋转后带动相机旋转,或者先带动相机旋转后带动激光传感器旋转,以先后分别采集点云数据和全景图像,本实施例对此不作限制。
在一可选实施例中,若先采集点云数据,再采集全景图像,为了提高采集的全景图像质量,可选地,在采集点云数据的过程中,可同步开启相机,以收集当前拍摄点位的场景光照信息进行测光,确定对应的曝光参数。之后,相机基于确定的曝光参数,采集全景图像。
可选地,信息采集终端中的相机为全景相机或非全景相机。若信息采集终端中的相机为非全景相机,则在上述360度旋转过程中,控制该相机在多个预设角度拍摄目标拍摄点位Y对应的图像,上述处理器可通过例如特征匹配算法等全景图像拼接算法,将在多个预设角度拍摄的图像缝合为全景图像。比如:若相机的视角为120度,则可以某一基准方向为0度,将基于该基准方向的a度、(a+120)度和(a+240)度确定为预设角度,控制相机在这3个预设角度拍摄得到图像1、图像2和图像3;之后,缝合图像1至图像3,以得到全景图像。
可以理解的是,预设角度之间的间隔越小,获取到的图像数量就越多,进而缝合后得到的全景图像就越准确。具体实施过程中,预设角度的数量可由用户根据相机的视角进行自定义设置,基于多个预设角度拍摄的图像包含有当前点位360度范围内的场景信息。
可选地,在生成全景图像时,可结合高动态范围成像(High Dynamic Range Imaging,简称HDRI),生成高质量的全景图像。
信息采集终端在一个拍摄点位采集完点云数据和全景图像后,可直接将该拍摄点位的 点云数据和全景图像发送给控制终端;或者,先进行存储,之后在完成对当前所在空间的所有拍摄点位上点云数据和全景图像的采集后,将该空间的所有拍摄点位的点云数据和全景图像发送给控制终端,本实施例对此不做限制。
实际应用中,控制终端基于N个空间中任一个空间的点云数据和全景图像,生成某一空间的户型结构图的过程都是相同的。本实施例中,以空间Z为例,先举例说明空间Z的户型结构图的生成过程。其中,空间Z可以作为第M空间,也可以作为第M+1空间。
为表示空间Z对应的物理空间,空间Z的户型结构图包含有空间Z的空间轮廓和映射介质。其中,空间轮廓用于表示物理空间中的墙体,映射介质用于表示物理空间中的窗体和门体。因此,空间Z的户型结构图的获取过程,可以进一步划分为空间Z的空间轮廓的确定过程,以及空间Z的映射介质的确定过程,以下分别进行说明。
首先,介绍空间Z的空间轮廓的获取过程。
在一可选的实现方式中,可根据在空间Z的至少一个拍摄点位上采集的点云数据和/或全景图像,获取空间Z的空间轮廓。具体地,可根据在空间Z的至少一个拍摄点位上采集的点云数据,确定空间Z的第一空间轮廓;根据在空间Z的至少一个拍摄点位上采集的全景图像,确定空间Z的第二空间轮廓。之后,根据第一空间轮廓和/或第二空间轮廓,确定空间Z的空间轮廓。比如:确定第一空间轮廓为空间Z的空间轮廓;或者,确定第二空间轮廓为空间Z的空间轮廓;或者在上述第一空间轮廓和上述第二空间轮廓中选择轮廓线质量较好的空间轮廓作为空间Z的空间轮廓;或者,对上述第一空间轮廓和上述第二空间轮廓的轮廓线做融合处理,得到轮廓线更优质的空间轮廓,确定融合处理后的空间轮廓为空间Z的空间轮廓。
实际应用中,空间Z的空间轮廓中包含有多条轮廓线。其中,存在一些与空间Z的墙体的实际位置不匹配的轮廓线,为保证该空间轮廓能够准确的反映空间Z,上述空间Z的空间轮廓中的轮廓线可被编辑。由此,在根据第一空间轮廓和/或第二空间轮廓,确定空间Z的空间轮廓之后,还可以对空间Z的空间轮廓执行人工或自动化编辑处理。
为便于编辑空间Z的空间轮廓,可选地,可在空间Z的二维点云图像中显示空间Z的空间轮廓;之后,响应于用户对空间Z的空间轮廓的编辑操作,调整空间Z的空间轮廓的轮廓线,以使空间Z的空间轮廓的轮廓线与二维点云图像中的墙线重合。二维点云图像中的墙线对应于空间Z中的墙体。
其中,空间Z的二维点云图像是将空间Z中至少一个拍摄点位的点云数据进行平面映射后得到的。
由于空间Z中的至少一个拍摄点位的相对位置关系已知,因此,可基于该相对位置关系,将在至少一个拍摄点位上采集的点云数据进行融合得到稠密的点云数据,进而映射得到二维点云图像。
具体地,先融合空间Z的至少一个拍摄点位的点云数据,以确定空间Z的目标点云数据;然后,将目标点云数据映射到二维平面,以得到空间Z的初始二维点云图像;之后,根据用户对初始二维点云图像的修正操作(比如:裁剪掉二维点云图像中边界模糊不清的区域,对二维点云图像中不清晰的墙线进行突出显示等),确定修正操作后得到的目标二维点云图像为空间Z的二维点云图像。
其中,可以理解的是,点云数据实际上是一系列的三维坐标点,任一三维坐标点可用三维笛卡尔坐标(x,y,z)表示,其中,x,y,z分别是拥有共同的零点且彼此相互正交的x轴,y轴,z轴的坐标值。
在一可选实施例中,将空间Z的目标点云数据映射到二维平面,以得到空间Z的初始二维点云图像,包括:将空间Z的目标点云数据对应的三维坐标点(x,y,z)转换为二维坐标点(x,y),比如:将三维坐标点的z轴坐标值设置为0,进而,基于转换得到的二维坐标点得到空间Z的初始二维点云图像。或者,先基于空间Z的目标点云数据对应的三维坐标点(x,y,z)生成空间Z的三维空间结构图,然后获取三维空间结构图的俯视图,以作为空间Z的初始二维点云图像。
其中,空间Z的二维点云图像还可用于获取空间Z的第一空间轮廓,比如:通过对空间Z的二维点云图像进行边缘检测,获取空间Z的第一空间轮廓。
针对空间Z的空间轮廓中与实际墙体位置不匹配的轮廓线,可以基于预设的轮廓线编辑选项,调整空间Z的空间轮廓中轮廓线的形态和/或位置,或者删除不存在对应墙线的轮廓线,或者添加某一墙线对应的轮廓线。
为便于理解,举例来说,图28为本发明第六实施例提供的空间Z的空间轮廓的示意图。如图28中的左图所示,在对空间Z的空间轮廓的轮廓线编辑前,空间Z的空间轮廓中的轮廓线l与空间Z的二维点云图像中的墙线L并不重合,但实际上轮廓线l和墙线L对应于空间Z中的同一墙体。再例如,在对空间Z的空间轮廓的轮廓线编辑前,空间Z的空间轮廓中并不存在与二维点云图像中的墙线H对应的轮廓线。基于此,可通过预设的轮廓线编辑选项,调整轮廓线l的尺寸和位置,使轮廓线l与二维点云图像中的墙线L重合;通过预设的轮廓线编辑选项,添加轮廓线h,且轮廓线h与二维点云图像中的墙线H重合。编辑后的空间Z的空间轮廓如图28中的右图所示。本实施例中,最终确定的空间Z的空间轮廓的轮廓线之间相互连接。
之后,介绍空间Z的映射介质的获取过程。
概括来说,要获取空间Z的映射介质,要先通过例如图像识别等从全景图像中识别对应的目标介质,即空间Z中的实体介质(门体和窗体)在全景图像中的图像;然后,再确定目标介质对应的映射介质。
实际应用中,空间Z中可能不止一个拍摄点位对应的全景图像中包含有目标介质,比如:空间Z的拍摄点位1、拍摄点位2和拍摄点位3对应的3个全景图像中均包含有空间Z中的门体和窗体对应的图像。
可以理解的是,在同一子空间中的至少一个拍摄点位上分别获取全景图像是为了保证各子空间的场景信息的完整性,通常获取到的全景图像对于确定目标介质来说是富余的。因此,在确定空间Z的目标介质时,并不需要在空间Z的所有全景图像中识别目标介质。
为了加快控制终端对目标介质的识别效率,可选地,在控制终端识别全景图像中的目标介质之前,可以从空间Z的至少一个拍摄点位的全景图像中,确定出目标全景图像以用于识别目标介质。其中,目标全景图像为符合预设识别要求的全景图像,比如:视野最广、光线最佳的全景图像,或者包含有用户标记信息(比如:最佳全景图像)的全景图像。
其中,针对空间Z,目标全景图像对应的拍摄点位可以与用于生成空间轮廓的点云数据对应的拍摄点位相同或不同。假设空间Z中包含有两个拍摄点位,分别为拍摄点位A和拍摄点位B,在拍摄点位A上获取了全景图像A1和点云数据A2,在拍摄点位B上获取了全景图像B1和点云数据B2。若基于点云数据A2生成了空间Z的空间轮廓,则既可以确定全景图像A1为目标全景图像,也可以确定全景图像B1为目标全景图像。类似地,若基于点云数据B2生成了空间Z的空间轮廓,则既可以确定全景图像A1为目标全景图像,也可以确定全景图像B1为目标全景图像。
由于在进行点云数据和全景图像采集之前,激光传感器和相机之间的相对位姿已预先标定。因此,针对空间Z,基于预先标定的相对位姿,以及空间Z中实际拍摄点位之间的相对位置关系,能够确定采集到的空间Z的点云数据对应的三维点云坐标和全景图像的全景像素坐标之间的坐标映射。
进一步地,可根据空间Z的点云数据和目标全景图像之间的坐标映射,建立的空间Z的目标全景图像和空间Z的空间轮廓之间的映射,即预先确定空间Z的目标全景图像和空间轮廓之间的映射关系。
从而,可以根据空间Z的目标全景图像和空间轮廓之间的映射关系,获取目标介质在目标全景图像中对应的全景像素坐标以及所映射的空间轮廓坐标,以在空间Z的空间轮廓中确定目标介质对应的映射介质,以获取空间Z的户型结构图。
其中,映射介质与目标介质的目标标识以及目标显示尺寸相适配,目标标识用于区分属于不同类型(门体或窗体)的目标介质。
在本发明实施例中,不限定对全景图像和点云数据坐标映射的具体方式,可选地,可以直接根据获取全景图像和点云数据的设备之间的相对位姿关系,将全景像素坐标映射为三维点云坐标,以及将三维点云坐标映射为全景像素坐标;也可以借助相对位姿关系和中间坐标系,先将全景像素坐标映射为中间坐标,再将中间坐标映射为三维点云坐标;以及先将三维点云坐标映射为中间坐标,再将中间坐标映射为全景像素坐标。在此,不限定中间坐标系的具体类型,也不限定在坐标映射过程中使用的具体方式,根据中间坐标系的不同,以及相对位姿关系的不同,所采用的映射方式也会不同。
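As a rough illustration of the kind of coordinate mapping left open above, the sketch below converts an equirectangular panorama pixel into a viewing ray and transforms it by a pre-calibrated relative pose (R, t). The longitude/latitude parameterization, the axis conventions, and every name here are assumptions for illustration only; the embodiments deliberately do not fix a concrete mapping method:

```python
import math

def pixel_to_ray(u, v, width, height):
    """Convert an equirectangular panorama pixel (u, v) into a unit viewing
    ray in the camera frame, using a longitude/latitude parameterization."""
    lon = (u / width) * 2.0 * math.pi - math.pi    # horizontal angle in [-pi, pi)
    lat = math.pi / 2.0 - (v / height) * math.pi   # vertical angle in [-pi/2, pi/2]
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))

def apply_pose(vec, rotation, translation):
    """Transform a vector/point by the pre-calibrated relative pose (R, t)
    between the camera and the laser sensor."""
    return tuple(sum(rotation[i][j] * vec[j] for j in range(3)) + translation[i]
                 for i in range(3))

# With an identity calibration, the centre pixel looks straight ahead (+z).
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
ray = pixel_to_ray(512, 256, 1024, 512)
point = apply_pose(ray, identity, (0.0, 0.0, 0.0))
```

Intersecting such a ray with the wall planes of the spatial contour would then yield the contour coordinates onto which a door or window pixel maps.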
The above describes the process of obtaining the floor plan structure diagram of any space Z in the target physical space.
The process of obtaining the floor plan of a target physical space containing at least N spaces is described below with reference to FIG. 22 of the fifth embodiment and FIGS. 29 to 31.
As shown in FIG. 22, the target physical space contains three spaces: bedroom 1, bedroom 2 and living room 3. Bedroom 1 and living room 3 communicate through door 1, bedroom 2 and living room 3 communicate through door 2, and bedroom 1 and bedroom 2 communicate through window 3.
When generation of the floor plan of the target physical space starts, the space whose point cloud data or panoramic image has the earliest acquisition time may be determined as the first space for which a floor plan structure diagram is generated, according to the acquisition times of each space's point cloud data or panoramic images; alternatively, the space with the largest number of adjacent spaces may be determined as the first such space; or one space may be selected at random from the multiple spaces as the first such space.
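The three selection strategies above can be sketched as one helper. The function name, the data shapes (`capture_times` mapping each space to an acquisition time, `adjacency` mapping each space to its set of neighbors), and the three-room example are hypothetical illustrations of the rule, not part of the patent:

```python
import random

def pick_first_space(capture_times, adjacency, strategy="earliest"):
    """Pick the first space for which a floor plan structure diagram is generated."""
    rooms = list(capture_times)
    if strategy == "earliest":          # earliest acquisition time wins
        return min(rooms, key=lambda r: capture_times[r])
    if strategy == "most_neighbors":    # most adjacent spaces wins
        return max(rooms, key=lambda r: len(adjacency.get(r, ())))
    return random.choice(rooms)         # otherwise pick at random

times = {"bedroom1": 100, "bedroom2": 140, "living3": 120}
adjacency = {"bedroom1": {"bedroom2", "living3"},
             "bedroom2": {"bedroom1", "living3"},
             "living3": {"bedroom1", "bedroom2"}}
first = pick_first_space(times, adjacency, strategy="earliest")  # bedroom1
```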
In this embodiment, assume that bedroom 1 is the first space for which a floor plan structure diagram is generated. Bedroom 1 is taken as the M-th space, and the floor plan structure diagram R of the M-th space (bedroom 1) is generated, as shown in the left part of FIG. 29. FIG. 29 is a schematic diagram of a floor plan structure diagram provided in the sixth embodiment of the present invention.
Then, a space adjacent to the M-th space is determined as the (M+1)-th space, and the floor plan structure diagram of the (M+1)-th space is generated.
It should be understood that the multiple spaces in the target physical space communicate with one another through doors or windows. For example, if space E and space F communicate through the same target door, and the target door is open while the point cloud data and panoramic images are being acquired, the panoramic images of space E will contain some object located in space F. Therefore, in practice, the adjacency relationships among the multiple spaces can be determined from the panoramic images of the at least one capture point of each space, for example by feature matching.
Accordingly, when determining the (M+1)-th space adjacent to the M-th space, as an optional implementation, the (M+1)-th space may be determined from among the remaining spaces of the N spaces for which no floor plan structure diagram has been generated, according to the panoramic images of the at least one capture point of the M-th space and the panoramic images of the at least one capture point of each of those remaining spaces. Specifically, from these panoramic images, the total number of adjacent spaces corresponding to each space for which no floor plan structure diagram has been generated is determined, and a space whose total number of adjacent spaces is greater than or equal to a set threshold is determined as the (M+1)-th space. In specific implementations, if several spaces have equal totals of adjacent spaces, one of them may be selected at random as the (M+1)-th space.
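A minimal sketch of this neighbor-count selection rule, assuming the adjacency relationships have already been derived (for example by feature matching of panoramic images, as above) and using hypothetical names and data shapes:

```python
def pick_next_space(done, adjacency, threshold=1):
    """Among spaces without a floor plan structure diagram yet, count each
    one's adjacent spaces and return one whose count meets the threshold
    (the text breaks ties at random; this sketch returns the first candidate)."""
    remaining = [r for r in adjacency if r not in done]
    counts = {r: len(adjacency[r]) for r in remaining}
    candidates = [r for r in remaining if counts[r] >= threshold]
    return candidates[0] if candidates else None

# The three-room example from the text: bedroom 1 already has its diagram,
# and both remaining rooms have two adjacent spaces each.
adjacency = {"bedroom1": {"bedroom2", "living3"},
             "bedroom2": {"bedroom1", "living3"},
             "living3": {"bedroom1", "bedroom2"}}
next_space = pick_next_space({"bedroom1"}, adjacency)
```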
Under the assumptions of this embodiment, the remaining spaces for which no floor plan structure diagram has been generated are bedroom 2 and living room 3, so the (M+1)-th space is to be determined from bedroom 2 and living room 3.
Since bedroom 2 is adjacent to bedroom 1 and living room 3, and living room 3 is adjacent to bedroom 1 and bedroom 2, bedroom 2 and living room 3 each have two adjacent spaces. Assuming that bedroom 2 is selected at random as the (M+1)-th space, the floor plan structure diagram S of the (M+1)-th space (bedroom 2) is generated, as shown in the right part of FIG. 29.
Then, floor plan structure diagram R is stitched with floor plan structure diagram S to obtain stitching result RS, as shown in FIG. 30, and it is judged whether the (M+1)-th space is the last of the three spaces for which a floor plan structure diagram is generated. FIG. 30 is a schematic diagram of a stitched floor plan structure diagram provided in the sixth embodiment of the present invention.
Optionally, the floor plan structure diagram of the (M+1)-th space may be stitched with the floor plan structure diagram of the M-th space in a same spatial coordinate system, according to the adjacency relationship between the (M+1)-th space and the M-th space.
Under the assumptions of this embodiment, living room 3 among the three spaces has not yet had a floor plan structure diagram generated. Therefore, the (M+1)-th space (bedroom 2) is not the last of the three spaces for which a floor plan structure diagram is generated.
Accordingly, the M-th space and the (M+1)-th space are merged as the M-th space; that is, bedroom 1 and bedroom 2 are together regarded as a whole, and that whole is taken as the M-th space. At this point, the M-th space comprises bedroom 1 and bedroom 2, and its floor plan structure diagram is the above stitching result RS, as shown in FIG. 30.
Then, the (M+1)-th space adjacent to the M-th space (comprising bedrooms 1 and 2) is re-determined. Since living room 3 is adjacent to both bedroom 1 and bedroom 2, it is also adjacent to the M-th space (comprising bedrooms 1 and 2). Living room 3 is therefore determined as the (M+1)-th space, and the floor plan structure diagram T of the (M+1)-th space (living room 3) is generated, as shown in the left part of FIG. 31. FIG. 31 is a schematic diagram of another stitched floor plan structure diagram provided in the sixth embodiment of the present invention.
Then, floor plan structure diagram T of the (M+1)-th space is stitched with the floor plan structure diagram of the M-th space (stitching result RS) to obtain stitching result RST, as shown in the right part of FIG. 31, and it is judged whether the (M+1)-th space is the last of the three spaces for which a floor plan structure diagram is generated.
Under the assumptions of this embodiment, after the floor plan structure diagram of living room 3 is generated, no space in the target physical space remains without a floor plan structure diagram; hence stitching result RST is the floor plan of the target physical space.
The processes of obtaining the floor plan structure diagrams of bedroom 1, bedroom 2 and living room 3 may refer to the foregoing embodiments and are not repeated here.
In this embodiment, when the floor plan of the target physical space is generated, the floor plan structure diagrams of the N spaces are generated one by one, and each newly generated floor plan structure diagram is stitched with the previously generated ones, until the floor plan structure diagram of the last space has been generated and stitched. The final stitching result is then determined as the floor plan of the target physical space. Since generating the floor plan structure diagram of a single space requires little computing resource, the method suits the processing capability of most control devices; and stitching while generating allows the user to confirm each stitching result immediately, ensuring the accuracy of the generated floor plan of the target physical space. In addition, in this embodiment the mapped media on the spatial contours of each space's floor plan structure diagram are determined from panoramic images. Compared with point cloud data, panoramic images better reflect the actual positions of the doors, windows and the like (i.e., the target media) in the real space; with their assistance, each space's floor plan structure diagram carries relatively accurate door and window information and better reflects the scene information of the real space.
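The generate-and-stitch loop summarized above can be sketched as follows. Here `make_diagram` stands in for the per-space structure diagram generation and `stitch` for the coordinate-aligned stitching; all names and the string stand-ins for diagrams R, S and T are hypothetical:

```python
def build_floorplan(rooms, adjacency, make_diagram, stitch):
    """Generate one space's structure diagram at a time, stitch each new
    diagram onto the running result, and treat all stitched spaces as one
    merged 'M-th space' by tracking them in `done`."""
    done = {rooms[0]}
    result = make_diagram(rooms[0])
    while len(done) < len(rooms):
        # Pick a not-yet-processed space adjacent to the merged space.
        nxt = next(r for r in rooms if r not in done and adjacency[r] & done)
        result = stitch(result, make_diagram(nxt))
        done.add(nxt)
    return result

adjacency = {"bedroom1": {"bedroom2", "living3"},
             "bedroom2": {"bedroom1", "living3"},
             "living3": {"bedroom1", "bedroom2"}}
diagrams = {"bedroom1": "R", "bedroom2": "S", "living3": "T"}
plan = build_floorplan(["bedroom1", "bedroom2", "living3"], adjacency,
                       diagrams.get, lambda a, b: a + b)  # "R" -> "RS" -> "RST"
```

With string concatenation as a toy `stitch`, the running result reproduces the R, RS, RST progression of FIGS. 29 to 31.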
A floor plan generation apparatus according to one or more embodiments of the present invention will be described in detail below. Those skilled in the art will understand that these apparatuses can all be constructed by configuring commercially available hardware components through the steps taught in this solution.
FIG. 32 is a schematic structural diagram of a floor plan generation apparatus provided in the sixth embodiment of the present invention. The apparatus is used to generate a floor plan of a target physical space comprising at least N spaces, and is applied to a control terminal. As shown in FIG. 32, the apparatus includes a first acquisition module 61, a second acquisition module 62 and a processing module 63.
The first acquisition module 61 is configured to: acquire the point cloud data and panoramic images collected by the information collection terminal in each of the N spaces, the point cloud data and panoramic images being collected at at least one capture point in each space; acquire the M-th spatial contour for display and editing, the M-th spatial contour being the spatial contour of the M-th space among the N spaces, obtained according to the point cloud data and/or panoramic images collected at the at least one capture point of the M-th space; and acquire a first target medium recognized in a first target panoramic image, so that a first mapped medium of the first target medium in the M-th spatial contour is obtained according to the first target medium and used to edit the M-th spatial contour so as to obtain the floor plan structure diagram of the M-th space. The first target panoramic image is, among the panoramic images collected at the at least one capture point of the M-th space, the panoramic image used to recognize the first target medium, and the first target medium is the image, in the first target panoramic image, of a physical medium in the M-th space.
The second acquisition module 62 is configured to: acquire the (M+1)-th spatial contour for display and editing, the (M+1)-th spatial contour being the spatial contour of the (M+1)-th space among the N spaces, the (M+1)-th space being a space adjacent to the M-th space, and the (M+1)-th spatial contour being obtained according to the point cloud data and/or panoramic images collected at the at least one capture point of the (M+1)-th space; and acquire a second target medium recognized in a second target panoramic image, so that a second mapped medium of the second target medium in the (M+1)-th spatial contour is obtained according to the second target medium and used to edit the (M+1)-th spatial contour so as to obtain the floor plan structure diagram of the (M+1)-th space. The second target panoramic image is, among the panoramic images collected at the at least one capture point of the (M+1)-th space, the panoramic image used to recognize the second target medium, and the second target medium is the image, in the second target panoramic image, of a physical medium in the (M+1)-th space.
The processing module 63 is configured to: stitch the floor plan structure diagram of the (M+1)-th space with the floor plan structure diagram of the M-th space, and judge whether the (M+1)-th space is the last of the N spaces for which a floor plan structure diagram is generated; if not, merge the M-th space and the (M+1)-th space as the M-th space and return to the second acquisition module 62; if so, take the stitching result as the floor plan of the target physical space for display, whereupon the process ends.
Optionally, the second acquisition module 62 is further configured to determine the (M+1)-th space from among the remaining spaces of the N spaces for which no floor plan structure diagram has been generated, according to the panoramic images of the at least one capture point of the M-th space and the panoramic images of the at least one capture point of each of those remaining spaces.
Optionally, the second acquisition module 62 is specifically configured to determine, according to the panoramic images of the at least one capture point of the M-th space and the panoramic images of the at least one capture point of each of the remaining spaces for which no floor plan structure diagram has been generated, the total number of adjacent spaces corresponding to each of those spaces, and to determine a space whose total number of adjacent spaces is greater than or equal to a set threshold as the (M+1)-th space.
Optionally, the first acquisition module 61 is specifically configured to display the M-th spatial contour in the two-dimensional point cloud image of the M-th space and, in response to an editing operation performed by a user on the M-th spatial contour, adjust the contour lines of the M-th spatial contour so that they coincide with the wall lines in the two-dimensional point cloud image; the two-dimensional point cloud image of the M-th space is obtained by planar mapping of the point cloud data of the at least one capture point in the M-th space.
The second acquisition module 62 is specifically configured to display the (M+1)-th spatial contour in the two-dimensional point cloud image of the (M+1)-th space and, in response to an editing operation performed by a user on the (M+1)-th spatial contour, adjust the contour lines of the (M+1)-th spatial contour so that they coincide with the wall lines in the two-dimensional point cloud image of the (M+1)-th space; the two-dimensional point cloud image of the (M+1)-th space is obtained by planar mapping of the point cloud data of the at least one capture point in the (M+1)-th space.
Optionally, the first acquisition module 61 is specifically configured to obtain, according to the mapping relationship between the first target panoramic image and the M-th spatial contour, the panoramic pixel coordinates of the first target medium in the first target panoramic image and the spatial contour coordinates to which they are mapped, so as to determine the first mapped medium corresponding to the first target medium in the M-th spatial contour and thereby obtain the floor plan structure diagram of the M-th space; the first mapped medium is adapted to the target identifier and target display size of the first target medium, the target identifier is used to distinguish target media of different types, and the mapping relationship is a mapping between the first target panoramic image and the M-th spatial contour established according to the coordinate mapping between the point cloud data of the M-th space and the first target panoramic image.
Optionally, the second acquisition module 62 is specifically configured to obtain, according to the mapping relationship between the second target panoramic image and the (M+1)-th spatial contour, the panoramic pixel coordinates of the second target medium in the second target panoramic image and the spatial contour coordinates to which they are mapped, so as to determine the second mapped medium corresponding to the second target medium in the (M+1)-th spatial contour and thereby obtain the floor plan structure diagram of the (M+1)-th space; the second mapped medium is adapted to the target identifier and target display size of the second target medium, the target identifier is used to distinguish target media of different types, and the mapping relationship is a mapping between the second target panoramic image and the (M+1)-th spatial contour established according to the coordinate mapping between the point cloud data of the (M+1)-th space and the second target panoramic image.
Optionally, the processing module 63 is specifically configured to stitch the floor plan structure diagram of the (M+1)-th space with the floor plan structure diagram of the M-th space in a same spatial coordinate system, according to the adjacency relationship between the (M+1)-th space and the M-th space.
The apparatus shown in FIG. 32 can perform the steps in the foregoing embodiments; for the detailed execution process and technical effects, reference is made to the description in the foregoing embodiments, which is not repeated here.
In one possible design, the structure of the spatial structure diagram generation apparatus shown in FIG. 6, and the structures of the floor plan generation apparatuses shown in FIGS. 11, 14, 19, 25 and/or 32, may each be implemented as an electronic device. As shown in FIG. 33, the electronic device may include: a memory 71, a processor 72 and a communication interface 73. Executable code is stored in the memory 71 and, when executed by the processor 72, enables the processor 72 at least to implement the spatial structure diagram generation method and/or the floor plan generation method provided in the foregoing embodiments.
In addition, an embodiment of the present invention provides a non-transitory machine-readable storage medium storing executable code which, when executed by a processor of an electronic device, enables the processor at least to implement the spatial structure diagram generation method provided in the foregoing embodiments.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
From the description of the above embodiments, those skilled in the art will clearly understand that the embodiments can be implemented by means of a necessary general-purpose hardware platform, or of course by a combination of hardware and software. Based on this understanding, the above technical solution, in essence or in the part contributing to the prior art, can be embodied in the form of a computer product; the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, without such modifications or replacements departing in essence from the scope of the technical solutions of the embodiments of the present invention.

Claims (47)

  1. A spatial structure diagram generation method, applied to a control terminal, wherein the method comprises:
    acquiring point cloud data and panoramic images obtained by an information collection terminal at at least one capture point in a target space;
    obtaining a spatial contour of the target space according to the point cloud data of the at least one capture point;
    acquiring a target medium recognized in a target panoramic image, the target medium being an image, in the target panoramic image, of a physical medium in the target space, and the target panoramic image being, among the panoramic images of the at least one capture point, the panoramic image used to recognize the target medium;
    determining, in the spatial contour, a mapped medium corresponding to the target medium, so as to obtain a spatial structure diagram of the target space.
  2. The method according to claim 1, wherein obtaining the spatial contour of the target space according to the point cloud data of the at least one capture point comprises:
    fusing the point cloud data of the at least one capture point to determine target point cloud data of the target space;
    mapping the target point cloud data onto a two-dimensional plane to obtain a point cloud image of the target space;
    determining the spatial contour of the target space according to the point cloud image.
  3. The method according to claim 2, wherein determining the spatial contour of the target space according to the point cloud image comprises:
    receiving a correction operation performed by a user on the point cloud image;
    determining the spatial contour of the target space according to a target point cloud image obtained after the correction operation.
  4. The method according to claim 2, wherein the spatial contour is composed of a plurality of contour lines and includes a target contour line that does not correspond to a wall line in the point cloud image, the method further comprising:
    in response to an editing operation performed on the target contour line on the spatial contour of the target space, adjusting the shape and/or position of the target contour line so that the target contour line coincides with the wall line in the point cloud image.
  5. The method according to claim 1, wherein determining, in the spatial contour, the mapped medium corresponding to the target medium comprises:
    obtaining, according to a mapping relationship between the target panoramic image and the spatial contour, panoramic pixel coordinates of the target medium in the target panoramic image and the spatial contour coordinates to which they are mapped, so as to determine the mapped medium corresponding to the target medium in the spatial contour; wherein the mapped medium is adapted to a target identifier and a target display size of the target medium, the target identifier is used to distinguish target media of different types, and the mapping relationship is a mapping between the target panoramic image and the spatial contour established according to a coordinate mapping between the point cloud data and the target panoramic image.
  6. A spatial structure diagram generation apparatus, applied to a control terminal, wherein the apparatus comprises:
    an acquisition module configured to acquire point cloud data and panoramic images obtained by an information collection terminal at at least one capture point in a target space;
    a processing module configured to: obtain a spatial contour of the target space according to the point cloud data of the at least one capture point; acquire a target medium recognized in a target panoramic image, the target medium being an image, in the target panoramic image, of a physical medium in the target space, and the target panoramic image being, among the panoramic images of the at least one capture point, the panoramic image used to recognize the target medium; and determine, in the spatial contour, a mapped medium corresponding to the target medium, so as to obtain a spatial structure diagram of the target space.
  7. An electronic device, comprising: a memory, a processor and a communication interface; wherein executable code is stored in the memory and, when executed by the processor, causes the processor to perform the spatial structure diagram generation method according to any one of claims 1 to 5.
  8. A non-transitory machine-readable storage medium on which executable code is stored, wherein, when the executable code is executed by a processor of an electronic device, the processor performs the spatial structure diagram generation method according to any one of claims 1 to 5.
  9. A floor plan generation method, applied to a target control terminal, wherein the method comprises:
    acquiring point cloud data and panoramic images respectively corresponding to a plurality of subspaces in a target physical space obtained by an information collection terminal, so as to determine a plurality of spatial contours corresponding to the plurality of subspaces; wherein the plurality of subspaces correspond one-to-one to the plurality of spatial contours, and the point cloud data and panoramic images of any subspace are collected at at least one capture point in that subspace;
    displaying the plurality of spatial contours corresponding to the plurality of subspaces for editing;
    for a target subspace among the plurality of subspaces, acquiring a target medium recognized in a target panoramic image, the target medium being an image, in the target panoramic image, of a physical medium in the target subspace, and the target panoramic image being, among the panoramic images collected at the at least one capture point of the target subspace, the panoramic image used to recognize the target medium;
    determining, in the spatial contour of the target subspace, a mapped medium corresponding to the target medium, so as to generate a spatial structure diagram of the target subspace;
    in response to an acquisition-completion operation on the plurality of spatial structure diagrams corresponding to the plurality of subspaces, obtaining a floor plan of the target physical space by stitching the plurality of spatial structure diagrams.
  10. The method according to claim 9, further comprising:
    sending the plurality of spatial contours corresponding to the plurality of subspaces to other control terminals, so that the plurality of spatial contours are displayed on the other control terminals for editing in synchronization with the target control terminal;
    acquiring the spatial structure diagram of the target subspace obtained by the other control terminals.
  11. The method according to claim 10, further comprising:
    displaying terminal device identifiers respectively corresponding to the plurality of spatial contours, the terminal device identifiers being used to indicate the terminal devices currently editing the respective spatial contours, the terminal devices including the target control terminal and the other control terminals.
  12. The method according to claim 9, wherein determining the spatial contour of the target subspace among the plurality of subspaces comprises:
    obtaining a first spatial contour according to the point cloud data of the at least one capture point of the target subspace;
    obtaining a second spatial contour according to the panoramic images of the at least one capture point of the target subspace;
    determining the spatial contour of the target subspace according to the first spatial contour and the second spatial contour.
  13. The method according to any one of claims 9 to 12, wherein editing the spatial contour of the target subspace comprises:
    in response to an editing operation performed on a target contour line on the spatial contour of the target subspace, adjusting the shape and/or position of the target contour line so that the target contour line coincides with a wall line in a point cloud image, the point cloud image being determined according to the point cloud data of the at least one capture point of the target subspace, and the spatial contour of the target subspace being composed of a plurality of contour lines.
  14. The method according to claim 9, wherein obtaining the floor plan of the target physical space by stitching the plurality of spatial structure diagrams comprises:
    determining spatial connection relationships among the plurality of subspaces according to the point cloud data and/or panoramic images respectively corresponding to the plurality of subspaces;
    stitching the plurality of spatial structure diagrams according to the spatial connection relationships, so as to obtain the floor plan of the target physical space.
  15. The method according to claim 9, wherein determining, in the spatial contour of the target subspace, the mapped medium corresponding to the target medium comprises:
    obtaining, according to a mapping relationship between the target panoramic image and the spatial contour of the target subspace, panoramic pixel coordinates of the target medium in the target panoramic image and the spatial contour coordinates to which they are mapped, so as to determine the mapped medium corresponding to the target medium in the spatial contour of the target subspace; wherein the mapped medium is adapted to a target identifier and a target display size of the target medium, the target identifier is used to distinguish target media of different types, and the mapping relationship is a mapping between the target panoramic image and the spatial contour established according to a coordinate mapping between the point cloud data of the target subspace and the target panoramic image.
  16. A floor plan generation apparatus, applied to a target control terminal, wherein the apparatus comprises:
    an acquisition module configured to acquire point cloud data and panoramic images respectively corresponding to a plurality of subspaces in a target physical space obtained by an information collection terminal, so as to determine a plurality of spatial contours corresponding to the plurality of subspaces; wherein the plurality of subspaces correspond one-to-one to the plurality of spatial contours, and the point cloud data and panoramic images of any subspace are collected at at least one capture point in that subspace;
    a display module configured to display the plurality of spatial contours corresponding to the plurality of subspaces for editing;
    a processing module configured to: for a target subspace among the plurality of subspaces, acquire a target medium recognized in a target panoramic image, the target medium being an image, in the target panoramic image, of a physical medium in the target subspace, and the target panoramic image being, among the panoramic images collected at the at least one capture point of the target subspace, the panoramic image used to recognize the target medium; determine, in the spatial contour of the target subspace, a mapped medium corresponding to the target medium, so as to generate a spatial structure diagram of the target subspace; and, in response to an acquisition-completion operation on the plurality of spatial structure diagrams corresponding to the plurality of subspaces, obtain a floor plan of the target physical space by stitching the plurality of spatial structure diagrams.
  17. A floor plan generation method, applied to a control terminal, wherein the method comprises:
    acquiring point cloud data and panoramic images respectively corresponding to a plurality of subspaces in a target physical space obtained by an information collection terminal, wherein the point cloud data and panoramic images of any subspace are collected at at least one capture point in that subspace;
    in the process of sequentially performing spatial structure diagram acquisition processing on the plurality of subspaces, obtaining, for a target subspace currently to be edited, a target spatial contour of the target subspace according to the point cloud data and/or panoramic images collected at the at least one capture point of the target subspace;
    acquiring a target medium recognized in a target panoramic image, the target medium being an image, in the target panoramic image, of a physical medium in the target subspace, and the target panoramic image being, among the panoramic images collected at the at least one capture point of the target subspace, the panoramic image used to recognize the target medium;
    determining, on the target spatial contour of the target subspace, a mapped medium used to represent the target medium, so as to obtain a spatial structure diagram of the target subspace;
    in response to an acquisition-completion operation on the spatial structure diagram of the target subspace, if there is no subspace among the plurality of subspaces whose spatial structure diagram has not yet been obtained, obtaining a floor plan of the target physical space by stitching the spatial structure diagrams of the plurality of subspaces.
  18. The method according to claim 17, further comprising:
    in response to the acquisition-completion operation on the spatial structure diagram of the target subspace, if there is at least one subspace among the plurality of subspaces whose spatial structure diagram has not yet been obtained, determining one subspace to be edited from the at least one subspace, so as to obtain the spatial structure diagram of the subspace to be edited.
  19. The method according to claim 18, wherein determining one subspace to be edited from the at least one subspace comprises:
    determining one subspace to be edited from the at least one subspace according to the acquisition times of the point cloud data and panoramic images corresponding to the at least one subspace, wherein the difference between the acquisition time corresponding to the subspace to be edited and the current moment is greater than the differences between the acquisition times corresponding to the other subspaces of the at least one subspace and the current moment.
  20. The method according to claim 18, wherein determining one subspace to be edited from the at least one subspace comprises:
    determining connection relationships among the plurality of subspaces according to the point cloud data respectively corresponding to the plurality of subspaces;
    determining one subspace to be edited from the at least one subspace according to the connection relationships, wherein the subspace to be edited is connected to the target subspace.
  21. The method according to claim 17, wherein obtaining the target spatial contour of the target subspace according to the point cloud data collected at the at least one capture point of the target subspace comprises:
    mapping the point cloud data collected at the at least one capture point of the target subspace onto a two-dimensional plane, so as to determine a two-dimensional point cloud image of the target subspace;
    displaying a spatial contour recognized from the two-dimensional point cloud image, the spatial contour being composed of a plurality of contour lines;
    in response to a correction operation performed by a user on the spatial contour, adjusting the shape and/or position of a target contour line in the spatial contour so that the target contour line coincides with a wall line in the two-dimensional point cloud image;
    determining the spatial contour composed of the contour lines coinciding with the wall lines as the target spatial contour of the target subspace.
  22. The method according to claim 17, wherein determining, on the target spatial contour of the target subspace, the mapped medium used to represent the target medium comprises:
    obtaining, according to a mapping relationship between the target panoramic image and the spatial contour of the target subspace, panoramic pixel coordinates of the target medium in the target panoramic image and the spatial contour coordinates to which they are mapped, so as to determine the mapped medium corresponding to the target medium in the spatial contour of the target subspace; wherein the mapped medium is adapted to a target identifier and a target display size of the target medium, the target identifier is used to distinguish target media of different types, and the mapping relationship is a mapping between the target panoramic image and the spatial contour established according to a coordinate mapping between the point cloud data of the target subspace and the target panoramic image.
  23. A floor plan generation apparatus, applied to a control terminal, wherein the apparatus comprises:
    an acquisition module configured to acquire point cloud data and panoramic images respectively corresponding to a plurality of subspaces in a target physical space obtained by an information collection terminal, wherein the point cloud data and panoramic images of any subspace are collected at at least one capture point in that subspace;
    a processing module configured to: in the process of sequentially performing spatial structure diagram acquisition processing on the plurality of subspaces, obtain, for a target subspace currently to be edited, a target spatial contour of the target subspace according to the point cloud data and/or panoramic images collected at the at least one capture point of the target subspace; acquire a target medium recognized in a target panoramic image, the target medium being an image, in the target panoramic image, of a physical medium in the target subspace, and the target panoramic image being, among the panoramic images collected at the at least one capture point of the target subspace, the panoramic image used to recognize the target medium; determine, on the target spatial contour of the target subspace, a mapped medium used to represent the target medium, so as to obtain a spatial structure diagram of the target subspace; and, in response to an acquisition-completion operation on the spatial structure diagram of the target subspace, if there is no subspace among the plurality of subspaces whose spatial structure diagram has not yet been obtained, obtain a floor plan of the target physical space by stitching the spatial structure diagrams of the plurality of subspaces.
  24. A floor plan generation method for generating a floor plan of a target physical space, the target physical space comprising at least N spaces, the method being applied to a control terminal, wherein the method comprises:
    step 1: acquiring point cloud data and panoramic images collected by an information collection terminal in each of the N spaces, wherein the point cloud data and panoramic images are collected at at least one capture point in each space;
    step 2: acquiring an M-th spatial contour of an M-th space among the N spaces for display and editing, the M-th spatial contour being obtained according to the point cloud data and/or panoramic images collected at the at least one capture point of the M-th space;
    step 3: acquiring a target medium recognized in a target panoramic image of the M-th space, so that a mapped medium of the target medium in the M-th spatial contour is obtained according to the target medium and used to edit the M-th spatial contour according to the mapped medium, so as to obtain a floor plan structure diagram of the M-th space; the target panoramic image being, among the panoramic images collected at the at least one capture point of the M-th space, the panoramic image used to recognize the target medium, and the target medium being an image, in the target panoramic image, of a physical medium in the M-th space;
    step 4: judging whether the M-th space is the last of the N spaces for which a floor plan structure diagram is generated;
    if not, performing step 5: assigning M+1 to M and returning to step 2;
    if so, performing step 6: obtaining the floor plan of the target physical space composed of the N floor plan structure diagrams for display, whereupon the process ends; wherein M and N are natural numbers and 1≤M≤N.
  25. The method according to claim 24, wherein acquiring the M-th spatial contour of the M-th space among the N spaces for display and editing comprises:
    displaying the M-th spatial contour of the M-th space in a two-dimensional point cloud image of the M-th space among the N spaces; wherein the two-dimensional point cloud image is obtained by planar mapping of the point cloud data of the at least one capture point in the M-th space;
    in response to an editing operation on the M-th spatial contour, adjusting the contour lines of the M-th spatial contour so that the contour lines coincide with the wall lines in the two-dimensional point cloud image.
  26. The method according to claim 25, wherein the two-dimensional point cloud image is obtained by:
    fusing the point cloud data of the at least one capture point of the M-th space among the N spaces, so as to determine target point cloud data of the M-th space;
    mapping the target point cloud data onto a two-dimensional plane, so as to obtain an initial two-dimensional point cloud image of the M-th space;
    determining the two-dimensional point cloud image of the M-th space according to a correction operation performed by a user on the initial two-dimensional point cloud image.
  27. The method according to claim 24, wherein obtaining the mapped medium of the target medium in the M-th spatial contour according to the target medium comprises:
    obtaining, according to a mapping relationship between the target panoramic image of the M-th space and the M-th spatial contour, panoramic pixel coordinates of the target medium in the target panoramic image and the spatial contour coordinates to which they are mapped, so as to determine the mapped medium corresponding to the target medium in the M-th spatial contour; wherein the mapped medium is adapted to a target identifier and a target display size of the target medium, the target identifier is used to distinguish target media of different types, and the mapping relationship is a mapping between the target panoramic image and the M-th spatial contour established according to a coordinate mapping between the point cloud data of the M-th space and the target panoramic image.
  28. The method according to claim 24, wherein the method comprises:
    sorting the N spaces according to the acquisition times of the point cloud data and/or panoramic images respectively corresponding to the N spaces.
  29. A floor plan generation apparatus for generating a floor plan of a target physical space, the target physical space comprising at least N spaces, the apparatus being applied to a control terminal, wherein the apparatus comprises:
    an acquisition module configured to acquire point cloud data and panoramic images collected by an information collection terminal in each of the N spaces, wherein the point cloud data and panoramic images are collected at at least one capture point in each space;
    a first processing module configured to: acquire an M-th spatial contour of an M-th space among the N spaces for display and editing, the M-th spatial contour being obtained according to the point cloud data and/or panoramic images collected at the at least one capture point of the M-th space; and acquire a target medium recognized in a target panoramic image of the M-th space, so that a mapped medium of the target medium in the M-th spatial contour is obtained according to the target medium and used to edit the M-th spatial contour according to the mapped medium, so as to obtain a floor plan structure diagram of the M-th space; the target panoramic image being, among the panoramic images collected at the at least one capture point of the M-th space, the panoramic image used to recognize the target medium, and the target medium being an image, in the target panoramic image, of a physical medium in the M-th space;
    a second processing module configured to judge whether the M-th space is the last of the N spaces for which a floor plan structure diagram is generated; if not, assign M+1 to M and return to the first processing module; if so, obtain the floor plan of the target physical space composed of the N floor plan structure diagrams for display, whereupon the process ends; wherein M and N are natural numbers and 1≤M≤N.
  30. A floor plan generation method for generating a floor plan of a target physical space, the target physical space comprising a plurality of subspaces, the method being applied to a control terminal, wherein the method comprises:
    acquiring point cloud data and panoramic images respectively corresponding to the plurality of subspaces obtained by an information collection terminal, wherein the point cloud data and panoramic images of any subspace are collected at at least one capture point in that subspace;
    in the process of sequentially performing floor plan structure diagram stitching processing on the plurality of subspaces, obtaining, for a first subspace currently to be stitched, a target spatial contour of the first subspace according to the point cloud data and/or panoramic images collected at the at least one capture point of the first subspace;
    acquiring a target medium recognized in a target panoramic image, the target medium being an image, in the target panoramic image, of a physical medium in the first subspace, and the target panoramic image being, among the panoramic images collected at the at least one capture point of the first subspace, the panoramic image used to recognize the target medium;
    determining, on the target spatial contour of the first subspace, a mapped medium used to represent the target medium, so as to obtain a floor plan structure diagram of the first subspace;
    stitching the floor plan structure diagram of the first subspace with a floor plan structure diagram of a second subspace, the second subspace being a subspace whose floor plan structure diagram was stitched before the first subspace;
    if there is no subspace among the plurality of subspaces on which floor plan structure diagram stitching processing has not been performed, determining the stitching result as the floor plan of the target physical space.
  31. The method according to claim 30, further comprising:
    if there is a subspace among the plurality of subspaces on which floor plan structure diagram stitching processing has not been performed, updating the floor plan structure diagram of the second subspace to the stitching result;
    re-determining a first subspace to be stitched from the subspaces on which floor plan structure diagram stitching processing has not been performed, so as to stitch the floor plan structure diagram of the re-determined first subspace with the updated floor plan structure diagram of the second subspace.
  32. The method according to claim 31, wherein re-determining a first subspace to be stitched from the subspaces on which floor plan structure diagram stitching processing has not been performed comprises:
    determining adjacency relationships among the plurality of subspaces according to the panoramic images of the at least one capture point respectively corresponding to the plurality of subspaces;
    re-determining, according to the adjacency relationships, a first subspace to be stitched from the subspaces on which floor plan structure diagram stitching processing has not been performed, wherein the re-determined first subspace is adjacent to the second subspace.
  33. The method according to claim 31, wherein re-determining a first subspace to be stitched from the subspaces on which floor plan structure diagram stitching processing has not been performed comprises:
    re-determining a first subspace to be stitched from those subspaces according to the acquisition times of the point cloud data and/or panoramic images corresponding to each of them, wherein the difference between the acquisition time corresponding to the re-determined first subspace and the current moment is greater than the differences between the acquisition times corresponding to the other such subspaces and the current moment.
  34. The method according to claim 30, wherein obtaining the target spatial contour of the first subspace according to the point cloud data and/or panoramic images collected at the at least one capture point of the first subspace comprises:
    determining a first spatial contour of the first subspace according to the point cloud data collected at the at least one capture point of the first subspace;
    determining a second spatial contour of the first subspace according to the panoramic images collected at the at least one capture point of the first subspace;
    determining the target spatial contour of the first subspace according to the first spatial contour and/or the second spatial contour.
  35. The method according to claim 34, wherein determining the first spatial contour of the first subspace according to the point cloud data collected at the at least one capture point of the first subspace comprises:
    determining a two-dimensional point cloud image of the first subspace according to the point cloud data collected at the at least one capture point of the first subspace;
    determining the first spatial contour of the first subspace according to the two-dimensional point cloud image.
  36. The method according to claim 30, wherein determining, on the target spatial contour of the first subspace, the mapped medium used to represent the target medium comprises:
    obtaining, according to a mapping relationship between the target panoramic image and the target spatial contour of the first subspace, panoramic pixel coordinates of the target medium in the target panoramic image and the spatial contour coordinates to which they are mapped, so as to determine the mapped medium corresponding to the target medium in the target spatial contour of the first subspace; wherein the mapped medium is adapted to a target identifier and a target display size of the target medium, the target identifier is used to distinguish target media of different types, and the mapping relationship is a mapping between the target panoramic image and the spatial contour established according to a coordinate mapping between the point cloud data of the first subspace and the target panoramic image.
  37. A floor plan generation apparatus for generating a floor plan of a target physical space, the target physical space comprising a plurality of subspaces, the apparatus being applied to a control terminal, wherein the apparatus comprises:
    an acquisition module configured to acquire point cloud data and panoramic images respectively corresponding to the plurality of subspaces obtained by an information collection terminal, wherein the point cloud data and panoramic images of any subspace are collected at at least one capture point in that subspace;
    a stitching module configured to: in the process of sequentially performing floor plan structure diagram stitching processing on the plurality of subspaces, obtain, for a first subspace currently to be stitched, a target spatial contour of the first subspace according to the point cloud data and/or panoramic images collected at the at least one capture point of the first subspace; acquire a target medium recognized in a target panoramic image, the target medium being an image, in the target panoramic image, of a physical medium in the first subspace, and the target panoramic image being, among the panoramic images collected at the at least one capture point of the first subspace, the panoramic image used to recognize the target medium; determine, on the target spatial contour of the first subspace, a mapped medium used to represent the target medium, so as to obtain a floor plan structure diagram of the first subspace; and stitch the floor plan structure diagram of the first subspace with a floor plan structure diagram of a second subspace, the second subspace being a subspace whose floor plan structure diagram was stitched before the first subspace;
    a processing module configured to determine the stitching result as the floor plan of the target physical space if there is no subspace among the plurality of subspaces on which floor plan structure diagram stitching processing has not been performed.
  38. A floor plan generation method for generating a floor plan of a target physical space, the target physical space comprising at least N spaces, the method being applied to a control terminal, wherein the method comprises:
    step 1: acquiring point cloud data and panoramic images collected by an information collection terminal in each of the N spaces, wherein the point cloud data and panoramic images are collected at at least one capture point in each space;
    step 2: acquiring an M-th spatial contour for display and editing; wherein the M-th spatial contour is the spatial contour of an M-th space among the N spaces and is obtained according to the point cloud data and/or panoramic images collected at the at least one capture point of the M-th space;
    step 3: acquiring a first target medium recognized in a first target panoramic image, so that a first mapped medium of the first target medium in the M-th spatial contour is obtained according to the first target medium and used to edit the M-th spatial contour according to the first mapped medium, so as to obtain a floor plan structure diagram of the M-th space; the first target panoramic image being, among the panoramic images collected at the at least one capture point of the M-th space, the panoramic image used to recognize the first target medium, and the first target medium being an image, in the first target panoramic image, of a physical medium in the M-th space;
    step 4: acquiring an (M+1)-th spatial contour for display and editing; wherein the (M+1)-th spatial contour is the spatial contour of an (M+1)-th space among the N spaces, the (M+1)-th space is a space adjacent to the M-th space, and the (M+1)-th spatial contour is obtained according to the point cloud data and/or panoramic images collected at the at least one capture point of the (M+1)-th space;
    step 5: acquiring a second target medium recognized in a second target panoramic image, so that a second mapped medium of the second target medium in the (M+1)-th spatial contour is obtained according to the second target medium and used to edit the (M+1)-th spatial contour according to the second mapped medium, so as to obtain a floor plan structure diagram of the (M+1)-th space; the second target panoramic image being, among the panoramic images collected at the at least one capture point of the (M+1)-th space, the panoramic image used to recognize the second target medium, and the second target medium being an image, in the second target panoramic image, of a physical medium in the (M+1)-th space;
    step 6: stitching the floor plan structure diagram of the (M+1)-th space with the floor plan structure diagram of the M-th space, and judging whether the (M+1)-th space is the last of the N spaces for which a floor plan structure diagram is generated;
    if not, performing step 7: merging the M-th space and the (M+1)-th space as the M-th space and returning to step 4;
    if so, performing step 8: taking the stitching result as the floor plan of the target physical space for display, whereupon the process ends.
  39. The method according to claim 38, further comprising:
    determining the (M+1)-th space from among the remaining spaces of the N spaces for which no floor plan structure diagram has been generated, according to the panoramic images of the at least one capture point of the M-th space and the panoramic images of the at least one capture point of each of those remaining spaces.
  40. The method according to claim 39, wherein determining the (M+1)-th space from among the spaces for which no floor plan structure diagram has been generated, according to the panoramic images of the at least one capture point of the M-th space and the panoramic images of the at least one capture point of each of the remaining such spaces, comprises:
    determining, according to the panoramic images of the at least one capture point of the M-th space and the panoramic images of the at least one capture point of each of the remaining spaces of the N spaces for which no floor plan structure diagram has been generated, the total number of adjacent spaces corresponding to each space for which no floor plan structure diagram has been generated;
    determining a space whose total number of adjacent spaces is greater than or equal to a set threshold as the (M+1)-th space.
  41. The method according to claim 38, wherein acquiring the M-th spatial contour for display and editing comprises:
    displaying the M-th spatial contour in a two-dimensional point cloud image of the M-th space and, in response to an editing operation performed by a user on the M-th spatial contour, adjusting the contour lines of the M-th spatial contour so that the contour lines coincide with the wall lines in the two-dimensional point cloud image; wherein the two-dimensional point cloud image of the M-th space is obtained by planar mapping of the point cloud data of the at least one capture point in the M-th space;
    and acquiring the (M+1)-th spatial contour for display and editing comprises:
    displaying the (M+1)-th spatial contour in a two-dimensional point cloud image of the (M+1)-th space and, in response to an editing operation performed by a user on the (M+1)-th spatial contour, adjusting the contour lines of the (M+1)-th spatial contour so that the contour lines coincide with the wall lines in the two-dimensional point cloud image of the (M+1)-th space; wherein the two-dimensional point cloud image of the (M+1)-th space is obtained by planar mapping of the point cloud data of the at least one capture point in the (M+1)-th space.
  42. The method according to claim 38, wherein obtaining the first mapped medium of the first target medium in the M-th spatial contour according to the first target medium, for editing the M-th spatial contour according to the first mapped medium so as to obtain the floor plan structure diagram of the M-th space, comprises:
    obtaining, according to a mapping relationship between the first target panoramic image and the M-th spatial contour, panoramic pixel coordinates of the first target medium in the first target panoramic image and the spatial contour coordinates to which they are mapped, so as to determine the first mapped medium corresponding to the first target medium in the M-th spatial contour and thereby obtain the floor plan structure diagram of the M-th space;
    wherein the first mapped medium is adapted to a target identifier and a target display size of the first target medium, the target identifier is used to distinguish target media of different types, and the mapping relationship is a mapping between the first target panoramic image and the M-th spatial contour established according to a coordinate mapping between the point cloud data of the M-th space and the first target panoramic image.
  43. The method according to claim 38, wherein obtaining the second mapped medium of the second target medium in the (M+1)-th spatial contour according to the second target medium, for editing the (M+1)-th spatial contour according to the second mapped medium so as to obtain the floor plan structure diagram of the (M+1)-th space, comprises:
    obtaining, according to a mapping relationship between the second target panoramic image and the (M+1)-th spatial contour, panoramic pixel coordinates of the second target medium in the second target panoramic image and the spatial contour coordinates to which they are mapped, so as to determine the second mapped medium corresponding to the second target medium in the (M+1)-th spatial contour and thereby obtain the floor plan structure diagram of the (M+1)-th space;
    wherein the second mapped medium is adapted to a target identifier and a target display size of the second target medium, the target identifier is used to distinguish target media of different types, and the mapping relationship is a mapping between the second target panoramic image and the (M+1)-th spatial contour established according to a coordinate mapping between the point cloud data of the (M+1)-th space and the second target panoramic image.
  44. The method according to claim 38, wherein stitching the floor plan structure diagram of the (M+1)-th space with the floor plan structure diagram of the M-th space comprises:
    stitching the floor plan structure diagram of the (M+1)-th space with the floor plan structure diagram of the M-th space in a same spatial coordinate system, according to the adjacency relationship between the (M+1)-th space and the M-th space.
  45. A floor plan generation apparatus for generating a floor plan of a target physical space, the target physical space comprising at least N spaces, the apparatus being applied to a control terminal, wherein the apparatus comprises:
    a first acquisition module configured to: acquire point cloud data and panoramic images collected by an information collection terminal in each of the N spaces, wherein the point cloud data and panoramic images are collected at at least one capture point in each space; acquire an M-th spatial contour for display and editing, the M-th spatial contour being the spatial contour of an M-th space among the N spaces, obtained according to the point cloud data and/or panoramic images collected at the at least one capture point of the M-th space; and acquire a first target medium recognized in a first target panoramic image, so that a first mapped medium of the first target medium in the M-th spatial contour is obtained according to the first target medium and used to edit the M-th spatial contour so as to obtain a floor plan structure diagram of the M-th space; the first target panoramic image being, among the panoramic images collected at the at least one capture point of the M-th space, the panoramic image used to recognize the first target medium, and the first target medium being an image, in the first target panoramic image, of a physical medium in the M-th space;
    a second acquisition module configured to: acquire an (M+1)-th spatial contour for display and editing, the (M+1)-th spatial contour being the spatial contour of an (M+1)-th space among the N spaces, the (M+1)-th space being a space adjacent to the M-th space, and the (M+1)-th spatial contour being obtained according to the point cloud data and/or panoramic images collected at the at least one capture point of the (M+1)-th space; and acquire a second target medium recognized in a second target panoramic image, so that a second mapped medium of the second target medium in the (M+1)-th spatial contour is obtained according to the second target medium and used to edit the (M+1)-th spatial contour so as to obtain a floor plan structure diagram of the (M+1)-th space; the second target panoramic image being, among the panoramic images collected at the at least one capture point of the (M+1)-th space, the panoramic image used to recognize the second target medium, and the second target medium being an image, in the second target panoramic image, of a physical medium in the (M+1)-th space;
    a processing module configured to stitch the floor plan structure diagram of the (M+1)-th space with the floor plan structure diagram of the M-th space, and judge whether the (M+1)-th space is the last of the N spaces for which a floor plan structure diagram is generated; if not, merge the M-th space and the (M+1)-th space as the M-th space and return to the second acquisition module; if so, take the stitching result as the floor plan of the target physical space for display, whereupon the process ends.
  46. An electronic device, comprising: a memory, a processor and a communication interface; wherein executable code is stored in the memory and, when executed by the processor, causes the processor to perform the floor plan generation method according to any one of claims 9 to 15, 17 to 22, 24 to 28, 30 to 36 and/or 38 to 44.
  47. A non-transitory machine-readable storage medium on which executable code is stored, wherein, when the executable code is executed by a processor of an electronic device, the processor performs the floor plan generation method according to any one of claims 9 to 15, 17 to 22, 24 to 28, 30 to 36 and/or 38 to 44.
PCT/CN2022/133313 2022-11-21 2022-11-21 空间结构图和户型图生成方法、装置、设备和存储介质 WO2024108350A1 (zh)


Publications (1)

Publication Number Publication Date
WO2024108350A1 true WO2024108350A1 (zh) 2024-05-30


