CN113298708A - Three-dimensional house type generation method, device and equipment

Three-dimensional house type generation method, device and equipment

Info

Publication number
CN113298708A
Authority
CN
China
Prior art keywords: dimensional, images, house type, splicing, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110272326.XA
Other languages
Chinese (zh)
Inventor
冉盛辉
虞新阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Innovation Co
Original Assignee
Alibaba Singapore Holdings Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Singapore Holdings Pte Ltd
Priority to CN202110272326.XA
Publication of CN113298708A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/04 Architectural design, interior design

Abstract

The embodiment of the application provides a method, an apparatus, and a device for generating a three-dimensional house type, wherein the method comprises the following steps: acquiring at least two images of a room from different viewing angles; generating three-dimensional layout maps respectively corresponding to the at least two images; determining stitching positions corresponding to the three-dimensional layout maps based on the at least two images; and stitching the three-dimensional layout maps according to the stitching positions to generate a three-dimensional house type corresponding to the room. With this technical solution, the three-dimensional house type of the room is produced from ordinary captured images, which effectively solves the problems of high cost, inconvenient operation, and demanding shooting skill that arise when a panoramic camera is used to obtain a panoramic image, expands the application range of the method, greatly facilitates operations such as decoration design for a room, and promotes the rapid development of related applications.

Description

Three-dimensional house type generation method, device and equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a device for generating a three-dimensional house type.
Background
In the field of smart home technology, a room rendering operation generally proceeds as follows: a user inputs a house type map to be rendered and then selects different decoration styles, so that room rendering maps in those styles can be generated. In the prior art, in order to ensure the effect and accuracy of the room rendering map, the house type map input by the user is generally a panoramic image acquired indoors with an expensive and non-portable panoramic camera.
However, for the user, obtaining a panoramic image with a dedicated panoramic camera is expensive, inconvenient to operate, and demands considerable shooting skill, which limits the application range of this technical solution and hinders the rapid growth of the user base.
Disclosure of Invention
The embodiment of the application provides a method, an apparatus, and a device for generating a three-dimensional house type, which are used to solve the problems of high cost, inconvenient operation, and demanding shooting skill that arise when a panoramic camera is used to capture images to obtain a panoramic image.
In a first aspect, an embodiment of the present application provides a method for generating a three-dimensional house type, including:
acquiring at least two images of a room from different visual angles;
generating three-dimensional layout diagrams respectively corresponding to the at least two images;
determining a stitching position corresponding to the three-dimensional layout map based on the at least two images;
and stitching the three-dimensional layout maps according to the stitching positions to generate a three-dimensional house type corresponding to the room.
In a second aspect, an embodiment of the present application provides a three-dimensional house type generation apparatus, including:
the first acquisition module is used for acquiring at least two images of a room from different visual angles;
the first generation module is used for generating three-dimensional layout diagrams respectively corresponding to the at least two images;
a first determining module, configured to determine a stitching location corresponding to the three-dimensional layout map based on the at least two images;
and the first processing module is used for splicing the three-dimensional layout according to the splicing position to generate a three-dimensional house type corresponding to the room.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method for generating a three-dimensional house type according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer storage medium for storing a computer program, where the computer program is used to make a computer execute a method for generating a three-dimensional house type as shown in the first aspect.
In a fifth aspect, an embodiment of the present application provides a method for generating a three-dimensional house type, including:
acquiring at least two images of a room from different visual angles;
generating two-dimensional layout maps respectively corresponding to the at least two images;
determining camera parameters corresponding to the at least two images;
generating a three-dimensional house type corresponding to the at least two images based on the at least two images, the two-dimensional layout map, and the camera parameters.
In a sixth aspect, an embodiment of the present application provides a three-dimensional house type generation apparatus, including:
the second acquisition module is used for acquiring at least two images of a room from different visual angles;
a second generating module, configured to generate two-dimensional layout maps corresponding to the at least two images, respectively;
a second determination module to determine camera parameters corresponding to the at least two images;
a second processing module for generating a three-dimensional house type corresponding to the at least two images based on the at least two images, the two-dimensional layout map and the camera parameters.
In a seventh aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor; wherein the memory is configured to store one or more computer instructions, and wherein the one or more computer instructions, when executed by the processor, implement the method for generating a three-dimensional house type according to the fifth aspect.
In an eighth aspect, an embodiment of the present invention provides a computer storage medium for storing a computer program, where the computer program is used to make a computer execute a method for generating a three-dimensional house type as shown in the fifth aspect.
In a ninth aspect, an embodiment of the present invention provides a method for generating a three-dimensional house type, including:
acquiring at least two three-dimensional layout maps of different view angles of a room;
determining splicing positions corresponding to the at least two three-dimensional layout maps;
and stitching the at least two three-dimensional layout maps according to the stitching positions to generate a three-dimensional house type corresponding to the room.
In a tenth aspect, an embodiment of the present application provides a three-dimensional house type generation apparatus, including:
the third acquisition module is used for acquiring at least two three-dimensional layout maps of different view angles of a room;
the third determining module is used for determining the splicing positions corresponding to the at least two three-dimensional layout maps;
and the third processing module is used for splicing the at least two three-dimensional layout maps according to the splicing position to generate a three-dimensional house type corresponding to the room.
In an eleventh aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor; wherein the memory is configured to store one or more computer instructions, and the one or more computer instructions, when executed by the processor, implement the method for generating a three-dimensional house type as shown in the ninth aspect.
In a twelfth aspect, an embodiment of the present invention provides a computer storage medium for storing a computer program, where the computer program is used to make a computer execute a method for generating a three-dimensional house type as shown in the ninth aspect.
The method, apparatus, and device for generating a three-dimensional house type provided by the embodiments of the present application acquire at least two images of a room from different viewing angles, generate three-dimensional layout maps respectively corresponding to the at least two images, determine the stitching positions corresponding to the three-dimensional layout maps based on the at least two images, and then stitch the three-dimensional layout maps according to the stitching positions to generate a three-dimensional house type corresponding to the room. This makes it possible to capture the room with a common image acquisition device and to produce the three-dimensional house type of the room from the captured images, which effectively solves the problems of high cost, inconvenient operation, and demanding shooting skill that arise when a panoramic camera is used to obtain a panoramic image, effectively expands the application range of the method, greatly facilitates operations such as decoration and design for a room, and helps promote the rapid development of related applications.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic diagram of a method for generating a three-dimensional house type according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a method for generating a three-dimensional house type according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of shooting angles of at least two images provided by an embodiment of the present application;
fig. 4 is a schematic flowchart of generating at least two three-dimensional layout diagrams corresponding to the at least two images according to an embodiment of the present application;
fig. 5 is a schematic flowchart of determining a stitching position corresponding to the at least two three-dimensional layout maps based on the at least two images according to an embodiment of the present application;
fig. 6 is a schematic flowchart of determining a splicing position corresponding to the at least two three-dimensional layout maps based on the two-dimensional wall images corresponding to all the wall information and the at least two images according to the embodiment of the present application;
fig. 7 is a schematic flowchart of determining a stitching position corresponding to two adjacent three-dimensional layout maps based on the image similarity according to the embodiment of the present application;
fig. 8 is a schematic flow chart of another method for generating a three-dimensional house type according to an embodiment of the present application;
fig. 9 is a schematic flow chart illustrating a process of generating a three-dimensional house type corresponding to a room by performing a splicing process on the at least two three-dimensional layout maps according to the splicing position according to the embodiment of the present application;
fig. 10 is a schematic flow chart illustrating the process of optimizing the spliced house type data to generate the three-dimensional house type according to the embodiment of the present application;
fig. 11 is a schematic flowchart of a method for generating a three-dimensional house type according to an embodiment of the present application;
fig. 12 is a schematic flow chart of a method for generating a three-dimensional house type according to an embodiment of the present application;
fig. 13 is a schematic flowchart of a process for acquiring camera parameters according to an embodiment of the present application;
FIG. 14 is a schematic flowchart of determining a splicing position according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a splicing operation provided in an exemplary embodiment of the present application;
fig. 16 is a schematic flowchart of a method for generating a three-dimensional house type according to an embodiment of the present application;
fig. 17 is a schematic flowchart of another method for generating a three-dimensional house type according to an embodiment of the present application;
fig. 18 is a schematic structural diagram of a three-dimensional house type generation apparatus according to an embodiment of the present application;
fig. 19 is a schematic structural view of an electronic device corresponding to the three-dimensional house type generation apparatus shown in fig. 18;
fig. 20 is a schematic structural diagram of a three-dimensional house type generation apparatus according to an embodiment of the present application;
fig. 21 is a schematic structural view of an electronic device corresponding to the three-dimensional house type generation apparatus shown in fig. 20;
fig. 22 is a schematic structural diagram of a three-dimensional house type generation apparatus according to an embodiment of the present application;
fig. 23 is a schematic structural view of an electronic device corresponding to the three-dimensional house type generating apparatus shown in fig. 22.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the embodiments of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "a plurality of" typically means at least two, but does not exclude the case of at least one.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects, meaning that three relationships may exist; e.g., A and/or B may mean: A exists alone, A and B both exist, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
The word "if" as used herein may, depending on the context, be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may, depending on the context, be interpreted as "when it is determined" or "in response to determining" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)".
It is also noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a commodity or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such a commodity or system. Without further limitation, an element preceded by "comprising a" does not exclude the presence of other identical elements in the commodity or system that comprises the element.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
In order to facilitate those skilled in the art to understand the technical solutions provided in the embodiments of the present application, the following description is provided for the related technologies:
in the field of smart home technologies, a room rendering operation generally proceeds as follows: a user inputs a house type map to be rendered and then selects different decoration styles, so that room rendering maps in those styles can be generated. At present, a relatively common way to acquire the house type image is an indoor panoramic acquisition scheme using a panoramic camera. Specifically, acquiring an indoor layout image (layout) based on a panoramic camera comprises the following steps:
(1) Capture images of the room with a panoramic camera to obtain a 360-degree panoramic image of the interior.
(2) Identify the floor-wall, ceiling-wall, and wall-wall boundary lines in the panoramic image using a Convolutional Neural Network (CNN).
(3) Determine the house type data of the indoor house type based on the boundary line information, the camera intrinsics, and the height of the camera above the ground when shooting.
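For concreteness, step (3) can be illustrated with a small geometric sketch: assuming a level pinhole camera at a known height, the distance to a wall can be recovered from the image row of the floor-wall boundary. The names and the simplified geometry below are illustrative assumptions, not the prior-art system's actual implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class BoundaryLines:
    # Pixel rows (one per image column) of the detected boundary lines;
    # in the prior-art scheme these would come from the CNN in step (2).
    floor_wall: list
    ceiling_wall: list

def wall_distances(boundaries, camera_height_m, focal_px, image_h):
    """Recover per-column wall distance from the floor-wall boundary,
    using the camera's height above the ground (step (3) above)."""
    cy = image_h / 2.0  # assume the principal point sits at the image centre
    distances = []
    for v in boundaries.floor_wall:
        # Angle below the horizon at which the floor meets the wall.
        angle = math.atan2(v - cy, focal_px)
        if angle <= 0:
            distances.append(float("inf"))  # boundary at or above the horizon
        else:
            distances.append(camera_height_m / math.tan(angle))
    return distances
```

For example, with the camera 1.5 m above the floor and a 500-pixel focal length, a floor-wall boundary 100 pixels below the image centre implies a wall about 7.5 m away.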
However, the above technical solution has the following drawbacks: panoramic images are not easy to acquire, and shooting requires a professional panoramic camera, which is expensive, inconvenient to operate, and demands considerable shooting skill, so ordinary users cannot operate it. As a result, the application range of this technical solution is limited, which is not conducive to the rapid growth of the user base.
To solve the above technical problem, the present embodiment provides a method, an apparatus, and a device for generating a three-dimensional house type. The execution subject of the method may be a three-dimensional house type generation apparatus, and the generation apparatus may be communicatively connected with an image acquisition device.
The image acquisition device may be any computing device with a certain image acquisition function and computing capability; in particular, it may be a camera, a video camera, an intelligent terminal with a shooting function (a mobile phone or a tablet computer), and the like. Further, the basic structure of the image acquisition device may include at least one processor; the number of processors depends on the configuration and type of the device. The image acquisition device may also include memory, which may be volatile, such as RAM, or non-volatile, such as Read-Only Memory (ROM) or flash memory, or may include both types. The memory typically stores an Operating System (OS) and one or more application programs, and may also store program data and the like. Besides the processing unit and the memory, the image acquisition device also includes some basic components, such as a network card chip, an IO bus, a display component, and some peripheral devices. Optionally, the peripheral devices may include, for example, a keyboard, a mouse, a stylus, a printer, and the like; other peripheral devices are well known in the art and will not be described in detail herein. Alternatively, the image acquisition device may be a PC (personal computer) terminal, a handheld terminal (e.g., a smart phone or tablet computer), or the like.
The generation apparatus is a device that can provide computing and processing services in a network virtual environment, and generally refers to a device that performs information planning and data processing using a network. In physical implementation, the generation apparatus may be any device capable of providing computing services, responding to service requests, and performing processing, for example: a cluster server, a regular server, a cloud host, a virtual center, and the like. The generation apparatus mainly comprises a processor, a hard disk, memory, a system bus, and the like, similar in architecture to a general-purpose computer.
In the above embodiment, the image acquisition device may be connected to the generation apparatus via a network, and the network connection may be wireless or wired. If the image acquisition device is communicatively connected to the generation apparatus over a mobile network, the network format of the mobile network may be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), WiMax, and the like.
In the embodiment of the present application, the image acquisition device is configured to perform an image capturing operation on a room, so that a plurality of images of the room from different viewing angles can be obtained, where "a plurality" means two or more images, and the shooting angle of view of each image can be smaller than or equal to a set value; that is, the images obtained by the image acquisition device are not panoramic images. After the plurality of images are acquired, they may be uploaded to the three-dimensional house type generation apparatus so that the generation apparatus can analyze and process them.
The three-dimensional house type generation apparatus is configured to receive the plurality of images uploaded by the client; that is, the generation apparatus can obtain a plurality of images in a non-panoramic format. Then, as shown in fig. 1, the generation apparatus can analyze the plurality of images to generate a two-dimensional layout map corresponding to each image, and the generated two-dimensional layout map can include wall information of the room. Next, the two-dimensional layout maps are analyzed in combination with each image and the camera parameters corresponding to the image (the camera parameters can include at least one of camera intrinsics and camera extrinsics) to generate a three-dimensional layout map corresponding to each image. The generated three-dimensional layout maps are then analyzed based on the at least two images to determine the stitching positions corresponding to the three-dimensional layout maps. It can be understood that the plurality of images can yield a plurality of three-dimensional layout maps, and a stitching position can be determined between any two adjacent three-dimensional layout maps; that is, when the number of three-dimensional layout maps is N, N stitching positions are determined. After the stitching positions are obtained, all the three-dimensional layout maps can be stitched according to the stitching positions, so that the three-dimensional house type corresponding to the room can be generated.
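The lifting step just described (a 2D layout plus camera parameters yielding a 3D layout) can be sketched with a standard pinhole back-projection. This is an illustrative assumption about how camera intrinsics enter the computation; the patent does not prescribe a concrete formula at this point.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Map a pixel (u, v) with a known depth to camera-space (x, y, z),
    given pinhole intrinsics (fx, fy: focal lengths; cx, cy: principal point)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def lift_layout(points_2d, depths, intrinsics):
    """Lift each 2D layout point (e.g. a wall corner) into 3D."""
    fx, fy, cx, cy = intrinsics
    return [backproject(u, v, d, fx, fy, cx, cy)
            for (u, v), d in zip(points_2d, depths)]
```

Camera extrinsics, which the text also mentions, would then place each per-image layout in a common world frame.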
The technical solution provided by this embodiment acquires at least two images of a room from different viewing angles, generates three-dimensional layout maps respectively corresponding to the at least two images, determines the stitching positions corresponding to the three-dimensional layout maps based on the at least two images, and then stitches the three-dimensional layout maps according to the stitching positions to generate the three-dimensional house type corresponding to the room. This effectively allows the room to be captured with a common image acquisition device and the three-dimensional house type of the room, which may include information such as door and window positions, to be produced from the captured images. It thereby effectively solves the problems of high cost, inconvenient operation, and demanding shooting skill that arise when a panoramic camera is used to acquire a panoramic image, effectively expands the application range of the method, greatly facilitates operations such as decoration and design for a room, and helps promote the rapid development of related applications.
The following describes a method, an apparatus, and a device for generating a three-dimensional house type according to various embodiments of the present application with an exemplary application scenario.
Fig. 2 is a schematic flow chart of a method for generating a three-dimensional house type according to an embodiment of the present disclosure; referring to fig. 2, the embodiment provides a method for generating a three-dimensional house type, and the execution subject of the method may be a three-dimensional house type generating device, and it is understood that the three-dimensional house type generating device may be implemented as software, or a combination of software and hardware. Specifically, the method for generating the three-dimensional house type may include:
step S201: at least two images of a room from different perspectives are acquired.
Step S202: a three-dimensional layout corresponding to each of the at least two images is generated.
Step S203: based on the at least two images, a stitching location corresponding to the three-dimensional layout is determined.
Step S204: and splicing the three-dimensional layout drawing according to the splicing position to generate a three-dimensional house type corresponding to the room.
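Steps S201 to S204 can be sketched end to end as a runnable skeleton. Here a three-dimensional layout is reduced to a polyline of wall corners and the stitching position to a simple translation; every helper below is an illustrative stand-in, since the patent leaves the concrete models and alignment method to the detailed description.

```python
def lift_to_3d(image):
    # Stand-in for step S202: each "image" here already carries the wall
    # corners recovered from it (a real system would run a trained model).
    return list(image["walls"])

def find_stitch_offset(layout_a, layout_b):
    # Stand-in for step S203: align layout_b's first corner with
    # layout_a's last corner.
    (ax, ay), (bx, by) = layout_a[-1], layout_b[0]
    return (ax - bx, ay - by)

def stitch(layouts):
    # Step S204: chain the per-image layouts together using the offsets.
    merged = list(layouts[0])
    for nxt in layouts[1:]:
        dx, dy = find_stitch_offset(merged, nxt)
        merged.extend((x + dx, y + dy) for (x, y) in nxt[1:])
    return merged

def generate_house_type(images):
    # Steps S201-S204 end to end.
    return stitch([lift_to_3d(img) for img in images])
```

For instance, two views whose wall polylines share a corner, [(0, 0), (4, 0)] and [(0, 0), (0, 3)], chain into the single outline [(0, 0), (4, 0), (4, 3)].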
The above steps are explained in detail below:
step S201: at least two images of a room from different perspectives are acquired.
When a three-dimensional house type needs to be generated for a room, an image shooting operation can be carried out on the room with the image acquisition device to obtain at least two images of the room from different viewing angles. After the image acquisition device captures the at least two images, it can transmit them to the three-dimensional house type generation apparatus, so that the generation apparatus can stably obtain the at least two images. Of course, those skilled in the art may obtain at least two images of a room from different viewing angles in other ways, for example: at least two images of the room may be stored in a preset area, and the images can be obtained by accessing the preset area.
In addition, the shooting angle of view of any one of the obtained at least two images is less than or equal to a set value, where the set value is a preset limit on the shooting angle used to ensure that an image is a non-panoramic image. The size or value range of the set value may differ across application scenarios; for example, in some application scenarios the set value may be 200°, while in others it may be 220°. When the shooting angle of view of an image is less than or equal to the set value, the image can be determined not to be a panoramic image; when it is greater than the set value, the image can be determined to be a panoramic image. Specifically, in the present embodiment, the at least two images obtained by the image acquisition device (rather than the panoramic camera used for generating a panoramic image) are not panoramic images, which effectively solves the problem in the prior art of the high image acquisition cost incurred when a panoramic camera must be used to obtain panoramic images.
In some examples, the fact that the at least two images have different shooting angles means that an overlapping area exists between the shooting areas corresponding to their shooting angles. It is noted that, when performing an image acquisition operation on a room, the shooting angle of the acquired images is related to the number of images: when the shooting angle of each image is large, fewer images may be acquired; when the shooting angle is small, more images may be needed.
Specifically, this embodiment does not limit the number of images of the room acquired from different viewing angles, and those skilled in the art may configure it differently for different room types. For example, when shooting a room with a symmetrical rectangular house type, only two images need to be obtained if the shooting angles corresponding to the two images can cover the whole room; in this case the two images have different shooting angles, and the sum of their shooting angle ranges is greater than 360°, so that a complete three-dimensional house type corresponding to the room can be generated. When the shooting angles corresponding to two images cannot cover the whole room, three or four images can be acquired, each with a different shooting angle, such that the sum of their shooting angle ranges is greater than 360°, so that a complete three-dimensional house type corresponding to the room can be generated.
For a square room, when three images can cover the whole room, three images may be acquired; the shooting angles of view corresponding to the three images differ, and the sum of their shooting angle ranges is larger than 360°. When four images are needed to cover the whole room, four images may be acquired; the shooting angles of view corresponding to the four images differ, and the sum of their shooting angle ranges is larger than 360°. In some examples, as shown in fig. 3, when the image acquisition device is a mobile phone, since the field of view of the main camera of most users' mobile phones is about 60°, four images may be acquired for a room by shooting from the four corners toward the diagonally opposite sides; that is, the four shooting positions may correspond to the four corners of the room, and in this case the four images correspond to different shooting angles of view.
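The two numeric criteria above can be sketched in code: the set value that separates panoramic from non-panoramic images, and the requirement that the shooting angle ranges of the acquired images sum to more than 360°. This is a minimal illustration, not part of the embodiment: the 200° default and the per-view overlap margin are assumed parameters, and the corner-shot scheme of fig. 3 is a separate coverage strategy that this simple angle-sum check does not model.

```python
def is_panoramic(fov_deg, limit_deg=200.0):
    """An image whose shooting angle of view exceeds the set value is
    treated as panoramic; at or below the set value it is an ordinary
    (non-panoramic) image, as described in the embodiment."""
    return fov_deg > limit_deg

def angle_sum_covers_room(fovs_deg, min_overlap_deg=10.0):
    """Angle-sum criterion: the shooting angle ranges must together
    exceed 360 degrees, here discounted by an assumed overlap margin
    per view so that neighbouring views share content for splicing."""
    return sum(fovs_deg) - len(fovs_deg) * min_overlap_deg > 360.0
```

For instance, two 200° views or four 110° views pass the discounted angle-sum check, while two 120° views do not, matching the case above where two images cannot cover the whole room.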
Step S202: a three-dimensional layout corresponding to each of the at least two images is generated.
After the at least two images are acquired, they may be analyzed so that the three-dimensional layout maps corresponding to the at least two images can be generated. This embodiment does not limit the implementation manner of generating the three-dimensional layout maps, and a person skilled in the art may set it according to the specific application scenario and application requirements, for example: a machine learning model for analyzing the at least two images is trained in advance, and the at least two images are analyzed by the machine learning model so that the three-dimensional layout maps corresponding to them can be generated. The three-dimensional layout maps may include texture information, three-dimensional wall surface information, the spatial positional relationships between the wall surfaces, and the like.
Of course, those skilled in the art may also use other ways to generate the three-dimensional layout maps corresponding to the at least two images, as long as the accuracy and reliability of the three-dimensional layout map acquisition can be ensured.
Step S203: based on the at least two images, the splicing positions corresponding to the three-dimensional layout maps are determined.
After the three-dimensional layout maps corresponding to the at least two images are acquired, since each three-dimensional layout map corresponds to one of the at least two images and the shooting angles of view of those images differ, there are at least two three-dimensional layout maps, and they correspond to different areas of the room. In order to generate a complete three-dimensional house type corresponding to the room, after the at least two images are acquired, the three-dimensional layout maps may be analyzed based on the at least two images to determine the splicing positions corresponding to them.
In addition, this embodiment does not limit the specific implementation manner of determining the splicing position corresponding to the three-dimensional layout maps, and a person skilled in the art may set it according to specific application requirements, for example: the adjacent relationship between any two images is determined based on the at least two images. When the at least two images are four images, namely an image a, an image b, an image c and an image d, an adjacent relationship may exist between image a and image b and between image a and image d, while no adjacent relationship exists between image a and image c. After the adjacent relationship between any two images is obtained, the splicing positions corresponding to the three-dimensional layout maps can be determined based on that adjacent relationship.
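The four-image example above can be expressed as a neighbour relation derived from a circular capture order. The following sketch assumes the images are listed in the order they were shot around the room (an assumption for illustration; the embodiment only requires that the relation be determined from the images):

```python
def capture_adjacency(image_ids):
    """Build the adjacent relationship for images taken in a circular
    sweep of the room: each image neighbours its predecessor and its
    successor, and the last image wraps around to the first."""
    n = len(image_ids)
    pairs = set()
    for i in range(n):
        pairs.add(frozenset((image_ids[i], image_ids[(i + 1) % n])))
    return pairs

def adjacent(pairs, a, b):
    """True when images a and b stand in the adjacent relationship."""
    return frozenset((a, b)) in pairs
```

With images a, b, c and d this reproduces the relation described above: a and b are adjacent, a and d are adjacent, and a and c are not.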
Step S204: and splicing the three-dimensional layout drawing according to the splicing position to generate a three-dimensional house type corresponding to the room.
After the splicing positions are obtained, the three-dimensional layout maps corresponding to the at least two images can be spliced according to the splicing positions, so that a three-dimensional house type corresponding to the room can be generated. The three-dimensional house type is the three-dimensional spatial layout map corresponding to the room, which may include information such as wall information, door and window positions, and the position and size information of the room; through this three-dimensional spatial layout map, the layout of the room can be seen intuitively.
After the three-dimensional house type corresponding to the room is generated, decoration, rendering or design processing may be performed on the room based on the three-dimensional house type, specifically, the decoration parameters may be adjusted based on different application requirements and design requirements, and when the decoration parameters are different, room decoration models corresponding to different decoration parameters may be generated.
In the method for generating a three-dimensional house type according to this embodiment, at least two images of a room from different viewing angles are obtained, three-dimensional layout maps corresponding to the at least two images are generated, the splicing positions corresponding to the three-dimensional layout maps are determined based on the at least two images, and the three-dimensional layout maps are then spliced according to the splicing positions to generate a three-dimensional house type corresponding to the room. In this way, the image acquisition operation on the room can be performed with an ordinary image acquisition device, and the three-dimensional house type of the photographed room, which may include information such as door and window positions, can be output from the obtained images. This effectively solves the problems of high cost, inconvenient operation and high shooting-skill requirements that arise when a panoramic camera is used to acquire images for a panoramic view, effectively expands the application range of the method, greatly facilitates operations such as decoration and design for the room, and is beneficial to promoting the rapid development of related applications.
Fig. 4 is a schematic flowchart of generating at least two three-dimensional layout diagrams corresponding to at least two images according to an embodiment of the present disclosure; on the basis of the foregoing embodiment, referring to fig. 4, this embodiment provides an implementation manner of generating three-dimensional layout diagrams corresponding to at least two images, specifically, the generating of the three-dimensional layout diagrams corresponding to at least two images in this embodiment may include:
step S401: two-dimensional layout maps corresponding to the at least two images, respectively, are generated.
The two-dimensional layout map is a map that divides the wall surfaces of the room in an image and can reflect the positional relationships between the wall surfaces in the image. After the at least two images are acquired, they may be analyzed to generate the two-dimensional layout maps corresponding to them respectively, and the generated two-dimensional layout maps may include the wall information in the room. Generating the two-dimensional layout maps corresponding to the at least two images may include: analyzing the at least two images through a deep learning network, so that the two-dimensional layout maps corresponding to the at least two images can be obtained.
Specifically, when the at least two images are analyzed through the deep learning network, two-dimensional layout maps respectively corresponding to the at least two images can be generated through the detection of the wall corner edges; alternatively, two-dimensional layout maps corresponding to at least two images may be generated by detecting key points of the corner; alternatively, the wall surface region may be directly divided, and a two-dimensional layout corresponding to each of the at least two images may be generated.
Of course, those skilled in the art may also use other manners to generate the two-dimensional layout maps corresponding to the at least two images, as long as the quality and efficiency of generating the two-dimensional layout maps can be ensured, and details are not described herein again.
Step S402: camera parameters corresponding to the at least two images are determined.
After the at least two images are acquired, the at least two images may be analyzed to determine camera parameters corresponding to the at least two images, and specifically, the camera parameters may include at least one of: camera internal reference and camera external reference; at this time, determining the camera parameters corresponding to the at least two images may include: calculating vanishing point information corresponding to the at least two images; calculating camera internal parameters corresponding to the at least two images according to the vanishing point information; and determining camera external parameters corresponding to the at least two images according to the vanishing point information and the camera internal parameters.
A vanishing point describes two parallel straight lines in the physical world at infinity: under the camera's two-dimensional projection, the two lines converge and intersect at a single point, and that point is the vanishing point. Specifically, calculating the vanishing point information corresponding to the at least two images may include: obtaining the line segment information included in the at least two images and calculating the vanishing points corresponding to the at least two images based on that line segment information. After the vanishing point information is obtained, the camera intrinsic parameters corresponding to the at least two images can be calculated based on the vanishing point features corresponding to the vanishing point information.
After the vanishing point information and the camera internal parameters are acquired, the vanishing point information and the camera internal parameters may be analyzed to determine camera external parameters corresponding to the at least two images, and the camera external parameters may include: a rotation matrix and/or a translation matrix between the world coordinate system and the camera coordinate system.
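One standard way to obtain a camera intrinsic parameter from vanishing points, offered here only as a hedged sketch since the embodiment does not fix a formula, uses the orthogonality constraint: for a pinhole camera with square pixels and known principal point p, two vanishing points v1 and v2 of orthogonal scene directions satisfy (v1 − p) · (v2 − p) + f² = 0.

```python
import math

def focal_from_vanishing_points(v1, v2, principal_point):
    """Recover the focal length in pixels from two vanishing points of
    orthogonal scene directions (pinhole camera, square pixels):
    (v1 - p) . (v2 - p) + f^2 = 0  =>  f = sqrt(-(v1 - p) . (v2 - p))."""
    px, py = principal_point
    dot = (v1[0] - px) * (v2[0] - px) + (v1[1] - py) * (v2[1] - py)
    if dot >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return math.sqrt(-dot)
```

As a synthetic check, a camera with focal length 500 and principal point (320, 240) projects the orthogonal directions (1, 0, 1) and (−1, 0, 1) to the vanishing points (820, 240) and (−180, 240), from which the function recovers 500.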
Step S403: at least two three-dimensional maps are generated based on the at least two images, the two-dimensional maps, and the camera parameters.
After the at least two images, the two-dimensional layout maps and the camera parameters are acquired, they may be analyzed so that the three-dimensional layout maps corresponding to the at least two images can be generated. In some examples, generating the three-dimensional layout maps corresponding to the at least two images based on the at least two images, the two-dimensional layout maps and the camera parameters may include: acquiring the preset height information between the image shooting positions corresponding to the at least two images and the ground; determining the spatial constraint relationships corresponding to the pixel points in the at least two images based on the height information, the at least two images and the camera parameters; and performing a three-dimensional reconstruction operation on the pixel points in the at least two two-dimensional layout maps based on the spatial constraint relationships to generate the three-dimensional layout maps.
Specifically, when the image acquisition device performs the image acquisition operation, a piece of height information corresponds to the distance between the image shooting position of the device and the ground; it can be understood that this height information may be preset or input by a user. For different application scenarios or application requirements, different height information or the same height information may be set or input, for example: the height information may be 1.5 m, 1.4 m, and so on.
After the height information between the image shooting position and the ground is set, it can be obtained, and the height information, the at least two images and the camera parameters can then be analyzed to determine the spatial constraint relationships corresponding to the pixel points in the at least two images.
In some examples, the spatial constraint relationships of the pixel points in the at least two images may include at least one of the following. For each point on the intersection line between a wall surface and the floor, the coordinate Z in the world coordinate system is known (it equals the height information H between the image shooting position and the floor), so the mapping of that point from the pixel coordinate system to the world coordinate system can be realized. For the points on a wall surface, the depth information relative to the camera is consistent with that of the points on the intersection line between that wall surface and the ground, so the points on each wall surface can be three-dimensionally reconstructed. For the points on the roof, the height of the roof above the ground can be obtained from the three-dimensional coordinates of the points on the intersection line between each wall surface and the roof, and each point on the roof can then be three-dimensionally reconstructed.
After the spatial constraint relationships are obtained, the three-dimensional reconstruction operation can be performed on the pixel points in the at least two two-dimensional layout maps based on them, so that the three-dimensional layout maps can be generated, effectively ensuring the accuracy and reliability of obtaining the three-dimensional layout maps.
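The floor-line constraint described above can be sketched as follows, assuming a pinhole camera that is level with the ground (camera y axis pointing down, so floor points satisfy y = H in the camera frame). In the embodiment, the camera external parameters recovered from the vanishing points would first bring the image into such an aligned frame; all numeric values here are illustrative.

```python
def floor_point_from_pixel(u, v, f, cx, cy, camera_height):
    """Back-project a pixel on the wall-floor intersection line to a 3D
    point in the camera frame. The ray through the pixel is
    ((u - cx)/f, (v - cy)/f, 1); scaling it so its downward component
    equals the known camera height places the point on the floor."""
    rx, ry, rz = (u - cx) / f, (v - cy) / f, 1.0
    if ry <= 0:
        raise ValueError("pixel lies on or above the horizon")
    t = camera_height / ry
    return (t * rx, t * ry, t * rz)
```

Points higher on the same wall reuse the depth of the floor-line point below them, and the wall-roof intersection line then yields the roof height, in line with the constraints enumerated above.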
Fig. 5 is a schematic flowchart of determining a stitching position corresponding to a three-dimensional layout map based on at least two images according to an embodiment of the present application; on the basis of the foregoing embodiment, with reference to fig. 5, this embodiment provides an implementation manner for determining a stitching position corresponding to a three-dimensional layout, and specifically, determining a stitching position corresponding to a three-dimensional layout based on at least two images in this embodiment includes:
step S501: wall information included in each three-dimensional layout is extracted.
Step S502: and generating a two-dimensional wall surface image corresponding to the wall surface information.
Step S503: and determining a splicing position corresponding to the three-dimensional layout drawing based on the two-dimensional wall surface images and the at least two images corresponding to all the wall surface information.
After the three-dimensional layout maps corresponding to the at least two images are acquired, each three-dimensional layout map may include at least one wall surface. In order to accurately determine the splicing positions corresponding to the at least two three-dimensional layout maps, each three-dimensional layout map may be analyzed to extract the wall surface information it includes; it can be understood that the number of pieces of extracted wall surface information may be one or more. After the wall surface information is acquired, the two-dimensional wall surface images corresponding to the wall surface information may be generated based on the three-dimensional layout maps. Specifically, one two-dimensional wall surface image may be generated for each piece of wall surface information, so one three-dimensional layout map may correspond to one or more two-dimensional wall surface images.
In some examples, generating the two-dimensional wall image corresponding to the wall information may include: acquiring a constraint relation for generating a two-dimensional wall surface image; and generating a two-dimensional wall surface image corresponding to the wall surface information based on the constraint relation and the three-dimensional layout. Wherein, the constraint relationship may include: the area corresponding to the wall information is positively correlated with the image resolution of the two-dimensional wall image.
Specifically, when the image acquisition operation is performed, images corresponding to different angles of view or different shooting angles may include different wall surface information, for example: some images include one piece of wall surface information and others include several, and when there are several pieces of wall surface information, the wall surface areas corresponding to them may differ. It follows from the above that the three-dimensional layout map corresponding to an image may include one or more pieces of wall surface information, and when there are several, their wall surface areas may differ. Therefore, since one three-dimensional layout map may correspond to two-dimensional wall surface images of different sizes, in order to accurately generate the two-dimensional wall surface image corresponding to each piece of wall surface information, a scaling relationship between the wall surface information in the three-dimensional layout map and the two-dimensional wall surface image is configured in advance. Specifically, when the area of the region corresponding to the wall surface information in the three-dimensional layout map is large, a two-dimensional wall surface image with a higher image resolution can be generated; when that area is small, a two-dimensional wall surface image with a lower image resolution can be generated.
After the scaling relationship existing between the wall information in the three-dimensional layout drawing and the two-dimensional wall image is configured, the constraint relationship used for generating the two-dimensional wall image can be obtained, and then the constraint relationship and the three-dimensional layout drawing can be analyzed and processed, so that the two-dimensional wall image corresponding to the wall information can be generated.
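A minimal sketch of such a scaling relationship follows: larger wall area, higher raster resolution. The pixels-per-meter density and the size cap are assumed parameters for illustration, not values from the embodiment.

```python
def wall_image_size(wall_width_m, wall_height_m, pixels_per_meter=100, max_side=2048):
    """Choose the raster size of the rectified two-dimensional wall
    surface image so that resolution grows with the wall's physical
    extent: large walls get more pixels, small walls a coarser image."""
    w = min(int(round(wall_width_m * pixels_per_meter)), max_side)
    h = min(int(round(wall_height_m * pixels_per_meter)), max_side)
    return max(w, 1), max(h, 1)
```

For example, a 4 m by 2.8 m wall maps to a 400 by 280 raster, while a very long wall is clamped by the cap so that image sizes stay bounded.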
After the two-dimensional wall images corresponding to all the wall information in the three-dimensional layout drawing are acquired, all the two-dimensional wall images and at least two images can be analyzed to determine the splicing positions corresponding to the at least two three-dimensional layout drawings.
In this embodiment, the wall information included in each three-dimensional layout drawing is extracted to generate the two-dimensional wall images corresponding to the wall information, and then the splicing positions corresponding to the at least two three-dimensional layout drawings are determined based on the two-dimensional wall images corresponding to all the wall information and the at least two images, so that the accuracy and reliability of determining the splicing positions are effectively ensured, and the splicing operation of the at least two three-dimensional layout drawings based on the splicing positions is further facilitated.
Fig. 6 is a schematic flow chart illustrating a process of determining a splicing position corresponding to a three-dimensional layout map based on two-dimensional wall images and at least two images corresponding to all wall information according to the embodiment of the present application; on the basis of the foregoing embodiment, referring to fig. 6, this embodiment provides another implementation manner for determining a stitching position corresponding to a three-dimensional layout, and specifically, the determining a stitching position corresponding to a three-dimensional layout based on two-dimensional wall images and at least two images corresponding to all wall information in this embodiment may include:
step S601: and acquiring the image similarity corresponding to any two two-dimensional wall surface images.
Step S602: based on the at least two images, an image adjacency relationship of the at least two three-dimensional layouts is determined.
Step S603: and determining the splicing position corresponding to the two adjacent three-dimensional layout maps based on the image similarity.
After all the two-dimensional wall surface images corresponding to all the three-dimensional layout maps are acquired, an image similarity calculation operation can be performed on any two two-dimensional wall surface images, so that the image similarity between any two two-dimensional wall surface images can be acquired. After the at least two images are acquired, they may be analyzed to determine the image adjacent relationship of the at least two three-dimensional layout maps. In some examples, determining the image adjacent relationship of the at least two three-dimensional layout maps based on the at least two images may include: determining a first adjacent relationship corresponding to the at least two images; and determining the image adjacent relationship of the at least two three-dimensional layout maps based on the first adjacent relationship.
Specifically, after the at least two images are acquired, the image similarity calculation operation may be performed on them, so that the first adjacent relationship corresponding to the at least two images may be determined. It can be understood that the first adjacent relationship may include adjacency and non-adjacency; for example, when the at least two images include an image a, an image b, an image c and an image d, an adjacent relationship may exist between image a and image b and between image a and image c, while no adjacent relationship exists between image a and image d, that is, a non-adjacent relationship exists between image a and image d.
Since the three-dimensional layout is determined based on the images, after determining the first adjacent relationship corresponding to the at least two images, the image adjacent relationship of the at least two three-dimensional layouts may be determined based on the first adjacent relationship, in particular, the image adjacent relationship of the at least two three-dimensional layouts corresponds to the first adjacent relationship.
After the image similarity and the image adjacency relationship between the at least two three-dimensional layout diagrams are obtained, the at least two three-dimensional layout diagrams can be analyzed and processed based on the image similarity, so that the splicing positions corresponding to the two adjacent three-dimensional layout diagrams can be obtained.
It should be noted that the execution order of steps S601 to S602 in this embodiment is not limited to the order described above, and a person skilled in the art may adjust it according to specific application and design requirements, for example: step S602 may be performed before step S601, or step S601 may be performed simultaneously with step S602; details are not repeated here.
In some examples, after determining the first neighboring relationship corresponding to the at least two images, the method in this embodiment may further include: scaling at least two images so that the wall surface heights included in all the images are the same.
Specifically, in order to further improve the quality and efficiency of analyzing and identifying the images, after the at least two images are analyzed and the first adjacent relationship corresponding to them is determined, the at least two images may be scaled so that the wall surface heights included in all the images are the same. Specifically, the wall surface height included in any one of the at least two images may be used as a reference to adjust the wall surface heights included in the other images; alternatively, a preset wall surface height may be acquired, and the wall surface heights included in all the images may be adjusted based on that set wall surface height. Of course, a person skilled in the art may also scale the at least two images in other manners, as long as the wall surface heights included in all the images are made the same; in this way, when the image matching operation is performed on the at least two images, the accuracy of image matching can be effectively improved.
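The height normalisation can be sketched as per-image scale factors. Taking the first image's wall height as the reference corresponds to the first of the two options named above; the function and its defaults are an illustration, not the embodiment's fixed implementation.

```python
def height_normalising_scales(wall_heights_px, reference_px=None):
    """Scale factor per image that brings every image's wall surface
    height (in pixels) to a common reference, defaulting to the first
    image's wall height, so that later matching compares like with like."""
    if reference_px is None:
        reference_px = wall_heights_px[0]
    return [reference_px / h for h in wall_heights_px]
```

Passing an explicit `reference_px` instead models the second option, where a preset wall surface height drives the adjustment of all images.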
In the embodiment, the image similarity corresponding to any two-dimensional wall images is obtained, the image adjacent relation of the at least two three-dimensional layout maps is determined based on the at least two images, and the splicing position corresponding to the two adjacent three-dimensional layout maps is determined based on the image similarity, so that the accuracy and reliability of determining the splicing position are effectively ensured.
Fig. 7 is a schematic flowchart of determining a stitching position corresponding to two adjacent three-dimensional layout maps based on image similarity according to an embodiment of the present application; based on the foregoing embodiment, with continued reference to fig. 7, this embodiment provides an implementation manner for determining a stitching position corresponding to two adjacent three-dimensional layout maps based on image similarity, and specifically, determining the stitching position corresponding to two adjacent three-dimensional layout maps based on image similarity in this embodiment may include:
step S701: acquiring a first splicing wall surface pair corresponding to the highest image similarity;
step S702: and determining the splicing positions corresponding to the two adjacent three-dimensional layout maps based on the first splicing wall pair.
For example, suppose two adjacent three-dimensional layout maps are an image A and an image B, where image A corresponds to a two-dimensional wall surface image a1, a two-dimensional wall surface image a2 and a two-dimensional wall surface image a3, and image B corresponds to a two-dimensional wall surface image b1 and a two-dimensional wall surface image b2. The image similarity between any two of these two-dimensional wall surface images may then be obtained, for example: a first similarity between a1 and b1, a second similarity between a2 and b1, a third similarity between a3 and b1, a fourth similarity between a1 and b2, a fifth similarity between a2 and b2, and a sixth similarity between a3 and b2.
After the image similarities are obtained, the highest image similarity may be identified; if the highest image similarity is the fourth similarity, the two-dimensional wall surface image a1 and the two-dimensional wall surface image b2 form the first spliced wall surface pair corresponding to the highest image similarity. After the first spliced wall surface pair is obtained, it may be analyzed to determine the splicing position corresponding to the two adjacent three-dimensional layout maps.
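The selection of the first spliced wall surface pair reduces to an argmax over the pairwise similarities. A sketch follows, with illustrative scores chosen so that the a1/b2 pair of the example above wins; the scores themselves are assumptions.

```python
def best_splicing_pair(similarities):
    """Return the (wall of A, wall of B) pair with the highest image
    similarity: the first spliced wall surface pair from which the
    splicing position of the two adjacent layout maps is derived."""
    return max(similarities, key=similarities.get)

# Illustrative pairwise similarities for the a1..a3 x b1..b2 example.
sims = {("a1", "b1"): 0.20, ("a2", "b1"): 0.10, ("a3", "b1"): 0.30,
        ("a1", "b2"): 0.90, ("a2", "b2"): 0.40, ("a3", "b2"): 0.50}
```

Here the fourth similarity (a1 with b2) is the highest, so `best_splicing_pair(sims)` selects that pair.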
In some examples, determining the stitching location corresponding to the two adjacent three-dimensional layouts based on the first pair of stitching walls may include: performing feature extraction operation on the first spliced wall surface to obtain a first wall surface feature and a second wall surface feature; and determining at least one splicing position corresponding to the two adjacent three-dimensional layout graphs based on the first wall surface characteristic and the second wall surface characteristic.
Specifically, after the first spliced wall surface pair is obtained, the feature extraction operation can be performed on the first spliced wall surface pair, so that the first wall surface feature and the second wall surface feature can be obtained. After the first wall surface feature and the second wall surface feature are obtained, the first wall surface feature and the second wall surface feature can be analyzed and matched, so that the position with the highest matching degree between the two wall surfaces can be obtained, and then the position with the highest matching degree can be determined to be at least one splicing position corresponding to the two adjacent three-dimensional layout maps.
It should be noted that, after one splicing position corresponding to the first spliced wall surface pair is acquired, the other splicing positions may be determined based on that splicing position and the determined adjacent relationships of the at least two three-dimensional layout maps, so that all splicing positions corresponding to all the three-dimensional layout maps are acquired.
In this embodiment, the first splicing wall pair corresponding to the highest image similarity is obtained, and then the splicing positions corresponding to the two adjacent three-dimensional layout maps are determined based on the first splicing wall pair, so that the accuracy and reliability of determining the splicing positions are effectively ensured, and the quality and efficiency of splicing at least two three-dimensional layout maps based on the splicing positions are further improved.
Fig. 8 is a schematic flow chart of another method for generating a three-dimensional house type according to an embodiment of the present application; on the basis of the foregoing embodiment, with continuing reference to fig. 8, after determining the splicing positions corresponding to two adjacent three-dimensional layout maps, the method in this embodiment may further include:
step S801: and detecting whether the splicing position is reasonable.
After a splicing position is obtained, the three-dimensional layout maps can be spliced based on it, and the resulting splicing effect may or may not meet the set requirement. Therefore, in order to ensure the quality and effect of splicing the three-dimensional layout maps based on the splicing position, whether the splicing position is reasonable can be detected after it is obtained. Specifically, detecting whether the splicing position is reasonable may include: pre-splicing the three-dimensional layout maps based on the splicing position to obtain a pre-spliced house type; identifying the wall surface features corresponding to each wall surface in the pre-spliced house type; and detecting whether the splicing position is reasonable based on the wall surface features corresponding to each wall surface.
Specifically, after the splicing position is acquired, the three-dimensional layout maps can be spliced based on it, so that a pre-spliced house type can be obtained. After the pre-spliced house type is acquired, a wall surface feature extraction operation can be performed on it, so that the wall surface features corresponding to each wall surface in the pre-spliced house type can be obtained. After these wall surface features are obtained, they can be analyzed to detect whether the splicing position is reasonable. It should be noted that, for different wall surface features, different manners may be used to detect whether the splicing position is reasonable.
In some examples, when the wall features include dimensional features of the walls, detecting whether the splicing position is reasonable based on the wall features corresponding to the walls may include: determining size deviation information between any two wall surfaces in the pre-spliced house type based on the size features corresponding to each wall surface; when the size deviation information is smaller than a set value, determining that the splicing position is reasonable; or, when the size deviation information is greater than or equal to the set value, determining that the splicing position is not reasonable.
Specifically, when the wall surface features include dimensional features of the wall, for example height information and width information of the wall, the size deviation information between any two walls in the pre-spliced house type can be determined, for example the height difference and/or width difference between the two walls. After the size deviation information is acquired, it can be analyzed and compared with a set value. When the size deviation information is smaller than the set value, it indicates that the image splicing effect in the pre-spliced house type meets the set requirement, so the splicing position can be determined to be reasonable. When the size deviation information is greater than or equal to the set value, it indicates that the image splicing effect in the pre-spliced house type does not meet the set requirement, so the splicing position can be determined to be unreasonable.
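As an illustrative sketch only (not the claimed implementation), the size-deviation check described above can be expressed as follows, assuming each wall surface is summarized by a measured (height, width) pair in meters and the set value is a hypothetical tolerance:

```python
from itertools import combinations

def size_deviation_reasonable(walls, set_value=0.05):
    """walls: list of (height, width) pairs for each wall surface in the
    pre-spliced house type. The splice position counts as reasonable when
    every pairwise height deviation stays below `set_value` (assumed
    tolerance in meters); otherwise it is unreasonable."""
    for (h1, _w1), (h2, _w2) in combinations(walls, 2):
        if abs(h1 - h2) >= set_value:  # deviation >= set value -> unreasonable
            return False
    return True
```

The same pattern extends to width deviations or any other dimensional feature pair.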
In other examples, when the wall features include relative position features between the walls, detecting whether the splicing position is reasonable based on the wall features corresponding to the walls may include: detecting whether wall surface intersection or wall surface occlusion occurs between any two wall surfaces in the pre-spliced house type based on the relative position features of the wall surfaces; when wall surface intersection or wall surface occlusion occurs, determining that the splicing position is unreasonable; or, when neither wall surface intersection nor wall surface occlusion occurs, determining that the splicing position is reasonable.
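The intersection part of this check can be sketched as follows; this is a minimal plan-view example that assumes each wall surface is approximated by a 2D line segment (occlusion handling is omitted):

```python
from itertools import combinations

def _cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_cross(p1, p2, q1, q2):
    """True when segment p1-p2 strictly crosses segment q1-q2."""
    d1, d2 = _cross(q1, q2, p1), _cross(q1, q2, p2)
    d3, d4 = _cross(p1, p2, q1), _cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def no_wall_intersection(wall_segments):
    """wall_segments: list of ((x1, y1), (x2, y2)) plan-view wall segments.
    Returns True when no two wall surfaces cross, i.e. the splice position
    passes this particular reasonableness check."""
    return not any(segments_cross(a1, a2, b1, b2)
                   for (a1, a2), (b1, b2) in combinations(wall_segments, 2))
```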
Of course, those skilled in the art may also use other methods to detect whether the splicing position is reasonable, as long as the accuracy and reliability of detecting whether the splicing position is reasonable can be ensured, and details are not repeated herein.
Step S802: and when the splicing position is reasonable, splicing the three-dimensional layout map based on the splicing position. Alternatively, the first and second electrodes may be,
step S803: when the splicing position is not reasonable, acquiring a second splicing wall surface pair corresponding to the second-highest image similarity, and determining the splicing position corresponding to the second splicing wall surface pair.
When the splicing position is reasonable, it indicates that the splicing effect of the pre-spliced house type meets the preset requirement, and the three-dimensional layout maps can be spliced directly based on the splicing position. When the detection result is that the splicing position is unreasonable, it indicates that the splicing effect of the pre-spliced house type does not meet the preset requirement; in order to improve the quality and effect of the spliced house type, the second splicing wall pair corresponding to the second-highest image similarity can be obtained, and the splicing position corresponding to the second splicing wall pair can then be determined. The specific implementation is similar to that of determining the splicing position corresponding to the first splicing wall pair, and is not repeated here.
In this embodiment, whether the splicing position is reasonable is detected; when the splicing position is reasonable, the three-dimensional layout maps are spliced based on the splicing position, and when the splicing position is unreasonable, the second splicing wall pair corresponding to the second-highest image similarity is obtained and the splicing position corresponding to the second splicing wall pair is determined. In this way, different data processing operations can be executed based on the different detection results for the splicing position, which further ensures the quality and effect of splicing the three-dimensional layout maps.
Fig. 9 is a schematic flow chart illustrating a process of generating a three-dimensional house type corresponding to a room by performing a splicing process on a three-dimensional layout according to a splicing position according to the embodiment of the present application; on the basis of the foregoing embodiment, referring to fig. 9, this embodiment provides an implementation manner of generating a three-dimensional house type corresponding to a room, and specifically, in this embodiment, performing a splicing process on a three-dimensional layout according to a splicing position, and generating the three-dimensional house type corresponding to the room may include:
step S901: splicing all the three-dimensional layout maps based on the splicing positions to obtain spliced house type data corresponding to the room;
step S902: detecting whether gaps exist at all splicing positions in the spliced house type data or not;
step S903: determining the spliced house type data as the three-dimensional house type when no gaps exist at any of the splicing positions in the spliced house type data; alternatively,
step S904: and when a gap exists at a splicing position in the spliced house type data, optimizing the spliced house type data to generate the three-dimensional house type.
After the splicing positions are obtained, all the three-dimensional layout maps can be spliced based on the splicing positions, so that spliced house type data corresponding to the room (i.e., the whole room) can be obtained. After the spliced house type data is acquired, it can be analyzed to detect whether gaps exist at the splicing positions. When no gap exists at any splicing position, it indicates that the spliced house type data obtained based on the splicing positions meets the set requirement, so the spliced house type data can be determined as the three-dimensional house type. When a gap exists at a splicing position, it indicates that the spliced house type data obtained based on the splicing positions does not meet the set requirement; in this case, in order to ensure the quality and effect of splicing the house type data, the spliced house type data can be optimized to generate the three-dimensional house type.
For example, the splicing positions include position a, position b and position c, and the three-dimensional layout maps include data A, data B and data C. Data A and data B are spliced based on splicing position a, data B and data C are spliced based on splicing position b, and data A and data C are spliced based on splicing position c, so that spliced house type data corresponding to one room can be obtained. Whether gaps exist at position a, position b and position c in the spliced house type data is then detected; if no gap exists at position a, position b or position c, the spliced house type data is determined as the three-dimensional house type. When a gap exists at position b, that is, after data A and data B are spliced based on splicing position a to generate data AB, and data A and data C are spliced based on splicing position c to generate data CA, an error occurs between data AB and data CA when they are spliced based on splicing position b. At this time, in order to ensure the quality and effect of the house type data splicing, the spliced house type data can be optimized, so that the generated three-dimensional house type meets the set requirement.
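A minimal sketch of the gap detection described above, under the assumption (for illustration only) that each splicing position is summarized by a pair of edge endpoints that should coincide after splicing:

```python
import math

def find_gaps(splice_positions, eps=1e-3):
    """splice_positions: mapping of position name -> (edge_a, edge_b), where
    edge_a and edge_b are (x, y) endpoints that should coincide after the
    splice. Returns {name: gap_size} for every position whose edges do not
    coincide within `eps` (assumed tolerance)."""
    gaps = {}
    for name, (edge_a, edge_b) in splice_positions.items():
        gap = math.dist(edge_a, edge_b)
        if gap > eps:
            gaps[name] = gap
    return gaps
```

With positions a, b and c as in the example above, an empty result means the spliced house type data can be used directly; a non-empty result triggers the optimization step.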
In the embodiment, the spliced house type data can be directly determined as the final three-dimensional house type when the generated spliced house type data meets the set requirement; when the generated spliced house type data does not meet the set requirement, the spliced house type data can be optimized, so that a three-dimensional house type meeting the set requirement can be generated, and the accuracy and reliability of the method are further improved.
Fig. 10 is a schematic flow chart illustrating optimization processing of spliced house type data to generate a three-dimensional house type according to the embodiment of the present application; on the basis of the foregoing embodiment, referring to fig. 10, this embodiment provides an implementation manner of generating a three-dimensional house type, and specifically, performing optimization processing on spliced house type data in this embodiment may include:
step S1001: optimizing all splicing positions based on a gap existing at a splicing position in the spliced house type data, so as to obtain optimized splicing positions.
When a gap exists at a splicing position in the spliced house type data, it indicates that the obtained spliced house type data does not meet the set requirement. At this time, in order to make the spliced house type data meet the set requirement, all splicing positions can be optimized based on the gap existing at the splicing position, so as to obtain the optimized splicing positions. In some examples, optimizing all splicing positions based on a gap existing at at least one splicing position in the spliced house type data may include: acquiring the splicing matching degree of the three-dimensional layout maps corresponding to each splicing position; determining the two three-dimensional layout maps corresponding to the highest splicing matching degree as a reference image pair; and adjusting the splicing positions corresponding to the three-dimensional layout maps based on the reference image pair and the gap to obtain the optimized splicing positions.
Specifically, after all the three-dimensional layout maps are spliced based on the splicing positions, the splicing matching degree of the three-dimensional layout maps corresponding to each splicing position may be obtained. After the splicing matching degree corresponding to each splicing position is obtained, the highest splicing matching degree may be identified, the two three-dimensional layout maps corresponding to it may be obtained, and these two three-dimensional layout maps may be determined as a reference image pair. It will be appreciated that the number of reference image pairs may be one or more. After the reference image pair is acquired, the splicing positions corresponding to the three-dimensional layout maps can be adjusted based on the reference image pair and the gap, so that the optimized splicing positions can be obtained.
In some examples, the reference image pair may include: a first image pair located in a first direction and a second image pair located in a second direction. In this case, adjusting the splicing positions corresponding to the three-dimensional layout maps based on the reference image pair and the gap to obtain the optimized splicing positions may include: determining a first adjustment distance of the splicing position in the first direction based on the first image pair and the gap; determining a second adjustment distance of the splicing position in the second direction based on the second image pair and the gap; and adjusting the splicing position in the first direction and the second direction based on the first adjustment distance and the second adjustment distance respectively, to obtain the optimized splicing position.
The first direction may be a length direction for a room, the second direction may be a width direction for the room, and the gap existing at the splicing position may include at least one of: a length gap in the length direction and a width gap in the width direction. Therefore, in order to ensure the quality and effect of optimizing the splicing position, the splicing position may be optimized twice, i.e., in the length direction and the width direction, respectively.
Specifically, after the first image pair and the gap are acquired, the first adjustment distance of the splicing position in the length direction may be determined based on them; after the second image pair and the gap are acquired, the second adjustment distance of the splicing position in the width direction may be determined based on them. The splicing position may then be adjusted in the first direction and the second direction based on the first adjustment distance and the second adjustment distance respectively, to obtain the optimized splicing position.
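For illustration, the reference-pair-based adjustment can be sketched as follows; the data model here (each splicing position as a 2D point, the pair with the highest splicing matching degree kept fixed, and the measured length/width gaps applied as the two adjustment distances) is an assumption, not the claimed implementation:

```python
def optimize_splice_positions(positions, match_degrees, length_gap, width_gap):
    """positions: {pair_name: (x, y)} splice positions; match_degrees:
    {pair_name: splicing matching degree}. The pair with the highest matching
    degree serves as the fixed reference; every other splice position is
    shifted by the first adjustment distance along the length (x) direction
    and the second adjustment distance along the width (y) direction."""
    reference = max(match_degrees, key=match_degrees.get)
    optimized = {}
    for name, (x, y) in positions.items():
        if name == reference:
            optimized[name] = (x, y)            # reference pair stays fixed
        else:
            optimized[name] = (x + length_gap, y + width_gap)
    return optimized
```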
Step S1002: and splicing all the three-dimensional layout maps based on the optimized splicing positions to generate a three-dimensional house type corresponding to the room.
After the optimized splicing positions are obtained, all the three-dimensional layout maps can be spliced based on the optimized splicing positions, so that a three-dimensional house type corresponding to a room can be generated, and the generated three-dimensional house type meets the set requirements.
In the embodiment, all the splicing positions are optimized based on the existence of a gap at one splicing position in the spliced room type data to obtain the optimized splicing positions, and then all the three-dimensional layout maps are spliced based on the optimized splicing positions, so that the three-dimensional room type which meets the setting requirement and corresponds to a room can be obtained, the quality and the effect of generating the three-dimensional room type are further ensured, and the stability and the reliability of the method are improved.
Fig. 11 is a schematic flowchart of a method for generating a three-dimensional house type according to an embodiment of the present application; on the basis of any one of the above embodiments, with reference to fig. 11, the method in this embodiment may further include:
step S1101: identifying the entities included in the room and the entity feature information.
Step S1102: and carrying out fusion processing on the entity, the entity characteristic information and the three-dimensional house type to generate a target three-dimensional house type.
After at least two images of a room from different perspectives are acquired, the at least two images may be analyzed to identify the entities and entity feature information included in the room. The entities included in the room may include: doors and windows, furniture, household appliances, and the like. The entity feature information may include the entity outline and the entity geometric dimensions, and the entity feature data comprises at least one of the following: the starting point of the entity, the ending point of the entity, the identified geometric dimensions of the entity, the type information of the entity, and the spatial position relationship between the entity and other entities.
After the entity and the entity characteristic information are obtained, the entity characteristic information and the three-dimensional house type can be fused, so that a target three-dimensional house type fused with the entity is generated, a user can more visually obtain the entity included in a room through the target three-dimensional house type, the quality and the effect of rendering or decoration on the target three-dimensional house type are guaranteed or improved, and the practicability of the method is further improved.
On the basis of any one of the above embodiments, in order to improve the practicability of the method, after the three-dimensional house type corresponding to the room is generated, the method of the embodiment may further include:
step S1201: and generating a construction house type corresponding to the three-dimensional house type.
Step S1202: and obtaining construction checking information according to the three-dimensional house type and the construction house type.
After the three-dimensional house type is obtained, construction operation can be performed based on the three-dimensional house type, so that a construction house type corresponding to the three-dimensional house type can be generated. After the construction house type is acquired, the three-dimensional house type and the construction house type can be compared, so that construction check information can be acquired, and the construction check information is used for identifying matching information between the construction house type and the three-dimensional house type, for example: matching information between a door in the three-dimensional house type and a door in the construction house type, matching information between a wall in the three-dimensional house type and a wall in the construction house type, and the like.
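For illustration only, the comparison that produces the construction checking information might look like the following sketch, assuming each house type object (door, wall, etc.) is summarized by a single measured size and the tolerance is a hypothetical value:

```python
def construction_check(design_sizes, built_sizes, tolerance=0.02):
    """design_sizes / built_sizes: {object_name: size in meters} for the
    three-dimensional house type and the construction house type. Returns
    per-object matching information: 'match', 'mismatch' (a candidate for
    correction based on the three-dimensional house type), or 'missing'."""
    report = {}
    for name, expected in design_sizes.items():
        actual = built_sizes.get(name)
        if actual is None:
            report[name] = "missing"
        elif abs(actual - expected) > tolerance:
            report[name] = "mismatch"
        else:
            report[name] = "match"
    return report
```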
In some examples, after the construction checking information is acquired, the construction checking information can be displayed, so that a user can quickly and directly acquire the quality and effect of construction operation based on the three-dimensional house type through the construction checking information, and the practicability of the method is further improved.
In some examples, after the construction verification information is acquired, a house type object that does not meet the setting requirement in the construction house type may be extracted, for example: doors, windows, walls, etc. After the house type object is obtained, the construction house type can be corrected based on the data corresponding to the house type object in the three-dimensional house type, so that the corrected construction house type can be obtained, and the corrected construction house type meets the setting requirement, so that the quality and the effect of construction operation based on the three-dimensional house type are guaranteed, and the construction house type can meet the setting requirement of a user.
On the basis of any one of the above embodiments, after generating a three-dimensional room type corresponding to a room, the method in this embodiment may further include:
step S1301: and acquiring a display request of the three-dimensional house type.
Step S1302: and displaying the three-dimensional house type by utilizing a setting device based on the display request.
After the three-dimensional house type corresponding to the room is obtained, when the user needs to display the three-dimensional house type, the user can input execution operation to the three-dimensional house type generation device, and after the execution operation is obtained, a display request of the three-dimensional house type can be generated; and then displaying the three-dimensional house type by utilizing the setting equipment based on the display request.
In some examples, the presentation request may include at least one of: an augmented reality display request, a virtual reality display request, a mixed reality display request, and an image reality display request; correspondingly, the setting device may include at least one of: augmented reality equipment, virtual reality equipment, mixed reality equipment, image reality equipment.
Specifically, when an Augmented Reality (AR) display request is obtained, the corresponding setting device may be an AR device; it may be understood that the AR device may be a head-mounted display device, for example AR glasses, and the obtained three-dimensional house type can then be displayed using the AR device based on the AR display request.
When a Virtual Reality (VR) display request is obtained, the corresponding setting device may be a VR device; it may be understood that the VR device may be a head-mounted display device, for example VR glasses, and the obtained three-dimensional house type can then be displayed using the VR device based on the VR display request.
When a Mixed Reality (MR) display request is obtained, the corresponding setting device may be an MR device; it may be understood that the MR device may be a head-mounted display device, for example MR glasses, and the obtained three-dimensional house type can then be displayed using the MR device based on the MR display request.
When an image reality (CR) display request is obtained, the corresponding setting device may be a CR device; it may be understood that the CR device may be a head-mounted display device, for example CR glasses, and the obtained three-dimensional house type can then be displayed using the CR device based on the CR display request.
In the embodiment, the three-dimensional house type is displayed by acquiring the display request of the three-dimensional house type and utilizing the setting device based on the display request, so that the three-dimensional house type can be displayed based on the display request and the setting device when the display requirement for the three-dimensional house type exists, a user can directly know the house layout and the room effect of the three-dimensional house type, and the practicability of the method is further improved.
On the basis of any one of the above embodiments, after generating a three-dimensional room type corresponding to a room, the method in this embodiment may further include:
step S1401: and acquiring the generation quality of the three-dimensional house type.
Step S1402: and when the generation quality does not meet the set condition, generating an image re-shooting request so as to re-acquire at least two images of different view angles of the room based on the image re-shooting request.
After the three-dimensional house type is obtained, it may be analyzed to obtain its generation quality. The specific manner of obtaining the generation quality is not limited in this embodiment, and a person skilled in the art may set it according to a specific application scenario or application requirement, for example: an evaluation rule for analyzing the three-dimensional house type is preset, and the three-dimensional house type is analyzed using the evaluation rule, so that the generation quality of the three-dimensional house type can be obtained; alternatively, a machine learning model for determining the generation quality of the three-dimensional house type is configured in advance, and after the three-dimensional house type is acquired, it may be input into the machine learning model, so that the generation quality of the three-dimensional house type can be obtained.
It is understood that the generation quality may be represented in a scoring or ranking manner, for example, the generation quality is 80 points, 90 points, or 95 points, etc.; alternatively, the generated quality may be a first level for identifying a higher quality, or the generated quality may be a second level for identifying a general quality, or the generated quality may be a third level for identifying a lower quality, and so on.
After the generation quality is acquired, whether it meets a setting condition may be detected, where the setting condition is configured in advance for analyzing the three-dimensional house type; it is understood that the setting condition may differ for different application scenarios. When the generation quality does not meet the setting condition, it indicates that the generated three-dimensional house type cannot meet the design requirement of the user; at this time, an image re-shooting request can be generated, at least two images of different view angles of the room are obtained again based on the image re-shooting request, and the three-dimensional house type can then be regenerated based on the newly obtained images, so that the regenerated three-dimensional house type can meet the design requirement of the user. When the generation quality meets the setting condition, it indicates that the generated three-dimensional house type can meet the design requirement of the user, which further ensures the generation quality and effect of the three-dimensional house type.
For example, a setting condition for analyzing the three-dimensional house type is preset, and the setting condition includes a minimum quality limit value that meets the design requirement of the user, for example 90 points. The generation quality of the three-dimensional house type may be acquired after the three-dimensional house type is obtained. When the generation quality is 93 points or 95 points, the generation quality of the three-dimensional house type meets the setting condition, and the generated three-dimensional house type may be output. When the generation quality is 85 points or 88 points, the generation quality of the three-dimensional house type does not meet the setting condition; at this time, an image re-shooting request can be generated, at least two images of different view angles of the room are re-acquired based on the image re-shooting request, the three-dimensional house type is re-established based on the re-acquired images, and its generation quality is acquired again; when the generation quality meets the setting condition, the re-established three-dimensional house type can be output.
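The gating logic of this example can be sketched as follows (the 90-point minimum quality limit is taken from the example above and is illustrative only):

```python
def check_generation_quality(quality, min_quality=90):
    """quality: generation-quality score of the three-dimensional house type.
    Returns 'output' when the set condition (score >= minimum quality limit)
    is met, otherwise 'reshoot', meaning an image re-shooting request should
    be generated to re-acquire at least two images of the room."""
    return "output" if quality >= min_quality else "reshoot"
```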
In this embodiment, by obtaining the generation quality of the three-dimensional house type, when the generation quality does not meet a set condition, an image rephotography request is generated to reacquire at least two images of different viewing angles of a room based on the image rephotography request, which effectively realizes that the three-dimensional house type can be regenerated based on the reacquired at least two images, so that the regenerated three-dimensional house type can meet the design requirement of a user, and the generation quality and effect of the three-dimensional house type are further improved.
In a specific application, referring to fig. 12, taking the acquisition of four images of a room by a mobile phone as an example, the embodiment of the present application provides a method for generating a three-dimensional house type. The method can use several hand-shot images taken at different angles and, based on these images, perform house type reconstruction, house type splicing, global optimization and the like, so as to finally obtain the complete house type. This can greatly release user potential and promote the development of the intelligent home decoration industry. In addition, the execution subject of the method may be a three-dimensional house type generation apparatus, and the generation apparatus may include: an input module, a 2d layout detection module, an internal and external parameter calibration module, a single-image 3d layout reconstruction module, a layout splicing module, a global optimization module, an entity identification module and an output module, wherein the 2d layout detection module is in communication connection with the input module. Specifically, the method may comprise the following steps:
step 1: and acquiring four hand-shot pictures through the input module.
Since the field angle of the main camera of most user mobile phones is about 60 degrees, the 4 pictures can be shot diagonally from the four wall corners, as shown in fig. 3.
It should be noted that a different number of hand-shot images may be input for different scenarios; for example, in some application scenarios, 2, 3 or more hand-shot images may also be input for the subsequent whole-house layout reconstruction. In addition, the shooting position, shooting angle, and the like of the hand-shot images may be adjusted arbitrarily, as long as there is an overlapping area between the shot images.
Step 2: acquiring four 2d layout images corresponding to the four hand-shot images by using the 2d layout detection module.
Step 3: acquiring camera parameters corresponding to the four hand-shot images by using the internal and external parameter calibration module.
Specifically, referring to fig. 13, when camera parameters corresponding to four images are determined, an internal and external parameter calibration algorithm based on vanishing points may be used to obtain the camera parameters, and when the image acquisition device is a main camera of a mobile phone, distortion of the main camera of the mobile phone is relatively small, so calibration of a distortion coefficient of the camera may not be considered. When the camera parameters include camera internal parameters and camera external parameters, the following describes in detail the respective steps included in determining the camera parameters corresponding to the four images:
step 3.1: and (5) extracting line segments.
The four hand-shot images are obtained through the mobile phone, and line segment extraction is performed on them through a Line Segment Detector (LSD for short) or a deep learning network, so as to obtain the line segments included in each hand-shot image.
Step 3.2: vanishing point calculation.
Based on the line segments included in each image, candidate vanishing points corresponding to the line segments are computed. Using the constraint that the directions of the three vanishing points are mutually perpendicular, the number of line segments covered by each group of three mutually perpendicular vanishing points is counted, and the group covering the largest number of line segments is taken as the target group of vanishing points (vpx, vpy, vpz), where vpx, vpy and vpz respectively represent the vanishing points in the x, y and z directions.
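A simplified sketch of this selection step, assuming (for illustration) that the candidate vanishing points and line segment directions have already been back-projected to unit 3D direction vectors, so that the image-space voting details are omitted:

```python
import itertools
import math

def pick_vanishing_triple(candidates, segment_dirs, angle_tol=math.radians(5)):
    """candidates: candidate vanishing directions as unit 3D vectors;
    segment_dirs: unit 3D directions of the extracted line segments.
    Returns the mutually perpendicular triple of candidates that covers
    the largest number of line segments."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def perpendicular(u, v):
        return abs(dot(u, v)) < math.sin(angle_tol)

    def covered(d, triple):
        # a segment is covered when it is nearly parallel to one direction
        return any(abs(dot(d, vp)) > math.cos(angle_tol) for vp in triple)

    best, best_count = None, -1
    for triple in itertools.combinations(candidates, 3):
        if all(perpendicular(u, v) for u, v in itertools.combinations(triple, 2)):
            count = sum(covered(d, triple) for d in segment_dirs)
            if count > best_count:
                best, best_count = triple, count
    return best
```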
Step 3.3: camera intrinsic parameter calculation.
Based on the three vanishing points (vpx, vpy, vpz) obtained previously, the calculation of camera intrinsic parameters is performed.
The camera intrinsic matrix is denoted by K, which is expressed as follows:

    K = [ fx   0   cx ]
        [ 0   fy   cy ]
        [ 0    0    1 ]

where fx is the focal length information in the X direction, fy is the focal length information in the Y direction, cx is the coordinate information of the principal point in the X direction, and cy is the coordinate information of the principal point in the Y direction. It is usually assumed that fx = fy = f, so K has only three unknowns: the focal length f and the principal point coordinates cx and cy.
In addition, w represents the corresponding intermediate result after the camera internal reference matrix is operated, K is calculated by w, and the relation between w and K is as follows: w ═ w (KK)T)-1For each group of vanishing points vpi and vpj, three vanishing points can be obtained from one graph, and any two vanishing points are vertical; a linear equation can be generated for the elements of w:
Figure BDA0002974863190000312
The three pairs of vanishing points together give the constraint system A w = 0, where A is a 3 x 4 matrix (the three vanishing point pairs yield three equations) and w stacks the unknown elements of the matrix w. The vector w is obtained from the null space of A, and the intrinsic matrix K is then recovered by Cholesky decomposition of w, completing the calibration of the camera intrinsics.
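A minimal sketch of the intrinsic calibration in steps 3.2-3.3, assuming zero skew and fx = fy as stated above. The null vector of A is taken via SVD, and f, cx, cy are read directly off the elements of w, an equivalent shortcut for this parametrization rather than an explicit Cholesky factorization:

```python
import numpy as np

def intrinsics_from_vps(vpx, vpy, vpz):
    """Recover K = [[f,0,cx],[0,f,cy],[0,0,1]] from three mutually
    orthogonal vanishing points in pixel coordinates, via the constraint
    vp_i^T w vp_j = 0 with w = (K K^T)^{-1}.
    With zero skew and fx = fy, w is proportional to
    [[w1, 0, w2], [0, w1, w3], [w2, w3, w4]]."""
    rows = []
    for (u1, u2), (v1, v2) in [(vpx, vpy), (vpx, vpz), (vpy, vpz)]:
        # expand vp_i^T w vp_j = 0 for homogeneous (u1, u2, 1), (v1, v2, 1)
        rows.append([u1 * v1 + u2 * v2, u1 + v1, u2 + v2, 1.0])
    A = np.array(rows)                       # the 3 x 4 system A w = 0
    _, _, Vt = np.linalg.svd(A)
    w1, w2, w3, w4 = Vt[-1]                  # null vector, defined up to scale
    cx, cy = -w2 / w1, -w3 / w1              # ratios are scale-invariant
    f = np.sqrt(w4 / w1 - cx * cx - cy * cy)
    return np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
```

On exact synthetic vanishing points (columns of K R, dehomogenized) the original K is recovered up to floating point error.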
Step 3.4: camera extrinsic parameter calculation.
For each hand-shot image, assume that the origin of the world coordinate system coincides with that of the camera coordinate system; the camera extrinsics between the two then consist of a pure rotation with no translation, so only the rotation matrix between the world coordinate system and the camera coordinate system needs to be computed to complete the extrinsic calibration.
The relationship between vanishing points and camera parameters is as follows:
\alpha \, Vp_z = K r_z

\alpha \, Vp_x = K r_x

wherein α is a scale coefficient, Vp_z and Vp_x are the vanishing points in the Z direction and the X direction respectively, K is the camera intrinsic matrix, and r_x, r_y and r_z are the columns of the rotation matrix between the world coordinate system and the camera coordinate system corresponding to the X, Y and Z directions, respectively.
Based on the above formula, an expression between the rotation vector and the vanishing point can be obtained as follows:
r_x = \frac{K^{-1} Vp_x}{\lVert K^{-1} Vp_x \rVert}, \qquad r_z = \frac{K^{-1} Vp_z}{\lVert K^{-1} Vp_z \rVert}
Here Vp_x and Vp_z denote the vanishing points in the x and z directions; the direction vectors of these two axes are computed directly from the vanishing points, and once the vectors of any two directions have been obtained, the vector of the remaining direction is the cross product of the first two, i.e. r_y = r_z × r_x. This yields the rotation matrix R and completes the calibration of the camera extrinsics.
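The extrinsic calculation can be sketched as follows, assuming the x- and z-direction vanishing points are given in pixel coordinates. Note that a vanishing point determines an axis only up to sign, so each recovered column carries an inherent sign ambiguity:

```python
import numpy as np

def rotation_from_vps(K, vpx, vpz):
    """Rotation matrix from the x- and z-direction vanishing points:
    r_i = K^{-1} vp_i (normalized); the remaining axis r_y closes the
    frame via a cross product."""
    Kinv = np.linalg.inv(K)
    rx = Kinv @ np.array([vpx[0], vpx[1], 1.0])
    rx /= np.linalg.norm(rx)
    rz = Kinv @ np.array([vpz[0], vpz[1], 1.0])
    rz /= np.linalg.norm(rz)
    ry = np.cross(rz, rx)          # third axis from the first two
    return np.column_stack([rx, ry, rz])
```

The result is orthonormal and each measured column is parallel (up to sign) to the true rotation axis; resolving the signs would need an extra scene cue not discussed here.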
Step 4: obtain the 3d layouts corresponding to the four 2d layouts using the single-image 3d layout reconstruction module.
Specifically, based on the hand-shot image, the 2d layout image and the camera parameters, the 3d layout reconstruction module performs 3d reconstruction for each pixel point in the 2d layout, producing a single-image 3d layout.
When the camera captures images, its height above the ground is set to h (for example, 1.5 m or 1.6 m), and the mapping between the world coordinate system and the pixel coordinate system is:
\alpha \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
where α is a scale coefficient, (u, v) is a point in the pixel coordinate system, K is the camera intrinsic matrix, R is the rotation matrix (the camera extrinsics), and (X, Y, Z) is a point in the world coordinate system, X being its value along the horizontal direction, Y along the vertical direction and Z along the height direction; for points on the floor, Z is fixed by the camera height h.
Specifically, the following constraint relationships exist between pixel points in the 2d layout and points in the 3d layout. For each point on the intersection line between a wall and the floor, the coordinate Z in the world coordinate system is known, so the mapping from the pixel coordinate system to the world coordinate system can be carried out. For points on a wall surface, the depth from the camera is consistent with that of the points on the wall-floor intersection line, so every point on each wall can be reconstructed in 3d. For points on the ceiling, the height of the ceiling above the floor is obtained from the 3d coordinates of the points on the intersection line between each wall and the ceiling, so every ceiling point can likewise be reconstructed in 3d. In this way, a 3d reconstruction operation is applied to the 2d pixel points, yielding the 3d layout corresponding to the 2d layout map.
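A sketch of the floor-point reconstruction described above: the pixel's viewing ray is rotated into the world frame and intersected with the ground plane. The convention that the camera sits at the world origin with the floor at Z = -h is an assumption here; the description only fixes the camera height h:

```python
import numpy as np

def backproject_to_floor(u, v, K, R, h):
    """Intersect the viewing ray of pixel (u, v) with the ground plane.
    The camera is at the world origin, h metres above the floor, so the
    floor is modelled as the plane Z = -h (sign convention assumed).
    K is the intrinsic matrix, R the world-to-camera rotation.
    Returns the 3d point (X, Y, Z)."""
    ray = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in world frame
    if abs(ray[2]) < 1e-9:
        raise ValueError("ray parallel to the floor, no intersection")
    t = -h / ray[2]
    if t <= 0:
        raise ValueError("floor intersection behind the camera")
    return t * ray
```

Wall points then reuse the depth of the floor point at the base of their column, and ceiling points reuse the recovered ceiling height, exactly as the constraints above describe.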
Step 5: determine the splicing positions of the 3d layouts using the layout splicing module, and perform the splicing operation based on those positions.
In order to obtain the overall three-dimensional house type of the room, the 3d layouts corresponding to the 4 views need to be spliced. The layout splicing module finds the corresponding splicing position between each two adjacent 3d layouts, after which the splicing operation can be carried out based on those positions. Specifically, referring to fig. 14, the splicing operation based on the splicing position includes the following steps:
Step 5.1: preprocessing.
The pre-processing procedure can implement the following two functions:
(1) The four hand-shot images are ordered so as to obtain the adjacency relation between any two of them, i.e. to determine which two views are adjacent to each other.
The adjacency relation between any two hand-shot images is determined by a feature point matching operation. Specifically, the number of matched feature points between any two hand-shot images is computed: the more matched feature points there are, the more likely the two images are adjacent; the fewer there are, the less likely they are adjacent.
(2) The views are scaled so that the wall heights of all views are equal before splicing.
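Function (1) above, ordering the views by matched-feature-point counts, can be sketched greedily; the dictionary representation of pairwise match counts is an assumption for illustration:

```python
def order_views(match_counts, n=4):
    """Greedy ordering of n views into a sequence around the room:
    starting from view 0, repeatedly attach the remaining view with the
    most feature matches to the current end.
    `match_counts[(i, j)]` is the number of matched feature points
    between views i and j, stored with i < j."""
    def count(i, j):
        return match_counts.get((min(i, j), max(i, j)), 0)
    order = [0]
    remaining = set(range(1, n))
    while remaining:
        nxt = max(remaining, key=lambda j: count(order[-1], j))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

In a real pipeline the counts would come from descriptor matching (e.g. SIFT or ORB) between every image pair.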
Step 5.2: 2d wall image generation.
Based on the reconstructed 3d layout, a corresponding 2d wall image is generated for every wall contained in the 3d layout. A scaling relationship exists between each 2d wall image and the corresponding wall in the 3d layout, and based on this relationship the 2d wall image of every wall in the 3d layout can be obtained.
Step 5.3: registration of wall 2d images.
Based on the results of steps 5.1 and 5.2, an image registration operation is performed on the 2d wall images corresponding to different 3d layouts, yielding the splicing position corresponding to each pair of adjacent 3d layouts. Specifically, the matching algorithm may extract and match feature points using traditional feature descriptors such as SIFT, or may directly judge the similarity between two images with a deep learning algorithm, and then determine the splicing position corresponding to the 3d layouts.
After the splicing walls corresponding to a splicing position are obtained, it can be judged whether the two determined splicing walls are reasonable. For example, if the width difference between the two walls is too large, or the walls occlude or intersect each other after splicing, the splicing position or wall pair is deemed unsuitable for splicing; in that case another pair of splicing walls with the next-strongest matching features is sought, until a suitable registered wall pair is obtained.
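The fallback to the next-strongest wall pair can be sketched as follows; the candidate representation and the plausibility callback are assumptions for illustration, not the patent's exact interfaces:

```python
def pick_splice_pair(candidates, reasonable):
    """Walk candidate wall pairs in descending match strength and return
    the first one that passes the plausibility check (width difference,
    occlusion/intersection after splicing, etc.); None if all fail."""
    for pair in sorted(candidates, key=lambda c: c["score"], reverse=True):
        if reasonable(pair):
            return pair
    return None
```

Here `reasonable` would encapsulate the checks named above: comparable wall widths, and no mutual occlusion or crossing in the pre-spliced result.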
Step 6: perform the optimization operation using the global optimization module.
After the splicing positions are obtained, the splicing operation can be performed on the four views based on them; that is, matching surfaces and splicing positions are determined between views 1-2, 2-3, 3-4 and 4-1. However, after the image splicing operation is performed at these positions, the result may fail to close end to end, as shown in fig. 15.
Therefore, after the splicing positions are acquired and the splicing operation has been performed at them, it can be judged whether the spliced image data fails to close properly: whether the seams between all views coincide with the obtained splicing positions is checked. If not, the views are optimized; if no gap a is present, the splicing positions are accurate and no fine adjustment is needed.
For example, referring to fig. 15, after view 12 is obtained by stitching views 1 and 2, and view 34 by stitching views 3 and 4, stitching view 12 to view 34 may fail to close correctly, i.e. a gap of length a is produced. In brief, views 1-4 are pieced together, but at the place where views 2 and 3 should originally meet, they do not join and a gap of length a appears.
To solve this problem, the global optimization module considers the gap at the splicing position between views 12 and 34, and uses the pair with the stronger registration features to guide the splicing distance of the other pair, thereby ensuring that the spliced house type data closes end to end.
For example, when the number of matched feature points between views 1 and 2 is smaller than that between views 3 and 4, the positions of views 1 and 2 can be adjusted with views 3 and 4 as the reference so that the gap a disappears. In addition, after the splice between views 12 and 34 has been optimized, the splicing distance can be optimized for the two pairs views 1/4 and views 2/3, completing the global optimization. In this process the pairs views 1/4 and views 2/3 guide each other: the image pair with more matched feature points is taken as the reference and the other pair is adjusted, so that independent optimization can be carried out in the length direction and the width direction, effectively guaranteeing the quality and effect of the optimization.
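One plausible way to split the closure gap a between two opposite splices is in proportion to registration strength; the exact weighting rule is not specified in the description, so this is an illustrative scheme:

```python
def distribute_gap(gap, strength_a, strength_b):
    """Split a closure gap between two opposite splices so that the pair
    with weaker registration features absorbs more of the correction.
    `strength_a` / `strength_b` could be matched-feature counts; the
    returned shifts satisfy shift_a + shift_b == gap."""
    total = strength_a + strength_b
    # weight each shift by the *other* pair's strength: the strong pair
    # acts as the reference, the weak pair moves the most
    shift_a = gap * strength_b / total
    shift_b = gap * strength_a / total
    return shift_a, shift_b
```

Applying this once per direction (length and width) matches the independent per-axis optimization described above.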
Step 7: hard-furnishing identification.
After the complete 2d and 3d layout images of the room have been acquired, the hard-furnishing identification module identifies the position and size of the doors and windows on each wall; specifically, door and window positions can be detected and identified with a CNN. The identified entities are then fused with the three-dimensional house type, producing a three-dimensional house type carrying the entities and their characteristics. Through this series of operations, the four input 2d hand-shot images captured at different angles yield the 2d house type layout and the 3d house type layout of the whole house, and the entity information such as door and window positions helps improve the quality and effect of browsing the three-dimensional house type.
With the generation method provided by this embodiment of the application, a user shoots several hand-shot pictures of a house at different angles with a mobile phone; the pictures are then analyzed and processed, and the house type of the room is reconstructed with a layout splicing algorithm, obtaining house type information such as the 2d layout, the 3d layout, and the doors and windows of the whole room. This effectively solves the inconvenience of image acquisition with panoramic equipment, greatly lowers the usage threshold of the three-dimensional house type generation method, widens its application range, is of great value for increasing the number of users in the field of intelligent home decoration, and further improves the practicability of the method.
Fig. 16 is a schematic flowchart of a method for generating a three-dimensional house type according to an embodiment of the present application; referring to fig. 16, the present embodiment provides a method for generating a three-dimensional house type, and the execution subject of the method may be a three-dimensional house type generating device, and it is understood that the three-dimensional house type generating device may be implemented as software, or a combination of software and hardware. Specifically, the method for generating the three-dimensional house type may include:
step S1601: at least two images of a room from different perspectives are acquired.
The implementation manner and the implementation effect of the above steps in this embodiment are similar to the implementation manner and the implementation effect of step S201 in the above embodiment, and the above statements may be specifically referred to, and are not repeated herein.
Step S1602: a two-dimensional layout corresponding to each of the at least two images is generated.
Step S1603: camera parameters corresponding to the at least two images are determined.
Step S1604: based on the at least two images, the two-dimensional layout map, and the camera parameters, a three-dimensional house type corresponding to the at least two images is generated.
The implementation manner and implementation effect of the above steps in this embodiment are similar to those of steps S401 to S403 in the above embodiment, and the above statements may be specifically referred to, and are not repeated herein.
According to the technical scheme, the three-dimensional house type corresponding to the single image can be obtained through 2d layout drawing identification operation, camera parameter identification operation and other methods based on at least two images of different visual angles of a room, and therefore the practicability of the method is effectively improved.
In some examples, the images have different shooting angles, and there is an overlapping region between the shooting regions corresponding to the shooting angles of at least two images.
In some examples, the two-dimensional layout includes wall information in a room.
In some examples, determining camera parameters corresponding to the at least two images may include: calculating vanishing point information corresponding to the at least two images; calculating camera internal parameters corresponding to the at least two images according to the vanishing point information; and determining camera external parameters corresponding to the at least two images according to the vanishing point information and the camera internal parameters.
In some examples, generating a three-dimensional house type corresponding to the at least two images based on the at least two images, the two-dimensional layout map, and the camera parameters may include: acquiring height information between image shooting positions corresponding to the at least two images and the ground; determining a spatial constraint relationship corresponding to pixel points in the at least two images based on the height information, the at least two images and the camera parameters; and performing three-dimensional reconstruction operation on the pixel points in the two-dimensional layout diagram based on the space constraint relation to generate three-dimensional house types respectively corresponding to the at least two images.
The specific implementation manner, implementation effect, and implementation principle of the method in this embodiment are similar to those of the method in the embodiment corresponding to fig. 2 to fig. 15, and the above statements may be specifically referred to, and are not repeated herein.
Fig. 17 is a schematic flowchart of another method for generating a three-dimensional house type according to an embodiment of the present application; referring to fig. 17, the present embodiment provides another three-dimensional house type generation method, and the execution subject of the method may be a three-dimensional house type generation device, and it is understood that the three-dimensional house type generation device may be implemented as software, or a combination of software and hardware. Specifically, the method for generating the three-dimensional house type may include:
step S1701: at least two three-dimensional maps of different perspectives of a room are acquired.
Step S1702: determining a stitching location corresponding to the at least two three-dimensional layout maps.
Step S1703: and splicing the at least two three-dimensional layout maps according to the splicing position to generate a three-dimensional house type corresponding to the room.
The above steps are explained in detail below:
step S1701: at least two three-dimensional maps of different perspectives of a room are acquired.
Specifically, the embodiment does not limit the specific implementation manner of obtaining the at least two three-dimensional layout diagrams at different viewing angles of the room, and a person skilled in the art can set the three-dimensional layout diagrams according to specific application requirements and design requirements, for example: at least two hand-shot images aiming at a room can be obtained, and at least two three-dimensional layout maps can be obtained by analyzing and processing the hand-shot images; or, at least two three-dimensional layout maps corresponding to a room are stored in advance, and the at least two three-dimensional layout maps can be obtained by accessing a preset area.
It should be noted that the at least two three-dimensional layouts have different viewing angles, that is, any two of the at least two three-dimensional layouts have different viewing angles, so that the reconstruction operation of the region in the room within a larger viewing angle range can be realized. In other examples, in order to accurately obtain the three-dimensional house type corresponding to the whole room, an overlapping region exists between any two three-dimensional layout maps in at least two three-dimensional layout maps, so that reconstruction operation on regions in all view angle ranges in the room can be realized.
Step S1702: determining a stitching location corresponding to the at least two three-dimensional layout maps.
Step S1703: and splicing the at least two three-dimensional layout maps according to the splicing position to generate a three-dimensional house type corresponding to the room.
The implementation manner and implementation effect of the above steps in this embodiment are similar to those of steps S203 to S204 in the above embodiment, and the above statements may be specifically referred to, and are not repeated herein.
According to the technical scheme, the splicing positions corresponding to the at least two three-dimensional layout maps are determined by acquiring the at least two three-dimensional layout maps at different visual angles of the room, then the at least two three-dimensional layout maps are spliced according to the splicing positions, and the three-dimensional room type corresponding to the room is generated, so that the method for splicing the three-dimensional layout maps corresponding to different views is effectively realized, the problem that the spliced views have large parallax splicing when the view splicing operation is performed on one room can be effectively solved, and the practicability of the method is further improved.
In some examples, the stitching at least two three-dimensional layout maps according to the stitching location, and the generating the three-dimensional house type corresponding to the room may include: splicing all the three-dimensional layout maps based on the splicing positions to obtain spliced room type data corresponding to the rooms; detecting whether gaps exist at all splicing positions in the spliced house type data or not; determining the spliced house type data as a three-dimensional house type when gaps do not exist at all splicing positions in the spliced house type data; or when a gap exists at a splicing position in the spliced house type data, optimizing the spliced house type data to generate the three-dimensional house type.
In some examples, optimizing the spliced posterior house type data to generate a three-dimensional house type includes: optimizing all splicing positions based on the fact that a gap exists at one splicing position in the spliced house type data to obtain the optimized splicing position; and splicing all the three-dimensional layout maps based on the optimized splicing positions to generate a three-dimensional house type corresponding to the room.
In some examples, optimizing all of the splice positions based on a gap existing at a splice position in the spliced house type data may include: acquiring the splicing matching degree of the at least two three-dimensional layout maps corresponding to each splicing position; when the splicing matching degree is greater than or equal to a preset threshold, determining the two three-dimensional layout maps corresponding to that splicing matching degree as a reference image pair; and adjusting the splicing positions corresponding to the at least two three-dimensional layout maps based on the reference image pair and the gap to obtain the optimized splicing positions.
In some examples, the reference image pair comprises: a first image pair located in a first direction and a second image pair located in a second direction; adjusting the splicing positions corresponding to the at least two three-dimensional layout maps based on the reference image pair and the gap, and obtaining the optimized splicing positions, may include: determining a first adjustment distance of the splice location in the first direction based on the first image pair and the gap; determining a second adjustment distance of the splice location in the second direction based on the second image pair and the gap; and adjusting the splicing position in the first direction and the second direction based on the first adjustment distance and the second adjustment distance respectively, obtaining the optimized splicing positions.
The specific implementation manner, implementation effect, and implementation principle of the method in this embodiment are similar to those of the method in the embodiment corresponding to fig. 2 to fig. 15, and the above statements may be specifically referred to, and are not repeated herein.
Fig. 18 is a schematic structural diagram of a three-dimensional house type generation apparatus according to an embodiment of the present application; referring to fig. 18, the present embodiment provides a three-dimensional house type generation apparatus, which may perform the three-dimensional house type generation method shown in fig. 2, and specifically, the generation apparatus may include:
the first acquisition module 11 is configured to acquire at least two images of a room from different viewing angles;
a first generating module 12 for generating three-dimensional layout diagrams corresponding to at least two images, respectively;
a first determining module 13, configured to determine a stitching position corresponding to the three-dimensional layout map based on the at least two images;
and the first processing module 14 is configured to perform splicing processing on the three-dimensional layout according to the splicing position, and generate a three-dimensional house type corresponding to the room.
In some examples, the images have different shooting angles, and there is an overlapping region between the shooting regions corresponding to the shooting angles of at least two images.
In some examples, when the first generation module 12 generates the three-dimensional layout maps corresponding to the at least two images, respectively, the first generation module 12 is configured to perform: generating two-dimensional layout maps respectively corresponding to the at least two images; determining camera parameters corresponding to the at least two images; based on the at least two images, the two-dimensional layout map, and the camera parameters, a three-dimensional layout map corresponding to each of the at least two images is generated.
In some examples, the two-dimensional layout includes wall information in a room.
In some examples, the camera parameters include at least one of: camera internal reference and camera external reference; when the first generation module 12 determines camera parameters corresponding to at least two images, the first generation module 12 is configured to perform: calculating vanishing point information corresponding to the at least two images; calculating camera internal parameters corresponding to the at least two images according to the vanishing point information; and determining camera external parameters corresponding to the at least two images according to the vanishing point information and the camera internal parameters.
In some examples, when the first generation module 12 generates a three-dimensional layout map corresponding to at least two images respectively based on the at least two images, the two-dimensional layout map and the camera parameters, the first generation module 12 is configured to perform: acquiring height information between image shooting positions corresponding to the at least two images and the ground; determining a spatial constraint relationship corresponding to pixel points in the at least two images based on the height information, the at least two images and the camera parameters; and performing three-dimensional reconstruction operation on the pixel points in the two-dimensional layout diagram based on the space constraint relation to generate a three-dimensional layout diagram.
In some examples, when the first determination module 13 determines the stitching location corresponding to the three-dimensional layout map based on the at least two images, the first determination module 13 is configured to perform: extracting wall information included in each three-dimensional layout drawing; generating a two-dimensional wall surface image corresponding to the wall surface information; and determining a splicing position corresponding to the three-dimensional layout drawing based on the two-dimensional wall surface images and the at least two images corresponding to all the wall surface information.
In some examples, when the first determining module 13 generates the two-dimensional wall surface image corresponding to the wall surface information, the first determining module 13 is configured to perform: acquiring a constraint relation for generating a two-dimensional wall surface image; and generating a two-dimensional wall surface image corresponding to the wall surface information based on the constraint relation and the three-dimensional layout.
In some examples, the constraint relationship includes: the area corresponding to the wall information is positively correlated with the image resolution of the two-dimensional wall image.
In some examples, when the first determining module 13 determines the splicing position corresponding to the three-dimensional layout map based on the two-dimensional wall surface image and the at least two images corresponding to all the wall surface information, the first determining module 13 is configured to perform: acquiring image similarity corresponding to any two-dimensional wall images; determining an image adjacency relationship of at least two three-dimensional layouts based on at least two images; and determining the splicing position corresponding to the two adjacent three-dimensional layout maps based on the image similarity.
In some examples, when the first determining module 13 determines the stitching position corresponding to two adjacent three-dimensional layout maps based on the image similarity, the first determining module 13 is configured to perform: acquiring a first splicing wall surface pair corresponding to the highest image similarity; and determining the splicing positions corresponding to the two adjacent three-dimensional layout maps based on the first splicing wall pair.
In some examples, when the first determining module 13 determines the splicing position corresponding to the two adjacent three-dimensional layout maps based on the first splicing wall pair, the first determining module 13 is configured to perform: performing feature extraction operation on the first spliced wall surface to obtain a first wall surface feature and a second wall surface feature; and determining at least one splicing position corresponding to the two adjacent three-dimensional layout graphs based on the first wall surface characteristic and the second wall surface characteristic.
In some examples, when the first determination module 13 determines the image adjacency relationship of the at least two three-dimensional layouts based on the at least two images, the first determination module 13 is configured to perform: determining a first neighboring relationship corresponding to at least two images; based on the first adjacency relationship, image adjacency relationships of the at least two three-dimensional layouts are determined.
In some examples, after determining the first neighboring relationship corresponding to the at least two images, the first processing module 14 in this embodiment is configured to perform: scaling at least two images so that the wall surface heights included in all the images are the same.
In some examples, after determining the stitching location corresponding to two adjacent three-dimensional layout maps, the first processing module 14 in this embodiment is configured to perform: detecting whether the splicing position is reasonable; when the splicing position is reasonable, splicing the three-dimensional layout chart based on the splicing position; or when the splicing position is not reasonable, acquiring a second splicing wall surface pair corresponding to the second highest image similarity, and determining the splicing position corresponding to the second splicing wall surface pair.
In some examples, when the first processing module 14 detects whether the splice location is reasonable, the first processing module 14 is configured to perform the following steps: pre-splicing the three-dimensional layout drawing based on the splicing position to obtain a pre-spliced house type; identifying wall features corresponding to all walls in the pre-spliced house type; and detecting whether the splicing position is reasonable or not based on the wall features corresponding to the walls.
In some examples, the wall features include: dimensional characteristics of the wall surface; when the first processing module 14 detects whether the splicing position is reasonable or not based on the wall features corresponding to the respective walls, the first processing module 14 is configured to execute the following steps: determining the size deviation information between any two wall surfaces in the pre-spliced house type based on the corresponding size characteristics of each wall surface; when the size deviation information is smaller than a set value, determining that the splicing position is reasonable; or, when the size deviation information is greater than or equal to the set value, the determined splicing position is not reasonable.
In some examples, the wall features include: relative position features between wall surfaces; when the first processing module 14 detects whether the splicing position is reasonable or not based on the wall features corresponding to the respective walls, the first processing module 14 is configured to execute the following steps: detecting whether wall surface intersection or wall surface shielding occurs between any two wall surfaces in the pre-spliced house type or not based on the relative position characteristics of the wall surfaces; when the wall surface is crossed or shielded, the splicing position is determined to be unreasonable; or when the wall surface intersection does not occur and the wall surface shielding does not occur, the splicing position is determined to be reasonable.
In some examples, when the first processing module 14 performs a splicing process on the three-dimensional layout according to the splicing position to generate a three-dimensional house type corresponding to the room, the first processing module 14 is configured to perform the following steps: splicing all the three-dimensional layout maps based on the splicing positions to obtain spliced room type data corresponding to the rooms; detecting whether gaps exist at all splicing positions in the spliced house type data or not; determining the spliced house type data as a three-dimensional house type when gaps do not exist at all splicing positions in the spliced house type data; or when a gap exists at a splicing position in the spliced house type data, optimizing the spliced house type data to generate the three-dimensional house type.
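The gap check above may be sketched as follows; representing each splicing position by the pair of edge coordinates that should coincide, and the tolerance value, are assumptions of this illustration:

```python
GAP_TOLERANCE = 0.01  # metres; assumed tolerance below which edges "meet"

def find_gap_positions(splices):
    """splices: list of (edge_a, edge_b) one-dimensional coordinates
    that should coincide after splicing. Returns the indices of the
    splicing positions where a gap remains, so the caller can decide
    whether to trigger the optimization step."""
    return [i for i, (a, b) in enumerate(splices)
            if abs(a - b) > GAP_TOLERANCE]
```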
In some examples, when the first processing module 14 performs optimization processing on the spliced house type data to generate a three-dimensional house type, the first processing module 14 is configured to perform the following steps: optimizing all splicing positions based on the fact that a gap exists at one splicing position in the spliced house type data to obtain the optimized splicing position; and splicing all the three-dimensional layout maps based on the optimized splicing positions to generate a three-dimensional house type corresponding to the room.
In some examples, when the first processing module 14 optimizes all the splice positions based on the existence of gaps in a splice position in the splice house type data, and obtains the optimized splice position, the first processing module 14 is configured to perform the following steps: acquiring the splicing matching degree of the three-dimensional layout corresponding to each splicing position; determining the two three-dimensional layout maps corresponding to the highest splicing matching degree as a reference image pair; and adjusting the splicing position corresponding to the three-dimensional layout diagram based on the reference image pair and the gap to obtain the optimized splicing position.
In some examples, the reference image pair comprises: a first image pair located in a first direction and a second image pair located in a second direction; when the first processing module 14 adjusts the splicing position corresponding to the three-dimensional layout drawing based on the reference image pair and the gap to obtain the optimized splicing position, the first processing module 14 is configured to execute the following steps: determining a first adjustment distance of the splicing position in the first direction based on the first image pair and the gap; determining a second adjustment distance of the splicing position in the second direction based on the second image pair and the gap; and adjusting the splicing position in the first direction and the second direction based on the first adjustment distance and the second adjustment distance, respectively, to obtain the optimized splicing position.
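A sketch of the two-direction adjustment: the gap vector is projected onto the first and second directions to obtain the two adjustment distances. Modelling the gap as a single 2-D vector and using orthogonal unit directions are assumptions of this illustration:

```python
def adjustment_distance(gap_vec, direction):
    """Project the 2-D gap vector onto a unit direction vector to get
    the distance the splicing position must move along that direction."""
    return gap_vec[0] * direction[0] + gap_vec[1] * direction[1]

def optimize_splice_position(position, gap_vec,
                             first_dir=(1.0, 0.0), second_dir=(0.0, 1.0)):
    """Shift the splicing position by the first and second adjustment
    distances along their respective directions to close the gap."""
    d1 = adjustment_distance(gap_vec, first_dir)
    d2 = adjustment_distance(gap_vec, second_dir)
    return (position[0] + d1 * first_dir[0] + d2 * second_dir[0],
            position[1] + d1 * first_dir[1] + d2 * second_dir[1])
```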
In some examples, the first processing module 14 in this embodiment is configured to perform: identifying entities included in the room and entity characteristic information; and carrying out fusion processing on the entity, the entity characteristic information and the three-dimensional house type to generate a target three-dimensional house type.
In some examples, after generating the three-dimensional house type corresponding to the room, the first processing module 14 in this embodiment is further configured to: generating a construction house type corresponding to the three-dimensional house type; and obtaining construction checking information according to the three-dimensional house type and the construction house type.
In some examples, after generating the three-dimensional house type corresponding to the room, the first processing module 14 in this embodiment is further configured to: acquiring a display request of the three-dimensional house type; and displaying the three-dimensional house type by utilizing a setting device based on the display request.
In some examples, the presentation request includes at least one of: an augmented reality display request, a virtual reality display request, a mixed reality display request, and an image reality display request; correspondingly, the setting device comprises at least one of the following: augmented reality equipment, virtual reality equipment, mixed reality equipment, image reality equipment.
In some examples, after generating the three-dimensional house type corresponding to the room, the first processing module 14 in this embodiment is further configured to: acquiring the generation quality of the three-dimensional house type; and when the generation quality does not meet the set condition, generating an image re-shooting request so as to re-acquire at least two images of different view angles of the room based on the image re-shooting request.
The apparatus shown in fig. 18 can perform the method of the embodiment shown in fig. 1-15, and the detailed description of this embodiment can refer to the related description of the embodiment shown in fig. 1-15. The implementation process and technical effect of the technical solution are described in the embodiments shown in fig. 1 to fig. 15, and are not described herein again.
In one possible design, the structure of the three-dimensional house type generating device shown in fig. 18 may be implemented as an electronic device, which may be a mobile phone, a tablet computer, a server, or another device. As shown in fig. 19, the electronic device may include: a first processor 21 and a first memory 22. The first memory 22 is used for storing a program that enables the electronic device to execute the method for generating the three-dimensional house type provided in the embodiments shown in fig. 1-15, and the first processor 21 is configured to execute the program stored in the first memory 22.
The program comprises one or more computer instructions, wherein the one or more computer instructions, when executed by the first processor 21, are capable of performing the steps of:
acquiring at least two images of a room from different visual angles;
generating three-dimensional layout diagrams respectively corresponding to the at least two images;
determining a stitching position corresponding to the three-dimensional layout map based on the at least two images;
and splicing the at least two three-dimensional layout maps according to the splicing position to generate a three-dimensional house type corresponding to the room.
Further, the first processor 21 is also used to execute all or part of the steps in the embodiments shown in fig. 1-15.
The electronic device may further include a first communication interface 23 for communicating with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions for an electronic device, which includes a program for executing the method for generating a three-dimensional house type in the method embodiments shown in fig. 1 to 15.
Fig. 20 is a schematic structural diagram of a three-dimensional house type generation apparatus according to an embodiment of the present application; referring to fig. 20, the present embodiment provides a three-dimensional house type generation apparatus for performing the three-dimensional house type generation method shown in fig. 16, and specifically, the generation apparatus may include:
a second acquiring module 31, configured to acquire at least two images of a room from different viewing angles.
A second generating module 32, configured to generate two-dimensional layout maps corresponding to the at least two images, respectively.
A second determining module 33 for determining camera parameters corresponding to the at least two images.
A second processing module 34 for generating a three-dimensional house type corresponding to the at least two images based on the at least two images, the two-dimensional layout map and the camera parameters.
In some examples, the images have different shooting angles, and there is an overlapping region between the shooting regions corresponding to the shooting angles of at least two images.
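The overlap requirement can be illustrated by modelling each shooting angle as a horizontal viewing direction with an angular field of view; both the sector model and the degree values are assumptions of this sketch:

```python
def shooting_regions_overlap(yaw1, fov1, yaw2, fov2):
    """yaw*: viewing directions in degrees; fov*: angular fields of view.
    Two shooting regions overlap when the angular distance between the
    viewing directions is less than the sum of the half-fields of view."""
    half_sum = (fov1 + fov2) / 2.0
    diff = abs((yaw1 - yaw2 + 180.0) % 360.0 - 180.0)  # wrap to [0, 180]
    return diff < half_sum
```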
In some examples, the two-dimensional layout includes wall information in a room.
In some examples, when the second determination module 33 determines the camera parameters corresponding to the at least two images, the second determination module 33 is configured to perform: calculating vanishing point information corresponding to the at least two images; calculating camera internal parameters corresponding to the at least two images according to the vanishing point information; and determining camera external parameters corresponding to the at least two images according to the vanishing point information and the camera internal parameters.
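For the intrinsic-parameter step, a standard single-image calibration identity applies: with square pixels and a known principal point c, the vanishing points of two mutually orthogonal horizontal directions satisfy (v1 − c)·(v2 − c) + f² = 0. A minimal sketch; the square-pixel and centred-principal-point assumptions are this illustration's, not necessarily the disclosure's:

```python
import math

def focal_from_vanishing_points(v1, v2, principal_point):
    """Estimate the focal length (in pixels) from the vanishing points
    of two mutually orthogonal directions, assuming square pixels and
    a principal point at the image centre."""
    cx, cy = principal_point
    dot = (v1[0] - cx) * (v2[0] - cx) + (v1[1] - cy) * (v2[1] - cy)
    if dot >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return math.sqrt(-dot)
```

The camera extrinsic rotation can then be recovered by back-projecting the vanishing points through the inverse intrinsic matrix and normalising the resulting direction vectors into columns of the rotation matrix.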
In some examples, when the second processing module 34 generates a three-dimensional house type corresponding to the at least two images based on the at least two images, the two-dimensional layout map, and the camera parameters, the second processing module 34 is configured to perform: acquiring height information between image shooting positions corresponding to the at least two images and the ground; determining a spatial constraint relationship corresponding to pixel points in the at least two images based on the height information, the at least two images and the camera parameters; and performing three-dimensional reconstruction operation on the pixel points in the two-dimensional layout diagram based on the space constraint relation to generate a three-dimensional house type corresponding to at least two images.
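The height constraint in the final step pins every floor pixel to the plane Y = camera height, which fixes the scale of the back-projected ray. A sketch under simplifying assumptions (level camera, Y axis pointing down, pinhole model):

```python
def floor_point_from_pixel(u, v, fx, fy, cx, cy, camera_height):
    """Back-project pixel (u, v) onto the floor plane.
    fx, fy, cx, cy: pinhole intrinsics; camera_height: metres above
    the ground. Returns the 3-D point in camera coordinates."""
    x_ray = (u - cx) / fx
    y_ray = (v - cy) / fy   # positive for pixels below the horizon
    if y_ray <= 0:
        raise ValueError("pixel lies on or above the horizon, not on the floor")
    scale = camera_height / y_ray  # stretch the ray until Y = camera_height
    return (x_ray * scale, y_ray * scale, scale)
```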
The apparatus shown in fig. 20 can perform the method of the embodiments shown in fig. 1, 12-14 and 16, and the detailed description of this embodiment can refer to the related descriptions of the embodiments shown in fig. 1, 12-14 and 16. The implementation process and technical effect of the technical solution are described in the embodiments shown in fig. 1, fig. 12 to fig. 14, and fig. 16, and are not described again here.
In one possible implementation, the structure of the three-dimensional house type generating apparatus shown in fig. 20 may be implemented as an electronic device, which may be a mobile phone, a tablet computer, a server, or another device. As shown in fig. 21, the electronic device may include: a second processor 41 and a second memory 42. The second memory 42 is used for storing a program that enables the electronic device to execute the method for generating the three-dimensional house type provided in the embodiments shown in fig. 1, 12-14 and 16, and the second processor 41 is configured to execute the program stored in the second memory 42.
The program comprises one or more computer instructions, wherein the one or more computer instructions, when executed by the second processor 41, are capable of performing the steps of:
acquiring at least two images of a room from different visual angles;
generating two-dimensional layout maps respectively corresponding to the at least two images;
determining camera parameters corresponding to the at least two images;
based on the at least two images, the two-dimensional layout map, and the camera parameters, a three-dimensional house type corresponding to the at least two images is generated.
Optionally, the second processor 41 is further configured to perform all or part of the steps in the embodiments shown in fig. 1, 12-14, and 16.
The electronic device may further include a second communication interface 43 for the terminal to communicate with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions for an electronic device, which includes a program for executing the method for generating a three-dimensional house type in the method embodiments shown in fig. 1, 12 to 14, and 16.
Fig. 22 is a schematic structural diagram of a three-dimensional house type generation apparatus according to an embodiment of the present application; referring to fig. 22, the present embodiment provides a three-dimensional house type generation apparatus, which may execute the three-dimensional house type generation method corresponding to fig. 17, specifically, the generation apparatus may include:
a third obtaining module 51, configured to obtain at least two three-dimensional layout maps of different viewing angles of a room;
a third determining module 52, configured to determine a splicing position corresponding to at least two three-dimensional layout maps;
and the third processing module 53 is configured to perform splicing processing on the at least two three-dimensional layout maps according to the splicing position, so as to generate a three-dimensional house type corresponding to the room.
In some examples, when the third processing module 53 performs a stitching process on at least two three-dimensional layout maps according to the stitching position to generate a three-dimensional house type corresponding to the room, the third processing module 53 is configured to perform: splicing all the three-dimensional layout maps based on the splicing positions to obtain spliced room type data corresponding to the rooms; detecting whether gaps exist at all splicing positions in the spliced house type data or not; determining the spliced house type data as a three-dimensional house type when gaps do not exist at all splicing positions in the spliced house type data; or when a gap exists at a splicing position in the spliced house type data, optimizing the spliced house type data to generate the three-dimensional house type.
In some examples, when the third processing module 53 performs optimization processing on the spliced house type data to generate a three-dimensional house type, the third processing module 53 is configured to perform: optimizing all splicing positions based on the fact that a gap exists at one splicing position in the spliced house type data to obtain the optimized splicing position; and splicing all the three-dimensional layout maps based on the optimized splicing positions to generate a three-dimensional house type corresponding to the room.
In some examples, when the third processing module 53 optimizes the splice location based on the gap existing at the splice location to obtain an optimized splice location, the third processing module 53 is configured to perform: acquiring the splicing matching degree of at least two three-dimensional layout maps corresponding to each splicing position; determining the two three-dimensional layout maps corresponding to the highest splicing matching degree as a reference image pair; and adjusting the splicing positions corresponding to the at least two three-dimensional layout maps based on the reference image pair and the gap to obtain the optimized splicing positions.
In some examples, the reference image pair comprises: a first image pair located in a first direction and a second image pair located in a second direction; when the third processing module 53 adjusts the splicing positions corresponding to the at least two three-dimensional layout maps based on the reference image pair and the gap to obtain the optimized splicing position, the third processing module 53 is configured to perform: determining a first adjustment distance of the splicing position in the first direction based on the first image pair and the gap; determining a second adjustment distance of the splicing position in the second direction based on the second image pair and the gap; and adjusting the splicing position in the first direction and the second direction based on the first adjustment distance and the second adjustment distance, respectively, to obtain the optimized splicing position.
The apparatus shown in fig. 22 can perform the method of the embodiments shown in fig. 1, fig. 12-fig. 14, and fig. 17, and the related descriptions of the embodiments shown in fig. 1, fig. 12-fig. 14, and fig. 17 can be referred to for the parts of this embodiment not described in detail. The implementation process and technical effect of the technical solution are described in the embodiments shown in fig. 1, fig. 12 to fig. 14, and fig. 17, and are not described again here.
In one possible implementation, the structure of the three-dimensional house type generating apparatus shown in fig. 22 may be implemented as an electronic device, which may be a mobile phone, a tablet computer, a server, or another device. As shown in fig. 23, the electronic device may include: a third processor 61 and a third memory 62. The third memory 62 is used for storing a program for executing the three-dimensional house type generation method provided in the embodiment shown in fig. 17, and the third processor 61 is configured to execute the program stored in the third memory 62.
The program comprises one or more computer instructions, wherein the one or more computer instructions, when executed by the third processor 61, are capable of performing the steps of:
acquiring at least two three-dimensional layout maps of different view angles of a room;
determining splicing positions corresponding to at least two three-dimensional layout drawings;
and splicing the at least two three-dimensional layout maps according to the splicing position to generate a three-dimensional house type corresponding to the room.
Optionally, the third processor 61 is further configured to perform all or part of the steps in the foregoing embodiment shown in fig. 17.
The electronic device may further include a third communication interface 63, which is used for the terminal to communicate with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions for an electronic device, which includes a program for executing the method for generating a three-dimensional house type in the method embodiment shown in fig. 17.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by adding a necessary general hardware platform, and of course, can also be implemented by a combination of hardware and software. With this understanding in mind, the above-described technical solutions and/or portions thereof that contribute to the prior art may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media having computer-usable program code embodied therein (including but not limited to disk storage, CD-ROM, optical storage, etc.).
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (29)

1. A method for generating a three-dimensional house type is characterized by comprising the following steps:
acquiring at least two images of a room from different visual angles;
generating three-dimensional layout diagrams respectively corresponding to the at least two images;
determining a stitching position corresponding to the three-dimensional layout map based on the at least two images;
and splicing the three-dimensional layout drawings according to the splicing position to generate a three-dimensional house type corresponding to the room.
2. The method of claim 1, wherein generating a three-dimensional layout corresponding to each of the at least two images comprises:
generating two-dimensional layout maps respectively corresponding to the at least two images;
determining camera parameters corresponding to the at least two images;
and generating three-dimensional layout maps respectively corresponding to the at least two images based on the at least two images, the two-dimensional layout map and the camera parameters.
3. The method of claim 2, wherein generating a three-dimensional layout corresponding to the at least two images, respectively, based on the at least two images, the two-dimensional layout, and the camera parameters comprises:
acquiring height information between the image shooting positions corresponding to the at least two images and the ground;
determining a spatial constraint relationship corresponding to pixel points in the at least two images based on the height information, the at least two images and the camera parameters;
and performing three-dimensional reconstruction operation on the pixel points in the two-dimensional layout diagram based on the space constraint relation to generate the three-dimensional layout diagram.
4. The method of claim 1, wherein determining a stitching location corresponding to the three-dimensional layout map based on the at least two images comprises:
extracting wall information included in each three-dimensional layout drawing;
generating a two-dimensional wall surface image corresponding to the wall surface information;
and determining a splicing position corresponding to the three-dimensional layout drawing based on the two-dimensional wall surface images corresponding to all the wall surface information and the at least two images.
5. The method of claim 4, wherein generating a two-dimensional wall image corresponding to the wall information comprises:
acquiring a constraint relation for generating a two-dimensional wall surface image;
and generating a two-dimensional wall surface image corresponding to the wall surface information based on the constraint relation and the three-dimensional layout.
6. The method of claim 4, wherein determining a stitching location corresponding to the three-dimensional layout map based on the two-dimensional wall images corresponding to all wall information and the at least two images comprises:
acquiring image similarity corresponding to any two-dimensional wall images;
determining an image adjacency relationship of the at least two three-dimensional layouts based on the at least two images;
and determining the splicing position corresponding to the two adjacent three-dimensional layout maps based on the image similarity.
7. The method of claim 6, wherein determining the stitching location corresponding to two adjacent three-dimensional layouts based on the image similarity comprises:
acquiring a first splicing wall surface pair corresponding to the highest image similarity;
and determining the splicing positions corresponding to the two adjacent three-dimensional layout maps based on the first splicing wall pair.
8. The method of claim 7, wherein determining a stitching location corresponding to two adjacent three-dimensional layouts based on the first pair of stitching walls comprises:
performing a feature extraction operation on the first splicing wall surface pair to obtain a first wall surface feature and a second wall surface feature;
and determining at least one splicing position corresponding to two adjacent three-dimensional layout graphs based on the first wall surface characteristic and the second wall surface characteristic.
9. The method of claim 7, wherein determining image adjacency relationships for the at least two three-dimensional layouts based on the at least two images comprises:
determining a first neighboring relationship corresponding to the at least two images;
determining an image adjacency relationship of the at least two three-dimensional layouts based on the first adjacency relationship.
10. The method of claim 6, wherein after determining the stitching location corresponding to two adjacent three-dimensional layouts, the method further comprises:
detecting whether the splicing position is reasonable;
when the splicing position is reasonable, splicing the three-dimensional layout maps based on the splicing position; or,
and when the splicing position is not reasonable, acquiring a second splicing wall surface pair corresponding to the second highest image similarity, and determining the splicing position corresponding to the second splicing wall surface pair.
11. The method of claim 10, wherein detecting whether the splicing position is reasonable comprises:
pre-splicing the three-dimensional layout drawing based on the splicing position to obtain a pre-spliced house type;
identifying wall features corresponding to all walls in the pre-spliced house type;
and detecting whether the splicing position is reasonable or not based on the wall features corresponding to the walls.
12. The method of claim 11, wherein the wall features comprise: dimensional characteristics of the wall surface; and detecting whether the splicing position is reasonable based on the wall features corresponding to each wall comprises:
determining the size deviation information between any two wall surfaces in the pre-spliced house type based on the size characteristics corresponding to each wall surface;
when the size deviation information is smaller than a set value, determining that the splicing position is reasonable; or,
when the size deviation information is greater than or equal to the set value, determining that the splicing position is unreasonable.
13. The method of claim 11, wherein the wall features comprise: relative position features between wall surfaces; and detecting whether the splicing position is reasonable based on the wall features corresponding to each wall comprises:
detecting whether wall surface intersection or wall surface shielding occurs between any two wall surfaces in the pre-spliced house type or not based on the relative position characteristics of the wall surfaces;
when wall surface intersection or wall surface shielding occurs, determining that the splicing position is unreasonable; or,
and when the wall surface intersection does not occur and the wall surface shielding does not occur, determining that the splicing position is reasonable.
14. The method of claim 1, wherein the generating a three-dimensional house type corresponding to a room by performing a stitching process on the three-dimensional layout according to the stitching position comprises:
splicing all the three-dimensional layout maps based on the splicing positions to obtain spliced room type data corresponding to the rooms;
detecting whether gaps exist at all splicing positions in the spliced house type data or not;
determining the spliced house type data as the three-dimensional house type when gaps do not exist at all splicing positions in the spliced house type data; or,
and when a gap exists at a splicing position in the spliced house type data, optimizing the spliced house type data to generate the three-dimensional house type.
15. The method of claim 14, wherein optimizing the stitched data to generate the three-dimensional house type comprises:
optimizing all splicing positions based on the fact that a gap exists at one splicing position in the spliced house type data to obtain the optimized splicing position;
and splicing all the three-dimensional layout maps based on the optimized splicing positions to generate the three-dimensional house type corresponding to the room.
16. The method of claim 15, wherein optimizing all splice positions based on gaps existing at a splice position in the spliced house type data to obtain an optimized splice position comprises:
acquiring the splicing matching degree of the three-dimensional layout corresponding to each splicing position;
determining the two three-dimensional layout maps corresponding to the highest splicing matching degree as a reference image pair;
and adjusting the splicing position corresponding to the three-dimensional layout diagram based on the reference image pair and the gap to obtain the optimized splicing position.
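A minimal sketch of claim 16's reference-pair selection, assuming the splicing matching degrees are already available as per-pair scores (the `matches` structure is hypothetical):

```python
def pick_reference_pair(matches):
    """Select the reference image pair of claim 16.

    matches: list of ((layout_i, layout_j), match_score) entries, one per
    splicing position. The pair with the highest splicing matching degree
    anchors the adjustment of the remaining splice positions.
    """
    (i, j), _score = max(matches, key=lambda m: m[1])
    return i, j
```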
17. The method of claim 16, wherein the reference image pair comprises: a first image pair located in a first direction and a second image pair located in a second direction; and wherein adjusting the splicing position corresponding to the three-dimensional layout map based on the reference image pair and the gap to obtain the optimized splicing position comprises:
determining a first adjustment distance of the stitching location in a first direction based on the first image pair and the gap;
determining a second adjusted distance of the splice location in a second direction based on the second image pair and the gap;
and adjusting the splicing position in the first direction and the second direction respectively based on a first adjusting distance and a second adjusting distance to obtain the optimized splicing position.
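Claim 17's two-direction adjustment can be sketched as shifting the splice position by the signed gap measured along each direction; `adjust_splice` and its signed-gap inputs are illustrative assumptions:

```python
def adjust_splice(position, first_adjust, second_adjust):
    """Shift a splice position independently along two directions.

    position: (u, v) coordinates of the splice in the layout plane.
    first_adjust / second_adjust: signed adjustment distances derived
    from the first-direction and second-direction reference image pairs
    and the measured gap (claim 17).
    """
    u, v = position
    return (u + first_adjust, v + second_adjust)
```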
18. The method of any one of claims 1-17, wherein after generating the three-dimensional house type corresponding to the room, the method further comprises:
generating a construction house type corresponding to the three-dimensional house type;
and obtaining construction checking information according to the three-dimensional house type and the construction house type.
19. The method of any one of claims 1-17, wherein after generating the three-dimensional house type corresponding to the room, the method further comprises:
acquiring a display request of the three-dimensional house type;
and displaying the three-dimensional house type by utilizing a setting device based on the display request.
20. The method of claim 19, wherein the display request comprises at least one of: an augmented reality display request, a virtual reality display request, a mixed reality display request, and an image reality display request;
correspondingly, the setting device comprises at least one of the following: augmented reality equipment, virtual reality equipment, mixed reality equipment, image reality equipment.
21. The method of any one of claims 1-17, wherein after generating the three-dimensional house type corresponding to the room, the method further comprises:
acquiring the generation quality of the three-dimensional house type;
and when the generation quality does not meet the set condition, generating an image re-shooting request so as to re-acquire at least two images of different view angles of the room based on the image re-shooting request.
22. A method for generating a three-dimensional house type is characterized by comprising the following steps:
acquiring at least two images of a room from different visual angles;
generating two-dimensional layout maps respectively corresponding to the at least two images;
determining camera parameters corresponding to the at least two images;
generating a three-dimensional house type corresponding to the at least two images based on the at least two images, the two-dimensional layout map, and the camera parameters.
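One plausible role of the camera parameters in claim 22 is lifting 2-D layout corners into 3-D; the pinhole back-projection below is a standard textbook sketch under that assumption, not necessarily the patented procedure:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2-D layout corner (u, v) at a known depth into a
    3-D camera-frame point using pinhole intrinsics (fx, fy, cx, cy).
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```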
23. A method for generating a three-dimensional house type is characterized by comprising the following steps:
acquiring at least two three-dimensional layout maps of different view angles of a room;
determining splicing positions corresponding to the at least two three-dimensional layout maps;
and splicing the at least two three-dimensional layout maps according to the splicing position to generate a three-dimensional house type corresponding to the house.
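Claim 23's splicing step can be sketched as translating each per-view 3-D layout into a common frame and merging; the point-set representation and per-layout offsets below are hypothetical:

```python
def stitch_layouts(layouts, offsets):
    """Merge per-view 3-D layout point sets into one house-type point set.

    layouts: list of point lists, each point an (x, y, z) tuple in its
    own view frame; offsets: per-layout (dx, dy, dz) splicing positions
    that bring every layout into a common frame (claim 23).
    """
    merged = []
    for points, (dx, dy, dz) in zip(layouts, offsets):
        merged.extend((x + dx, y + dy, z + dz) for x, y, z in points)
    return merged
```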
24. A three-dimensional house type generation device is characterized by comprising:
the first acquisition module is used for acquiring at least two images of a room from different visual angles;
a first generating module, configured to generate three-dimensional layout maps corresponding to the at least two images, respectively;
a first determining module, configured to determine a stitching location corresponding to the three-dimensional layout map based on the at least two images;
and the first processing module is used for splicing the three-dimensional layout according to the splicing position to generate a three-dimensional house type corresponding to the room.
25. An electronic device, comprising: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of generating a three-dimensional house type of any of claims 1 to 21.
26. A three-dimensional house type generation device is characterized by comprising:
the second acquisition module is used for acquiring at least two images of a room from different visual angles;
a second generating module, configured to generate two-dimensional layout maps corresponding to the at least two images, respectively;
a second determination module to determine camera parameters corresponding to the at least two images;
a second processing module for generating a three-dimensional house type corresponding to the at least two images based on the at least two images, the two-dimensional layout map and the camera parameters.
27. An electronic device, comprising: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of generating a three-dimensional house type of claim 22.
28. A three-dimensional house type generation device is characterized by comprising:
the third acquisition module is used for acquiring at least two three-dimensional layout maps of different view angles of a room;
the third determining module is used for determining the splicing positions corresponding to the at least two three-dimensional layout maps;
and the third processing module is used for splicing the at least two three-dimensional layout maps according to the splicing position to generate a three-dimensional house type corresponding to the room.
29. An electronic device, comprising: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of generating a three-dimensional house type of claim 23.
CN202110272326.XA 2021-03-12 2021-03-12 Three-dimensional house type generation method, device and equipment Pending CN113298708A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110272326.XA CN113298708A (en) 2021-03-12 2021-03-12 Three-dimensional house type generation method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110272326.XA CN113298708A (en) 2021-03-12 2021-03-12 Three-dimensional house type generation method, device and equipment

Publications (1)

Publication Number Publication Date
CN113298708A true CN113298708A (en) 2021-08-24

Family

ID=77319248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110272326.XA Pending CN113298708A (en) 2021-03-12 2021-03-12 Three-dimensional house type generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN113298708A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114219845A (en) * 2021-11-30 2022-03-22 慧之安信息技术股份有限公司 Residential unit area judgment method and device based on deep learning
CN114219845B (en) * 2021-11-30 2022-08-19 慧之安信息技术股份有限公司 Residential unit area judgment method and device based on deep learning
CN114554108A (en) * 2022-02-24 2022-05-27 北京有竹居网络技术有限公司 Image processing method and device and electronic equipment
CN114554108B (en) * 2022-02-24 2023-10-27 北京有竹居网络技术有限公司 Image processing method and device and electronic equipment
CN115733705A (en) * 2022-11-08 2023-03-03 深圳绿米联创科技有限公司 Space-based information processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112771539B (en) Employing three-dimensional data predicted from two-dimensional images using neural networks for 3D modeling applications
CN109887003B (en) Method and equipment for carrying out three-dimensional tracking initialization
US10777002B2 (en) 3D model generating system, 3D model generating method, and program
Yang et al. Image-based 3D scene reconstruction and exploration in augmented reality
US20190287293A1 (en) Visual localisation
US11521311B1 (en) Collaborative disparity decomposition
US20200090303A1 (en) Method and device for fusing panoramic video images
US11748906B2 (en) Gaze point calculation method, apparatus and device
US20130095920A1 (en) Generating free viewpoint video using stereo imaging
Chen et al. Oasis: A large-scale dataset for single image 3d in the wild
EP3467788B1 (en) Three-dimensional model generation system, three-dimensional model generation method, and program
CN113298708A (en) Three-dimensional house type generation method, device and equipment
JP2019075082A (en) Video processing method and device using depth value estimation
US9551579B1 (en) Automatic connection of images using visual features
US20190220952A1 (en) Method of acquiring optimized spherical image using multiple cameras
CN115439607A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
US10339702B2 (en) Method for improving occluded edge quality in augmented reality based on depth camera
Bokaris et al. 3D reconstruction of indoor scenes using a single RGB-D image
Wang et al. Real‐time fusion of multiple videos and 3D real scenes based on optimal viewpoint selection
KR20170108552A (en) Information system for analysis of waterfront structure damage
CN117115274B (en) Method, device, equipment and storage medium for determining three-dimensional information
CN117456114B (en) Multi-view-based three-dimensional image reconstruction method and system
US20230419526A1 (en) Method, apparatus, and computer-readable medium for room layout extraction
Ahmadabadian Photogrammetric multi-view stereo and imaging network design
WO2023164084A1 (en) Systems and methods for generating dimensionally coherent training data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240320

Address after: # 03-06, Lai Zan Da Building 1, 51 Belarusian Road, Singapore

Applicant after: Alibaba Innovation Co.

Country or region after: Singapore

Address before: Room 01, 45th Floor, AXA Tower, 8 Shenton Way, Singapore

Applicant before: Alibaba Singapore Holdings Ltd.

Country or region before: Singapore
