CN114727090B - Entity space scanning method, device, terminal equipment and storage medium - Google Patents

Entity space scanning method, device, terminal equipment and storage medium

Info

Publication number
CN114727090B
CN114727090B
Authority
CN
China
Prior art keywords
processed
space
state
modeling
area
Prior art date
Legal status
Active
Application number
CN202210267652.6A
Other languages
Chinese (zh)
Other versions
CN114727090A
Inventor
卞文瀚
胡晓航
金星安
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202210267652.6A
Publication of CN114727090A
Application granted
Publication of CN114727090B
Legal status: Active

Classifications

    • H04N 13/275 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N 13/296 - Synchronisation thereof; Control thereof
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 19/003 - Navigation within 3D models or images
    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 2200/04 - Indexing scheme for image data processing or generation involving 3D image data
    • G06T 2200/24 - Indexing scheme for image data processing or generation involving graphical user interfaces [GUIs]
    • G06T 2210/04 - Architectural design, interior design

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present application provide an entity space scanning method and apparatus, a terminal device, and a storage medium. In these embodiments, a live-action scan is performed on a target entity space, and the target entity space is parameterized on the basis of the live-action pictures obtained by scanning; because this information is closer to the real data, the accuracy of the parameterization is better guaranteed. In addition, during the parameterization, the parameterization state of each region is displayed synchronously on the live-action picture, so that the user can know in time whether the parameterization of each region has been completed. Further, appropriate measures can be taken according to the parameterization state of each region: a region can be kept within the field of view of the camera, and factors such as the scanning speed can be adjusted so that the region can be parameterized, which ensures the completeness and accuracy of the parameterization of the whole space. Moreover, only the regions that have not been parameterized or whose parameterization has failed need to be scanned again, which avoids repeated scanning and improves modeling efficiency.

Description

Entity space scanning method, device, terminal equipment and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a method, an apparatus, a terminal device, and a storage medium for entity space scanning.
Background
With the development of smart-home technology, online home decoration is becoming increasingly popular. People can carry out home decoration design online through various home decoration tools. The functions of existing home decoration tools are more and more abundant, and they can provide users with home decoration schemes in a variety of styles to meet users' growing home decoration demands.
Before providing a home decoration scheme to a user, a home decoration tool needs to obtain in advance the three-dimensional house model on which the user wants to perform the home decoration design. In the prior art, home decoration tools generally construct the three-dimensional house model by scanning a floor plan of the house. This way of obtaining the three-dimensional house model is inefficient, and the constructed model is of low accuracy and differs to some extent from the actual house, which affects the later home decoration design.
Disclosure of Invention
Aspects of the present application provide an entity space scanning method and apparatus, a terminal device, and a storage medium, so as to solve the technical problems that the construction of a three-dimensional house model is inefficient and the accuracy of the constructed model is low.
The embodiment of the application provides an entity space scanning method, which is applicable to terminal equipment, wherein the terminal equipment is positioned in a target entity space and is movable, and the method comprises the following steps: in the moving process of the terminal equipment, carrying out live-action scanning on a target entity space, and displaying a currently scanned live-action picture, wherein the live-action picture comprises a part of entity space positioned in a scanning view field; and carrying out parameterization processing and/or three-dimensional space modeling processing on part of the entity space step by step, and synchronously displaying state representation information dynamically adapted to the parameterization processing and/or modeling state of the processed area in a live-action picture, wherein the processed area and the parameterization processing and/or modeling state thereof are dynamically changed, and different parameterization processing and/or modeling states correspond to different state representation information.
The embodiments of the present application also provide an entity space scanning apparatus, which can be applied to a terminal device located in, and movable within, a target entity space. The apparatus comprises: a scanning module configured to perform a live-action scan of the target entity space during the movement of the terminal device; a display module configured to display the live-action picture currently scanned by the scanning module, the live-action picture including the partial entity space located within the scan field of view; and a processing module configured to perform parameterization processing and/or three-dimensional space modeling processing on the partial entity space step by step. The display module is further configured to synchronously display, in the live-action picture, state representation information dynamically adapted to the parameterization and/or modeling state of the processed areas, wherein the processed areas and their parameterization and/or modeling states change dynamically, and different parameterization and/or modeling states correspond to different state representation information.
The embodiment of the application also provides a terminal device, which can be located in a target entity space and is movable, and the terminal device comprises: a memory and a processor; wherein the memory is for storing a computer program/instructions, the processor being coupled to the memory for executing the computer program/instructions for implementing the steps in the method described above.
The present embodiments also provide a computer readable storage medium storing a computer program/instructions which, when executed by a processor, cause the processor to implement the steps in the above-described method.
In the embodiments of the present application, a live-action scan is performed on the target entity space, and parameterization and/or three-dimensional modeling is performed on the target entity space on the basis of the live-action pictures obtained by scanning; because this information is closer to the real data, the accuracy of the parameterization and/or modeling is guaranteed. In addition, during the parameterization and/or three-dimensional modeling, the parameterization and/or modeling state of each region is displayed synchronously on the live-action picture, so that the user can know in time whether the parameterization and/or three-dimensional modeling of each region has been completed. Further, appropriate measures can be taken according to the parameterization and/or modeling state of each region: for example, for a region that has not been parameterized or modeled, the region can be kept within the field of view of the camera and factors such as the scanning speed can be adjusted so that the region can be parameterized and/or modeled, which ensures the completeness and accuracy of the parameterization and/or model construction for the whole space. Moreover, only the regions that have not been parameterized or modeled, or whose parameterization or modeling has failed, need to be scanned again, which avoids repeated scanning and improves modeling efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a flow chart of a physical space scanning method according to an exemplary embodiment of the present application;
FIGS. 2a-2i are schematic diagrams of various pages presented on a graphical user interface provided in an exemplary embodiment of the present application;
FIG. 3 is a schematic structural diagram of a physical space scanning device according to an exemplary embodiment of the present application;
fig. 4 is a schematic structural diagram of a terminal device according to an exemplary embodiment of the present application.
Detailed Description
To make the purposes, technical solutions, and advantages of the present application clearer, the technical solutions of the present application will be described clearly and completely below with reference to specific embodiments of the present application and the corresponding drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without inventive effort fall within the scope of protection of the present disclosure.
To address the technical problems in the prior art that house models are constructed inefficiently and the constructed three-dimensional house models are of low accuracy, in some embodiments of the present application a terminal device moves within a target entity space, performs a live-action scan of the target entity space during the movement, and displays the currently scanned live-action picture, which includes the partial entity space located within the scan field of view; parameterization and/or three-dimensional space modeling is performed step by step on the partial entity space on the basis of the scanned live-action picture, and during the parameterization and/or modeling the parameterization state and/or modeling state of each region is displayed synchronously on the live-action picture. Because the parameterization and/or three-dimensional modeling is performed on the live-action picture obtained by scanning, the information is closer to the real data, and the accuracy of the parameterization and/or modeling is guaranteed; in addition, displaying the parameterization state and/or modeling state of each region synchronously on the live-action picture makes it convenient for the user to know in time whether the parameterization and/or three-dimensional modeling of each region has been completed.
Further, appropriate measures can be taken according to the parameterization state and/or modeling state of each region. For example, for a region that has not been parameterized or modeled, the region can be kept within the field of view of the camera and factors such as the scanning speed can be adjusted so that the region can be parameterized or modeled, which ensures the completeness and accuracy of the parameterization and/or model construction for the whole space. Moreover, only the regions that have not been parameterized or modeled, or whose parameterization or modeling has failed, need to be scanned again, which avoids repeated scanning and improves modeling efficiency.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a flowchart of a physical space scanning method according to an exemplary embodiment of the present application. The entity space scanning method is applicable to a terminal device. The terminal device may be a local terminal device that stores an application program and is used to present a graphical user interface. The local terminal device interacts with the user through the graphical user interface; that is, the local terminal device downloads, installs, and runs the application program. The local terminal device may provide the graphical user interface to the user, for example, by displaying it on a display screen of the local terminal device, or by holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes the application screen, and a processor for running the application, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
When the terminal device is a local terminal device, it may be an intelligent handheld device, such as a smart phone, a tablet computer, a notebook computer or a desktop computer, or may be an intelligent wearable device, such as an intelligent watch, an intelligent bracelet, or may be various intelligent home appliances with display screens, such as an intelligent television, an intelligent large screen, or an intelligent robot, but not limited thereto. The local terminal equipment is provided with an image acquisition device for scanning the environment where the terminal equipment is located to obtain an environment image or video, and the image acquisition device can be a device with a function of acquiring pictures or video, such as a camera, but is not limited to the device.
Based on this, the embodiment of the application provides an entity space scanning method suitable for the terminal device, as shown in fig. 1, the method includes:
101. in the moving process of the terminal equipment, carrying out live-action scanning on a target entity space, and displaying a currently scanned live-action picture, wherein the live-action picture comprises a part of entity space positioned in a scanning view field;
102. and carrying out parameterization processing and/or three-dimensional space modeling processing on the partial entity space step by step, and synchronously displaying, in the live-action picture, state representation information dynamically adapted to the parameterization and/or modeling state of the processed areas, wherein the processed areas and their parameterization and/or modeling states change dynamically, and different parameterization and/or modeling states correspond to different state representation information.
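By way of illustration only, and not as part of the claimed method, the two steps above can be sketched as a simple terminal-side loop. The sketch below is a minimal, hypothetical Python illustration; the names (process_scan, RegionState, the region ids) are assumptions introduced for the example and are not taken from the patent.

```python
from enum import Enum
from typing import Dict, List

class RegionState(Enum):
    NOT_STARTED = 0
    IN_PROGRESS = 1
    COMPLETED = 2

def process_scan(frames: List[List[str]]) -> Dict[str, RegionState]:
    """Sketch of steps 101/102: each 'frame' stands for the partial entity space
    in the scan field of view, already split into region ids; regions are
    processed step by step and their state representation info is updated."""
    states: Dict[str, RegionState] = {}
    for frame in frames:                        # step 101: successive live-action pictures
        for region in frame:                    # step 102: process the partial space step by step
            states[region] = RegionState.IN_PROGRESS   # shown synchronously on the picture
            # ... parameterization and/or 3D modeling of `region` would run here ...
            states[region] = RegionState.COMPLETED     # state changes dynamically
    return states

if __name__ == "__main__":
    print(process_scan([["wall_1", "floor_1"], ["wall_2"]]))
```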
In this embodiment, the target entity space may be any physical space, for example a subspace within a target entity house. The target entity house may be a real three-dimensional space existing in the real world, for example a real residential space, a shop space, or a shopping-mall space. Accordingly, the target entity space may be a single subspace of the residential space, such as a bedroom, a kitchen, a dining room, or a bathroom, may be a single shop space, or may be a single shop within the shopping-mall space, but is not limited thereto.
In this embodiment, the terminal device may move within the target entity space. The terminal device may move by itself, may be carried and moved by a user, or may be carried and moved by another device capable of autonomous movement (for example, a robot or a mobile cart). As the terminal device moves, its image acquisition device (such as a camera) performs a live-action scan of the target entity space, and the currently scanned live-action picture is displayed on the graphical user interface of the terminal device. The live-action picture includes the partial entity space located within the scan field of view of the image acquisition device, where the scan field of view is the viewing-angle range that the image acquisition device can scan at each position during the movement. The partial entity space within the scan field of view may be any part of the target entity space, and any such partial entity space contains a target entity object. The target entity object includes at least part of the hard-decoration structure of the target entity space, and may further include soft-decoration elements attached to that partial hard-decoration structure. The partial hard-decoration structure may include parts of structures such as the walls, ceiling, floor, corners, and skirting lines of the target entity space, for example part of a wall, part of the floor, and part of a skirting line, or only part of a wall, or only part of the floor; of course, the partial hard-decoration structure may also include one or more complete structures among these, for example a complete wall, a complete floor, or a complete floor together with its skirting lines. The soft-decoration elements on the partial hard-decoration structure may include at least one of a suspended ceiling, a closet, a wall painting, and various pieces of furniture embedded in the hard-decoration structure, or may include one or more complete ones of these.
In this embodiment, the moving direction of the terminal device within the target entity space is not limited; for example, the terminal device may be rotated by 360 degrees, or moved back and forth along a certain direction, as long as the movement can cover the entire target entity space. In addition, the moving speed of the terminal device within the target entity space is not limited, as long as the acquired live-action picture is clear enough to meet the image-clarity requirement of three-dimensional space modeling. Of course, if the clarity of the live-action picture falls below a set clarity threshold, the terminal device may display a prompt message to prompt the user, or the autonomous mobile device carrying the terminal device, to reduce the moving speed so as to ensure the clarity of the live-action picture.
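As a small illustration of the clarity check mentioned above, the sketch below estimates image sharpness and decides whether to show the "reduce moving speed" prompt. The Laplacian-variance measure and the threshold value are assumptions chosen for the example; the patent does not specify a particular clarity metric.

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of a simple Laplacian response; higher means sharper."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def should_prompt_slow_down(gray: np.ndarray, threshold: float = 100.0) -> bool:
    """Return True if the live-action picture is below the set clarity threshold,
    in which case the terminal device would display a slow-down prompt."""
    return sharpness(gray) < threshold

if __name__ == "__main__":
    frame = np.random.rand(120, 160) * 255      # stand-in for a captured grayscale frame
    print(should_prompt_slow_down(frame))
```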
Optionally, application software with a live-action scanning function may be installed on the terminal device; for example, the application software may be three-dimensional model building software or decoration design software, such as various home decoration design software. Whatever the application software, it needs at least the functions of live-action scanning, parameterizing the scanned live-action pictures, and constructing a three-dimensional space model based on the parameterization results. After the user opens the application software, various pages provided by the application software can be displayed on the graphical user interface of the terminal device. These pages at least include a page that displays the scanned live-action picture, and other information related to the live-action picture, such as guidance information, pop-up windows, and various controls, may also be displayed on that page. It should be noted that, in addition to the page that displays the live-action picture, the application software may include other pages, and jump relationships exist between the pages. In response to a touch operation by the user on any page, a corresponding task is performed or a jump is made to another page associated with the touch operation, depending on the type of touch operation. The number of pages, the types of pages, and the jump relationships between pages provided by the application software may vary with the type and function of the application software. An exemplary description is given below with reference to the home decoration design application software shown in figs. 2a-2i.
Fig. 2a shows the home page of the home decoration design application software of this embodiment. To name the pages of the present application uniformly, the home page may be defined as the first page, and the pages that appear subsequently are defined in turn as the second page, the third page, and so on. At least a location information input box and a corresponding confirmation control are displayed on the first page. The location information input box can be filled in manually, or automatically through positioning, with the location information of the target entity space, and the location information may be expressed indirectly through information such as the name of a residential community, the name of a shopping mall, or the name of a shop. After the location information of the target entity space has been filled in, in response to the user's confirmation of the location information the page jumps to the second page, which is shown in fig. 2b. The second page displays at least a room-count category bar, an area category bar, a "scan house type" control, a home decoration design control, and the like. The room-count category bar displays at least selection controls for 1 room, 2 rooms, 3 rooms, 4 rooms, and more, and can be used to select the room count of the dwelling. The area category bar displays area information, which may be presented as value ranges, for example 50~90 m², 90~120 m², 120~150 m², and so on, and can be used to select the area of the dwelling. The user can start the house-type live-action scanning, parameterization, and three-dimensional space model construction operations by clicking the "scan house type" control, and can start drawing a floor plan by clicking the "draw house type" control.
Further, on the second page the user can search, through the room-count category bar and the area category bar, for whether the target entity space already exists; if it does, the home decoration design control can be triggered directly to perform home decoration design on the target entity space. If the target entity space cannot be found in this way, the "scan house type" control can be triggered directly, and in response to the user's triggering of the "scan house type" control the page jumps to the third page so as to perform a live-action scan of the target entity space; the third page is shown in fig. 2c. The third page is divided into at least two parts. One part displays operation controls for the target entity space: taking a real residential space as the target entity house, this part displays selection controls for the living room, living/dining room, bedroom, kitchen, bathroom, and the like; if the target entity space is a shop, this part displays selection controls such as a storage room and a merchandise display room, but it is not limited thereto: the target entity house may be another type of entity house, the corresponding target entity space may be an entity space within that other type of entity house, and the target control may be a selection control corresponding to that other entity space. The other part may be a static notice page, which may display an introduction to the functions of the page so that the user can understand and use them, for example "use your phone to scan, parameterize and/or generate the corresponding three-dimensional space model", but it is not limited to this, and the information displayed on the static notice page may be updated in real time according to the functions of the page. Assuming that the user selects the living room as the target entity space to be scanned, the scanning procedure for the target entity space can be entered by triggering the "living room" control, as described below.
In the above example, the user triggers the "scan house type" control to enter the page shown in fig. 2c, where the user may further select a specific subspace as the target entity space, but this is not limited thereto. If the target entity house does not contain multiple subspaces, the interface shown in fig. 2c may be skipped and the live-action scanning procedure entered directly.
In this embodiment, to perform a live-action scan of the target entity space, taking the interface shown in fig. 2b or fig. 2c as an example, the user may trigger the "scan house type" control to start the live-action scanning, parameterization, and three-dimensional space model construction operations. To perform a live-action scan of the target entity space, the terminal device is required to be located in, and movable within, the target entity space. Taking as an example the case where the user carries the terminal device into the target entity space and moves within it, in order to help the user carry the terminal device into the target entity space, the terminal device may output space-entry guidance information to the user so as to prompt the user to carry the terminal device into the target entity space. The space-entry guidance information may be graphic information or animation information with a space-entry guidance function, which may be displayed above the live-action picture, and may further include voice information such as "please carry the terminal device into the space" to prompt the user. Further optionally, when the space-entry guidance information is graphic or animation information, a preset display time (for example 5 s, 3 s, or 6 s) or a confirmation control may also be provided, and the guidance information disappears automatically when the preset display time ends or after the user clicks the confirmation control.
After determining that the terminal device has entered the target entity space, the terminal device may further output mobile-scanning guidance information to the user so as to prompt the user to carry the terminal device and perform a mobile scan of the target entity space. The mobile-scanning guidance information may be graphic or animation information with a mobile-scanning guidance function, and may also include voice information, for example "please carry the terminal device and move through the space or rotate 360 degrees". Further optionally, so that the user can carry the terminal device and scan the target entity space more accurately and efficiently, the mobile-scanning guidance information may further include moving-direction guidance information and scanning-manner guidance information. The moving-direction guidance information guides the direction of movement when the user carries the terminal device, for example first forward, then left, then right; it may be generated on the basis of a two-dimensional spatial structure diagram (such as a floor plan) corresponding to the target entity space and adapted to the internal spatial structure of the target entity space. The scanning-manner guidance information guides how the user carries and moves the terminal device, which can be understood as the movement of the terminal device relative to the user, for example a 360-degree rotation scan, a top-to-bottom scan, or a left-to-right scan. Likewise, when the mobile-scanning guidance information is graphic or animation information, a preset display time (for example 4 s, 5 s, or 6 s) or a confirmation control may also be provided, and the guidance information disappears automatically when the preset display time ends or after the user clicks the confirmation control. The preset display time may be the same or different for different guidance information, which is not limited.
Further, the following description takes the case where the user is currently in a residential space, but is not limited to a residential space, and the third page displays selection controls such as living room, living/dining room, bedroom, kitchen, and bathroom. Assuming that the user is in the living/dining room, the user can trigger the "living room" control on the third page, and in response to the triggering of the "living room" control the page jumps to the fourth page, which is a scanning page used to scan the target entity space; the fourth page is shown in fig. 2d. The fourth page displays the live-action picture of the target entity space within the current field of view captured by the image acquisition device, and a pop-up window is displayed over the live-action picture, on which at least a guidance animation, guidance information, and an associated confirmation control are displayed. The guidance animation is a moving picture, displayed in real time after the user, the terminal device, and the target entity space have been virtualized, that guides the user within the virtual space and is accompanied by corresponding voice prompts; the guidance information is direction information that guides the user's movement, for example turn left, turn right, or a rotation angle; the associated confirmation control may be a control with the words "I have walked to the destination". When scanning the target entity space, the user can scan at any position of the target entity space, can scan while moving, or can perform a 360-degree scan at a fixed position, and information such as the position of the terminal device can be acquired in real time during scanning. Therefore, in order to reduce the data processing load of the terminal device and scan the target entity space relatively completely, the initial scanning position can be set at the central position of the target entity space; on the basis of scanning from the central position, if an incompletely scanned area exists, the user can move to the vicinity of that area to scan it.
Further, the following description takes the case where the initial scanning position is the central position of the target entity space as an example, but is not limited to the central position. The guidance information on the fourth page may be text such as "please walk to the middle of the house", and the associated confirmation control may be a control with the words "I have walked to the center position". The user can walk to the central position of the target entity space following the guidance animation and the voice prompts. After the user reaches the central position, the user can be informed of this by a special symbol displayed in the pop-up window or by a voice prompt; the user can then trigger the "I have walked to the center position" confirmation control, whereupon the pop-up window enters a hidden state, and a "scan house" control and its guidance information enter a display state, accompanied by a voice prompt, guiding the user to trigger the "scan house" control to scan the target entity space.
Further, in response to the user triggering the "scan house" control, the fourth page enters a scanning state. Correspondingly, the "scan house" control changes into a "scan complete" control, the pop-up window returns from the hidden state to the display state, and the animation, guidance information, and voice information on the pop-up window prompt the user to rotate so as to guide the user to scan the target entity space in all directions. The pop-up window can be displayed intermittently with a preset time interval and a preset display time: the guidance icon automatically enters the hidden state once the preset display time has elapsed, enters the display state again after the preset time interval, and so on, so that the user can receive guidance while still being able to see the complete scanning page. For example, with a preset time interval of 15 s and a preset display time of 5 s, the guidance icon automatically enters the hidden state after being displayed for 5 s and enters the display state again after being hidden for 15 s. The "scan complete" control both displays the scanning progress in real time and ends the scanning after a closed scanned space has been formed. The scanning progress can be displayed in various forms; for example, it can be displayed as a ring around the "scan complete" control, with the ring filling gradually as the scanning progress increases until it forms a closed ring. After the scanning progress forms a closed ring, the "scan complete" control can be triggered; if the triggering succeeds, the scan ends once the closed space has been formed. The display of the scanning progress is shown in fig. 2e, and the flow of the guidance animation is shown in fig. 2f. If an error message appears when the "scan complete" control is triggered, scanning continues according to the scanning guidance information until the "scan complete" control is triggered successfully, after which the scan ends.
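For illustration only, the intermittent display rule described above (shown for a preset display time, hidden for a preset interval, repeating) can be expressed as a small timing function. This is a hypothetical sketch; the default values simply mirror the 5 s / 15 s example given in the text.

```python
def popup_visible(t: float, show_time: float = 5.0, hide_interval: float = 15.0) -> bool:
    """Intermittent display sketch: the guidance pop-up is shown for `show_time`
    seconds, hidden for `hide_interval` seconds, and the cycle repeats."""
    return (t % (show_time + hide_interval)) < show_time

if __name__ == "__main__":
    for t in (0, 4, 6, 19, 21):           # seconds since the pop-up first appeared
        print(t, popup_visible(t))
```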
In this embodiment, while the image acquisition device of the terminal device performs the live-action scan of the target entity space, parameterization and/or three-dimensional space modeling may be performed step by step on the scanned partial entity space. The parameterization and/or three-dimensional space modeling may be performed by the terminal device itself, or the terminal device may send the scanned live-action picture data to a server, which performs the parameterization task and/or the three-dimensional space modeling task and feeds back to the terminal device the progress of the parameterization task and/or the progress of the three-dimensional modeling task.
In the embodiments of the present application, performing parameterization and/or three-dimensional space modeling on the partial entity space contained in the scanned live-action picture covers two cases. In one case, only parameterization is performed on the partial entity space contained in the scanned live-action picture, without three-dimensional space modeling, so as to obtain the parameterized space data; in this case, state representation information dynamically adapted to the parameterization state of the processed areas is displayed synchronously in the live-action picture. In the other case, parameterization is performed on the scanned partial entity space and three-dimensional space modeling is further performed on the basis of the space data obtained by the parameterization, so as to obtain the three-dimensional space model corresponding to the target entity space; in this case, one may choose to display synchronously in the live-action picture either state representation information dynamically adapted to the parameterization state of the processed areas, or state representation information dynamically adapted to the modeling state of the processed areas, or state representation information adapted to both the parameterization state and the modeling state of the processed areas. In this embodiment, the two operations of parameterization and three-dimensional space modeling may be performed asynchronously or synchronously in real time; in short, the three-dimensional space modeling may be asynchronous or real-time.
Asynchronous execution means that the parameterization is performed step by step, the space data obtained by the parameterization is accumulated, and once a certain amount of data has been accumulated, three-dimensional space modeling is performed on the accumulated space data; this process is repeated continuously until the complete three-dimensional space model corresponding to the target entity space is obtained. For example, each time, the scanned partial entity space is parameterized step by step, and after its parameterization is completed, the parameterized data is used to perform three-dimensional space modeling step by step. In this case, in one embodiment, state representation information dynamically adapted to the parameterization state of the processed areas is displayed synchronously in the live-action picture; in another embodiment, state representation information dynamically adapted to the modeling state of the processed areas is displayed synchronously in the live-action picture.
Synchronous real-time execution means that the parameterization is performed step by step and, each time space data is obtained by the parameterization, three-dimensional space modeling is performed in real time on the latest space data. A specific embodiment is as follows: parameterization is performed step by step on the scanned partial entity space; each time the parameterization produces new space data, the latest space data is used for three-dimensional space modeling; and parameterization and three-dimensional space modeling are performed on the scanned partial entity space in this order until the modeling of the scanned partial entity space is completed. Since the parameterization and the three-dimensional space modeling are highly real-time, their states can be regarded as almost synchronous; in this case the state representation information displayed synchronously on the live-action picture for the processed areas can represent the parameterization state and the modeling state of the processed areas at the same time. Of course, the state representation information displayed synchronously in the live-action picture may also be set to indicate only the parameterization state, or only the modeling state, of the processed areas.
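The difference between the asynchronous and the synchronous real-time modes described above can be sketched as follows. This is a simplified, hypothetical illustration; the batch size and the helper names (parameterize, build_model) are assumptions made for the example.

```python
from typing import Iterable, List

def parameterize(region: str) -> dict:
    """Stand-in for parameter recognition of one region of the partial entity space."""
    return {"region": region, "structure": "...", "texture": "..."}

def build_model(space_data: List[dict]) -> None:
    """Stand-in for one three-dimensional space modeling pass."""
    print(f"modeling {len(space_data)} region(s): {[d['region'] for d in space_data]}")

def asynchronous_mode(regions: Iterable[str], batch_size: int = 3) -> None:
    """Accumulate parameterized space data and model once enough has built up."""
    buffer: List[dict] = []
    for region in regions:
        buffer.append(parameterize(region))
        if len(buffer) >= batch_size:
            build_model(buffer)
            buffer.clear()
    if buffer:
        build_model(buffer)

def real_time_mode(regions: Iterable[str]) -> None:
    """Model immediately with the latest space data after every parameterization pass."""
    for region in regions:
        build_model([parameterize(region)])

if __name__ == "__main__":
    scanned = ["wall_1", "wall_2", "floor_1", "ceiling_1"]
    asynchronous_mode(scanned)
    real_time_mode(scanned)
```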
Further, performing the parameterization step by step on the partial entity space may be implemented as follows: according to the amount of image data corresponding to the partial entity space, combined with the amount of image data supported by a single processing pass, the different areas of the partial entity space are parameterized step by step. Parameterization can be understood as performing parameter recognition on the partial entity space contained in the scanned live-action picture to obtain the space data corresponding to that partial entity space. The space data includes the structural data of the partial entity space, for example the space objects contained in it and their length, width, height, position, area, and type; the structural data is used, during three-dimensional modeling, to construct the structure of the local space model corresponding to the partial entity space. Further optionally, texture data of each space object contained in the partial entity space may be acquired during the parameterization; the texture data is used, during three-dimensional modeling, to render the texture information of the local space model corresponding to the partial entity space, and the texture data and the structural data together form the space data. Optionally, the texture data may be generated automatically according to the type and structural data of each space object contained in the partial entity space; for example, when a wall is recognized, its color, wallpaper material, and so on can be set automatically, and when the floor is recognized, the color and texture of the flooring used on it, or the color and texture of the floor tiles used on it, can be set automatically. In addition, the recognized space objects may be shown to the user through human-machine interaction, and the texture items that can be configured for each space object may be provided to the user, so that the user can configure them according to personal preference; for example, the user can configure the color and wallpaper material of a wall, configure the flooring used on the floor together with its color and texture, or configure the floor to use tiles and configure texture data such as the type and pattern of the tiles.
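The space data produced by the parameterization, as described above, might be organized along the following lines. This is a hypothetical data layout; the class and field names are illustrative assumptions, not a definition taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class StructureData:
    """Structural data of one space object recognized during parameterization."""
    object_type: str              # e.g. "wall", "floor", "skirting_line"
    length: float                 # metres
    width: float
    height: float
    position: Tuple[float, float, float]
    area: float

@dataclass
class TextureData:
    """Texture data used to render the local space model during 3D modeling."""
    material: str                 # e.g. "wallpaper", "floorboard", "tile"
    color: str
    pattern: Optional[str] = None

@dataclass
class SpaceObject:
    structure: StructureData
    texture: Optional[TextureData] = None   # may be auto-generated or user-configured

@dataclass
class SpaceData:
    """Space data for the partial entity space: structure plus texture."""
    objects: List[SpaceObject] = field(default_factory=list)

if __name__ == "__main__":
    wall = SpaceObject(StructureData("wall", 4.0, 0.2, 2.8, (0.0, 0.0, 0.0), 11.2),
                       TextureData("wallpaper", "white"))
    print(SpaceData([wall]))
```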
The amount of image data supported by a single processing pass can be understood as the amount of image data handled by one run of the data-processing thread, and may be determined according to the capability of the processor of the terminal device or server device performing the parameterization, which is not limited. The order in which the partial entity space is parameterized may follow the moving direction of the image acquisition device during scanning, taking the continuity of the areas as a reference; for example, the area scanned first is parameterized first and the areas scanned subsequently are parameterized afterwards, but it is not limited thereto. To facilitate understanding of the step-by-step parameterization process, an example is described below. Suppose the amount of image data supported by a single processing pass is A, and the image data amount A corresponds to a partial area of the scanned live-action picture; then that partial area is the area covered by a single parameterization pass, and, following the moving direction of the image acquisition device while it scans the partial entity space, the other areas corresponding to the same image data amount A can be parameterized step by step until the parameterization of the live-action picture corresponding to the partial entity space is completed.
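The step-by-step parameterization driven by the image data amount A supported per processing pass, as in the example above, can be sketched like this. The byte-count bookkeeping and the region ids are assumptions introduced purely for illustration.

```python
from typing import Iterable, List, Tuple

def chunk_by_capacity(frames: Iterable[Tuple[str, int]], capacity_a: int) -> List[List[str]]:
    """Group scanned image data into parameterization passes of at most
    `capacity_a` units each, following the scanning (i.e. moving) order."""
    passes: List[List[str]] = []
    current: List[str] = []
    used = 0
    for region_id, size in frames:        # frames arrive in the camera's moving direction
        if current and used + size > capacity_a:
            passes.append(current)         # this pass is full: parameterize it
            current, used = [], 0
        current.append(region_id)
        used += size
    if current:
        passes.append(current)
    return passes

if __name__ == "__main__":
    scanned = [("wall_1", 40), ("wall_2", 35), ("floor_1", 50), ("corner_1", 20)]
    print(chunk_by_capacity(scanned, capacity_a=80))
```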
Furthermore, on the basis of parameterizing the scanned partial entity space step by step, an asynchronous or real-time processing mode can be adopted to perform three-dimensional modeling step by step on the partial entity space according to the space data obtained by the parameterization. The specific implementations fall into the following two cases: when the parameterization and the three-dimensional space modeling are executed asynchronously, three-dimensional space modeling is performed step by step on the different areas of the partial entity space according to the amount of parameterized data corresponding to the partial entity space, combined with the amount of data supported by a single three-dimensional space modeling pass; when the parameterization and the three-dimensional space modeling are executed in real time, each time space data is obtained by the parameterization, the latest space data is used for three-dimensional space modeling until the modeling of the scanned partial entity space is completed. The modeling process can be completed by a modeling thread, and the amount of data handled by one run of the modeling thread can be regarded as the amount of data supported by a single three-dimensional modeling pass; the order in which the partial entity space is modeled can be determined on the basis of the parameterization order, that is, the moving direction of the image acquisition device during scanning, and specific examples are not repeated here.
Further, in the embodiment of the present application, in the process of gradually performing three-dimensional spatial modeling processing on a part of the physical space, a floating window may be displayed on the live-action image, and in the floating window, a construction process of a three-dimensional spatial model corresponding to the target physical space is dynamically displayed, where a linkage relationship exists between a currently constructed model part and a currently processed region.
Further, while the partial entity space is being parameterized and/or modeled step by step, state representation information dynamically adapted to the parameterization state and/or modeling state of the processed areas can be displayed synchronously in the live-action picture, so that the user can conveniently know the state and progress of the current parameterization and/or three-dimensional space modeling. For example, in the case of parameterization, the state representation information displayed in the live-action picture distinguishes the areas where parameterization has not started, is in progress, and has been completed; the processed areas and their parameterization states change dynamically, and different parameterization states correspond to different state representation information. Likewise, in the case of three-dimensional space modeling, the state representation information displayed in the live-action picture distinguishes the areas where modeling has been completed, is in progress, and has not started; the processed areas and their modeling states change dynamically, and different states correspond to different state representation information. In the embodiments of the present application, for the case of parameterization, the processed areas are the areas where parameterization has already started, including the areas where parameterization is in progress and the areas where parameterization has been completed; correspondingly, the parameterization state of each processed area is either a parameterization-in-progress state or a parameterization-completed state. For the case of three-dimensional space modeling, the processed areas are the areas where three-dimensional space modeling has already started, including the areas where modeling is in progress and the areas where modeling has been completed; correspondingly, the modeling state of each processed area is either a modeling-in-progress state or a modeling-completed state. In addition, the live-action picture also includes areas where parameterization or modeling has not started, and state representation information corresponding to such an area may be displayed on it.
Further, displaying synchronously in the live-action picture the state representation information dynamically adapted to the parameterization state and/or modeling state of the processed areas can be implemented as follows: a mask region corresponding to each processed area is displayed on the live-action picture. Each mask region contains pattern information adapted to the current parameterization state and/or modeling state of the corresponding processed area, and the mask regions are joined so that visually they form a continuous mask. Then, the pattern information in each mask region is dynamically updated according to the dynamic change of the parameterization state and/or modeling state of the corresponding processed area. Updating the pattern information in a mask region may mean updating any visual attribute of the pattern information, such as the pattern style, the density of the pattern, the color of the pattern lines, or the background color of the pattern. It should be noted that different pattern information indicates different parameterization states and/or modeling states, and the visual information of the patterns corresponding to different parameterization and/or modeling states cannot be completely identical; that is, at least one visual attribute of the pattern information differs when the parameterization state and/or modeling state differs, so that the different states can be distinguished. In detail, when state representation information dynamically adapted to the parameterization state of the processed areas is displayed synchronously in the live-action picture, different pattern information indicates different parameterization states, and at least one visual attribute of the pattern information differs between different parameterization states so that they can be distinguished. When state representation information dynamically adapted to the modeling state of the processed areas is displayed synchronously in the live-action picture, different pattern information indicates different modeling states, and at least one visual attribute of the pattern information differs between different modeling states so that they can be distinguished.
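A minimal sketch of the mapping from parameterization/modeling state to the visual attributes of the mask pattern is given below. The concrete attribute values are assumptions chosen to echo the grid example later in the text; the patent only requires that at least one visual attribute differ between states.

```python
from dataclasses import dataclass
from enum import Enum

class ProcessState(Enum):
    NOT_STARTED = 0
    IN_PROGRESS = 1
    COMPLETED = 2

@dataclass
class MaskStyle:
    """Visual attributes of the pattern information inside one mask region."""
    pattern: str        # pattern style, e.g. "grid"
    density: str        # density of the pattern
    line_color: str     # color of the pattern lines
    background: str     # background color of the pattern

# At least one visual attribute differs between states so they can be told apart.
STYLE_BY_STATE = {
    ProcessState.NOT_STARTED: MaskStyle("none", "-", "-", "transparent"),
    ProcessState.IN_PROGRESS: MaskStyle("grid", "dense", "white", "blue"),
    ProcessState.COMPLETED:   MaskStyle("grid", "sparse", "white", "blue"),
}

def update_mask(region_styles: dict, region_id: str, new_state: ProcessState) -> None:
    """Dynamically update the pattern information of a mask region when the
    state of the corresponding processed area changes."""
    region_styles[region_id] = STYLE_BY_STATE[new_state]

if __name__ == "__main__":
    styles = {}
    update_mask(styles, "wall_1", ProcessState.IN_PROGRESS)
    update_mask(styles, "wall_1", ProcessState.COMPLETED)
    print(styles["wall_1"])
```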
When the state representation information displayed synchronously in the live-action picture represents the parameterization state and the modeling state of the processed areas at the same time, the same pattern information reflects both the parameterization state and the modeling state of a processed area, and different pattern information represents different combinations of parameterization and modeling states; the visual information of the patterns corresponding to the parameterization and modeling states at different times cannot be completely identical, that is, at least one visual attribute of the pattern information differs when the parameterization and modeling states differ, so that the different states can be distinguished.
Further, displaying in the live-action picture the mask region corresponding to each processed area can be implemented as follows: for the currently processed area, the previously processed area adjacent to it is determined according to the adjacency relationship and the processing order between the currently processed area and the other processed areas; the pattern information in the mask region corresponding to the currently processed area is generated according to the pattern information in the mask region corresponding to the previously processed area; and the corresponding mask region is displayed above the currently processed area according to the pattern information in the mask region corresponding to the currently processed area. For example, suppose the currently scanned live-action picture is divided into two adjacent areas A and B, area A is in the parameterization-completed state, area B is in the parameterization-in-progress state, and the density of the mask pattern is set as the attribute used to distinguish the states while the other attributes are the same. In this example, the mask in area A is blue, the pattern on the mask is a grid, the grid is sparse, and the grid lines are white. Then, according to the adjacency between areas A and B and the processing order in which area A is processed before area B, it can be determined that the mask over the portion of area B being parameterized is blue, the pattern on the mask is a grid, the grid is dense, and the grid lines are white. As the parameterization of area B progresses, area B gradually changes from a dense grid to a sparse grid. The mask regions are shown in fig. 2g. The choice of visual attribute for the mask regions is not limited to this; visual attributes other than the grid form may also be used to distinguish the states.
Further, generating the pattern information in the mask region corresponding to the currently processed area according to the pattern information in the mask region corresponding to the last processed area can be implemented as follows: obtain, from the pattern information in the mask region corresponding to the last processed area, the end positions and extensible directions of a plurality of extensible lines; extend the extensible lines into the currently processed area according to their end positions and extensible directions, where the extended lines overlap one another in the currently processed area to form the pattern information in the mask region corresponding to the currently processed area. Since the last processed area and the currently processed area are not processed at the same time, the visual attributes of the patterns formed by the overlapping extensible lines in the two areas are not completely identical, which may produce, for example, the pattern effect shown in fig. 2g.
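A minimal geometric sketch of this line-extension step, assuming each extensible line is described by its end position at the shared boundary and a unit extension direction (both field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class ExtensibleLine:
    end: tuple[float, float]        # end position at the boundary of the last processed area
    direction: tuple[float, float]  # unit direction in which the line can be extended

def extend_into_current_area(lines: list[ExtensibleLine], depth: float) -> list[tuple]:
    """Prolong each extensible line from the last mask region into the current
    one so that the two masks join seamlessly at the shared boundary."""
    segments = []
    for line in lines:
        (x, y), (dx, dy) = line.end, line.direction
        segments.append(((x, y), (x + dx * depth, y + dy * depth)))
    return segments

# Example: extend two grid lines 2.0 units into the newly processed area.
print(extend_into_current_area([ExtensibleLine((0.0, 1.0), (1.0, 0.0)),
                                ExtensibleLine((0.0, 2.0), (1.0, 0.0))], depth=2.0))
```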
In this embodiment, for the parameterization case, the parameterization state of each processed area includes a parameterization-in-progress state and a parameterization-complete state, and dynamically updating the pattern information in each mask region according to the dynamic change of the parameterization state of each processed area includes: for any processed area, when its parameterization state changes from the parameterization-in-progress state to the parameterization-complete state, updating the pattern information in the mask region corresponding to that area from a first visual state to a second visual state, where the visual attributes corresponding to the first visual state and the second visual state are not completely the same. Correspondingly, for the three-dimensional modeling case, the modeling state of each processed area includes a modeling-in-progress state and a modeling-complete state, and dynamically updating the pattern information in each mask region according to the dynamic change of the modeling state of each processed area can be implemented as follows: for any processed area, when its modeling state changes from the modeling-in-progress state to the modeling-complete state, updating the pattern information in the mask region corresponding to that area from a first visual state to a second visual state, where the visual attributes corresponding to the first visual state and the second visual state are not completely the same.
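The transition rule can be sketched as a small event handler; the state strings and the attribute that gets swapped are assumptions chosen for illustration only:

```python
def on_state_change(region_id: str, old_state: str, new_state: str,
                    mask_patterns: dict) -> None:
    """Swap the mask pattern of an area from its 'first visual state' to its
    'second visual state' when parameterization or modeling of that area finishes."""
    transitions = {
        ("parameterizing", "parameterized"): {"density": "sparse"},
        ("modeling", "modeled"): {"background_color": "green"},
    }
    update = transitions.get((old_state, new_state))
    if update:
        mask_patterns[region_id].update(update)

masks = {"area_b": {"shape": "grid", "density": "dense",
                    "line_color": "white", "background_color": "blue"}}
on_state_change("area_b", "parameterizing", "parameterized", masks)
# masks["area_b"]["density"] is now "sparse"
```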
In this embodiment, for the parameterization case, while parameterization is performed on the partial entity space step by step, a floating window may be displayed on the live-action picture, and the parameterization process corresponding to the target entity space is dynamically shown in the floating window, with a linkage relationship between the part currently being parameterized in the window and the currently processed area in the live-action picture.
Further, while parameterization is performed on the partial entity space step by step, state characterization information dynamically adapted to the parameterization state of the processed areas can be synchronously displayed in the live-action picture, so that the user can conveniently learn the current parameterization state and progress and distinguish the areas where parameterization is complete, in progress, or not yet started.
It should be noted that the mask pattern information corresponding to the parameterization process needs to differ from the mask pattern information corresponding to the three-dimensional space modeling process; the pattern type may be the same while the visualization state differs. For example, the mask of a part that has been parameterized may be red, while the mask of a part for which three-dimensional space modeling is complete may be green, but this is not limiting.
In this embodiment, the spatial attributes formed by the processed areas may also be determined according to the adjacency relations among the processed areas, where the spatial attributes include at least one of the number of corners, the spatial area, and the spatial height. The number of corners can be determined as follows: splice the processed areas according to their adjacency relations and determine the hard decoration structural surfaces in each processed area, where the hard decoration structural surfaces include ceiling, wall, and floor surfaces; each intersection of three hard decoration structural surfaces is regarded as a corner, so the number of corners of each processed area can be counted, although the way of determining the number of corners is not limited to this. The spatial area can be determined as follows: compute the area of each hard decoration structural surface present in each processed area and sum them per area to obtain the spatial area of each processed area, although the way of determining the spatial area is not limited to this. The spatial height can be determined as follows: splice the processed areas according to their adjacency relations to obtain a three-dimensional space model of the processed areas and derive the spatial height from that model, although the way of determining the spatial height is not limited to this.
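A hedged numerical sketch of the three attributes, assuming every hard decoration structural surface is supplied as an infinite plane "n·x = d" plus a precomputed polygon area, and ignoring the check that an intersection point actually lies on the finite surfaces:

```python
import numpy as np
from itertools import combinations

def plane_intersection(planes):
    """Intersect three planes given as (normal, offset) with n . x = d;
    returns None when the normals are (nearly) linearly dependent."""
    normals = np.array([p[0] for p in planes], dtype=float)
    offsets = np.array([p[1] for p in planes], dtype=float)
    if abs(np.linalg.det(normals)) < 1e-9:
        return None
    return np.linalg.solve(normals, offsets)

def corner_count(surfaces) -> int:
    """Corners are points where three hard decoration structural surfaces meet."""
    corners = set()
    for trio in combinations(surfaces, 3):
        point = plane_intersection([s["plane"] for s in trio])
        if point is not None:
            corners.add(tuple(np.round(point, 3)))
    return len(corners)

def spatial_area(surfaces) -> float:
    """Sum of the ceiling, wall and floor surface areas of one processed area."""
    return sum(s["area"] for s in surfaces)

def spatial_height(model_bbox) -> float:
    """Height of the spliced three-dimensional space model from its bounding box."""
    (_, _, z_min), (_, _, z_max) = model_bbox
    return z_max - z_min
```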
Further, the scan is determined to be complete when the scan progress bar of the current target entity space forms a closed loop, and/or when the number of corners is 4, and/or when the sum of the spatial areas and the spatial height of the processed areas match the actual data of the target entity space.
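One possible way to combine these completion signals is sketched below; the 5% tolerance for the area and height comparison and the "or" combination are illustrative assumptions:

```python
def scan_complete(progress_closed_loop: bool, corners: int,
                  total_area: float, height: float,
                  actual_area: float, actual_height: float,
                  tol: float = 0.05) -> bool:
    """Declare the scan complete when the progress bar closes into a loop,
    or four corners are detected, or area and height match the actual data."""
    area_ok = abs(total_area - actual_area) <= tol * actual_area
    height_ok = abs(height - actual_height) <= tol * actual_height
    return progress_closed_loop or corners == 4 or (area_ok and height_ok)
```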
Further, after the house-layout scan is completed, soft decoration structures such as doors and windows in the target entity space are scanned according to the door and window scanning guidance information on the graphical user interface until that scan is completed.
Further, to improve the accuracy of determining whether a closed space has been formed, a "scan complete" control may be used for secondary confirmation. When the "scan complete" control is triggered and the parameterization and/or three-dimensional space model construction corresponding to the target entity space has been completed, prompt information indicating completion is displayed, or, in response to the triggering operation, the fourth page jumps to the graphical user interface of the next processing stage, namely a fifth page, indicating that the "living room" has formed a closed space. When the "scan complete" control is triggered but the parameterization corresponding to the "living room" is not complete and/or the three-dimensional space model has not been built, the corresponding prompt information is displayed, and the areas in the target entity space where parameterization and/or modeling was unsuccessful continue to be rescanned until a closed space is formed. In this embodiment, when no closed space is formed, only the areas where parameterization and/or modeling was unsuccessful need to be rescanned, which avoids repeated scanning and thus improves modeling efficiency. The process of rescanning an unsuccessfully modeled area and performing parameterization and/or three-dimensional modeling on it is the same as described above and is not repeated here.
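The secondary-confirmation flow might look like the following sketch, where the per-area state strings are illustrative:

```python
def regions_to_rescan(region_states: dict) -> list:
    """On 'scan complete', return the areas whose parameterization and/or
    modeling has not succeeded; an empty list means a closed space was formed
    and the interface can jump to the next processing stage."""
    done = {"parameterized", "modeled"}
    return [rid for rid, state in region_states.items() if state not in done]

pending = regions_to_rescan({"wall_1": "modeled", "wall_2": "parameterizing"})
# pending == ["wall_2"]  -> prompt the user and rescan only this area
```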
Further, when the processed areas form a closed space, a navigation guidance interface is displayed to guide the user to enter other subspaces and continue live-action scanning, that is, the interface jumps back to the third page, which includes at least two other subspaces, such as a kitchen, a bedroom, or a bathroom; in response to a subspace selection operation, a navigation path from the current subspace to the selected subspace is displayed to guide the user to carry the terminal device into the selected subspace and continue live-action scanning there.
Further, when it is determined that every subspace has formed a closed space, a home decoration design page is displayed, and in response to a home decoration design triggering operation by the user, home decoration design is performed on the three-dimensional space model corresponding to the target entity space. The home decoration design includes at least hard decoration structural surface design and decoration style design; the hard decoration structural surface design page is shown in fig. 2h, and the decoration style design page is shown in fig. 2i. Specifically, performing home decoration design on the three-dimensional space model corresponding to the target entity space in response to the triggering operation can be implemented as follows: splice the three-dimensional space models corresponding to the subspaces according to the relative positional relations among the subspaces to obtain a three-dimensional house model corresponding to the target house; in response to the home decoration design triggering operation, display the three-dimensional house model, which includes the three-dimensional space models corresponding to the subspaces; and in response to a roaming operation, when roaming into the three-dimensional space model corresponding to the target entity space, perform home decoration design on that model and output a home decoration design effect diagram.
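Splicing the subspace models by their relative positions can be sketched as a simple translation of each model's vertices; the "offset" and "vertices" fields are assumptions about how each subspace model might be stored:

```python
def assemble_house_model(subspaces: dict) -> list:
    """Translate every subspace model by its relative offset so that all
    subspaces share one coordinate frame in the three-dimensional house model."""
    house = []
    for name, sub in subspaces.items():
        ox, oy, oz = sub["offset"]
        placed = [(x + ox, y + oy, z + oz) for x, y, z in sub["vertices"]]
        house.append({"name": name, "vertices": placed})
    return house

house = assemble_house_model({
    "living_room": {"offset": (0.0, 0.0, 0.0), "vertices": [(0, 0, 0), (5, 4, 2.8)]},
    "kitchen": {"offset": (5.0, 0.0, 0.0), "vertices": [(0, 0, 0), (3, 4, 2.8)]},
})
```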
In this embodiment, roaming into the three-dimensional space model corresponding to the target entity space in response to a roaming operation can be implemented as follows. The live-action picture of the target entity space contains a plurality of preset roaming points, and the picture can be switched between these roaming points; at different roaming points the user sees different areas, for example the three-dimensional scene of the living room when roaming to the living room, of the kitchen when roaming to the kitchen, and of the master bedroom when roaming to the bedroom. Roaming routes can be preset between different roaming points. For example, a roaming control can be provided on the graphical user interface; when the user initiates a roaming operation through this control, the interface displays a list of roaming point names, such as living room, bedroom, or dining room, and the user selects one as needed. The terminal device senses the roaming operation, determines the target roaming point, and roams from the current roaming point to the target roaming point along the preset route. Besides initiating roaming through the roaming control, the user may also tap a position on the graphical user interface; the terminal device then locks the target roaming point according to the tapped position (a tap within a certain range of a roaming point locks that point as the target) and roams from the current roaming point to the target roaming point along the preset path.
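Locking the target roaming point from a tap can be sketched as a nearest-point search within a radius; the radius value is an illustrative assumption:

```python
import math

def lock_roaming_point(tap: tuple, roaming_points: dict, radius: float = 1.5):
    """Return the name of the roaming point within `radius` of the tapped
    position, i.e. the point that gets locked as the roaming target."""
    best_name, best_dist = None, radius
    for name, position in roaming_points.items():
        dist = math.dist(tap, position)
        if dist <= best_dist:
            best_name, best_dist = name, dist
    return best_name

target = lock_roaming_point((3.1, 0.4), {"living_room": (3.0, 0.0), "kitchen": (8.0, 2.0)})
# target == "living_room"; the device then roams to it along the preset route
```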
It should be noted that, besides roaming from one roaming point to another, the user may also switch viewing angles at the current position, that is, change the viewing angle so as to observe the same spatial region from different angles. For example, a view-angle switching control may be provided on the graphical user interface; the user initiates a view-angle switching operation by sliding the control left or right, and the camera swings by the corresponding angle to change the viewing angle, completing the switch. Besides sliding the control, the user may tap the view-angle switching control; the terminal device then responds to the touch operation by displaying a list of selectable viewing angles on the graphical user interface, such as a 30-degree front-left viewing angle, a 50-degree front-left viewing angle, a directly-ahead viewing angle, or a 30-degree rear-right viewing angle; the user selects a viewing angle as needed and observes the live-action picture from the selected angle.
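View-angle switching can be reduced to updating the camera yaw, either continuously from a swipe or from a preset list; the sensitivity value and the preset angles below are illustrative assumptions:

```python
def yaw_from_swipe(swipe_dx_px: float, sensitivity_deg_per_px: float = 0.2) -> float:
    """Map a horizontal swipe distance (in pixels) to a camera yaw change in degrees."""
    return swipe_dx_px * sensitivity_deg_per_px

PRESET_VIEWS = {"front_left_30": -30.0, "front_left_50": -50.0,
                "ahead": 0.0, "rear_right_30": 150.0}

def yaw_for_preset(view_name: str) -> float:
    """Absolute camera yaw (in degrees) for a viewing angle picked from the list."""
    return PRESET_VIEWS[view_name]
```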
In this embodiment, home decoration design is performed for the three-dimensional space model corresponding to the target entity space, as shown in fig. 2h and fig. 2i. Decoration controls for the hard decoration structural surfaces are displayed. In response to a triggering operation on the decoration control of any hard decoration structural surface, the available decoration types of that surface are displayed, and in response to the user's selection of a decoration type, a home decoration design effect diagram is output. For example, a home decoration effect diagram is generated in response to the user's selection of a wall surface, floor, skirting line, furniture, or decoration style; the furniture may be leather furniture, wood furniture, and so on, and the decoration style may be style 1, style 2, style 3, style 4, or the like.
In the embodiments of the present application, live-action scanning is performed on the target entity space, and parameterization and/or three-dimensional modeling is performed based on the scanned live-action pictures; because this information is close to the real data, the accuracy of parameterization and/or modeling is guaranteed. In addition, during parameterization and/or three-dimensional modeling, the parameterization and/or modeling state of each area is synchronously displayed on the live-action picture, so the user can learn in time whether parameterization and/or three-dimensional modeling of each area is finished. Further, corresponding measures can be taken according to the parameterization progress and/or modeling state of each area: for an area that has not yet been parameterized or modeled, the user can keep it within the camera's field of view and adjust factors such as the scanning speed so that the area can be parameterized and/or modeled, which ensures the completeness and accuracy of parameterization identification and/or model construction for the whole space; and only the areas that have not been processed, or whose parameterization or modeling was unsuccessful, need to be scanned, which avoids repeated scanning and improves modeling efficiency.
Fig. 3 is a schematic structural diagram of a physical space scanning device according to an exemplary embodiment of the present application. The device can be applied to terminal equipment, and the terminal equipment is located in the target entity space and can move. As shown in fig. 3, the apparatus includes:
the scanning module 31 is configured to perform live-action scanning on the target entity space during the moving process of the terminal device;
the display module 32 is configured to display a live-action picture currently scanned by the scanning module, where the live-action picture includes a part of physical space located in the scan field of view;
a processing module 33, configured to perform parameterization processing and/or three-dimensional space modeling processing on a part of the physical space step by step;
the display module 32 is further configured to: synchronously display, in the live-action picture, state characterization information dynamically adapted to the parameterization state and/or modeling state of the processed areas, where the processed areas and their parameterization and/or modeling states change dynamically, and different parameterization and/or modeling states correspond to different state characterization information.
Further, the processing module 33 is specifically configured to, when configured to perform parameterization and/or three-dimensional modeling on a part of the physical space step by step: and according to the image data quantity corresponding to the partial entity space, combining the image data quantity supported by single processing, and gradually carrying out parameterization processing and/or three-dimensional space modeling processing on different areas in the partial entity space.
Further, the display module 32, when used for synchronously displaying, in the live-action screen, state characterization information dynamically adapted to the parameterized and/or modeled states of the processed area, is specifically configured to: displaying a covering region corresponding to each processed region in a live-action picture, wherein each covering region comprises pattern information which is matched with the parameterization processing and/or modeling state of the corresponding processed region; according to the dynamic change of the parameterized processing and/or modeling state of each processed area, the pattern information in each mask area is dynamically updated, and different pattern information represents different parameterized processing and/or modeling states.
Further, the display module 32 is specifically configured to, when configured to display, in the live-action screen, a mask area corresponding to each processed area: for the current processed area, determining the last processed area adjacent to the current processed area according to the adjacent relation and the processing sequence between the current processed area and other processed areas; generating pattern information in the mask layer area corresponding to the current processed area according to the pattern information in the mask layer area corresponding to the last processed area; and displaying the corresponding mask region above the current processed region according to the pattern information in the mask region corresponding to the current processed region.
Further, the display module 32 is specifically configured to, when generating the pattern information in the current processed region corresponding to the mask region according to the pattern information in the previous processed region corresponding to the mask region: acquiring the tail end positions and the extensible directions of the extensible lines from the pattern information in the corresponding mask layer area of the previous processed area; extending the extensible lines to the current processed area according to the tail end positions and the extensible directions of the extensible lines, and overlapping the extensible lines in the current processed area to form pattern information in a mask layer area corresponding to the current processed area; wherein the plurality of extensible lines are not identical in visual property with respect to the pattern information formed by overlapping each other in the last processed area and in the current processed area.
Further, the modeling state of each processed area includes a modeling-in-progress state and a modeling-complete state, and the display module 32, when configured to dynamically update the pattern information in each mask region according to the dynamic change of the modeling state of each processed area, is specifically configured to: for any processed area, when the modeling state of the processed area changes from the modeling-in-progress state to the modeling-complete state, update the pattern information in the mask region corresponding to the processed area from a first visual state to a second visual state, where the visual attributes corresponding to the first visual state and the second visual state are not completely the same. Alternatively, the parameterization state of each processed area includes a parameterization-in-progress state and a parameterization-complete state, and dynamically updating the pattern information in each mask region according to the dynamic change of the parameterization state of each processed area includes: for any processed area, when the parameterization state of the processed area changes from the parameterization-in-progress state to the parameterization-complete state, updating the pattern information in the mask region corresponding to the processed area from a first visual state to a second visual state, where the visual attributes corresponding to the first visual state and the second visual state are not completely the same.
Further, the device is also used for: determining a spatial attribute formed by each processed region according to the adjacent relation among the processed regions, wherein the spatial attribute comprises at least one of the number of corners, the spatial area and the spatial height; under the condition that each processed area is determined to form a closed space according to the space attribute, displaying the constructed prompt information of parameterization processing and/or three-dimensional space model modeling processing corresponding to the target entity space; and under the condition that the three-dimensional space modeling processing is completed, responding to the home decoration design triggering operation, carrying out home decoration design on the three-dimensional space model corresponding to the target entity space, and outputting a home decoration design effect diagram.
Further, the target entity space is a subspace of the target house, and the device is further configured to, in a case where it is determined that each processed region has formed a closed space according to the spatial attribute: displaying a navigation guide interface for guiding a user to enter other subspaces to continue to perform live-action scanning, wherein the navigation guide interface comprises at least two other subspaces; and responding to the subspace selection operation, displaying a navigation path from the current subspace to the selected subspace, so as to guide the user to carry the terminal equipment into the selected subspace and continuously carrying out the live-action scanning on the selected subspace.
Further, the device is used for responding to home decoration design triggering operation, and when the device is used for carrying out home decoration aiming at the three-dimensional space model corresponding to the target entity space, the device is specifically used for: splicing the three-dimensional space models corresponding to the subspaces according to the relative position relation among the subspaces to obtain a three-dimensional house model corresponding to the target house; responding to the home decoration design triggering operation, and displaying a three-dimensional house model, wherein the three-dimensional house model comprises three-dimensional space models corresponding to all subspaces; responding to the roaming operation, and performing home decoration design on the three-dimensional space model corresponding to the target entity space under the condition of roaming into the three-dimensional space model corresponding to the target entity space.
Further, the processing module 33, when used in the process of three-dimensional space modeling of a part of the physical space, is further configured to: displaying a floating window on the live-action picture, dynamically displaying the construction process of the three-dimensional space model corresponding to the target entity space in the floating window, and enabling a linkage relation to exist between the currently constructed model part and the currently processed area.
Further, before the display module 32 is configured to perform a live-action scan on the target entity space, it is further configured to: displaying space entry guide information to prompt a user to carry terminal equipment into a target entity space; and displaying the mobile scanning guide information to prompt the user to carry the terminal equipment to carry out mobile scanning on the target entity space, wherein the mobile scanning guide information at least comprises mobile direction guide information and scanning mode guide information.
What needs to be explained here is: the entity space scanning device provided in the foregoing embodiment can implement the technical solutions described in the method embodiments above; for the specific implementation principle of each module or unit, reference may be made to the corresponding content in the method embodiments, which is not repeated here.
Fig. 4 is a schematic structural diagram of a terminal device according to an exemplary embodiment of the present application. A terminal device is moveable and positionable in a target entity space, the terminal device comprising: a memory 40a and a processor 40b; wherein the memory 40a is for storing computer programs/instructions, and the processor 40b is coupled to the memory 40a for executing the computer programs/instructions for performing the steps of:
in the moving process of the terminal device, performing live-action scanning on the target entity space; displaying the currently scanned live-action picture, where the live-action picture includes the part of the entity space located in the scanning field of view; performing parameterization processing and/or three-dimensional space modeling processing on the partial entity space step by step; and synchronously displaying, in the live-action picture, state characterization information dynamically adapted to the parameterization state and/or modeling state of the processed areas, where the processed areas and their parameterization and/or modeling states change dynamically, and different parameterization and/or modeling states correspond to different state characterization information.
Further, the processor 40b, when configured to perform parameterization processing and/or three-dimensional space modeling processing on the partial entity space step by step, is specifically configured to: according to the amount of image data corresponding to the partial entity space, in combination with the amount of image data supported by a single processing pass, gradually perform parameterization processing and/or three-dimensional space modeling processing on different areas in the partial entity space.
Further, the processor 40b, when used for synchronously displaying the state characterization information dynamically adapted to the parameterized and/or modeled state of the processed region in the live-action picture, is specifically configured to: displaying a covering region corresponding to each processed region in a live-action picture, wherein each covering region comprises pattern information which is matched with the parameterization processing and/or modeling state of the corresponding processed region; according to the dynamic change of the parameterized processing and/or modeling state of each processed area, the pattern information in each mask area is dynamically updated, and different pattern information represents different parameterized processing and/or modeling states.
Further, the processor 40b, when used for displaying the mask area corresponding to each processed area in the live-action picture, is specifically configured to: for the current processed area, determining the last processed area adjacent to the current processed area according to the adjacent relation and the processing sequence between the current processed area and other processed areas; generating pattern information in the mask layer area corresponding to the current processed area according to the pattern information in the mask layer area corresponding to the last processed area; and displaying the corresponding mask region above the current processed region according to the pattern information in the mask region corresponding to the current processed region.
Further, the processor 40b is specifically configured to, when generating the pattern information in the current processed region corresponding to the mask region according to the pattern information in the previous processed region corresponding to the mask region: acquiring the tail end positions and the extensible directions of the extensible lines from the pattern information in the corresponding mask layer area of the previous processed area; extending the extensible lines to the current processed area according to the tail end positions and the extensible directions of the extensible lines, and overlapping the extensible lines in the current processed area to form pattern information in a mask layer area corresponding to the current processed area; wherein the plurality of extensible lines are not identical in visual property with respect to the pattern information formed by overlapping each other in the last processed area and in the current processed area.
Further, the modeling state of each processed area includes a modeling-in-progress state and a modeling-complete state, and the processor 40b, when configured to dynamically update the pattern information in each mask region according to the dynamic change of the modeling state of each processed area, is specifically configured to: for any processed area, when the modeling state of the processed area changes from the modeling-in-progress state to the modeling-complete state, update the pattern information in the mask region corresponding to the processed area from a first visual state to a second visual state, where the visual attributes corresponding to the first visual state and the second visual state are not completely the same. Alternatively, the parameterization state of each processed area includes a parameterization-in-progress state and a parameterization-complete state, and dynamically updating the pattern information in each mask region according to the dynamic change of the parameterization state of each processed area includes: for any processed area, when the parameterization state of the processed area changes from the parameterization-in-progress state to the parameterization-complete state, updating the pattern information in the mask region corresponding to the processed area from a first visual state to a second visual state, where the visual attributes corresponding to the first visual state and the second visual state are not completely the same.
Further, the processor 40b is further configured to: determining a spatial attribute formed by each processed region according to the adjacent relation among the processed regions, wherein the spatial attribute comprises at least one of the number of corners, the spatial area and the spatial height; under the condition that each processed area is determined to form a closed space according to the space attribute, displaying the constructed prompt information of parameterization processing and/or three-dimensional space model modeling processing corresponding to the target entity space; and under the condition that the three-dimensional space modeling processing is completed, responding to the home decoration design triggering operation, carrying out home decoration design on the three-dimensional space model corresponding to the target entity space, and outputting a home decoration design effect diagram.
Further, the target entity space is a subspace of the target house, and the processor 40b is further configured to, in a case where it is determined that each processed region has formed a closed space according to the spatial attribute: displaying a navigation guide interface for guiding a user to enter other subspaces to continue to perform live-action scanning, wherein the navigation guide interface comprises at least two other subspaces; and responding to the subspace selection operation, displaying a navigation path from the current subspace to the selected subspace, so as to guide the user to carry the terminal equipment into the selected subspace and continuously carrying out the live-action scanning on the selected subspace.
Further, the processor 40b is configured to, when performing home design for the three-dimensional space model corresponding to the target physical space in response to the home design triggering operation, specifically: splicing the three-dimensional space models corresponding to the subspaces according to the relative position relation among the subspaces to obtain a three-dimensional house model corresponding to the target house; responding to the home decoration design triggering operation, and displaying a three-dimensional house model, wherein the three-dimensional house model comprises three-dimensional space models corresponding to all subspaces; responding to the roaming operation, and performing home decoration design on the three-dimensional space model corresponding to the target entity space under the condition of roaming into the three-dimensional space model corresponding to the target entity space.
Further, the processor 40b is further configured to, in the process for performing three-dimensional modeling on the part of the physical space step by step: displaying a floating window on the live-action picture, dynamically displaying the construction process of the three-dimensional space model corresponding to the target entity space in the floating window, and enabling a linkage relation to exist between the currently constructed model part and the currently processed area.
Further, the processor 40b, before being used for performing a live-action scan on the target entity space, is further configured to: displaying space entry guide information to prompt a user to carry terminal equipment into a target entity space; and displaying the mobile scanning guide information to prompt the user to carry the terminal equipment to carry out mobile scanning on the target entity space, wherein the mobile scanning guide information at least comprises mobile direction guide information and scanning mode guide information.
Further, as shown in fig. 4, the terminal device further includes: a display 40c, a communication component 40d, a power component 40e, an audio component 40f, and other components. Only part of the components are schematically shown in fig. 4, which does not mean that the terminal device only comprises the components shown in fig. 4. The terminal device of the embodiment may be implemented as a terminal device such as a desktop computer, a notebook computer, a smart phone, or an IOT device.
What needs to be explained here is: the terminal device provided in the foregoing embodiments may implement the technical solutions described in the foregoing method embodiments, and the specific implementation principles of the foregoing modules or units may refer to corresponding contents in the foregoing method embodiments, which are not described herein again.
An exemplary embodiment of the present application also provides a computer-readable storage medium storing a computer program/instruction that, when executed by one or more processors, cause the one or more processors to implement the steps in the method embodiments of the present application as described above.
What needs to be explained here is: the storage medium provided in the foregoing embodiments may implement the technical solutions described in the foregoing method embodiments, and the specific implementation principles of the foregoing modules or units may refer to corresponding contents in the foregoing method embodiments, which are not repeated herein.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (14)

1. A physical space scanning method, adapted for a terminal device, the terminal device being located in a target physical space and being movable, the method comprising:
in the moving process of the terminal equipment, carrying out live-action scanning on the target entity space, and displaying a live-action picture which is currently scanned, wherein the live-action picture comprises a part of entity space positioned in a scanning view field;
and performing parameterization processing and/or three-dimensional space modeling processing on the partial entity space step by step, while synchronously displaying, in the live-action picture, a covering region corresponding to each processed area, the covering region carrying pattern information corresponding to state characterization information dynamically adapted to the parameterization state and/or modeling state of each processed area, wherein the processed areas and their parameterization states and/or modeling states change dynamically, and different parameterization states and/or modeling states correspond to different state characterization information; wherein the pattern information in the covering region corresponding to each processed area is generated according to the pattern information in the covering region corresponding to the last processed area adjacent to that processed area, the last processed area being determined according to the adjacency relation and the processing order between that processed area and the other processed areas.
2. The method according to claim 1, wherein the step-wise parameterizing and/or three-dimensional space modeling of the part of the physical space comprises:
and according to the image data quantity corresponding to the partial entity space, combining the image data quantity supported by single processing, and gradually carrying out parameterization processing and/or three-dimensional space modeling processing on different areas in the partial entity space.
3. Method according to claim 1, characterized in that the synchronous display of state characterization information dynamically adapted to the parameterized and/or modeled state of the processed area in the live-action picture comprises:
displaying a covering region corresponding to each processed region in the live-action picture, wherein each covering region comprises pattern information adapted to the parameterized state and/or modeling state of the corresponding processed region;
according to the dynamic change of the parameterized state and/or the modeling state of each processed area, the pattern information in each mask area is dynamically updated, and different pattern information represents different parameterized states and/or modeling states.
4. A method according to claim 3, wherein displaying, in the live-action picture, a mask area corresponding to each processed area, comprises:
For the current processed area, determining the last processed area adjacent to the current processed area according to the adjacent relation and the processing sequence between the current processed area and other processed areas;
generating pattern information in the mask layer area corresponding to the current processed area according to the pattern information in the mask layer area corresponding to the last processed area;
and displaying the corresponding mask region above the current processed region according to the pattern information in the mask region corresponding to the current processed region.
5. The method of claim 4, wherein generating the pattern information in the mask region corresponding to the current processed region based on the pattern information in the mask region corresponding to the last processed region, comprises:
acquiring the tail end positions and the extensible directions of the extensible lines from the pattern information in the corresponding mask layer area of the previous processed area;
extending the plurality of extensible lines to the current processed area according to the tail end positions and the extensible directions of the plurality of extensible lines, wherein the plurality of extensible lines are mutually overlapped in the current processed area to form pattern information in a covering area corresponding to the current processed area;
Wherein the plurality of extensible lines are not identical in visual property with respect to pattern information formed by overlapping each other in the last processed area and the current processed area.
6. A method according to claim 3, wherein the modeling state of each processed area includes a modeling-in-progress state and a modeling-complete state, and dynamically updating the pattern information in each mask region according to the dynamic change of the modeling state of each processed area comprises: for any processed area, when the modeling state of the processed area changes from the modeling-in-progress state to the modeling-complete state, updating the pattern information in the mask region corresponding to the processed area from a first visual state to a second visual state, wherein the visual attributes corresponding to the first visual state and the second visual state are not completely the same;
or,
the parameterization state of each processed area includes a parameterization-in-progress state and a parameterization-complete state, and dynamically updating the pattern information in each mask region according to the dynamic change of the parameterization state of each processed area comprises: for any processed area, when the parameterization state of the processed area changes from the parameterization-in-progress state to the parameterization-complete state, updating the pattern information in the mask region corresponding to the processed area from a first visual state to a second visual state, wherein the visual attributes corresponding to the first visual state and the second visual state are not completely the same.
7. The method of any one of claims 1-6, further comprising:
determining a spatial attribute formed by each processed region according to the adjacent relation among the processed regions, wherein the spatial attribute comprises at least one of the number of corners, the spatial area and the spatial height;
under the condition that each processed area is determined to form a closed space according to the space attribute, displaying prompt information that parameterization processing and/or three-dimensional space modeling processing corresponding to the target entity space are completed; and
and under the condition that the three-dimensional space modeling processing is completed, responding to home decoration design triggering operation, carrying out home decoration design on the three-dimensional space model corresponding to the target entity space, and outputting a home decoration design effect diagram.
8. The method of claim 7, wherein the target entity space is a subspace of a target house, and wherein in the event that the respective processed regions are determined to have formed a closed space based on the spatial attribute, the method further comprises:
displaying a navigation guide interface for guiding a user to enter other subspaces to continue to perform live-action scanning, wherein the navigation guide interface comprises at least two other subspaces; and
And responding to the subspace selection operation, displaying a navigation path from the current subspace to the selected subspace, so as to guide the user to carry the terminal equipment into the selected subspace and continuously carrying out live-action scanning on the selected subspace.
9. The method of claim 8, wherein in response to a home design trigger operation, home design is performed for the three-dimensional space model corresponding to the target entity space, comprising:
splicing the three-dimensional space models corresponding to the subspaces according to the relative position relation among the subspaces to obtain a three-dimensional house model corresponding to the target house;
responding to the home decoration design triggering operation, and displaying the three-dimensional house model, wherein the three-dimensional house model comprises a three-dimensional space model corresponding to each subspace;
responding to the roaming operation, and performing home decoration design on the three-dimensional space model corresponding to the target entity space under the condition of roaming into the three-dimensional space model corresponding to the target entity space.
10. The method of any one of claims 1-6, further comprising, during the step-wise three-dimensional spatial modeling of the portion of physical space:
Displaying a floating window on the live-action picture, dynamically displaying the construction process of the three-dimensional space model corresponding to the target entity space in the floating window, and enabling a linkage relation to exist between a currently constructed model part and a currently processed area.
11. The method of any of claims 1-6, further comprising, prior to performing a live-action scan of the target entity space, at least one of:
displaying space entry guide information to prompt a user to enter the target entity space with the terminal equipment;
and displaying mobile scanning guide information to prompt a user to carry the terminal equipment to carry out mobile scanning on the target entity space, wherein the mobile scanning guide information at least comprises mobile direction guide information and scanning mode guide information.
12. A physical space scanning apparatus, applicable to a terminal device, the terminal device being located in a target physical space and being movable, the apparatus comprising:
the scanning module is used for carrying out live-action scanning on the target entity space in the moving process of the terminal equipment;
the display module is used for displaying the real scene picture currently scanned by the scanning module, and the real scene picture comprises a part of entity space positioned in the scanning view field;
The processing module is used for carrying out parameterization processing and/or three-dimensional space modeling processing on the part of the entity space step by step;
the display module is further configured to: synchronously display, in the live-action picture, a covering region corresponding to each processed area, the covering region carrying pattern information corresponding to state characterization information dynamically adapted to the parameterization state and/or modeling state of each processed area, wherein the processed areas and their parameterization states and/or modeling states change dynamically, and different parameterization states and/or modeling states correspond to different state characterization information; and wherein the pattern information in the covering region corresponding to each processed area is generated according to the pattern information in the covering region corresponding to the last processed area adjacent to that processed area, the last processed area being determined according to the adjacency relation and the processing order between that processed area and the other processed areas.
13. A terminal device, wherein the terminal device is moveable and positionable in a target entity space, the terminal device comprising: a memory and a processor; wherein the memory is for storing a computer program/instruction, the processor being coupled to the memory for executing the computer program/instruction for implementing the steps in the method of any of claims 1-11.
14. A computer readable storage medium storing a computer program/instructions which, when executed by a processor, cause the processor to carry out the steps of the method of any one of claims 1 to 11.
CN202210267652.6A 2022-03-17 2022-03-17 Entity space scanning method, device, terminal equipment and storage medium Active CN114727090B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210267652.6A CN114727090B (en) 2022-03-17 2022-03-17 Entity space scanning method, device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210267652.6A CN114727090B (en) 2022-03-17 2022-03-17 Entity space scanning method, device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114727090A CN114727090A (en) 2022-07-08
CN114727090B true CN114727090B (en) 2024-01-26

Family

ID=82237176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210267652.6A Active CN114727090B (en) 2022-03-17 2022-03-17 Entity space scanning method, device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114727090B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013174867A1 (en) * 2012-05-22 2013-11-28 Pimaia Method for modeling a building or a room of same on the basis of a limited number of photographs of the walls thereof
WO2018005059A1 (en) * 2016-06-30 2018-01-04 Microsoft Technology Licensing, Llc Three-dimensional object scanning feedback
US9881425B1 (en) * 2016-09-09 2018-01-30 The Boeing Company Synchronized side-by-side display of real and virtual environments
CN111161144A (en) * 2019-12-18 2020-05-15 北京城市网邻信息技术有限公司 Panorama acquisition method, panorama acquisition device and storage medium
US10699404B1 (en) * 2017-11-22 2020-06-30 State Farm Mutual Automobile Insurance Company Guided vehicle capture for virtual model generation
CN111932666A (en) * 2020-07-17 2020-11-13 北京字节跳动网络技术有限公司 Reconstruction method and device of house three-dimensional virtual image and electronic equipment
WO2021249390A1 (en) * 2020-06-12 2021-12-16 贝壳技术有限公司 Method and apparatus for implementing augmented reality, storage medium, and electronic device
CN114003322A (en) * 2021-09-16 2022-02-01 北京城市网邻信息技术有限公司 Method, equipment and device for displaying real scene space of house and storage medium
CN114186311A (en) * 2021-11-30 2022-03-15 北京城市网邻信息技术有限公司 Information display method, equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200349758A1 (en) * 2017-05-31 2020-11-05 Ethan Bryce Paulson Method and System for the 3D Design and Calibration of 2D Substrates
US10872467B2 (en) * 2018-06-06 2020-12-22 Ke.Com (Beijing) Technology Co., Ltd. Method for data collection and model generation of house
CN111369424A (en) * 2020-02-10 2020-07-03 北京城市网邻信息技术有限公司 Method, device, equipment and storage medium for generating three-dimensional space of target house

Also Published As

Publication number Publication date
CN114727090A (en) 2022-07-08

Similar Documents

Publication Publication Date Title
US10655969B2 (en) Determining and/or generating a navigation path through a captured three-dimensional model rendered on a device
US9940404B2 (en) Three-dimensional (3D) browsing
KR101842106B1 (en) Generating augmented reality content for unknown objects
US20220164493A1 (en) Automated Tools For Generating Mapping Information For Buildings
US6271842B1 (en) Navigation via environmental objects in three-dimensional workspace interactive displays
US20180225885A1 (en) Zone-based three-dimensional (3d) browsing
CN107870672B (en) Method and device for realizing menu panel in virtual reality scene and readable storage medium
JP7121811B2 (en) Method, apparatus, and storage medium for displaying three-dimensional spatial views
US20240078703A1 (en) Personalized scene image processing method, apparatus and storage medium
KR20190009081A (en) Method and system for authoring ar content by collecting ar content templates based on crowdsourcing
CN115690375B (en) Building model modification interaction method, system and terminal based on virtual reality technology
JP2020523668A (en) System and method for configuring virtual camera
Sun et al. Enabling participatory design of 3D virtual scenes on mobile devices
CN114727090B (en) Entity space scanning method, device, terminal equipment and storage medium
US20210241539A1 (en) Broker For Instancing
CN112612463A (en) Graphical programming control method, system and device
CN111210486B (en) Method and device for realizing streamer effect
CN111589151A (en) Method, device, equipment and storage medium for realizing interactive function
CN112181394A (en) Method, device and equipment for creating three-dimensional building model component
JP4091403B2 (en) Image simulation program, image simulation method, and image simulation apparatus
KR101806922B1 (en) Method and apparatus for producing a virtual reality content
CN113742507A (en) Method for three-dimensionally displaying an article and associated device
CN108268701A (en) Ceiling joist accessory moving method, system and electronic equipment
CN111617475A (en) Interactive object construction method, device, equipment and storage medium
JP7490684B2 (en) ROBOT CONTROL METHOD, DEVICE, STORAGE MEDIUM, ELECTRONIC DEVICE, PROGRAM PRODUCT, AND ROBOT

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant