CN114727090A - Entity space scanning method, device, terminal equipment and storage medium


Info

Publication number
CN114727090A
Authority
CN
China
Prior art keywords
state, space, modeling, parameterization, processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210267652.6A
Other languages
Chinese (zh)
Other versions
CN114727090B (en)
Inventor
卞文瀚
胡晓航
金星安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202210267652.6A
Publication of CN114727090A
Application granted
Publication of CN114727090B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/296 Synchronisation thereof; Control thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/24 Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/04 Architectural design, interior design

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Architecture (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application provides an entity space scanning method and apparatus, a terminal device and a storage medium. In the embodiment of the application, live-action scanning is performed on a target entity space, and parameterization processing is performed on the target entity space based on the scanned live-action picture, so that the resulting information is closer to real data. In addition, during the parameterization processing, the parameterization state of each area is synchronously displayed on the live-action picture, so that a user can know in time whether the parameterization of each area has been completed. Furthermore, corresponding measures can be taken according to the parameterization state of each region: the region can be kept within the field of view of the camera, and factors such as the scanning speed can be adjusted so that the region can be parameterized. This ensures the integrity and accuracy of the parameterization of the whole space, allows only the regions that have not been parameterized or whose parameterization failed to be scanned, avoids repeated scanning, and improves modeling efficiency.

Description

Entity space scanning method and device, terminal equipment and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method and an apparatus for scanning an entity space, a terminal device, and a storage medium.
Background
With the development of smart home technology, online home decoration has become increasingly popular. People can carry out home decoration design online through various home decoration tools. Existing home decoration tools offer more and more functions and can provide users with home decoration schemes of various styles, meeting users' growing home decoration needs.
Before providing a home decoration scheme for a user, a home decoration tool needs to obtain in advance the three-dimensional house model on which the user wishes to perform the home decoration design, so that the user can carry out the design within that model. In the prior art, home decoration tools typically construct a three-dimensional house model by scanning a house layout drawing. This way of acquiring the three-dimensional house model is inefficient, and the constructed model has low accuracy and differs to some extent from the actual house, which affects the effect of the later home decoration design.
Disclosure of Invention
Aspects of the present application provide a method, an apparatus, a terminal device and a storage medium for scanning an entity space, so as to solve the technical problems of low efficiency of building a three-dimensional house model and low precision of the built three-dimensional house model.
The embodiment of the application provides an entity space scanning method, which is applicable to a terminal device, wherein the terminal device is located in a target entity space and is movable, and the method includes the following steps: during the movement of the terminal device, performing live-action scanning on the target entity space, and displaying the currently scanned live-action picture, wherein the live-action picture includes a part of the entity space within the scanning field of view; and performing parameterization processing and/or three-dimensional space modeling processing on the part of the entity space step by step, and synchronously displaying, in the live-action picture, state representation information dynamically adapted to the parameterization and/or modeling state of each processed region, wherein the processed regions and their parameterization and/or modeling states change dynamically, and different parameterization and/or modeling states correspond to different state representation information.
The embodiment of the present application further provides an entity space scanning apparatus, which can be applied to a terminal device, where the terminal device is located in a target entity space and is movable, and the apparatus includes: the scanning module is used for carrying out real scene scanning on the target entity space in the moving process of the terminal equipment; the display module is used for displaying the live-action picture scanned by the scanning module currently, and the live-action picture comprises part of entity space in the scanning field of view; the processing module is used for carrying out parameterization processing and/or three-dimensional space modeling processing on part of the entity space step by step; the display module is further configured to: and synchronously displaying state representation information dynamically adapted to the parameterization processing and/or modeling state of the processed area in the live-action picture, wherein the processed area and the parameterization processing and/or modeling state thereof are dynamically changed, and different parameterization processing and/or modeling states correspond to different state representation information.
The embodiment of the present application further provides a terminal device, where the terminal device may be located in a target entity space and may be movable, and the terminal device includes: a memory and a processor; wherein the memory is used for storing a computer program/instructions, and the processor, coupled with the memory, is used for executing the computer program/instructions to implement the steps of the above method.
Embodiments of the present application also provide a computer readable storage medium storing a computer program/instructions, which when executed by a processor, causes the processor to implement the steps of the above-mentioned method.
In the embodiment of the application, live-action scanning is performed on the target entity space, and parameterization processing and/or three-dimensional modeling processing is performed on the target entity space based on the scanned live-action picture, so that the resulting information is closer to real data, which helps ensure the accuracy of the parameterization and/or modeling. In addition, during the parameterization and/or three-dimensional modeling process, the parameterization and/or modeling state of each area is synchronously displayed on the live-action picture, so that the user can know in time whether the parameterization and/or three-dimensional modeling of each area has been completed. Furthermore, corresponding measures can be taken according to the parameterization and/or modeling state of each region; for example, for regions that have not been parameterized or modeled, the region can be kept within the field of view of the camera and factors such as the scanning speed can be adjusted so that parameterization and/or three-dimensional modeling can be performed on the region. This ensures the integrity and accuracy of the parameterization and/or model construction for the whole space, and only the regions that have not been parameterized or modeled, or whose parameterization or modeling failed, need to be scanned, which avoids repeated scanning and improves modeling efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of a physical space scanning method according to an exemplary embodiment of the present application;
Figs. 2a to 2i are schematic diagrams of various pages shown on a graphical user interface provided by an exemplary embodiment of the present application;
fig. 3 is a schematic structural diagram of a physical space scanning apparatus according to an exemplary embodiment of the present application;
fig. 4 is a schematic structural diagram of a terminal device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Aiming at the technical problems of low building efficiency of a house model and low precision of a built three-dimensional house model in the prior art, in some embodiments of the application, a terminal device moves in a target entity space, performs live-action scanning on the target entity space in the moving process, and displays a currently scanned live-action picture comprising part of the entity space in a scanning view field; and on the basis of the scanned live-action picture, gradually carrying out parameterization processing and/or three-dimensional space modeling processing on part of the entity space, and synchronously displaying the parameterization state and/or modeling state of each area on the live-action picture in the parameterization processing and/or modeling process. The real scene picture obtained by scanning is subjected to parameterization processing and/or three-dimensional modeling processing, so that the information is closer to real data, and the accuracy of parameterization and/or modeling is favorably ensured; in addition, in the parameterization and/or modeling process, the parameterization state and/or modeling state of each region is synchronously displayed on the live-action picture, so that a user can conveniently know whether the parameterization process and/or three-dimensional modeling process of each region is completed or not in time.
Furthermore, corresponding measures can be taken according to the parameterization state and/or modeling state of each region; for example, for regions that have not been parameterized or modeled, the region can be kept within the field of view of the camera and factors such as the scanning speed can be adjusted so that the region can be parameterized or modeled. This ensures the integrity and accuracy of the parameterization and/or model construction for the whole space, and only the regions that have not been parameterized or modeled, or whose parameterization or modeling failed, need to be scanned, which avoids repeated scanning and improves modeling efficiency.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a physical space scanning method according to an exemplary embodiment of the present disclosure. The entity space scanning method is applicable to a terminal device, where the terminal device may be a local terminal device that stores an application program and is used for presenting a graphical user interface. The local terminal device interacts with the user through the graphical user interface, that is, the application program is downloaded, installed and run on the local terminal device. The local terminal device may provide the graphical user interface to the user in various ways, for example by rendering it on a display screen of the local terminal device, or by holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes an application screen, and a processor for running the application, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
When the terminal device is a local terminal device, the terminal device may be an intelligent handheld device, such as a smart phone, a tablet computer, a notebook computer or a desktop computer, or an intelligent wearable device, such as an intelligent watch, an intelligent bracelet, or various intelligent appliances with a display screen, such as an intelligent television, an intelligent large screen or an intelligent robot, but not limited thereto. The local terminal device is provided with an image acquisition device for scanning the environment where the terminal device is located to obtain an environment image or video, and the image acquisition device may be a camera or other device having a function of acquiring pictures or videos, but is not limited thereto.
Based on this, as shown in fig. 1, the entity space scanning method applicable to the terminal device provided in the embodiment of the present application includes:
101. in the moving process of the terminal equipment, real-scene scanning is carried out on a target entity space, and a currently scanned real-scene picture is displayed, wherein the real-scene picture comprises a part of entity space in a scanning view field;
102. performing parameterization processing and/or three-dimensional space modeling processing on the part of the entity space step by step, and synchronously displaying, in the live-action picture, state representation information dynamically adapted to the parameterization and/or modeling state of each processed region, wherein the processed regions and their parameterization and/or modeling states change dynamically, and different parameterization and/or modeling states correspond to different state representation information.
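As an aid to reading, the following is a minimal sketch, in Python, of how steps 101 and 102 could fit together: frames of the live-action picture are displayed while the regions they cover are parameterized step by step and their states are rendered as an overlay. It is only an illustration under assumptions, not the patented implementation; all names (ScanFrame, Region, parameterize_region, render_overlay) are hypothetical placeholders.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Dict, List


    class RegionState(Enum):
        NOT_STARTED = "not started"
        PARAMETERIZING = "parameterizing"
        PARAMETERIZED = "parameterized"


    @dataclass
    class Region:
        region_id: int
        state: RegionState = RegionState.NOT_STARTED


    @dataclass
    class ScanFrame:
        """One live-action picture covering part of the entity space."""
        frame_id: int
        region_ids: List[int]  # regions of the entity space visible in this frame


    def parameterize_region(region: Region) -> None:
        """Placeholder for the per-region parameterization task (step 102)."""
        region.state = RegionState.PARAMETERIZED


    def render_overlay(frame: ScanFrame, regions: Dict[int, Region]) -> str:
        """Textual stand-in for the state representation information shown on the picture."""
        return ", ".join(f"region {rid}: {regions[rid].state.value}" for rid in frame.region_ids)


    def scan_loop(frames: List[ScanFrame]) -> None:
        regions: Dict[int, Region] = {}
        for frame in frames:  # step 101: the currently scanned live-action picture
            for rid in frame.region_ids:
                region = regions.setdefault(rid, Region(rid))
                region.state = RegionState.PARAMETERIZING
                parameterize_region(region)  # step 102: parameterize step by step
            print(f"frame {frame.frame_id} overlay -> {render_overlay(frame, regions)}")


    if __name__ == "__main__":
        scan_loop([ScanFrame(0, [0, 1]), ScanFrame(1, [1, 2])])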
In this embodiment, the target physical space may be any physical space, for example, a subspace in a target physical room, and the target physical room is a real three-dimensional space existing in the real world, for example, a real house space, a shop space, a mall space, and the like, and accordingly, the target physical space may be a subspace in a single bedroom, a kitchen, a restaurant, or a bathroom of a house space, a single shop space, or a single shop in a mall space, but is not limited thereto.
In this embodiment, the terminal device may move in the target entity space: it may move by itself, be carried by the user, or be carried by another self-moving device (e.g., a robot or a mobile cart). As the terminal device moves, an image acquisition device (e.g., a camera) of the terminal device may perform live-action scanning on the target entity space and display the currently scanned live-action picture on the graphical user interface of the terminal device. The live-action picture includes the part of the entity space located within the scanning field of view of the image acquisition device, where the scanning field of view is the scannable view range at each position of the image acquisition device during the movement. The part of the entity space within the scanning field of view may be any part of the target entity space, and that part includes a target entity object, which includes at least a part of the hard decoration structure of the target entity space and may further include a soft decoration structure attached to that part of the hard decoration structure. The partial hard decoration structure may include parts of structures such as wall surfaces, the ceiling, the floor, wall corners and skirting lines of the target entity space, for example a partial wall surface, a partial floor and a partial skirting line, or only a partial wall surface, or only a partial floor; of course, the partial hard decoration structure may also include one or more complete structures, for example a complete wall surface, or a complete floor and skirting lines. The soft decoration structure on the partial hard decoration structure may include a portion of at least one of a suspended ceiling, a closet, a wall painting and various furniture embedded in the hard decoration structure, or may include one or more of such structures in their entirety.
In this embodiment, the moving direction of the terminal device in the target entity space is not limited; for example, the terminal device may rotate 360 degrees while moving, or move back and forth along a certain direction, and any moving manner whose movement can completely cover the target entity space is applicable to the embodiment of the present application. In addition, the moving speed of the terminal device in the target entity space is not limited either, as long as the acquired live-action picture is clear enough to meet the image-clarity requirement of three-dimensional space modeling. Of course, if the clarity of the live-action picture falls below a set clarity threshold, the terminal device may display a prompt message to prompt the user, or the autonomous mobile device carrying the terminal device, to reduce the moving speed, so as to ensure the clarity of the live-action picture.
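The clarity check mentioned above could be sketched as follows; the sharpness metric and the threshold value of 100 are assumptions used only for illustration, not values taken from this application.

    from typing import Optional


    def check_frame_clarity(sharpness_score: float, threshold: float = 100.0) -> Optional[str]:
        """Return a prompt when the live-action picture falls below the set clarity threshold."""
        if sharpness_score < threshold:
            return "The picture is not clear enough; please reduce the moving speed of the terminal device."
        return None


    # Example: a blurry frame triggers the prompt, a sharp one does not.
    assert check_frame_clarity(42.0) is not None
    assert check_frame_clarity(250.0) is None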
Optionally, application software with a live-action scanning function may be installed on the terminal device; for example, the application software may be three-dimensional model building software or decoration design software, such as various home decoration design software. Whatever the application software is, it needs to provide at least the functions of live-action scanning, parameterization of the scanned live-action picture, and three-dimensional space model construction based on the parameterization result. After the user opens the application software, the various pages provided by the application software can be displayed on the graphical user interface of the terminal device. These pages include at least a page for displaying the scanned live-action picture, on which other information related to the live-action picture, such as guidance information, pop-up windows and various controls, can further be displayed. It should be noted that, besides the page for displaying the live-action picture, the application software may include other pages, and the pages have jump relationships between them. In response to a touch operation of the user on any page, a corresponding task is executed or a jump is made to another page associated with the touch operation, and the form of the associated page depends on the type of the touch operation. The number of pages provided by the application software, the types of pages, and the jump relationships between pages may vary according to the type and functions of the application software. The following gives an exemplary description of home decoration design application software in conjunction with the examples shown in Figs. 2a to 2i.
Fig. 2a shows the top page of the home decoration design application software of this embodiment. To unify and distinguish the pages of this application, the top page may be defined as the first page, and subsequent pages may be defined in order of appearance as the second page, the third page, and so on. The first page provides at least a position information input box and a corresponding confirmation control. The position information input box can be used to manually enter, or automatically locate and fill in, the position information of the target entity space, and the position information of the target entity space can be indirectly expressed by information such as the name of a residential community, the name of a mall or the name of a shop. After the position information of the target entity space has been filled in, the page can jump to the second page in response to the user's confirmation of the position information, and the second page is shown in Fig. 2b. The second page displays at least a room-count classification column, an area classification column, a scan house layout control, a draw house layout control and a home decoration design control. The room-count classification column displays at least selection controls for 1 room, 2 rooms, 3 rooms, 4 rooms and more, which can be used to select the type of dwelling. The area classification column displays area information, which can be shown as value ranges, for example 50-90 m², 90-120 m² and 120-150 m², and these selection controls can be used to select the area of the dwelling. The user can start the operations of house layout live-action scanning, parameterization processing and three-dimensional space model construction by clicking the scan house layout control, and can start drawing a house layout plan by clicking the draw house layout control.
Further, the user can search whether the target entity space already exists through the room-count classification column and the area classification column on the second page; if it already exists, the home decoration design control can be triggered directly to carry out home decoration design on the target entity space. If the target entity space cannot be found in this way, the scan house layout control can be triggered directly, and in response to the user's triggering of the scan house layout control, the page jumps to the third page to perform live-action scanning on the target entity space, the third page being shown in Fig. 2c. The third page is divided into at least two parts. One part displays operation controls for the target entity space; since the target entity house here is a real residential space, this part displays selection controls for a living room, a dining room, a bedroom, a kitchen, a bathroom and the like. If the target entity space is a shop, this part displays selection controls such as a storage room and a commodity display room, but the target entity house is not limited thereto; it may also be an entity house of another type, and correspondingly the target entity space may be an entity space in that house and the controls may be selection controls corresponding to entity spaces of that type. The other part may be a static declaration area, which can display function introduction information of the page so that the user can understand and use its functions; for example, the function introduction information may be information such as "scan with your mobile phone to perform parameterization processing and/or generate the corresponding three-dimensional space model", but is not limited thereto, and the displayed information of the static declaration area can be updated in real time according to the functions of the page. Assuming that the user selects the living room as the target entity space to be scanned, the scanning link for the target entity space can be entered by triggering the "living room" control, as described below.
It should be noted that, in the above example, the user triggers the scan house layout control to enter the page shown in Fig. 2c, so that the user can further select a specific subspace as the target entity space, but this is not limiting. If the target entity house does not contain multiple subspaces, the interface shown in Fig. 2c can be skipped and the live-action scanning link can be entered directly.
In this embodiment, to perform live-action scanning on the target entity space, taking the interface shown in Fig. 2b or Fig. 2c as an example, the user may trigger the scan house layout control to start the live-action scanning, parameterization processing and three-dimensional space model building operations. To perform live-action scanning on the target entity space, the terminal device needs to be located in the target entity space and be able to move within it. Taking as an example the case where the user carries the terminal device into the target entity space and moves within it, the terminal device may output space-entry guidance information to the user in order to prompt the user to enter the target entity space. The space-entry guidance information may be image-and-text information or animation information with a space-entry guidance function, which may be displayed above the live-action picture, and may further include voice information, such as "please carry the terminal device into the space", to prompt the user to carry the terminal device into the target entity space. Further optionally, when the space-entry guidance information is image-and-text or animation information, a preset display time (e.g., 5 s, 3 s or 6 s) or a confirmation control may be set, and the guidance information may disappear automatically when the preset display time ends or after the user clicks the confirmation control.
After determining that the terminal device has entered the target entity space, the terminal device may further output mobile-scanning guidance information to the user to prompt the user to carry the terminal device and perform mobile scanning on the target entity space. The mobile-scanning guidance information may be image-and-text information or animation information with a mobile-scanning guidance function, and may further include voice information, such as "please carry the terminal device to move in the space or rotate 360 degrees", to prompt the user that the terminal device needs to be carried to perform mobile scanning on the target entity space. Further optionally, to help the user carry the terminal device to scan the target entity space more accurately and efficiently, the mobile-scanning guidance information may further include moving-direction guidance information and scanning-manner guidance information. The moving-direction guidance information guides the moving direction of the terminal device carried by the user, for example forward, then left, then right, and so on, and may be formed based on a two-dimensional space structure diagram (such as a floor plan) corresponding to the target entity space and adapted to its internal space structure. The scanning-manner guidance information guides the scanning manner while the user carries the terminal device, which may also be understood as the movement of the terminal device relative to the user, for example a 360-degree rotation scan, a top-to-bottom scan or a left-to-right scan. Likewise, further optionally, when the mobile-scanning guidance information is image-and-text or animation information, a preset display time (e.g., 4 s, 5 s or 6 s) or a confirmation control may be set, and the guidance information may disappear automatically when the preset display time ends or after the user clicks the confirmation control. It should be noted that the preset display times for different guidance information may be the same or different, which is not limited here.
Further, the following description takes the case where the user is currently located in a residential space as an example, but is not limited to the residential space; in this case, the third page displays selection controls such as living room, dining room, bedroom, kitchen and bathroom. Assuming the user is in the living room, the user may trigger the "living room" control on the third page, and in response to this triggering operation the page jumps to the fourth page, which is a scanning page used to scan the target entity space, as shown in Fig. 2d. The fourth page displays the live-action picture of the target entity space within the current field of view captured by the image acquisition device, and a pop-up window is displayed on the live-action picture, showing at least a guidance animation, guidance information and a related confirmation control. The guidance animation is a moving picture, displayed in real time after the user, the terminal device and the target entity space have been virtualized, that guides the user in the virtual space, accompanied by corresponding voice prompts; the guidance information is direction information guiding the user's movement, for example turn left, turn right, rotation angle, and so on; the related confirmation control may be a control with wording such as "I have walked to the destination". When scanning the target entity space, the user can perform spatial scanning at any position of the target entity space, can scan while moving, or can perform a 360-degree rotation scan at a fixed position, and information such as the position of the terminal device also needs to be acquired in real time during scanning.
Further, the following description takes the case where the initial scanning position is the center of the target entity space as an example, but is not limited to the center. The guidance information on the fourth page may be wording such as "please walk to the middle of the house first", and the related confirmation control may be a control with the wording "I have walked to the center position". The user can walk to the center of the target entity space following the guidance animation and voice prompts. After the center of the target entity space has been reached, the user can be informed of this by displaying a special symbol in the pop-up window or by a voice prompt; at this point the user can trigger the "I have walked to the center position" confirmation control, the pop-up window enters a hidden state, a "scan house" control and its guidance information enter a display state, and a voice prompt guides the user to trigger the "scan house" control and scan the target entity space.
Further, in response to the user triggering the "scan house" control, the fourth page enters a scanning state; correspondingly, the "scan house" control changes into a "scan complete" control, the pop-up window changes from the hidden state back to the display state, and the animation, guidance information and voice information on the pop-up window simultaneously prompt the user to rotate by an angle so as to guide the user to scan the target entity space in all directions. The pop-up window can be displayed intermittently according to a preset time interval and a preset display time, that is, the guidance automatically enters the hidden state after the preset display time has elapsed and re-enters the display state after the preset time interval, cycling back and forth so that the user can view the complete scanning page while receiving guidance. For example, the preset time interval may be 15 s and the preset display time 5 s: the pop-up window automatically enters the hidden state after being displayed for 5 s and re-enters the display state after being hidden for 15 s. The "scan complete" control both displays the scanning progress in real time and ends the scan once a closed scanning space has been formed. The scanning progress can be displayed in various forms; for example, it can be displayed around the "scan complete" control as a closed ring that gradually fills as the scanning progress increases until the ring is completely closed. After the progress ring has closed, the "scan complete" control can be triggered; if the trigger succeeds, the scan has formed a closed space and scanning is finished, with the progress display shown in Fig. 2e and the guidance animation flow shown in Fig. 2f. If triggering the "scan complete" control produces an error message indicating that some areas have not been scanned successfully, scanning continues according to the scanning guidance information until the "scan complete" control is triggered successfully, and then scanning ends.
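The intermittent pop-up schedule and the ring-style progress check described above could be sketched as follows; the 5 s display time and 15 s interval follow the example in the text, while the function names and the remaining logic are assumptions.

    def popup_visible(elapsed_s: float, show_s: float = 5.0, hide_s: float = 15.0) -> bool:
        """The pop-up is shown for show_s seconds, then hidden for hide_s seconds, cyclically."""
        return (elapsed_s % (show_s + hide_s)) < show_s


    def try_finish_scan(ring_progress: float) -> str:
        """The 'scan complete' control succeeds only once the progress ring has closed."""
        if ring_progress >= 1.0:
            return "Scanning finished: a closed scanning space has been formed."
        return "Some areas have not been scanned successfully; please continue scanning."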
In this embodiment, when the image acquisition device of the terminal device performs live-action scanning on the target entity space, parameterization processing and/or three-dimensional space modeling processing may be performed on a part of the scanned entity space step by step, where the parameterization processing and/or the three-dimensional space modeling processing may be performed by the terminal device, or scanned live-action picture data may be sent to the server through the terminal device, and the server performs a parameterization processing task and/or a three-dimensional space modeling processing task and feeds back a progress of processing the parameterization task and/or a progress of processing the three-dimensional modeling task to the terminal device.
In the embodiment of the present application, performing parameterization processing and/or three-dimensional space modeling processing on the part of the entity space contained in the scanned live-action picture covers the following two cases. In one case, only parameterization processing is performed on the part of the entity space contained in the scanned live-action picture, without three-dimensional space modeling, so as to obtain parameterized spatial data; in this case, state representation information dynamically adapted to the parameterization state of each processed region is synchronously displayed in the live-action picture. In the other case, the scanned part of the entity space is parameterized and then three-dimensional space modeling is performed based on the spatial data obtained by the parameterization, so as to obtain a three-dimensional space model corresponding to the target entity space; in this case, one may choose to synchronously display in the live-action picture state representation information dynamically adapted to the parameterization state of each processed region, or state representation information dynamically adapted to its modeling state, or state representation information adapted to both the parameterization state and the modeling state of the processed region. In this embodiment, the two operations of parameterization processing and three-dimensional space modeling may be executed asynchronously or synchronously in real time; in short, the three-dimensional modeling process may be asynchronous or real-time.
The asynchronous execution process performs parameterization step by step and accumulates the spatial data obtained by the parameterization; after a certain amount of data has been accumulated, three-dimensional space modeling is performed on the accumulated spatial data, and this process repeats until a complete three-dimensional space model corresponding to the target entity space is obtained. For example, the scanned part of the entity space is parameterized step by step each time, and after the parameterization of the scanned part of the entity space has been completed, three-dimensional space modeling is performed step by step using the parameterized data. In this case, in one embodiment, state representation information dynamically adapted to the parameterization state of each processed region is synchronously displayed in the live-action picture; in another embodiment, state representation information dynamically adapted to the modeling state of each processed region is synchronously displayed in the live-action picture.
The synchronous real-time execution process performs parameterization step by step and, each time spatial data is obtained by parameterization, immediately performs three-dimensional space modeling based on the latest spatial data. A specific embodiment may be as follows: the scanned part of the entity space is parameterized step by step, and whenever the latest spatial data is obtained through parameterization, three-dimensional space modeling is performed with that data; parameterization and three-dimensional space modeling are carried out in this manner in sequence until the scanned part of the entity space has been modeled. Because the parameterization and the three-dimensional space modeling are highly real-time, the parameterization state and the modeling state can be regarded as almost synchronous; in this case, the state representation information synchronously displayed in the live-action picture for each processed region can represent its parameterization state and modeling state at the same time. Of course, it may also be configured so that the state representation information synchronously displayed in the live-action picture represents only the parameterization state or only the modeling state of the processed region.
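The contrast between the asynchronous and the synchronous real-time execution modes described in the two preceding paragraphs can be sketched as follows; parameterize_chunk, build_model_increment and the batch size of 3 are hypothetical placeholders, not part of this application.

    from typing import Iterable, List


    def parameterize_chunk(chunk: str) -> dict:
        """Stand-in for one step-by-step parameterization run producing spatial data."""
        return {"source": chunk}


    def build_model_increment(spatial_data: List[dict]) -> None:
        """Stand-in for one three-dimensional space modeling run."""
        print(f"modeling with {len(spatial_data)} unit(s) of spatial data")


    def run_asynchronously(chunks: Iterable[str], batch_size: int = 3) -> None:
        """Accumulate parameterized spatial data and model only once enough has been gathered."""
        pending: List[dict] = []
        for chunk in chunks:
            pending.append(parameterize_chunk(chunk))
            if len(pending) >= batch_size:
                build_model_increment(pending)
                pending.clear()
        if pending:  # model whatever remains at the end of the scan
            build_model_increment(pending)


    def run_in_real_time(chunks: Iterable[str]) -> None:
        """Model immediately with the latest spatial data after every parameterization step."""
        for chunk in chunks:
            build_model_increment([parameterize_chunk(chunk)])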
Further, the parameterization processing is performed on the part of the entity space step by step; a specific implementation may be: according to the amount of image data corresponding to the part of the entity space, combined with the amount of image data supported by a single processing run, different regions in the part of the entity space are parameterized step by step. The parameterization processing may be understood as performing parameter recognition on the part of the entity space contained in the scanned live-action picture to obtain spatial data corresponding to that part of the entity space. The spatial data includes structure data of the part of the entity space, such as the space objects it contains and their length, width, height, position, area and type, and the structure data is used, in the three-dimensional modeling process, to construct the structure of the local space model corresponding to that part of the entity space. Further optionally, during the parameterization process, texture data of each space object contained in the part of the entity space may also be acquired; the texture data is used, in the three-dimensional modeling process, to render the texture information of the local space model corresponding to that part of the entity space, and the texture data and the structure data together constitute the spatial data. Optionally, the texture data may be generated automatically according to the type and structure data of each space object contained in the part of the entity space; for example, when a wall surface is recognized, its color, wallpaper material and pattern may be set automatically, and when the floor is recognized, the color and texture of the wood flooring, or of the floor tiles, used on the floor may be set automatically. In addition, the recognized space objects can be shown to the user through human-computer interaction, and the texture information items that can be set for each space object can be provided to the user so that the user can configure them according to personal preference; for example, the user can configure the color and material of a wall surface and the material of its wallpaper, or configure the floor to use wood flooring and set its color and texture, or configure the floor to use floor tiles and set texture data such as the tile type and pattern.
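The spatial data described above (structure data plus optional texture data per space object) could be organized roughly as follows; the field names and the default textures are assumptions used only for illustration.

    from dataclasses import dataclass
    from typing import Dict, Optional, Tuple


    @dataclass
    class SpaceObject:
        object_type: str                      # e.g. "wall", "floor", "skirting line"
        length_m: float
        width_m: float
        height_m: float
        position: Tuple[float, float, float]  # position within the target entity space
        texture: Optional[Dict[str, str]] = None


    DEFAULT_TEXTURES = {
        "wall": {"color": "white", "material": "latex paint"},
        "floor": {"color": "oak", "material": "wood flooring"},
    }


    def attach_default_texture(obj: SpaceObject) -> SpaceObject:
        """Auto-generate texture data from the recognized object type; the user may override it later."""
        if obj.texture is None:
            obj.texture = dict(DEFAULT_TEXTURES.get(obj.object_type, {}))
        return obj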
The image data amount supported by the single processing may be understood as the image data amount processed by the data thread running once, which may be determined according to the capability of the processor of the terminal device or the server device performing the parameterization processing, and is not limited thereto. The sequence of parameterization processing on the partial physical space may be determined by the moving direction of the image capturing device during scanning, based on the continuous region, for example, according to the moving direction of the image capturing device, the parameterization processing is performed on the region scanned first, and the parameterization processing is performed on the continuous region scanned later, but the parameterization processing mode is not limited thereto. To facilitate understanding of the process of performing the parameterization step by step, the following description is given by way of example. For example, the image data amount supported by the single processing is a, and the image data amount a corresponds to a partial region of the scanned real-scene image, so that the partial region is a single parameterized processing region, and further, along with the moving direction of the image acquisition device when scanning the partial physical space, other regions corresponding to the same image data amount a may be parameterized step by step until the parameterized processing of the real-scene image corresponding to the partial physical space is completed.
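The step-by-step splitting of the scanned data into regions according to the image data amount A supported by a single processing run could be sketched as follows; the per-run byte budget and the frame sizes in the example are arbitrary illustrative values.

    from typing import List


    def split_into_processing_regions(frame_sizes: List[int], max_bytes_per_run: int) -> List[List[int]]:
        """Group consecutively scanned frames (following the camera's moving direction) so that
        each group stays within the image data amount one parameterization run supports."""
        regions: List[List[int]] = []
        current: List[int] = []
        current_bytes = 0
        for size in frame_sizes:
            if current and current_bytes + size > max_bytes_per_run:
                regions.append(current)
                current, current_bytes = [], 0
            current.append(size)
            current_bytes += size
        if current:
            regions.append(current)
        return regions


    # Example with frame sizes in KB and a 100 KB per-run budget:
    # split_into_processing_regions([40, 70, 30, 90], 100) -> [[40], [70, 30], [90]]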
Further, on the basis of gradually performing parameterization processing on the scanned part of the entity space, an asynchronous or real-time processing mode can be adopted, and three-dimensional modeling processing can be performed on the part of the entity space gradually according to the space data obtained by the parameterization processing, and specific implementation modes can be divided into the following two modes: when the parameterization processing and the three-dimensional space modeling processing are executed asynchronously, according to the data volume after the parameterization processing corresponding to part of the entity space and in combination with the data volume supported by the single three-dimensional space modeling processing, the three-dimensional space modeling processing is carried out on different areas in the part of the entity space step by step; when the parameterization processing and the three-dimensional space modeling processing are executed in real time, the latest space data is utilized to carry out the three-dimensional space modeling processing each time the space data obtained by the parameterization processing is carried out until the scanned part of the entity space is modeled. The modeling process can be completed by a modeling thread, and the data volume processed by the modeling thread running once can be regarded as the data volume supported by single three-dimensional modeling processing; the sequence of three-dimensional modeling of the partial solid space can be determined by taking the sequence of parameterization processing as a reference, namely the moving direction of the image acquisition device during scanning, and specific examples are not repeated.
Further, in the embodiment of the present application, during the three-dimensional space modeling processing of a part of the entity space, a floating window may be displayed on the live-action picture, the building process of the three-dimensional space model corresponding to the target entity space is dynamically displayed in the floating window, and a linkage relationship exists between the currently-being-built model part and the currently-processed region.
Further, when partial entity space is parameterized and/or modeled in three-dimensional space step by step, state representation information dynamically adaptive to the parameterized state and/or modeled state of the processed area can be synchronously displayed in the live-action picture, so that a user can know the state and progress of the current parameterization and/or three-dimensional space modeling. For example, in the case of parameterization, regions that are not parameterized, are being parameterized, and are parameterized are distinguished from regions that are parameterized by state representation information displayed in the live-action scene. The processed region and the parameterized state thereof are dynamically changed, and different parameterized states correspond to different state representation information. For example, in the case of the three-dimensional space modeling process, it is possible to distinguish between regions that have been modeled, are being modeled, and have not started modeling, from the state representation information displayed in the live-action screen. The processed area and the modeling state thereof are dynamically changed, and different states correspond to different state representation information. In the embodiments of the present application, in the case of the parameterization processing, the processed regions are regions in which the parameterization processing has been started, and include a region in which the parameterization processing is being performed and a region in which the parameterization processing has been completed, and accordingly, the parameterization state of each processed region includes a parameterization performing state and a parameterization completing state. In the case of the three-dimensional space modeling process, the processed regions are regions for which the three-dimensional space modeling process has been started, including a region for which the three-dimensional space modeling is being performed and a region for which the three-dimensional space modeling has been completed, and accordingly, the modeling state of each processed region includes a modeling-in state and a modeling-completed state. In addition, the live-action scene also includes an area where parameterization or modeling is not started, and the corresponding state representation information can be displayed for the area.
Further, the state representation information dynamically adapted to the parameterization state and/or modeling state of each processed region is synchronously displayed in the live-action picture; a specific implementation is as follows: a mask layer region corresponding to each processed region is displayed on the live-action picture. Each mask layer region contains pattern information adapted to the parameterization state and/or modeling state of the processed region it corresponds to, and the mask layer regions are connected so as to form, in visual effect, one continuous mask layer. Thereafter, the pattern information in each mask layer region is dynamically updated according to the dynamic change of the parameterization state and/or modeling state of the corresponding processed region. Updating the pattern information in a mask layer region may mean updating any visualization attribute of the pattern information, such as the style of the pattern, the density of the pattern, the color of the pattern lines or the background color of the pattern. It should be noted that different pattern information represents different parameterization states and/or modeling states, and the visualization information of the patterns corresponding to different states cannot be exactly the same; that is, when the parameterization states and/or modeling states differ, at least one visualization attribute of the pattern information differs, so that the different states can be distinguished. In detail, when state representation information dynamically adapted to the parameterization state of the processed region is synchronously displayed in the live-action picture, different pattern information indicates different parameterization states, and patterns for different parameterization states differ in at least one visualization attribute, so that the parameterization states can be distinguished. When state representation information dynamically adapted to the modeling state of the processed region is synchronously displayed in the live-action picture, different pattern information indicates different modeling states, and patterns for different modeling states differ in at least one visualization attribute, so that the modeling states can be distinguished.
When the state representation information synchronously displayed in the live-action picture represents the parameterization state and the modeling state of the processed region at the same time, the same pattern information may reflect both states simultaneously; different pattern information then represents different combinations of parameterization state and modeling state, and the corresponding patterns differ in at least one visualization attribute whenever the parameterization state and modeling state differ, so that the different combinations can be distinguished.
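One possible way to represent the mask layer regions and their state-dependent visualization attributes is sketched below; the grid/blue/white values follow the example given in the next paragraph, while the data structure itself is an assumption for illustration only.

    from dataclasses import dataclass


    @dataclass(frozen=True)
    class PatternStyle:
        pattern: str     # style of the pattern, e.g. "grid"
        density: str     # density of the pattern, e.g. "sparse" / "dense"
        line_color: str  # color of the pattern lines
        background: str  # background color of the pattern


    # Different processing states map to styles that differ in at least one visualization attribute.
    STATE_STYLES = {
        "parameterizing": PatternStyle("grid", "dense", "white", "blue"),
        "parameterized": PatternStyle("grid", "sparse", "white", "blue"),
    }


    @dataclass
    class MaskLayerRegion:
        region_id: int
        state: str  # "parameterizing" or "parameterized"

        @property
        def style(self) -> PatternStyle:
            return STATE_STYLES[self.state]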
Further, a mask layer region corresponding to each processed region is displayed on the live-action picture; a specific embodiment is as follows: for the currently processed region, the previously processed region adjacent to it is determined according to the adjacency relationship and the processing order between the currently processed region and the other processed regions; the pattern information of the mask layer region corresponding to the currently processed region is generated according to the pattern information of the mask layer region corresponding to the previously processed region; and the corresponding mask layer region is displayed above the currently processed region according to that pattern information. For example, assume that the currently scanned live-action picture is divided into two adjacent regions A and B, region A being in the parameterization-completed state and region B in the parameterization-in-progress state, and that the density of the mask layer pattern is the attribute used to distinguish the processing states while the other attributes are the same. In this example, the color of the mask layer in region A is blue, the pattern on the mask layer is a grid, the grid is sparse, and the grid lines are white. Then, according to the adjacency between regions A and B and the processing order in which region A is processed before region B, it can be determined that the mask layer of the part of region B in the parameterization-in-progress state is blue, its pattern is a grid, the grid is dense, and the grid lines are white. As the parameterization of region B progresses towards completion, region B gradually changes from a dense grid to a sparse grid. The mask layer regions are shown in Fig. 2g. The choice of visualization attribute for the mask layer region is not limited to this; visualization attributes other than the grid form may also be used to distinguish the processing states.
Further, the pattern information in the covering layer region corresponding to the current processed region is generated from the pattern information in the covering layer region corresponding to the previous processed region in the following specific manner: the end positions and extensible directions of a plurality of extensible lines are acquired from the pattern information in the covering layer region corresponding to the previous processed region; the extensible lines are extended into the current processed region according to their respective end positions and extensible directions, and the extensible lines overlap one another in the current processed region to form the pattern information in the covering layer region corresponding to the current processed region. Because the previous processed region and the current processed region are not processed at the same time, the visualization attributes of the pattern information formed by the overlapping extensible lines in the two regions are not completely the same; for example, the pattern effect shown in fig. 2g may be presented.
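Purely as an illustration of how lines might be carried across the region boundary, the sketch below extends each line from its end position along its extensible direction until it leaves the bounds of the current region; the 2D point and direction representation and the bounding-box test are assumptions made for the example, not the disclosed implementation.

def extend_lines(prev_lines, current_region_bounds, step=1.0, max_steps=1000):
    # prev_lines: list of ((x, y), (dx, dy)) pairs: end position and extensible direction.
    xmin, ymin, xmax, ymax = current_region_bounds
    extended = []
    for (x, y), (dx, dy) in prev_lines:
        points = [(x, y)]
        for _ in range(max_steps):
            x, y = x + dx * step, y + dy * step
            if not (xmin <= x <= xmax and ymin <= y <= ymax):
                break
            points.append((x, y))
        extended.append(points)
    # The overlap of the extended polylines inside the current region forms the
    # pattern information of its covering layer region.
    return extended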
In this embodiment, for the parameterization processing, if the parameterization state of each processed region includes a parameterization-in-progress state and a parameterization-completed state, dynamically updating the pattern information in each covering layer region according to the dynamic change of the parameterization state of each processed region includes: for any processed region, when its parameterization state changes from the parameterization-in-progress state to the parameterization-completed state, updating the pattern information in the covering layer region corresponding to that processed region from a first visualization state to a second visualization state, where the visualization attributes corresponding to the first and second visualization states are not completely the same. Correspondingly, for the three-dimensional space modeling processing, the modeling state of each processed region includes an in-modeling state and a modeling-completed state, and the pattern information in each covering layer region is dynamically updated according to the dynamic change of the modeling state of each processed region in the following specific manner: for any processed region, when its modeling state changes from the in-modeling state to the modeling-completed state, updating the pattern information in the covering layer region corresponding to that processed region from a first visualization state to a second visualization state, where the visualization attributes corresponding to the first and second visualization states are not completely the same.
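A minimal sketch of this update rule, assuming the hypothetical state labels and style lookup from the earlier example; the overlay_styles dictionary standing in for the rendered covering layer is likewise an assumption.

def on_state_change(region, new_state, overlay_styles, style_lookup):
    # region: {"id": ..., "state": ...}; overlay_styles maps region id -> style;
    # style_lookup could be the STATE_TO_STYLE mapping from the earlier sketch.
    if region["state"] != new_state:
        region["state"] = new_state
        # First visualization state -> second visualization state: the two
        # styles differ in at least one visualization attribute.
        overlay_styles[region["id"]] = style_lookup[new_state]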
In this embodiment, for the parameterization processing, while the part of the entity space is parameterized step by step, a floating window may be displayed over the live-action picture, the parameterization process corresponding to the target entity space is displayed dynamically in the floating window, and there is a linkage relationship between the part currently being parameterized and the currently processed region.
Further, while the part of the entity space is parameterized step by step, state representation information dynamically adapted to the parameterization state of the processed regions can be displayed synchronously in the live-action picture, so that the user can conveniently follow the state and progress of the current parameterization and distinguish the regions that have been parameterized, are being parameterized, and have not yet been started.
It should be noted that the covering layer pattern information corresponding to the parameterization process needs to be distinguished from that corresponding to the three-dimensional space modeling process. The types of pattern information may be the same while their visualization states differ; for example, the color of the part of the covering layer whose parameterization has completed may be red, and the color of the part whose three-dimensional space modeling has completed may be green, but the embodiment is not limited thereto.
In this embodiment, the spatial attributes formed by the processed regions may also be determined according to the adjacency relationships among the processed regions, the spatial attributes including at least one of the number of corners, the spatial area, and the spatial height. The number of corners of each processed region is determined in the following specific manner: the processed regions are stitched together according to their adjacency relationships to determine the hard-mounted structural surfaces present in the processed regions, the hard-mounted structural surfaces including top surfaces, wall surfaces, and floor surfaces; the intersection of every three hard-mounted structural surfaces is determined as a corner, although the manner of determining the number of corners of each processed region is not limited thereto. The spatial area of each processed region is determined in the following specific manner: the area of each hard-mounted structural surface present in the processed region is determined, and these areas are summed to obtain the spatial area of that processed region, although the manner of determining the spatial area is not limited thereto. The spatial height of the processed regions is determined in the following specific manner: the processed regions are stitched together according to their adjacency relationships to obtain a three-dimensional space model of the processed regions, and the spatial height formed by the processed regions is determined based on that three-dimensional space model.
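By way of illustration, the sketch below computes a corner count and a total surface area from plane and surface descriptions. Representing each hard-mounted structural surface as a (normal, offset) plane plus an area value is an assumption made for this example, and a real implementation would additionally check that each intersection point actually lies on the three surfaces.

import numpy as np
from itertools import combinations

def intersect_three_planes(p1, p2, p3, eps=1e-9):
    # Each plane is (normal, offset) with normal . x = offset.
    A = np.array([p1[0], p2[0], p3[0]], dtype=float)
    b = np.array([p1[1], p2[1], p3[1]], dtype=float)
    if abs(np.linalg.det(A)) < eps:
        return None                      # parallel or degenerate surfaces
    return tuple(np.round(np.linalg.solve(A, b), 6))

def corner_count(planes):
    # A corner is taken as the intersection of every three hard-mounted
    # structural surfaces (e.g. two walls and the floor or the ceiling).
    corners = set()
    for trio in combinations(planes, 3):
        p = intersect_three_planes(*trio)
        if p is not None:
            corners.add(p)
    return len(corners)

def spatial_area(surfaces):
    # Sum of the areas of the hard-mounted structural surfaces in a region.
    return sum(s["area"] for s in surfaces)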
Further, scanning is determined to be complete when the scanning progress bar of the current target entity space forms a closed loop, and/or when the number of corners is 4, and/or when the sum of the spatial areas and the spatial heights of the processed regions are the same as the actual data of the target entity space.
Further, after the house-type scanning is finished, the soft-installation structures of the target entity space, such as doors and windows, are scanned according to the door-and-window scanning guide information on the graphical user interface until that scanning is finished.
Further, to improve the accuracy of judging whether a closed space has been formed, the "scanning completion" control can be triggered for a secondary confirmation. If, after the "scanning completion" control is triggered, prompt information is displayed indicating that the parameterization processing corresponding to the target entity space has completed and the three-dimensional space model has been constructed, or the interface jumps, in response to the trigger operation, from the fourth page to the graphical user interface of the next processing stage, namely a fifth page, this indicates that the "living room" has formed a closed space. If, after the "scanning completion" control is triggered, prompt information is displayed indicating that the parameterization processing and/or the three-dimensional space model corresponding to the living room has not been constructed, the regions of the target entity space in which the parameterization processing and/or the modeling was unsuccessful continue to be rescanned until a closed space is formed. In this embodiment, when a closed space has not been formed, only the regions whose parameterization processing and/or modeling was unsuccessful need to be rescanned, which avoids repeated scanning and improves modeling efficiency. The process of continuously rescanning the unsuccessfully modeled regions and performing parameterization processing and/or three-dimensional space modeling on them is the same as the process described above and is not repeated here.
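A minimal sketch of the rescan selection, assuming hypothetical region records with a "state" field and a closure flag computed elsewhere from the spatial attributes:

def regions_to_rescan(processed_regions, closed_space_formed):
    # If a closed space has formed, nothing needs rescanning and the prompt that
    # parameterization is done and the 3D space model is built can be shown.
    if closed_space_formed:
        return []
    # Otherwise rescan only regions whose parameterization or modeling did not succeed.
    return [r for r in processed_regions
            if r["state"] not in ("parameterization_done", "modeling_done")]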
Further, when it is determined that the processed regions have formed a closed space, a navigation guide interface is displayed to guide the user to enter another subspace and continue the live-action scanning; that is, the interface jumps back to the third page, which includes at least two other subspaces, such as a kitchen, a bedroom, or a bathroom. In response to a subspace selection operation, a navigation path from the current subspace to the selected subspace is displayed, so as to guide the user carrying the terminal device into the selected subspace to continue the live-action scanning of that subspace.
Further, when each subspace has formed a closed space, a home decoration design page is displayed, and in response to the user's home decoration design trigger operation, home decoration design is performed on the three-dimensional space model corresponding to the target entity space. The home decoration design includes at least the design of the hard-mounted structural surfaces and the design of the decoration style; the design page for the hard-mounted structural surfaces is shown in fig. 2h, and the design page for the decoration style is shown in fig. 2 i. Specifically, in response to the home decoration design trigger operation, home decoration design is performed on the three-dimensional space model corresponding to the target entity space in the following manner: the three-dimensional space models corresponding to the subspaces are spliced according to the relative position relationships among the subspaces to obtain the three-dimensional house model corresponding to the target house; in response to the home decoration design trigger operation, the three-dimensional house model is displayed, the three-dimensional house model including the three-dimensional space models corresponding to the subspaces; and in response to a roaming operation, when the user has roamed into the three-dimensional space model corresponding to the target entity space, home decoration design is performed on that three-dimensional space model and a home decoration design effect diagram is output.
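As an illustration of the splicing step only, the sketch below offsets each subspace model by its relative position to assemble a house model; the vertex-list representation and the offset dictionary are assumptions made for the example.

def splice_house_model(subspace_models, relative_offsets):
    # subspace_models: {name: list of (x, y, z) vertices}
    # relative_offsets: {name: (dx, dy, dz)} relative to a common house origin
    house = {}
    for name, vertices in subspace_models.items():
        dx, dy, dz = relative_offsets[name]
        # Translate each subspace model into the shared coordinate frame.
        house[name] = [(x + dx, y + dy, z + dz) for (x, y, z) in vertices]
    return house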
In this embodiment, roaming into the three-dimensional space model corresponding to the target entity space in response to the roaming operation is implemented as follows. The live-action picture of the target entity space includes a plurality of preset roaming points, and the live-action picture can be switched among them; at different roaming points the user sees different areas of the live-action picture, for example the three-dimensional scene of the living room when roaming to the living room, the three-dimensional scene of the kitchen when roaming to the kitchen, and the three-dimensional scene of a bedroom (for example the master bedroom) when roaming to that bedroom. Roaming paths can be preset between different roaming points. For example, a roaming control can be provided on the graphical user interface; when the user initiates a roaming operation through this control, the graphical user interface displays a list of roaming point names, such as living room, bedroom, or guest dining room, and the user selects a roaming point as needed. The terminal device senses the roaming operation initiated by the user, determines the position of the target roaming point, and roams from the current roaming point to the target roaming point along the preset roaming path. In addition to initiating a roaming operation through the roaming control, the user can tap a position on the graphical user interface; the terminal device locks the target roaming point according to the tapped position (as long as the tap falls within the range of a certain roaming point, that roaming point is locked as the target), and then roams from the current roaming point to the target roaming point along the preset roaming path.
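For illustration, a small Python sketch of locking a roaming point from a tap position and looking up the preset path; the point records, the locking radius, and the path table are hypothetical assumptions for the example.

import math

def lock_roaming_point(tap_xy, roaming_points, radius=1.5):
    # A tap anywhere within `radius` of a roaming point locks that point as the target.
    for p in roaming_points:            # p: {"name": ..., "position": (x, y)}
        if math.dist(tap_xy, p["position"]) <= radius:
            return p
    return None

def roam(current, target, paths):
    # Follow the preset path between the two roaming points if one exists;
    # otherwise fall back to a straight segment.
    return paths.get((current["name"], target["name"]),
                     [current["position"], target["position"]])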
It should be noted that, in addition to roaming from one roaming point to another in the live-action space, the user can also switch viewing angles at the current position, that is, observe the same spatial area from different viewing angles. For example, a viewing-angle switching control can be provided on the graphical user interface; the user can initiate a viewing-angle switching operation by sliding the control left or right, and the camera swings by the corresponding angle along with the operation to change the viewing angle and complete the switch. In addition to sliding the control left or right, the user can tap the viewing-angle switching control on the graphical user interface; the terminal device then responds to the touch operation by displaying a selectable viewing-angle list on the graphical user interface, the list containing a plurality of viewing angles the user can choose from, such as 30 degrees front-left, 50 degrees front-left, straight ahead, 30 degrees rear-right, and so on; the user selects a viewing angle as needed and then observes the live-action picture from the selected viewing angle.
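A minimal sketch of the two switching paths, with the camera reduced to a single yaw angle in degrees; this simplification, and the mapping of list entries to signed angles, are assumptions for the example rather than the embodiment's camera model.

def apply_view_switch(camera_yaw_deg, slide_delta_deg=None, preset_deg=None):
    # Picking from the viewing-angle list takes precedence over a slide gesture here.
    if preset_deg is not None:          # e.g. "30 degrees front-left" could map to -30
        return preset_deg % 360.0
    if slide_delta_deg is not None:     # swing the camera along with the slide
        return (camera_yaw_deg + slide_delta_deg) % 360.0
    return camera_yaw_deg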
In this embodiment, home decoration design is performed on the three-dimensional space model corresponding to the target entity space, as shown in fig. 2h and fig. 2 i. Decoration controls for the hard-mounted structures are displayed. In response to a trigger operation on the decoration control of any hard-mounted structure, a plurality of decoration types for that structure are shown, and in response to the user's selection of a decoration type, a home decoration design effect diagram is output. For example, the home decoration effect diagram is generated in response to the user's selection of a wall surface, floor surface, skirting line, furniture, decoration style, and the like. The furniture may be leather furniture, wooden furniture, and so on, and the decoration style may be style 1, style 2, style 3, style 4, and so on.
In the embodiment of the present application, live-action scanning is performed on the target entity space, and the parameterization processing and/or three-dimensional modeling processing are performed on the target entity space based on the scanned live-action picture, so that the information is closer to the real data, which helps ensure the accuracy of the parameterization and/or modeling. In addition, during the parameterization and/or three-dimensional modeling, the parameterization and/or modeling state of each region is displayed synchronously on the live-action picture, so that the user can know in time whether the parameterization and/or three-dimensional modeling of each region has finished. Furthermore, corresponding measures can be taken according to the parameterization and/or modeling state of each region; for example, for regions that have not yet been parameterized or modeled, the user can keep them within the camera's field of view and adjust factors such as the scanning speed so that they are parameterized and/or modeled, which ensures the completeness and accuracy of the parameterized identification and/or model construction of the whole space. Only the regions that have not been processed, or whose parameterization or modeling failed, need to be scanned again, which avoids repeated scanning and improves modeling efficiency.
Fig. 3 is a schematic structural diagram of a physical space scanning apparatus according to an exemplary embodiment of the present application. The device can be applied to terminal equipment, and the terminal equipment is positioned in the target entity space and can move. As shown in fig. 3, the apparatus includes:
the scanning module 31 is configured to perform live-action scanning on a target entity space in the moving process of the terminal device;
the display module 32 is configured to display a live-action picture currently scanned by the scanning module, where the live-action picture includes a part of the entity space located in the scanning field of view;
the processing module 33 is used for carrying out parameterization processing and/or three-dimensional space modeling processing on part of the entity space step by step;
the display module 32 is further configured to: synchronously display, in the live-action picture, state representation information dynamically adapted to the parameterization processing and/or modeling state of the processed region, wherein the processed region and its parameterization processing and/or modeling state change dynamically, and different parameterization processing and/or modeling states correspond to different state representation information.
Further, the processing module 33, when being configured to perform the parameterization processing and/or the three-dimensional space modeling processing on the partial solid space step by step, is specifically configured to: and according to the image data volume corresponding to the part of the entity space, combining the image data volume supported by single processing, and gradually carrying out parameterization processing and/or three-dimensional space modeling processing on different regions in the part of the entity space.
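As a non-limiting sketch of this step-by-step division, the fragment below groups scanned frames into regions whose total image data amount stays within what a single processing pass supports; the frame structure and the byte-count threshold are assumptions introduced for the example.

def split_into_regions(image_frames, max_bytes_per_pass):
    # image_frames: iterable of {"bytes": int, ...}; each returned group is one
    # region to be parameterized and/or modeled in a single pass.
    regions, current, size = [], [], 0
    for frame in image_frames:
        if current and size + frame["bytes"] > max_bytes_per_pass:
            regions.append(current)
            current, size = [], 0
        current.append(frame)
        size += frame["bytes"]
    if current:
        regions.append(current)
    return regions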
Further, the display module 32, when configured to synchronously display the state representation information dynamically adapted to the parameterization processing and/or the modeling state of the processed area in the live-action picture, is specifically configured to: displaying a covering layer area corresponding to each processed area in a live-action picture, wherein each covering layer area comprises pattern information which is adaptive to the parameterization processing and/or modeling state of the processed area corresponding to the covering layer area; and dynamically updating pattern information in each cladding region according to the dynamic change of the parameterization processing and/or modeling state of each processed region, wherein different pattern information represents different parameterization processing and/or modeling states.
Further, when the display module 32 is configured to display the covering layer regions corresponding to the processed regions in the live-action picture, specifically: aiming at the current processed area, determining the previous processed area adjacent to the current processed area according to the adjacency relation and the processing sequence between the current processed area and other processed areas; generating pattern information in the coating region corresponding to the current processed region according to the pattern information in the coating region corresponding to the previous processed region; and displaying the corresponding covering layer area above the current processed area according to the pattern information of the covering layer area corresponding to the current processed area.
Further, when the display module 32 is configured to generate the pattern information in the skin region corresponding to the current processed region according to the pattern information in the skin region corresponding to the previous processed region, specifically, the display module is configured to: acquiring the tail end positions and the extensible directions of a plurality of extensible lines from pattern information in the covering layer region corresponding to the previous processed region; extending the plurality of extensible lines to the current processed area according to the respective tail end positions and the extensible directions of the plurality of extensible lines, wherein the plurality of extensible lines are overlapped with each other in the current processed area to form pattern information in the covering area corresponding to the current processed area; and the visual properties of the pattern information formed by mutually overlapping the plurality of extensible lines in the last processed area and the current processed area are not completely the same.
Further, the modeling state of each processed region includes an in-modeling state and a modeling-completed state, and the display module 32, when configured to dynamically update the pattern information in each covering layer region according to the dynamic change of the modeling state of each processed region, is specifically configured to: for any processed region, when its modeling state changes from the in-modeling state to the modeling-completed state, update the pattern information in the covering layer region corresponding to that processed region from a first visualization state to a second visualization state, wherein the visualization attributes corresponding to the first visualization state and the second visualization state are not completely the same; or, if the parameterization state of each processed region includes a parameterization-in-progress state and a parameterization-completed state, dynamically updating the pattern information in each covering layer region according to the dynamic change of the parameterization state of each processed region includes: for any processed region, when its parameterization state changes from the parameterization-in-progress state to the parameterization-completed state, updating the pattern information in the covering layer region corresponding to that processed region from a first visualization state to a second visualization state, wherein the visualization attributes corresponding to the first visualization state and the second visualization state are not completely the same.
Further, the apparatus is further configured to: determining the formed spatial attributes of the processed regions according to the adjacency relation among the processed regions, wherein the spatial attributes comprise at least one of the number of corners, the spatial area and the spatial height; displaying constructed prompt information of parameterization processing and/or three-dimensional space model modeling processing corresponding to a target entity space under the condition that each processed area is determined to form a closed space according to the space attribute; and under the condition that the three-dimensional space modeling processing is finished, responding to the home decoration design triggering operation, carrying out home decoration design aiming at the three-dimensional space model corresponding to the target entity space, and outputting a home decoration design effect diagram.
Further, the target physical space is a subspace in the target house, and the apparatus is configured to, when determining that the respective processed regions have formed the closed space according to the spatial attributes, further: displaying a navigation guide interface for guiding a user to enter other subspaces to continue to carry out live-action scanning, wherein the navigation guide interface comprises at least two other subspaces; and responding to the subspace selection operation, and displaying a navigation path from the current subspace to the selected subspace so as to guide the user carrying the terminal equipment to enter the selected subspace and continue to carry out real-scene scanning on the selected subspace.
Further, the apparatus, when configured to respond to a home decoration design trigger operation and perform home decoration design on the three-dimensional space model corresponding to the target entity space, is specifically configured to: splice the three-dimensional space models corresponding to the subspaces according to the relative position relationships among the subspaces to obtain the three-dimensional house model corresponding to the target house; in response to the home decoration design trigger operation, display the three-dimensional house model, the three-dimensional house model including the three-dimensional space models corresponding to the subspaces; and in response to a roaming operation, when the user has roamed into the three-dimensional space model corresponding to the target entity space, perform home decoration design on that three-dimensional space model.
Further, the processing module 33, when being configured to gradually perform the three-dimensional modeling processing on the partial solid space, is further configured to: and displaying a floating window on the live-action picture, dynamically displaying the construction process of the three-dimensional space model corresponding to the target entity space in the floating window, wherein a linkage relation exists between the currently constructed model part and the currently processed area.
Further, before the display module 32 is used to perform live action scanning on the target physical space, it is further used to: displaying space entering guide information to prompt a user to carry a terminal device to enter a target entity space; and displaying mobile scanning guide information to prompt a user to carry terminal equipment to carry out mobile scanning on the target entity space, wherein the mobile scanning guide information at least comprises moving direction guide information and scanning mode guide information.
Here, it should be noted that: the real-world space scanning device provided in the above embodiments may implement the technical solutions described in the above method embodiments, and the specific implementation principles of the modules or units may refer to the corresponding contents in the above method embodiments, which are not described herein again.
Fig. 4 is a schematic structural diagram of a terminal device according to an exemplary embodiment of the present application. The terminal device is located in the target physical space and is movable, and the terminal device comprises: a memory 40a and a processor 40 b; wherein the memory 40a is for storing computer programs/instructions and the processor 40b is coupled with the memory 40a for executing the computer programs/instructions for implementing the steps of:
in the moving process of the terminal device, performing live-action scanning on the target entity space; displaying the currently scanned live-action picture, wherein the live-action picture includes the part of the entity space located in the scanning field of view; performing parameterization processing and/or three-dimensional space modeling processing on the part of the entity space step by step; and synchronously displaying, in the live-action picture, state representation information dynamically adapted to the parameterization processing and/or modeling state of the processed region, wherein the processed region and its parameterization processing and/or modeling state change dynamically, and different parameterization processing and/or modeling states correspond to different state representation information.
Further, the processor 40b, when configured to perform the parameterization processing and/or three-dimensional space modeling processing on the part of the entity space step by step, is specifically configured to: according to the image data amount corresponding to the part of the entity space, in combination with the image data amount supported by a single processing pass, gradually perform parameterization processing and/or three-dimensional space modeling processing on different regions in the part of the entity space.
Further, the processor 40b, when configured to synchronously display in the live-action picture the state characterizing information dynamically adapted to the parameterization processing and/or the modeling state of the processed area, is specifically configured to: displaying a covering layer area corresponding to each processed area in a live-action picture, wherein each covering layer area comprises pattern information which is adaptive to the parameterization processing and/or modeling state of the processed area corresponding to the covering layer area; and dynamically updating pattern information in each cladding region according to the dynamic change of the parameterization processing and/or modeling state of each processed region, wherein different pattern information represents different parameterization processing and/or modeling states.
Further, when the processor 40b is configured to display the covering layer regions corresponding to the processed regions on the live-action screen, specifically: aiming at the current processed area, determining the previous processed area adjacent to the current processed area according to the adjacency relation and the processing sequence between the current processed area and other processed areas; generating pattern information in the covering layer region corresponding to the current processed region according to the pattern information in the covering layer region corresponding to the previous processed region; and displaying the corresponding covering layer area above the current processed area according to the pattern information of the covering layer area corresponding to the current processed area.
Further, when the processor 40b is configured to generate the pattern information in the skin region corresponding to the current processed region according to the pattern information in the skin region corresponding to the previous processed region, specifically, the processor is configured to: acquiring the tail end positions and the extensible directions of a plurality of extensible lines from pattern information in the covering layer region corresponding to the previous processed region; extending the plurality of extensible lines to the current processed area according to the respective tail end positions and the extensible directions of the plurality of extensible lines, wherein the plurality of extensible lines are overlapped with each other in the current processed area to form pattern information in the covering area corresponding to the current processed area; and the visual properties of the pattern information formed by mutually overlapping the plurality of extensible lines in the last processed area and the current processed area are not completely the same.
Further, the modeling state of each processed region includes an in-modeling state and a modeling-completed state, and the processor 40b, when configured to dynamically update the pattern information in each covering layer region according to the dynamic change of the modeling state of each processed region, is specifically configured to: for any processed region, when its modeling state changes from the in-modeling state to the modeling-completed state, update the pattern information in the covering layer region corresponding to that processed region from a first visualization state to a second visualization state, wherein the visualization attributes corresponding to the first visualization state and the second visualization state are not completely the same; or, the parameterization state of each processed region includes a parameterization-in-progress state and a parameterization-completed state, and dynamically updating the pattern information in each covering layer region according to the dynamic change of the parameterization state of each processed region includes: for any processed region, when its parameterization state changes from the parameterization-in-progress state to the parameterization-completed state, updating the pattern information in the covering layer region corresponding to that processed region from a first visualization state to a second visualization state, wherein the visualization attributes corresponding to the first visualization state and the second visualization state are not completely the same.
Further, the processor 40b is further configured to: determining the formed spatial attributes of the processed regions according to the adjacency relation among the processed regions, wherein the spatial attributes comprise at least one of the number of corners, the spatial area and the spatial height; displaying constructed prompt information of parameterization processing and/or three-dimensional space model modeling processing corresponding to a target entity space under the condition that each processed area is determined to form a closed space according to the space attribute; and under the condition that the three-dimensional space modeling processing is finished, responding to the home decoration design triggering operation, carrying out home decoration design aiming at the three-dimensional space model corresponding to the target entity space, and outputting a home decoration design effect diagram.
Further, the target physical space is a subspace of the target house, and the processor 40b, when being configured to determine that each processed region has formed the closed space according to the spatial attribute, is further configured to: displaying a navigation guide interface for guiding a user to enter other subspaces to continue to carry out live-action scanning, wherein the navigation guide interface comprises at least two other subspaces; and responding to the subspace selection operation, and displaying a navigation path from the current subspace to the selected subspace so as to guide the user carrying the terminal equipment to enter the selected subspace and continue to carry out real-scene scanning on the selected subspace.
Further, the processor 40b, when configured to respond to the home decoration design trigger operation and perform home decoration design on the three-dimensional space model corresponding to the target entity space, is specifically configured to: splice the three-dimensional space models corresponding to the subspaces according to the relative position relationships among the subspaces to obtain the three-dimensional house model corresponding to the target house; in response to the home decoration design trigger operation, display the three-dimensional house model, the three-dimensional house model including the three-dimensional space models corresponding to the subspaces; and in response to a roaming operation, when the user has roamed into the three-dimensional space model corresponding to the target entity space, perform home decoration design on that three-dimensional space model.
Further, the processor 40b, in the process for gradually modeling the three-dimensional space of the partial solid space, is further configured to: and displaying a floating window on the live-action picture, dynamically displaying the construction process of the three-dimensional space model corresponding to the target entity space in the floating window, wherein a linkage relation exists between the currently constructed model part and the currently processed area.
Further, the processor 40b, before being configured to perform the live action scan on the target physical space, is further configured to: displaying space entry guide information to prompt a user to enter a target entity space with terminal equipment; and displaying mobile scanning guide information to prompt a user to carry terminal equipment to carry out mobile scanning on the target entity space, wherein the mobile scanning guide information at least comprises moving direction guide information and scanning mode guide information.
Further, as shown in fig. 4, the terminal device further includes: display 40c, communications component 40d, power component 40e, audio component 40f, and the like. Only some of the components are schematically shown in fig. 4, and it is not meant that the terminal device includes only the components shown in fig. 4. The terminal device of this embodiment may be implemented as a desktop computer, a notebook computer, a smart phone, or an IOT device.
Here, it should be noted that: the terminal device provided in the foregoing embodiments may implement the technical solutions described in the foregoing method embodiments, and the specific implementation principle of each module or unit may refer to the corresponding content in the foregoing method embodiments, which is not described herein again.
An exemplary embodiment of the present application also provides a computer readable storage medium storing a computer program/instructions which, when executed by one or more processors, cause the one or more processors to implement the steps in the above-described method embodiments of the present application.
Here, it should be noted that: the storage medium provided in the foregoing embodiments may implement the technical solutions described in the foregoing method embodiments, and the specific implementation principle of each module or unit may refer to the corresponding content in the foregoing method embodiments, which is not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements is not limited to those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (14)

1. A physical space scanning method is characterized by being applicable to a terminal device, wherein the terminal device is located in a target physical space and can move, and the method comprises the following steps:
in the moving process of the terminal equipment, performing real-scene scanning on the target entity space, and displaying a currently scanned real-scene picture, wherein the real-scene picture comprises a part of entity space in a scanning view field;
and carrying out parameterization processing and/or three-dimensional space modeling processing on the part of the entity space step by step, and synchronously displaying state representation information dynamically adaptive to the parameterization state and/or modeling state of the processed region in the live-action picture, wherein the processed region and the parameterization state and/or modeling state of the processed region are dynamically changed, and different parameterization states and/or modeling states correspond to different state representation information.
2. The method according to claim 1, wherein performing parameterization processing and/or three-dimensional space modeling processing on the part of the entity space step by step comprises:
according to the image data amount corresponding to the part of the entity space, in combination with the image data amount supported by a single processing pass, gradually performing parameterization processing and/or three-dimensional space modeling processing on different regions in the part of the entity space.
3. The method according to claim 1, wherein synchronously displaying, in the live-action picture, state representation information dynamically adapted to the parameterization state and/or modeling state of the processed region comprises:
displaying a covering layer region corresponding to each processed region in the live-action picture, wherein each covering layer region comprises pattern information adapted to the parameterization state and/or modeling state of the corresponding processed region;
dynamically updating pattern information in each covering layer region according to the dynamic change of the parameterization state and/or modeling state of each processed region, wherein different pattern information represents different parameterization states and/or modeling states.
4. The method according to claim 3, wherein displaying a covering layer region corresponding to each processed region in the live-action picture comprises:
aiming at the current processed area, determining the previous processed area adjacent to the current processed area according to the adjacency relation and the processing sequence between the current processed area and other processed areas;
generating pattern information in the covering layer region corresponding to the current processed region according to the pattern information in the covering layer region corresponding to the previous processed region;
and displaying the corresponding covering layer area above the current processed area according to the pattern information of the covering layer area corresponding to the current processed area.
5. The method of claim 4, wherein generating pattern information in the covering layer region corresponding to the current processed region according to the pattern information in the covering layer region corresponding to the previous processed region comprises:
acquiring the tail end positions and the extensible directions of a plurality of extensible lines from pattern information in the covering layer region corresponding to the previous processed region;
extending the plurality of extensible lines to the current processed area according to the respective tail end positions and the extensible directions of the plurality of extensible lines, wherein the plurality of extensible lines are mutually overlapped in the current processed area to form pattern information in the covering layer region corresponding to the current processed area;
and the visual properties of the pattern information formed by mutually overlapping the plurality of extensible lines in the last processed area and the current processed area are not completely the same.
6. The method of claim 3, wherein the modeling state of each processed region comprises an in-modeling state and a modeling complete state, and dynamically updating the pattern information in each covering layer region according to the dynamic change of the modeling state of each processed region comprises: for any processed region, when the modeling state of the processed region changes from the in-modeling state to the modeling complete state, updating the pattern information in the covering layer region corresponding to the processed region from a first visualization state to a second visualization state, wherein the visualization attributes corresponding to the first visualization state and the second visualization state are not completely the same;
or,
the parameterization state of each processed region comprises a parameterization proceeding state and a parameterization completion state, and dynamically updating the pattern information in each covering layer region according to the dynamic change of the parameterization state of each processed region comprises: for any processed region, when the parameterization state of the processed region changes from the parameterization proceeding state to the parameterization completion state, updating the pattern information in the covering layer region corresponding to the processed region from a first visualization state to a second visualization state, wherein the visualization attributes corresponding to the first visualization state and the second visualization state are not completely the same.
7. The method of any one of claims 1-6, further comprising:
determining the formed spatial attributes of the processed regions according to the adjacency relation among the processed regions, wherein the spatial attributes comprise at least one of the number of corners, the spatial area and the spatial height;
displaying prompt information which is finished by parameterization processing and/or three-dimensional space modeling processing corresponding to the target entity space under the condition that the processed areas form closed spaces according to the space attributes; and
and under the condition that the three-dimensional space modeling processing is finished, responding to the home decoration design triggering operation, carrying out home decoration design on the three-dimensional space model corresponding to the target entity space, and outputting a home decoration design effect diagram.
8. The method of claim 7, wherein the target physical space is a subspace of a target house, and wherein in the event that it is determined from the spatial attributes that the respective processed regions have formed an enclosed space, the method further comprises:
displaying a navigation guide interface for guiding a user to enter other subspaces to continue to perform live-action scanning, wherein the navigation guide interface comprises at least two other subspaces; and
and responding to subspace selection operation, and displaying a navigation path from the current subspace to the selected subspace so as to guide the user carrying the terminal equipment to enter the selected subspace and continue to carry out real-scene scanning on the selected subspace.
9. The method of claim 8, wherein in response to a home decoration design triggering operation, performing home decoration design on the three-dimensional space model corresponding to the target entity space comprises:
splicing the three-dimensional space models corresponding to the subspaces according to the relative position relationship among the subspaces to obtain the three-dimensional house model corresponding to the target house;
responding to a home decoration design triggering operation, and displaying the three-dimensional house model, wherein the three-dimensional house model comprises three-dimensional space models corresponding to subspaces;
in response to the roaming operation, performing home decoration design on the three-dimensional space model corresponding to the target entity space under the condition of roaming into the three-dimensional space model corresponding to the target entity space.
10. The method according to any one of claims 1 to 6, wherein during the step-by-step three-dimensional modeling process for the partial solid space, further comprising:
and displaying a floating window on the live-action picture, dynamically displaying the construction process of the three-dimensional space model corresponding to the target entity space in the floating window, wherein a linkage relation exists between the currently constructed model part and the currently processed area.
11. The method according to any one of claims 1-6, further comprising at least one of the following operations before performing the live action scan on the target physical space:
displaying space entry guide information to prompt a user to enter the target entity space with the terminal equipment;
and displaying mobile scanning guide information to prompt a user to carry the terminal equipment to carry out mobile scanning on the target entity space, wherein the mobile scanning guide information at least comprises moving direction guide information and scanning mode guide information.
12. An entity space scanning device, which can be applied in a terminal device, wherein the terminal device is located in a target entity space and is movable, the device comprises:
the scanning module is used for carrying out real scene scanning on the target entity space in the moving process of the terminal equipment;
the display module is used for displaying the real scene picture currently scanned by the scanning module, and the real scene picture comprises a part of entity space in a scanning view field;
the processing module is used for carrying out parameterization processing and/or three-dimensional space modeling processing on the part of the entity space step by step;
the display module is further configured to: and synchronously displaying state representation information dynamically adapted to the parameterization processing and/or modeling state of the processed region in the live-action picture, wherein the processed region and the parameterization processing and/or modeling state thereof are dynamically changed, and different parameterization processing and/or modeling states correspond to different state representation information.
13. A terminal device, wherein the terminal device is movable and locatable in a target physical space, the terminal device comprising: a memory and a processor; wherein the memory is configured to store computer programs/instructions, and the processor is coupled with the memory and configured to execute the computer programs/instructions for implementing the steps in the method according to any one of claims 1 to 11.
14. A computer-readable storage medium storing a computer program/instructions, which when executed by a processor causes the processor to carry out the steps of the method of any one of claims 1-11.
CN202210267652.6A 2022-03-17 2022-03-17 Entity space scanning method, device, terminal equipment and storage medium Active CN114727090B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210267652.6A CN114727090B (en) 2022-03-17 2022-03-17 Entity space scanning method, device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114727090A true CN114727090A (en) 2022-07-08
CN114727090B CN114727090B (en) 2024-01-26

Family

ID=82237176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210267652.6A Active CN114727090B (en) 2022-03-17 2022-03-17 Entity space scanning method, device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114727090B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013174867A1 (en) * 2012-05-22 2013-11-28 Pimaia Method for modeling a building or a room of same on the basis of a limited number of photographs of the walls thereof
WO2018005059A1 (en) * 2016-06-30 2018-01-04 Microsoft Technology Licensing, Llc Three-dimensional object scanning feedback
US9881425B1 (en) * 2016-09-09 2018-01-30 The Boeing Company Synchronized side-by-side display of real and virtual environments
US20200349758A1 (en) * 2017-05-31 2020-11-05 Ethan Bryce Paulson Method and System for the 3D Design and Calibration of 2D Substrates
US10699404B1 (en) * 2017-11-22 2020-06-30 State Farm Mutual Automobile Insurance Company Guided vehicle capture for virtual model generation
US20190378330A1 (en) * 2018-06-06 2019-12-12 Ke.Com (Beijing) Technology Co., Ltd. Method for data collection and model generation of house
CN111161144A (en) * 2019-12-18 2020-05-15 北京城市网邻信息技术有限公司 Panorama acquisition method, panorama acquisition device and storage medium
US20210319149A1 (en) * 2020-02-10 2021-10-14 Beijing Chengshi Wanglin Information Technology Co., Ltd. Method, device, equipment and storage medium for generating three-dimensional space of target house
WO2021249390A1 (en) * 2020-06-12 2021-12-16 贝壳技术有限公司 Method and apparatus for implementing augmented reality, storage medium, and electronic device
CN111932666A (en) * 2020-07-17 2020-11-13 北京字节跳动网络技术有限公司 Reconstruction method and device of house three-dimensional virtual image and electronic equipment
CN114003322A (en) * 2021-09-16 2022-02-01 北京城市网邻信息技术有限公司 Method, equipment and device for displaying real scene space of house and storage medium
CN114186311A (en) * 2021-11-30 2022-03-15 北京城市网邻信息技术有限公司 Information display method, equipment and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024114408A1 (en) * 2022-11-30 2024-06-06 杭州阿里巴巴海外互联网产业有限公司 Method and apparatus for providing commodity virtual tryout information, and electronic device

Also Published As

Publication number Publication date
CN114727090B (en) 2024-01-26

Similar Documents

Publication Publication Date Title
US9940404B2 (en) Three-dimensional (3D) browsing
US11087134B2 (en) Augmented reality smartglasses for use at cultural sites
US20180225885A1 (en) Zone-based three-dimensional (3d) browsing
US5900879A (en) Three-dimensional workspace interactive display having browsing viewpoints for navigation and work viewpoints for user-object interactive non-navigational work functions with automatic switching to browsing viewpoints upon completion of work functions
JP7121811B2 (en) Method, apparatus, and storage medium for displaying three-dimensional spatial views
KR102433857B1 (en) Device and method for creating dynamic virtual content in mixed reality
KR101989089B1 (en) Method and system for authoring ar content by collecting ar content templates based on crowdsourcing
US20240078703A1 (en) Personalized scene image processing method, apparatus and storage medium
CN115690375B (en) Building model modification interaction method, system and terminal based on virtual reality technology
CN114727090B (en) Entity space scanning method, device, terminal equipment and storage medium
CN114511668A (en) Method, device and equipment for acquiring three-dimensional decoration image and storage medium
CN112308948A (en) Construction method and application of light field roaming model for house property marketing
CN114020235B (en) Audio processing method in live-action space, electronic terminal and storage medium
JP2020523668A (en) System and method for configuring virtual camera
Wang et al. PointShopAR: Supporting environmental design prototyping using point cloud in augmented reality
US20210241539A1 (en) Broker For Instancing
CN112612463A (en) Graphical programming control method, system and device
CN111589151A (en) Method, device, equipment and storage medium for realizing interactive function
CN112181394A (en) Method, device and equipment for creating three-dimensional building model component
CN115907912A (en) Method and device for providing virtual trial information of commodities and electronic equipment
KR101806922B1 (en) Method and apparatus for producing a virtual reality content
CN113742507A (en) Method for three-dimensionally displaying an article and associated device
CN110604918B (en) Interface element adjustment method and device, storage medium and electronic equipment
CN113535046A (en) Text component editing method, device, equipment and readable medium
CN115100327B (en) Method and device for generating animation three-dimensional video and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant