CN116310062A - Three-dimensional scene construction method and device, storage medium and electronic equipment


Info

Publication number
CN116310062A
Authority
CN
China
Prior art keywords
scene
initial
image data
model
dimensional scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111489350.5A
Other languages
Chinese (zh)
Inventor
李萍
孙昊
黄隆珲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom International Co ltd
Original Assignee
China Telecom International Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom International Co ltd filed Critical China Telecom International Co ltd
Priority to CN202111489350.5A
Publication of CN116310062A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure provides a three-dimensional scene construction method, a three-dimensional scene construction device, electronic equipment and a storage medium, and relates to the field of computer technology. The method comprises the following steps: acquiring scene image data of a target entity scene, and analyzing sensing information in the scene image data to generate a space coordinate system corresponding to the scene image data; identifying surface objects in the scene image data, and constructing an initial three-dimensional scene of the target entity scene in the space coordinate system based on the surface objects; acquiring an actual distance parameter value in the target entity scene, and determining size information of the initial three-dimensional scene according to the actual distance parameter value; acquiring a device construction instruction, and determining a device model and a corresponding placement position applied to the initial three-dimensional scene according to the device construction instruction; and displaying the device model in the initial three-dimensional scene according to the placement position to generate a three-dimensional scene with a device construction effect. The method can construct an initial three-dimensional model of the scene, and can also quickly respond to a user's construction instruction to generate and display the corresponding scene effect.

Description

Three-dimensional scene construction method and device, storage medium and electronic equipment
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a three-dimensional scene construction method and device, a storage medium and electronic equipment.
Background
When presenting the effect of a project construction scheme to a user, the environment, cabinet positions, supporting facilities, and the like can be shown to the client in the form of a three-dimensional scene/model.
In the related art, a great deal of time is usually spent on modeling before a display can be produced with three-dimensional modeling/virtual reality (VR) technology; such approaches suffer from a long construction period, high cost, and difficult modification, and cannot respond flexibly and quickly to client demands for scene display.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure aims to provide a three-dimensional scene construction method, a three-dimensional scene construction device, electronic equipment and a storage medium, so as to address the problems of a long construction period, high cost, difficult modification, and the inability to respond flexibly and quickly to client demands for scene display.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to one aspect of the present disclosure, there is provided a three-dimensional scene construction method including: acquiring scene image data of a target entity scene, and analyzing sensing information in the scene image data to generate a space coordinate system corresponding to the scene image data; identifying surface objects in scene image data, and constructing an initial three-dimensional scene of a target entity scene in a space coordinate system based on the surface objects; acquiring an actual distance parameter value in a target entity scene, and determining size information of an initial three-dimensional scene according to the actual distance parameter value; acquiring a device construction instruction, and determining a device model and a corresponding placement position applied to an initial three-dimensional scene according to the device construction instruction; and displaying the equipment model in the initial three-dimensional scene according to the placement position, and further generating the three-dimensional scene with the equipment construction effect.
In one embodiment of the disclosure, the scene image data is obtained by image acquisition of a target entity scene using an image acquisition device having a sensing device; the scene image data comprises a plurality of images; and a step of analyzing the sensing information in the scene image data to generate a spatial coordinate system corresponding to the scene image data, comprising: analyzing the sensing information in the scene image data to obtain the relative position information of the image in the scene image data and the three-dimensional direction information corresponding to the image; and generating a space coordinate system corresponding to the scene image data according to the relative position information and the three-dimensional direction information.
In one embodiment of the present disclosure, the step of identifying surface objects in scene image data, constructing an initial three-dimensional scene of a target physical scene in a spatial coordinate system based on the surface objects, comprises: determining feature points on each image in the scene image data through an Augmented Reality (AR) technology, and identifying surface objects in the scene image data according to the feature points; determining a position of the surface object in a spatial coordinate system based on the spatial coordinate system corresponding to the scene image data; constructing an initial surface model of the surface object according to the position of the surface object in the space coordinate system; optimizing the initial surface model through a nonlinear fitting technology to obtain a target surface model, and constructing an initial three-dimensional scene of the target entity scene according to the target surface model.
In one embodiment of the present disclosure, the step of optimizing the initial surface model by a nonlinear fitting technique to obtain a target surface model includes: creating a mesh object corresponding to the initial surface model in the initial three-dimensional scene; attaching the mesh object to a corresponding initial surface model to generate labeling information of the initial surface model; and splicing the initial surface model based on the labeling information by a nonlinear fitting technology to obtain the target surface model.
In one embodiment of the present disclosure, the step of obtaining the actual distance parameter value in the target entity scene includes: using image acquisition equipment with a sensing device to measure the distance of a target entity scene to obtain an actual distance parameter value in the target entity scene; and/or acquiring building design data of the target entity scene, and determining actual distance parameter values in the target entity scene according to the building design data.
In one embodiment of the present disclosure, the step of determining size information of the initial three-dimensional scene according to the actual distance parameter value includes: determining virtual measurement distances on all coordinate axes in a space coordinate system according to the actual distance parameter values; and determining the size information of the initial three-dimensional scene according to the virtual measurement distance on each coordinate axis.
In one embodiment of the present disclosure, the step of obtaining a device building instruction, determining a device model and a corresponding placement position applied to an initial three-dimensional scene according to the device building instruction, includes: displaying a layout interface; responding to equipment construction operation performed by a user on a layout interface, and generating equipment construction instructions; and analyzing the equipment construction instruction to obtain an equipment model and a corresponding placement position.
In one embodiment of the present disclosure, the step of displaying the device model in the initial three-dimensional scene according to the pose position includes: acquiring the actual model size of the equipment model, and acquiring the display size of the equipment model based on the size information of the initial three-dimensional scene and the actual model size; placing the equipment model with the display size in an initial three-dimensional scene according to the placement position by an Augmented Reality (AR) technology; adjusting the coordinates and/or the direction of the equipment model placement by using a nonlinear fitting algorithm, and determining the target placement position of the equipment model placement; and displaying the equipment model with the display size in the initial three-dimensional scene based on the target placement position through the virtual reality VR technology.
According to another aspect of the present disclosure, there is provided a three-dimensional scene construction apparatus including: an acquisition module, configured to acquire scene image data of a target entity scene and analyze sensing information in the scene image data to generate a space coordinate system corresponding to the scene image data; a construction module, configured to identify surface objects in the scene image data and construct an initial three-dimensional scene of the target entity scene in the space coordinate system based on the surface objects; a determining module, configured to acquire an actual distance parameter value in the target entity scene and determine size information of the initial three-dimensional scene according to the actual distance parameter value; a placement module, configured to acquire a device construction instruction and determine a device model and a corresponding placement position applied to the initial three-dimensional scene according to the device construction instruction; and a display module, configured to display the device model in the initial three-dimensional scene according to the placement position, thereby generating a three-dimensional scene with a device construction effect.
In one embodiment of the disclosure, the scene image data is obtained by image acquisition of a target entity scene using an image acquisition device having a sensing device; the scene image data comprises a plurality of images; and a step of the acquisition module analyzing the sensing information in the scene image data to generate a spatial coordinate system corresponding to the scene image data, comprising: analyzing the sensing information in the scene image data to obtain the relative position information of the image in the scene image data and the three-dimensional direction information corresponding to the image; and generating a space coordinate system corresponding to the scene image data according to the relative position information and the three-dimensional direction information.
In one embodiment of the present disclosure, the step of the construction module identifying surface objects in the scene image data, constructing an initial three-dimensional scene of the target entity scene in a spatial coordinate system based on the surface objects, comprises: determining feature points on each image in the scene image data through an Augmented Reality (AR) technology, and identifying surface objects in the scene image data according to the feature points; determining a position of the surface object in a spatial coordinate system based on the spatial coordinate system corresponding to the scene image data; constructing an initial surface model of the surface object according to the position of the surface object in the space coordinate system; optimizing the initial surface model through a nonlinear fitting technology to obtain a target surface model, and constructing an initial three-dimensional scene of the target entity scene according to the target surface model.
In one embodiment of the present disclosure, the step of the building module optimizing the initial surface model to obtain the target surface model by a nonlinear fitting technique includes: creating a mesh object corresponding to the initial surface model in the initial three-dimensional scene; attaching the mesh object to a corresponding initial surface model to generate labeling information of the initial surface model; and splicing the initial surface model based on the labeling information by a nonlinear fitting technology to obtain the target surface model.
In one embodiment of the present disclosure, the step of the determining module obtaining the actual distance parameter value in the target entity scene includes: using image acquisition equipment with a sensing device to measure the distance of a target entity scene to obtain an actual distance parameter value in the target entity scene; and/or acquiring building design data of the target entity scene, and determining actual distance parameter values in the target entity scene according to the building design data.
In one embodiment of the present disclosure, the step of determining, by the determining module, size information of the initial three-dimensional scene according to the actual distance parameter value includes: determining virtual measurement distances on all coordinate axes in a space coordinate system according to the actual distance parameter values; and determining the size information of the initial three-dimensional scene according to the virtual measurement distance on each coordinate axis.
In one embodiment of the present disclosure, the step of the placement module obtaining a device building instruction, determining a device model and a corresponding placement position applied to an initial three-dimensional scene according to the device building instruction, includes: displaying a layout interface; responding to equipment construction operation performed by a user on a layout interface, and generating equipment construction instructions; and analyzing the equipment construction instruction to obtain an equipment model and a corresponding placement position.
In one embodiment of the present disclosure, the step of the display module displaying the equipment model in the initial three-dimensional scene according to the placement position includes: acquiring the actual model size of the equipment model, and acquiring the display size of the equipment model based on the size information of the initial three-dimensional scene and the actual model size; placing the equipment model with the display size in an initial three-dimensional scene according to the placement position by an Augmented Reality (AR) technology; adjusting the coordinates and/or the direction of the equipment model placement by using a nonlinear fitting algorithm, and determining the target placement position of the equipment model placement; and displaying the equipment model with the display size in the initial three-dimensional scene based on the target placement position through the virtual reality VR technology.
According to yet another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the three-dimensional scene construction method described above.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the three-dimensional scene construction method described above via execution of the executable instructions.
The three-dimensional scene construction method provided by the embodiment of the disclosure can construct an initial three-dimensional model of a target entity scene, and can rapidly respond to the equipment construction instruction of a user to display the equipment model in the initial three-dimensional model, so that corresponding scene effects are generated and displayed.
Further, the three-dimensional scene construction method provided by the embodiment of the present disclosure may further perform the display of the scene effect through a virtual reality VR technology after generating the corresponding scene effect in response to the device construction instruction.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which the three-dimensional scene construction method of embodiments of the present disclosure may be applied;
FIG. 2 illustrates a flow chart of a three-dimensional scene construction method of one embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of generating a spatial coordinate system in a three-dimensional scene construction method according to one embodiment of the disclosure;
FIG. 4 illustrates a flow chart of constructing an initial three-dimensional scene in a three-dimensional scene construction method according to one embodiment of the disclosure;
FIG. 5 illustrates a flow chart of obtaining a target surface model in a three-dimensional scene construction method according to one embodiment of the disclosure;
FIG. 6 illustrates a schematic diagram of creating mesh objects in a three-dimensional scene construction method according to one embodiment of the disclosure;
FIG. 7 illustrates a schematic diagram of labeling an initial surface model in a three-dimensional scene construction method according to one embodiment of the disclosure;
FIG. 8 illustrates a flow chart showing a device model in a three-dimensional scene construction method according to one embodiment of the present disclosure;
FIG. 9 shows a schematic diagram of a presentation of a device model in a three-dimensional scene construction method according to one embodiment of the present disclosure;
FIG. 10 illustrates a flow chart of a three-dimensional scene construction method of an embodiment of the present disclosure;
FIG. 11 illustrates a block diagram of a three-dimensional scene building apparatus of an embodiment of the disclosure; and
fig. 12 shows a block diagram of a three-dimensional scene building computer device in an embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present disclosure, the meaning of "a plurality" is at least two, such as two, three, etc., unless explicitly specified otherwise.
In view of the technical problems in the related art, embodiments of the present disclosure provide a three-dimensional scene construction method for at least solving one or all of the technical problems.
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which the three-dimensional scene construction method of embodiments of the present disclosure may be applied; as shown in fig. 1:
the system architecture may include a server 101, a network 102, and a client 103. Network 102 is the medium used to provide communication links between clients 103 and server 101. Network 102 may include various connection types such as wired, wireless communication links, or fiber optic cables, among others.
The server 101 may be a server providing various services, such as a background management server providing support for devices operated by users of the client 103. The background management server can construct an initial three-dimensional scene of the target entity scene according to the scene image data, can also receive and process equipment construction instructions, and can display corresponding equipment models in the initial three-dimensional scene presented in the corresponding interface of the client 103 based on the equipment construction instructions, so that a three-dimensional scene with an equipment construction effect is generated and displayed in the corresponding interface of the client 103.
In some alternative embodiments, the server 101 may acquire scene image data of the target entity scene, parse the sensing information in the scene image data to generate a spatial coordinate system corresponding to the scene image data; the server 101 may identify surface objects in the scene image data, construct an initial three-dimensional scene of the target entity scene in a spatial coordinate system based on the surface objects; the server 101 may acquire an actual distance parameter value in the target entity scene, and determine size information of the initial three-dimensional scene according to the actual distance parameter value; the server 101 may acquire an equipment construction instruction, and determine an equipment model and a corresponding placement position applied to the initial three-dimensional scene according to the equipment construction instruction; the server 101 may display the device model in the initial three-dimensional scene according to the placement position, so as to generate a three-dimensional scene with a device construction effect.
The client 103 may be a mobile terminal such as a mobile phone, a game console, a tablet computer, an electronic book reader, smart glasses, a smart home device, an AR (Augmented Reality) device, a VR (Virtual Reality) device, or the like, or the client 103 may be a personal computer such as a laptop portable computer and a desktop computer, or the like.
In some alternative embodiments, the client 103 may present the initial three-dimensional scene of the target entity scene to the operator, may also provide the operator with an interface for issuing the device build instructions, and may present the three-dimensional scene with the device build effect to the operator.
It should be understood that the numbers of clients, networks and servers in fig. 1 are merely illustrative; the server 101 may be a single physical server, a server cluster formed by a plurality of servers, or a cloud server, and there may be any number of clients, networks and servers according to actual needs.
Hereinafter, the respective steps of the three-dimensional scene construction method in the exemplary embodiments of the present disclosure will be described in more detail with reference to the accompanying drawings and embodiments.
Fig. 2 shows a flowchart of a three-dimensional scene construction method of an embodiment of the present disclosure. The method provided by the embodiments of the present disclosure may be performed in a server or a client as shown in fig. 1, but the present disclosure is not limited thereto.
In the following description, the server 101 is taken as the execution subject by way of example.
As shown in fig. 2, the three-dimensional scene construction method provided by the embodiment of the present disclosure may include the following steps:
In step S201, scene image data of the target entity scene is acquired, and sensing information in the scene image data is parsed to generate a spatial coordinate system corresponding to the scene image data.
In this embodiment, the target entity scene may be a specific machine room, a server storage room, a working space, and the like. The scene image data may be multimedia files in the form of videos, motion pictures, image sets, etc.; the scene image data can be obtained by image acquisition of the target entity scene by using image acquisition equipment with a sensing device, for example, the scene can be shot by using a camera of a mobile phone to obtain video, a moving picture, an image set and the like of the target entity scene.
In step S203, surface objects in the scene image data are identified, and an initial three-dimensional scene of the target entity scene is constructed in a spatial coordinate system based on the surface objects. Wherein the surface object may be a wall, a ceiling, etc. in the target physical scene. The initial three-dimensional scene may be a virtual-scene-rendered representation of a target physical scene without device decoration, such as: the initial three-dimensional scene may be a virtual scene consisting of four walls, a top surface, and a bottom surface.
Step S205, obtaining the actual distance parameter value in the target entity scene, and determining the size information of the initial three-dimensional scene according to the actual distance parameter value. The actual distance parameter value may be information actually describing the size of the target physical scene, such as size information of each wall surface, height information in the scene, and the like.
Step S207, acquiring a device construction instruction, and determining a device model and a corresponding placement position applied to the initial three-dimensional scene according to the device construction instruction. The device building instruction may be issued by the user in a corresponding operation interface displayed on the client 103; the device build instructions may indicate the type or style of device that the user wishes to pose, and may indicate the location in the initial three-dimensional scene where the user wishes to pose the device model.
Step S209, the equipment model is displayed in the initial three-dimensional scene according to the placement positions, and further a three-dimensional scene with the equipment construction effect is generated. In this embodiment, the device model specified by the user can be quickly placed in the constructed initial three-dimensional scene in a visualized manner in response to the device construction instruction, and the three-dimensional scene with the device construction effect is displayed to the user, so that the scene effect is intuitively displayed.
By the three-dimensional scene construction method provided by the embodiment of the disclosure, an initial three-dimensional model of a target entity scene can be constructed, and the equipment model can be displayed in the initial three-dimensional model in quick response to the equipment construction instruction of a user, so that corresponding scene effects are generated and displayed. The method has the advantages of short construction period, easy modification and capability of flexibly and rapidly responding to the demands of clients to display scenes.
In some embodiments, the scene image data includes a plurality of images therein; and a step of analyzing the sensing information in the scene image data to generate a spatial coordinate system corresponding to the scene image data, comprising: analyzing the sensing information in the scene image data to obtain the relative position information of the image in the scene image data and the three-dimensional direction information corresponding to the image; and generating a space coordinate system corresponding to the scene image data according to the relative position information and the three-dimensional direction information.
The scene image data may carry sensing information, for example, sensing information acquired by an accelerometer and a gyroscope. As another example, the relative positions of the images in the scene image data can be identified through the mobile phone camera and sensors, and the X, Y and Z axes of the initial three-dimensional scene can then be marked.
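As a minimal sketch of how such a coordinate system might be initialized from the sensing information (the function and variable names below are illustrative assumptions, not the patent's implementation), a single accelerometer sample can anchor the vertical axis, and an orthonormal basis completes the frame:

```python
import numpy as np

def build_scene_frame(accel: np.ndarray) -> np.ndarray:
    """Derive a gravity-aligned X/Y/Z basis from one accelerometer sample.

    At rest the accelerometer reading points opposite to gravity, so its
    direction fixes the world "up" (Z) axis; X and Y are completed into
    an orthonormal, right-handed basis. Rows of the returned 3x3 matrix
    are the world axes expressed in device coordinates.
    """
    z = accel / np.linalg.norm(accel)
    seed = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(seed, z)) > 0.9:        # seed nearly parallel to Z
        seed = np.array([0.0, 1.0, 0.0])
    x = seed - np.dot(seed, z) * z        # project the Z component out
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                    # completes a right-handed frame
    return np.vstack([x, y, z])

print(build_scene_frame(np.array([0.1, 0.2, 9.7])))  # phone roughly flat
```

A real AR session would fuse accelerometer and gyroscope readings over time rather than rely on a single sample; the sketch only isolates the geometric core of the axis initialization.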
Fig. 3 is a schematic diagram of generating a spatial coordinate system in a three-dimensional scene construction method according to an embodiment of the present disclosure, and fig. 3 is a schematic diagram of generating a spatial coordinate system with three-dimensional directions in a scene image, where the three-dimensional directions include: x-axis, Y-axis, Z-axis.
Fig. 4 shows a flowchart of constructing an initial three-dimensional scene in the three-dimensional scene constructing method according to an embodiment of the present disclosure, and as shown in fig. 4, step S203 in the embodiment of fig. 2 may further include the steps of:
In step S401, feature points on each image in the scene image data are determined by the augmented reality AR technology, and surface objects in the scene image data are identified according to the feature points. The surface objects may be, for example, several surfaces such as 4 wall surfaces (or more), 1 floor, and 1 ceiling.
Step S403, determining a position of the surface object in the spatial coordinate system based on the spatial coordinate system corresponding to the scene image data.
Step S405, constructing an initial surface model of the surface object according to the position of the surface object in the spatial coordinate system. Through step S405, the position of the surface object in the space coordinate system may be determined based on the generated space coordinate system, so as to complete the boundary labeling of the target entity scene, and further create the initial surface model of the surface object.
Step S407, optimizing the initial surface model through a nonlinear fitting technology to obtain a target surface model, and constructing an initial three-dimensional scene of the target entity scene according to the target surface model.
It can be seen that by implementing fig. 4, the surface recognition and recording in the target entity scene can be completed through the feature point recognition in the scene image, creating a virtual initial three-dimensional scene. Specifically, through the AR mode, equipment such as a mobile phone camera and a sensor is used, and methods such as image recognition and nonlinear fitting are combined, so that the space recognition of an actual IDC machine room is realized, and the establishment of the initial three-dimensional scene is further completed.
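The patent does not disclose the concrete recognition routine. As a hedged illustration of one common way to turn AR feature points into surface objects, a RANSAC plane fit over the triangulated points might look like the following sketch (all names and thresholds are assumptions):

```python
import numpy as np

def ransac_plane(points: np.ndarray, iters: int = 500, tol: float = 0.02):
    """Fit the dominant plane n.p + d = 0 to an Nx3 cloud of feature points."""
    rng = np.random.default_rng(0)
    best_mask, best_plane = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-9:          # degenerate (collinear) triple
            continue
        n = n / np.linalg.norm(n)
        d = -float(np.dot(n, sample[0]))
        mask = np.abs(points @ n + d) < tol   # inliers within tol metres
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (n, d)
    return best_plane, best_mask

# Toy check: 200 noisy floor points plus 50 clutter points.
rng = np.random.default_rng(1)
floor = rng.uniform(-2, 2, (200, 3))
floor[:, 1] = rng.normal(0.0, 0.005, 200)
(n, d), mask = ransac_plane(np.vstack([floor, rng.uniform(-2, 2, (50, 3))]))
print(n, d, int(mask.sum()))  # normal close to (0, +/-1, 0), ~200 inliers
```

Running the detector repeatedly on the points not yet assigned to a plane would yield the floor, ceiling, and wall candidates from which the initial surface models are built.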
Fig. 5 shows a flowchart of obtaining a target surface model in the three-dimensional scene construction method according to an embodiment of the present disclosure, and as shown in fig. 5, step S407 in the embodiment of fig. 4 may further include the following steps:
in step S501, mesh objects corresponding to an initial surface model are created in an initial three-dimensional scene. In some implementations, a translucent mesh may be created in an initial three-dimensional scene by an application in a mobile device (e.g., a cell phone).
Step S503, attaching the mesh object to the corresponding initial surface model, and generating labeling information of the initial surface model. In some practical applications, the created semitransparent net sheet can be attached to the ground or the wall surface through the related application program so as to realize the surface marking.
And step S505, splicing the initial surface model based on the labeling information by a nonlinear fitting technology to obtain the target surface model. The stitching of the faces may be accomplished using nonlinear fitting functions provided in the relevant applications described above (e.g., functions provided by the Ceres library) to generate virtual faces that more closely conform to the actual scene as the target surface model.
Therefore, by implementing fig. 5, the semitransparent net piece can be created to be attached to the ground or the wall surface in the initial three-dimensional scene, and the nonlinear fitting and the surface splicing are completed, so that the effect of improving the accuracy of identifying and marking each surface in the initial three-dimensional scene is achieved.
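Step S505 above credits nonlinear fitting functions such as those provided by the Ceres library, which is a C++ library. Purely for illustration, the same least-squares refinement of a labeled plane can be sketched with SciPy's least_squares, which mirrors the structure of a Ceres cost function (this substitution, and every name below, is an assumption):

```python
import numpy as np
from scipy.optimize import least_squares

def refine_plane(points: np.ndarray, init: np.ndarray) -> np.ndarray:
    """Refine plane parameters (nx, ny, nz, d) against net-labeled points.

    Residuals are point-to-plane distances plus a soft unit-norm penalty,
    the same objective a Ceres cost functor would encode in C++.
    """
    def residuals(p):
        n, d = p[:3], p[3]
        norm = np.linalg.norm(n) + 1e-12
        dist = (points @ n + d) / norm               # geometric distances
        return np.append(dist, 10.0 * (norm - 1.0))  # keep ||n|| near 1
    return least_squares(residuals, init).x

pts = np.array([[0.0, 0.01, 0.0], [1.0, -0.02, 0.0],
                [0.0, 0.00, 1.0], [1.0, 0.01, 1.0]])
print(refine_plane(pts, np.array([0.1, 1.0, 0.0, 0.05])))  # ~ (0, 1, 0, 0)
```

Refining each labeled surface this way before intersecting neighboring planes is one plausible reading of the splicing step; the patent itself does not spell out the residual design.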
Fig. 6 is a schematic diagram of creating a mesh object in the three-dimensional scene construction method according to an embodiment of the present disclosure, as shown in fig. 6, which is a schematic diagram of step S501 in the embodiment of fig. 5 in a practical application, including: mesh objects 601, 602, 603, 604 are created in the initial three-dimensional scene 600. Wherein the initial three-dimensional scene 600 and mesh objects 601, 602, 603, 604 may be presented by an application on a mobile device (e.g., a cell phone).
Fig. 7 is a schematic diagram illustrating labeling of an initial surface model in the three-dimensional scene construction method according to an embodiment of the present disclosure, as shown in fig. 7, which is a schematic diagram of step S503 in the embodiment of fig. 5 in a practical application, including: the mobile device 700, an operation interface 701 provided by an application program on the mobile device 700, an operation button 702 for labeling in the operation interface 701, and a mesh object 703 presented in the operation interface 701. By implementing the schematic diagram shown in fig. 7, the initial three-dimensional scene and the mesh object 703 may be displayed through an operation interface 701 provided by an application program in the mobile device 700 (such as a mobile phone), and an operation button 702 for labeling may be provided in the operation interface 701, so that a user may perform corresponding operation on the operation button 702, thereby labeling the mesh object 703. Further, in the present schematic diagram, the operation buttons 702 may include labeling operation buttons for the wall surface, the ground surface, and the invalid surface, as shown in fig. 7, the labeling operation button for "wall surface" may be pressed to slide toward the mesh object 703, and at this time, a prompt message for "dragging the label as the wall surface" may appear in the operation interface 701.
In some practical applications, if there are devices (such as a cabinet, a computer desk, etc.) or decoration facilities already existing in the target entity scene, the surfaces of the devices (such as the cabinet, the computer desk, etc.) or the decoration facilities may be ignored first in the process of labeling the initial surface model, the mesh objects are not created on the surfaces, and the labeling operation is not performed on the surfaces.
In some embodiments, the step of obtaining the actual distance parameter value in the target entity scene in step S205 in the embodiment of fig. 2 includes: using image acquisition equipment with a sensing device to measure the distance of a target entity scene to obtain an actual distance parameter value in the target entity scene; and/or acquiring building design data of the target entity scene, and determining actual distance parameter values in the target entity scene according to the building design data.
Further in some embodiments, the step of determining the size information of the initial three-dimensional scene according to the actual distance parameter value in step S205 in the embodiment of fig. 2 may include: determining virtual measurement distances on all coordinate axes in a space coordinate system according to the actual distance parameter values; and determining the size information of the initial three-dimensional scene according to the virtual measurement distance on each coordinate axis.
The sensing device may be a laser sensor, a camera equipped with a laser sensor, or the like. In this embodiment, the distance measurement and calculation of the scene can be completed through the laser sensor and the camera, so as to obtain accurate scene size values and further complete the construction of the three-dimensional scene. Alternatively, when design standard data (reference data) of the target entity scene exists, the measurer may be allowed to manually modify the size data of the initial three-dimensional scene in the corresponding interface.
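For example (a sketch under assumed units and names, not the patent's code), a single measured reference distance is enough to convert the AR session's arbitrary units into real-world scene dimensions along every axis:

```python
def scene_dimensions(actual_mm: dict, virtual_units: dict) -> dict:
    """Derive per-axis real-world scene size from one measured reference.

    actual_mm: e.g. {"x": 8000.0}, a laser-measured wall length in mm.
    virtual_units: scene extents along each coordinate axis in the AR
    session's arbitrary units. With several references (or building
    design data), the per-axis scales could be averaged instead.
    """
    axis, measured = next(iter(actual_mm.items()))
    scale = measured / virtual_units[axis]      # mm per virtual unit
    return {ax: round(v * scale, 1) for ax, v in virtual_units.items()}

print(scene_dimensions({"x": 8000.0}, {"x": 4.1, "y": 2.9, "z": 1.5}))
# {'x': 8000.0, 'y': 5658.5, 'z': 2926.8}
```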
In some embodiments, the step of obtaining device build instructions, determining a device model and corresponding pose location applied to the initial three-dimensional scene from the device build instructions may include: displaying a layout interface; responding to equipment construction operation performed by a user on a layout interface, and generating equipment construction instructions; and analyzing the equipment construction instruction to obtain an equipment model and a corresponding placement position.
The user can acquire the equipment purchasing scheme first, and then determine the equipment information which is expected to be displayed in the scene construction based on the equipment purchasing scheme, wherein the equipment information can comprise equipment types, equipment models, equipment shapes and the like. The layout interface can be displayed to the user through an application program on the mobile device, so that the user can select or drag the layout interface to generate a device construction instruction, and further the mobile device analyzes the instruction to determine a device model designated by the user and the placement position of the device model. Specifically, a user can acquire the requirement of an IDC (Internet Data Center, network data center) purchasing scheme, determine equipment such as a cabinet, a bridge, a cage, a precision air conditioner, a camera and the like which need to be installed in the scheme, and finish the placement of an equipment model in an AR scene in a dragging mode from an equipment library provided by the app through a mobile phone app.
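A minimal sketch of what such a parsed instruction could carry is given below; the payload keys are hypothetical, standing in for whatever the layout interface actually emits on a drag-and-drop:

```python
from dataclasses import dataclass

@dataclass
class DeviceBuildInstruction:
    model_id: str              # e.g. "cabinet-42u" from the device library
    position: tuple            # placement point in scene coordinates
    rotation_deg: float = 0.0  # yaw around the vertical axis

def parse_build_instruction(payload: dict) -> DeviceBuildInstruction:
    """Turn a drag-and-drop event from the layout interface into an instruction."""
    return DeviceBuildInstruction(
        model_id=payload["model"],
        position=tuple(payload["drop_point"]),
        rotation_deg=float(payload.get("rotation", 0.0)),
    )

instr = parse_build_instruction(
    {"model": "cabinet-42u", "drop_point": [1.2, 0.0, 3.4], "rotation": 90})
print(instr)
```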
Fig. 8 is a flowchart illustrating a device model in a three-dimensional scene in the three-dimensional scene construction method according to an embodiment of the present disclosure, and as shown in fig. 8, the step of displaying the device model in an initial three-dimensional scene according to a placement position in step S209 in the embodiment of fig. 2 may further include the steps of:
step S801, obtaining the actual model size of the equipment model, and obtaining the display size of the equipment model based on the size information of the initial three-dimensional scene and the actual model size.
Step S803, placing the device model with the display size in the initial three-dimensional scene according to the placement position through the augmented reality AR technology.
In step S805, the coordinates and/or directions of the placement of the device model are adjusted using a nonlinear fitting algorithm, and the target placement position of the placement of the device model is determined.
Step S807, displaying the device model with the display size based on the target placement position in the initial three-dimensional scene by the virtual reality VR technology.
In some practical applications, after the user selects the device model in the mobile phone app to put, the user may also edit the position, size, angle, transparency, remark information and other properties of the device model. After the equipment models are placed one by one, a target three-dimensional scene meeting the requirement of a user purchase scheme can be generated, and the target three-dimensional scene is displayed to the user.
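To make steps S801 to S805 concrete, the sketch below computes the display size from the actual model size and the scene scale, then adjusts the drop point. Where the patent applies a nonlinear fitting algorithm for the adjustment, this simplified stand-in merely drops the model onto the floor plane and snaps X/Z to a coarse grid; all names and the snapping heuristic are assumptions:

```python
import numpy as np

def place_device(actual_size_mm, mm_per_unit, drop_point,
                 floor_y=0.0, grid=0.1):
    """Scale a device model into scene units and adjust its drop pose.

    actual_size_mm: (w, h, d) of the real equipment in millimetres.
    mm_per_unit: scene scale factor recovered in step S205.
    drop_point: (x, y, z) where the user dropped the model.
    """
    display_size = np.asarray(actual_size_mm, dtype=float) / mm_per_unit
    x, _, z = drop_point
    target_pose = (round(x / grid) * grid, floor_y, round(z / grid) * grid)
    return display_size, target_pose

size, pose = place_device((600.0, 2000.0, 1200.0), 1951.2, (1.23, 0.4, 3.41))
print(size, pose)  # cabinet scaled into scene units, snapped onto the floor
```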
Fig. 9 shows a schematic diagram of a device model displayed in the three-dimensional scene construction method according to one embodiment of the present disclosure; as shown in fig. 9, it includes: an initial three-dimensional scene 900, and a device model 901 placed in the initial three-dimensional scene 900. Fig. 9 thus illustrates a scene effect that includes a device model.
Fig. 10 shows a flowchart of a three-dimensional scene construction method according to an embodiment of the present disclosure, as shown in fig. 10, including:
step 1: firstly, a camera function in a mobile phone is opened by using a mobile phone app, and camera shooting content is displayed in real time in an application; the mobile phone can be held to complete 360-degree shooting, and the X, Y, Z coordinate axis information of the scene is calculated and initialized by shooting and analyzing the image information of a room (namely, a target entity scene) and calling the sensing information acquired by the accelerometer and the gyroscope.
Step 2: after the coordinate axes of the scene are established, the feature points in the room image can be identified, so that the surfaces in the scene/room can be identified and recorded; for example, 1 floor, 1 ceiling, and 4 wall surfaces (or more) can be clearly distinguished. Boundary marking of the room is completed on this basis, and a virtual three-dimensional scene is initially created.
Step 3: several surface information has been recorded and identified in the virtual scene of the previous step. A semitransparent net piece can be created in the virtual scene through the mobile phone app and attached to the ground or the wall surface so as to carry out surface marking; if the room has equipment already in place, the surfaces of the cabinet and the equipment can be optionally ignored and no more surface-mounting operations can be performed on these surfaces. After the surface labeling is completed, the surface can be spliced by using an application (such as an application capable of providing Ceres library) on the mobile phone through nonlinear fitting, and a virtual surface conforming to the actual scene is generated, so that the accuracy of the three-dimensional virtual scene created in the step two is improved.
Step 4: the mobile phone app invokes the mobile phone LIDAR to finish measurement and calculation of each surface distance (size) in the scene, the measured and calculated values can be used for generating coordinate axes X ', Y', Z ', the values of X', Y ', Z' are stored in the mobile phone for local storage, and the relatively accurate scene size values can be used as input data and transmitted to a system background through a network interface to finish virtual three-dimensional scene construction. In addition, if one or more machine room design standard data (reference data) are provided, the measurer can be allowed to manually modify the size data of the machine room scene in the mobile phone app interactive interface.
Through the steps 1-4, the creation work of the virtual three-dimensional scene of the specified IDC machine room can be completed quickly by using the mobile phone app.
Step 5: the equipment such as a cabinet, a bridge, a cage, a precise air conditioner, a camera and the like which are required to be installed in the scheme can be determined according to the requirements of the IDC purchase scheme of the client, the client can be a mobile phone app user, and the client finishes the placement of the three-dimensional equipment model in an AR scene in a dragging mode from the equipment library of the app by using the mobile phone app. After the customer selects the equipment model, the customer can edit the position, size, angle, transparency, remark information and other properties of the equipment model. After the equipment is placed one by one, a target three-dimensional scene meeting the requirement of a customer purchasing scheme can be generated in the mobile phone app, and the target three-dimensional scene can be used as a construction scheme layout to be demonstrated or displayed for the customer.
Step 6: the virtual machine room purchasing construction scheme layout obtained in the step 5 can be further optimized, for example, the placement position and the placement angle of the equipment can be further optimized. In this step, the mobile phone app can be used to optimize the placement position and angle of the virtual device in the AR scene through a nonlinear fitting algorithm, so as to obtain the optimized placement position value of the virtual device.
Step 7: the more accurate construction scheme layout can be generated according to the data generated in the step 6 and the virtual scene generated in the step 4. The construction scheme layout can be displayed in an AR environment without limitation, and in some practical applications, the construction scheme layout can also provide a basic data model for accurate engineering management application, and can be used as a digital twin construction scheme model for multiple parties such as an IDC machine room seller, a client, a constructor, a supervisor and the like.
By the three-dimensional scene construction method provided by the embodiment of the disclosure, the following functions and effects can be realized: 1) virtual three-dimensional scene construction of a real IDC machine room can be achieved by combining equipment such as a mobile phone camera and sensors with methods such as image recognition and nonlinear fitting; 2) virtual equipment can be placed in the three-dimensional scene through AR technology, and a construction scheme display example (namely a target three-dimensional scene) with relatively accurate sizes and positions can be completed through data fitting. After the construction scheme is saved, the three-dimensional virtual scene can also be displayed in a VR mode, rather than being limited to AR display that depends on the real scene.
It is noted that the above-described figures are only schematic illustrations of processes involved in a method according to an exemplary embodiment of the invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Fig. 11 shows a block diagram of a three-dimensional scene constructing apparatus 1100 in a fifth embodiment of the present disclosure; as shown in fig. 11, includes: an acquisition module 1101, configured to acquire scene image data of a target entity scene, and parse sensing information in the scene image data to generate a spatial coordinate system corresponding to the scene image data; a construction module 1102, configured to identify a surface object in the scene image data, and construct an initial three-dimensional scene of the target entity scene in a spatial coordinate system based on the surface object; a determining module 1103, configured to obtain an actual distance parameter value in the target entity scene, and determine size information of the initial three-dimensional scene according to the actual distance parameter value; the placement module 1104 is configured to obtain an equipment construction instruction, and determine an equipment model and a corresponding placement position applied to the initial three-dimensional scene according to the equipment construction instruction; the display module 1105 is configured to display the device model in the initial three-dimensional scene according to the placement position, so as to generate a three-dimensional scene with a device construction effect.
In some embodiments, the scene image data is obtained by image capturing of a target physical scene using an image capture device having a sensing device; the scene image data comprises a plurality of images; and a step of the acquisition module 1101 analyzing the sensing information in the scene image data to generate a spatial coordinate system corresponding to the scene image data, including: analyzing the sensing information in the scene image data to obtain the relative position information of the image in the scene image data and the three-dimensional direction information corresponding to the image; and generating a space coordinate system corresponding to the scene image data according to the relative position information and the three-dimensional direction information.
In some embodiments, the constructing module 1102 identifies surface objects in the scene image data, and constructs an initial three-dimensional scene of the target entity scene in a spatial coordinate system based on the surface objects, comprising: determining feature points on each image in the scene image data through an Augmented Reality (AR) technology, and identifying surface objects in the scene image data according to the feature points; determining a position of the surface object in a spatial coordinate system based on the spatial coordinate system corresponding to the scene image data; constructing an initial surface model of the surface object according to the position of the surface object in the space coordinate system; optimizing the initial surface model through a nonlinear fitting technology to obtain a target surface model, and constructing an initial three-dimensional scene of the target entity scene according to the target surface model.
In some embodiments, the step of constructing module 1102 optimizes the initial surface model by a nonlinear fitting technique to obtain a target surface model includes: creating a mesh object corresponding to the initial surface model in the initial three-dimensional scene; attaching the mesh object to a corresponding initial surface model to generate labeling information of the initial surface model; and splicing the initial surface model based on the labeling information by a nonlinear fitting technology to obtain the target surface model.
In some embodiments, the step of determining module 1103 obtaining the actual distance parameter value in the target entity scene includes: using image acquisition equipment with a sensing device to measure the distance of a target entity scene to obtain an actual distance parameter value in the target entity scene; and/or acquiring building design data of the target entity scene, and determining actual distance parameter values in the target entity scene according to the building design data.
In some embodiments, the step of determining, by the determining module 1103, size information of the initial three-dimensional scene according to the actual distance parameter value includes: determining virtual measurement distances on all coordinate axes in a space coordinate system according to the actual distance parameter values; and determining the size information of the initial three-dimensional scene according to the virtual measurement distance on each coordinate axis.
In some embodiments, the pose module 1104 obtains device build instructions, and determines a device model and a corresponding pose position to apply to the initial three-dimensional scene based on the device build instructions, comprising: displaying a layout interface; responding to equipment construction operation performed by a user on a layout interface, and generating equipment construction instructions; and analyzing the equipment construction instruction to obtain an equipment model and a corresponding placement position.
In some embodiments, the step of presenting the device model in the initial three-dimensional scene by the presentation module 1105 according to the placement position includes: acquiring the actual model size of the equipment model, and acquiring the display size of the equipment model based on the size information of the initial three-dimensional scene and the actual model size; placing the equipment model with the display size in an initial three-dimensional scene according to the placement position by an Augmented Reality (AR) technology; adjusting the coordinates and/or the direction of the equipment model placement by using a nonlinear fitting algorithm, and determining the target placement position of the equipment model placement; and displaying the equipment model with the display size in the initial three-dimensional scene based on the target placement position through the virtual reality VR technology.
It can be seen that by implementing the three-dimensional scene construction apparatus shown in fig. 11, an initial three-dimensional scene of a target entity scene can be constructed, and a device model can be displayed in the initial three-dimensional scene in quick response to a device construction instruction, so that a three-dimensional scene with a device construction effect is generated and displayed with relatively accurate size and position data.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the invention may take the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein generally as a "circuit," "module," or "system."
Fig. 12 shows a block diagram of an electronic device for three-dimensional scene construction in an embodiment of the present disclosure. The electronic device 1200 according to this embodiment of the invention is described below with reference to fig. 12. The electronic device 1200 shown in fig. 12 is merely an example and should not impose any limitation on the functions and scope of use of embodiments of the invention.
As shown in fig. 12, the electronic device 1200 takes the form of a general-purpose computing device. Components of the electronic device 1200 may include, but are not limited to: at least one processing unit 1210, at least one storage unit 1220, and a bus 1230 connecting the different system components (including the storage unit 1220 and the processing unit 1210).
Wherein the storage unit stores program code executable by the processing unit 1210, such that the processing unit 1210 performs the steps according to various exemplary embodiments of the invention described in the "exemplary methods" section above of this specification. For example, the processing unit 1210 may perform step S201 shown in fig. 2, acquiring scene image data of the target entity scene, and parsing the sensing information in the scene image data to generate a spatial coordinate system corresponding to the scene image data; step S203, identifying surface objects in the scene image data, and constructing an initial three-dimensional scene of the target entity scene in the spatial coordinate system based on the surface objects; step S205, acquiring actual distance parameter values in the target entity scene, and determining size information of the initial three-dimensional scene according to the actual distance parameter values; step S207, acquiring a device construction instruction, and determining a device model and a corresponding placement position applied to the initial three-dimensional scene according to the device construction instruction; and step S209, displaying the device model in the initial three-dimensional scene according to the placement position, thereby generating a three-dimensional scene with a device construction effect.
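Read together, steps S201 through S209 form a straightforward pipeline. The sketch below mirrors only that control flow; every helper is a hypothetical stub, not the patent's implementation.

```python
def parse_sensing_info(images):            # S201: derive the coordinate system
    return "world-frame"

def build_initial_scene(images, frame):    # S203: surfaces -> initial scene
    return {"frame": frame, "devices": []}

def measure_scene(scene, actual_ref_m):    # S205: attach real-world size
    scene["size_m"] = actual_ref_m
    return scene

def parse_instruction(raw):                # S207: model + placement position
    return raw["model_id"], raw["position"]

def present_device(scene, model, pose):    # S209: place and display the model
    scene["devices"].append((model, pose))
    return scene

scene = build_initial_scene([], parse_sensing_info([]))
scene = measure_scene(scene, actual_ref_m=6.0)
model, pose = parse_instruction({"model_id": "rack-1", "position": (1, 0, 2)})
print(present_device(scene, model, pose))
```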
The storage unit 1220 may include a readable medium in the form of a volatile storage unit, such as a Random Access Memory (RAM) 12201 and/or a cache memory 12202, and may further include a Read Only Memory (ROM) 12203.
Storage unit 1220 may also include a program/utility 12204 having a set (at least one) of program modules 12205, such program modules 12205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 1230 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1200 may also communicate with one or more external devices 1300 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1200, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 1200 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1250. Also, the electronic device 1200 may communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through the network adapter 1260. As shown, the network adapter 1260 communicates with other modules of the electronic device 1200 over the bus 1230. It should be appreciated that, although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 1200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium on which is stored a program product capable of implementing the method described above in this specification. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code which, when the program product is run on a terminal device, causes the terminal device to carry out the steps according to the various exemplary embodiments of the disclosure described in the "exemplary methods" section of this specification.
A program product for implementing the above-described method according to an embodiment of the present invention may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., via the Internet using an Internet service provider).
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order or that all illustrated steps be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.

Claims (11)

1. A three-dimensional scene construction method, comprising:
acquiring scene image data of a target entity scene, and analyzing sensing information in the scene image data to generate a space coordinate system corresponding to the scene image data;
identifying surface objects in the scene image data, and constructing an initial three-dimensional scene of the target entity scene in the space coordinate system based on the surface objects;
acquiring an actual distance parameter value in the target entity scene, and determining size information of the initial three-dimensional scene according to the actual distance parameter value;
acquiring an equipment construction instruction, and determining an equipment model and a corresponding placement position applied to the initial three-dimensional scene according to the equipment construction instruction;
and displaying the equipment model in the initial three-dimensional scene according to the placement position, thereby generating a three-dimensional scene with an equipment construction effect.
2. The method of claim 1, wherein the scene image data is obtained by image capture of the target entity scene using an image acquisition device having a sensing device, the scene image data comprising a plurality of images; and
wherein the step of analyzing the sensing information in the scene image data to generate a spatial coordinate system corresponding to the scene image data comprises:
analyzing the sensing information in the scene image data to obtain the relative position information of the image in the scene image data and the three-dimensional direction information corresponding to the image;
and generating a space coordinate system corresponding to the scene image data according to the relative position information and the three-dimensional direction information.
3. The method of claim 1, wherein the step of identifying surface objects in the scene image data and constructing an initial three-dimensional scene of the target entity scene in the spatial coordinate system based on the surface objects comprises:
determining feature points on each image in the scene image data through an Augmented Reality (AR) technology, and identifying surface objects in the scene image data according to the feature points;
determining a position of the surface object in a spatial coordinate system based on the spatial coordinate system corresponding to the scene image data;
constructing an initial surface model of the surface object according to the position of the surface object in the space coordinate system;
optimizing the initial surface model through a nonlinear fitting technology to obtain a target surface model, and constructing an initial three-dimensional scene of the target entity scene according to the target surface model.
4. A method according to claim 3, wherein the step of optimizing the initial surface model by a nonlinear fitting technique to obtain a target surface model comprises:
creating a mesh object corresponding to the initial surface model in the initial three-dimensional scene;
attaching the mesh object to a corresponding initial surface model to generate labeling information of the initial surface model;
and splicing the initial surface model based on the labeling information through a nonlinear fitting technology to obtain the target surface model.
5. The method according to claim 1, wherein the step of obtaining actual distance parameter values in the target entity scene comprises:
ranging the target entity scene using image acquisition equipment with a sensing device to obtain an actual distance parameter value in the target entity scene; and/or,
building design data of the target entity scene are obtained, and actual distance parameter values in the target entity scene are determined according to the building design data.
6. The method according to claim 1, wherein the step of determining size information of the initial three-dimensional scene from the actual distance parameter values comprises:
determining virtual measurement distances on all coordinate axes in the space coordinate system according to the actual distance parameter values;
and determining the size information of the initial three-dimensional scene according to the virtual measurement distances on the coordinate axes.
7. The method of claim 1, wherein the step of acquiring an equipment construction instruction and determining an equipment model and a corresponding placement position applied to the initial three-dimensional scene according to the equipment construction instruction comprises:
displaying a layout interface;
responding to equipment construction operation performed by a user on the layout interface, and generating an equipment construction instruction;
and analyzing the equipment construction instruction to obtain the equipment model and the corresponding placement position.
8. The method of claim 1, wherein the step of displaying the equipment model in the initial three-dimensional scene according to the placement position comprises:
acquiring the actual model size of the equipment model, and acquiring the display size of the equipment model based on the size information of the initial three-dimensional scene and the actual model size;
placing the equipment model with the display size in the initial three-dimensional scene according to the placement position through an Augmented Reality (AR) technology;
adjusting the coordinates and/or the direction of the equipment model placement by using a nonlinear fitting algorithm, and determining the target placement position of the equipment model placement;
and displaying the equipment model with the display size in the initial three-dimensional scene based on the target placement position through a Virtual Reality (VR) technology.
9. A three-dimensional scene construction apparatus, comprising:
the acquisition module is used for acquiring scene image data of a target entity scene, and analyzing sensing information in the scene image data to generate a space coordinate system corresponding to the scene image data;
the construction module is used for identifying surface objects in the scene image data, and constructing an initial three-dimensional scene of the target entity scene in the space coordinate system based on the surface objects;
the determining module is used for obtaining an actual distance parameter value in the target entity scene and determining the size information of the initial three-dimensional scene according to the actual distance parameter value;
the setting module is used for acquiring an equipment construction instruction, and determining an equipment model and a corresponding placement position applied to the initial three-dimensional scene according to the equipment construction instruction;
the display module is used for displaying the equipment model in the initial three-dimensional scene according to the placement position, and further generating a three-dimensional scene with an equipment construction effect.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the three-dimensional scene construction method according to any of claims 1 to 8.
11. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement the three-dimensional scene construction method of any of claims 1 to 8.
CN202111489350.5A 2021-12-08 2021-12-08 Three-dimensional scene construction method and device, storage medium and electronic equipment Pending CN116310062A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111489350.5A CN116310062A (en) 2021-12-08 2021-12-08 Three-dimensional scene construction method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116310062A true CN116310062A (en) 2023-06-23

Family

ID=86787396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111489350.5A Pending CN116310062A (en) 2021-12-08 2021-12-08 Three-dimensional scene construction method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116310062A (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116822419A (en) * 2023-07-11 2023-09-29 安徽斯维尔信息科技有限公司 Intelligent electrical design system based on application scene data
CN116822419B (en) * 2023-07-11 2024-05-28 安徽斯维尔信息科技有限公司 Intelligent electrical design system based on application scene data
CN117152349A (en) * 2023-08-03 2023-12-01 无锡泰禾宏科技有限公司 Virtual scene self-adaptive construction system and method based on AR and big data analysis
CN117152349B (en) * 2023-08-03 2024-02-23 无锡泰禾宏科技有限公司 Virtual scene self-adaptive construction system and method based on AR and big data analysis
CN117576359A (en) * 2024-01-16 2024-02-20 北京德塔精要信息技术有限公司 3D model construction method and device based on Unity webpage platform
CN117576359B (en) * 2024-01-16 2024-04-12 北京德塔精要信息技术有限公司 3D model construction method and device based on Unity webpage platform
CN117876642A (en) * 2024-03-08 2024-04-12 杭州海康威视系统技术有限公司 Digital model construction method, computer program product and electronic equipment
CN117876642B (en) * 2024-03-08 2024-06-11 杭州海康威视系统技术有限公司 Digital model construction method, computer program product and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination