CN116912425A - Visual scene construction method based on three-dimensional reconstruction and related equipment - Google Patents

Visual scene construction method based on three-dimensional reconstruction and related equipment

Info

Publication number
CN116912425A
CN116912425A (application CN202311008766.XA)
Authority
CN
China
Prior art keywords
scene
model
dimensional
target
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311008766.XA
Other languages
Chinese (zh)
Inventor
颜峰
刘思彦
刘柏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311008766.XA priority Critical patent/CN116912425A/en
Publication of CN116912425A publication Critical patent/CN116912425A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a visual scene construction method based on three-dimensional reconstruction, comprising the following steps: acquiring two-dimensional material, and constructing a sub-scene model according to the two-dimensional material and a pre-constructed three-dimensional construction model; determining a scene construction task, and determining an initial scene model according to the scene construction task and the sub-scene model; adjusting the initial scene model to determine a target scene model, and generating a target scene description file of the target scene model; and, in response to the target scene description file conforming to a preset rule, publishing the target scene model to a target resource platform.

Description

Visual scene construction method based on three-dimensional reconstruction and related equipment
Technical Field
The application relates to the technical field of three-dimensional model construction, in particular to a visual scene construction method based on three-dimensional reconstruction and related equipment.
Background
With the continuous development of three-dimensional model construction technology, demand for three-dimensional scene models in the film, animation, and game fields keeps growing. In the prior art, a three-dimensional scene usually has to be built through a series of steps such as concept art, low-poly modeling, high-poly modeling, and lighting, so the pipeline from concept art to model display is relatively long. Moreover, the types and number of three-dimensional scenes built into an immersive activity platform are fixed and hard to reuse: when a highly customized activity needs a three-dimensional model, the built-in fixed scenes cannot be applied to it directly, while the built-in scene resources inflate the installation package. The more built-in scenes the immersive activity platform carries, the more storage space is occupied, yet no single activity uses all of the built-in scenes, so storage space is wasted. How to quickly and efficiently build new three-dimensional scenes by combining mature model resources with visual scene construction is therefore a problem to be solved.
Disclosure of Invention
Accordingly, the present application is directed to a visual scene construction method based on three-dimensional reconstruction and related equipment.
Based on the above object, the present application provides a method for constructing a visual scene based on three-dimensional reconstruction, which comprises the following steps:
acquiring a two-dimensional material, and constructing a sub-scene model according to the two-dimensional material and a pre-constructed three-dimensional construction model;
determining a scene construction task, and determining an initial scene model according to the scene construction task and the sub-scene model;
adjusting the initial scene model, determining a target scene model, and generating a target scene description file of the target scene model;
and, in response to the target scene description file conforming to a preset rule, publishing the target scene model to a target resource platform.
Based on the same object, the application also provides a visual scene construction device based on three-dimensional reconstruction, which comprises:
the acquisition module is configured to acquire two-dimensional materials, and construct a sub-scene model according to the two-dimensional materials and a pre-constructed three-dimensional construction model;
the construction module is configured to determine a scene construction task, and determine an initial scene model according to the scene construction task and the sub-scene model;
the adjusting module is configured to adjust the initial scene model, determine a target scene model and generate a target scene description file of the target scene model;
and the application module is configured to publish the target scene model to a target resource platform in response to the target scene description file conforming to a preset rule.
Based on the above object, the present application further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the method for constructing a three-dimensional reconstruction-based visual scene according to any one of the above methods when executing the program.
Based on the above object, the present application further provides a non-transitory computer readable storage medium storing computer instructions for causing the computer to execute the three-dimensional reconstruction-based visual scene construction method as set forth in any one of the above.
From the above, it can be seen that the visual scene construction method based on three-dimensional reconstruction and the related equipment provided by the application first acquire two-dimensional material and construct sub-scene models according to the two-dimensional material and a pre-constructed three-dimensional construction model; then determine a scene construction task and determine an initial scene model according to the scene construction task and the sub-scene models; then adjust the initial scene model to determine a target scene model and generate a target scene description file of the target scene model; and finally, in response to the target scene description file conforming to a preset rule, publish the target scene model to the target resource platform. In this way, a mature three-dimensional model can be constructed from two-dimensional material, the corresponding three-dimensional model can be applied according to the scene description file and adjusted visually during application until an ideal target scene model is determined, and the model meeting the preset rule can finally be published to the target resource platform for users. This shortens the generation time of the three-dimensional model, reduces the production cycle of the three-dimensional scene, and improves the efficiency of building three-dimensional scenes.
Drawings
In order to more clearly illustrate the technical solutions of the application or the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below are merely embodiments of the application, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of a visual scene construction platform based on three-dimensional reconstruction according to an embodiment of the present application.
Fig. 2 is a flow chart of a method for constructing a visual scene based on three-dimensional reconstruction according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a visual scene building device based on three-dimensional reconstruction provided by the embodiment of the application.
Fig. 4 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be further described in detail below with reference to specific embodiments and with reference to the accompanying drawings, in order to make the objects, technical solutions and advantages of the present application more apparent.
It should be noted that, unless otherwise defined, technical or scientific terms used in the embodiments of the present application have the ordinary meaning understood by one of ordinary skill in the art to which the present application belongs. The terms "first," "second," and the like, as used in embodiments of the present application, do not denote any order, quantity, or importance, but are used to distinguish one element from another. The word "comprising" or "comprises" means that the element or item preceding the word includes the elements or items listed after the word and their equivalents, without excluding other elements or items. The terms "connected" or "coupled" and the like are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", etc. merely indicate relative positional relationships, which may change when the absolute position of the described object changes.
As described in the background section, three-dimensional reconstruction is a three-dimensional modeling method based on computer vision and image processing techniques, i.e., extracting information from multiple two-dimensional images or other sensor data to create a three-dimensional model or scene.
However, the applicant has found that in the prior art a three-dimensional scene is generally constructed through a series of steps such as concept art, low-poly modeling, high-poly modeling, and lighting. In the main steps, modeling software is first used to lay out the rough outline of an object in proportion to the real object or person, at a relatively low polygon count; sculpting software such as ZBrush is then used to carve the object's details and restore real-world stone or wood textures, at a relatively high polygon count; and the model's UV coordinates must then be unwrapped and mapped, i.e. the texture coordinates of the model surface are unfolded so that texture maps can be projected onto the three-dimensional model surface for texturing and rendering. This pipeline from concept art to model display is long. Moreover, the types and number of three-dimensional scenes built into an immersive activity platform are fixed and hard to reuse: when a highly customized activity needs a three-dimensional model, the built-in fixed scenes cannot be applied to it directly, while the built-in scene resources inflate the installation package. The more built-in scenes the immersive activity platform carries, the more storage space is occupied, yet no single activity uses all of the built-in scenes, so storage space is wasted.
The application provides a visual scene construction method based on three-dimensional reconstruction and related equipment. Two-dimensional material is first acquired, and a sub-scene model is constructed according to the two-dimensional material and a pre-constructed three-dimensional construction model; a scene construction task is then determined, and an initial scene model is determined according to the scene construction task and the sub-scene model; the initial scene model is then adjusted to determine a target scene model, and a target scene description file of the target scene model is generated; finally, in response to the target scene description file conforming to a preset rule, the target scene model is published to a target resource platform. In this way, a mature three-dimensional model can be constructed from two-dimensional material, the corresponding three-dimensional model can be applied according to the scene description file and adjusted visually during application until an ideal target scene model is determined, and the model meeting the preset rule can finally be published to the target resource platform for users. This shortens the generation time of the three-dimensional model, reduces the production cycle of the three-dimensional scene, and improves the efficiency of building three-dimensional scenes.
The visual scene construction method based on three-dimensional reconstruction provided by the embodiment of the application is specifically described below by way of a specific embodiment.
Referring to fig. 1, a schematic diagram of a visual scene construction platform based on three-dimensional reconstruction is provided in an embodiment of the present application.
The visual scene construction platform based on three-dimensional reconstruction can restore real objects in videos or pictures into three-dimensional models using an AI algorithm, which shortens model generation time and reduces the scene production cycle, while the built-in scenes and models of the immersive activity platform can be combined to quickly build new scenes. The system mainly comprises a three-dimensional reconstruction platform, a visual scene construction platform, a scene resource platform, an auditing platform, and a file storage service platform.
The three-dimensional reconstruction platform provides functions such as creation, preview, and editing of AI-generated models, and maintains 3D models whose generation has failed, succeeded, or is still in progress; the visual scene construction platform is used to build new scenes derived from the built-in scenes, and finally generates a scene description file that describes the scene models and the relative positions between them; the scene resource platform stores usable 3D models and scene description files; the auditing platform mainly audits the 3D models published by the three-dimensional reconstruction platform and the scene description files of the visual scene construction platform, and scene resources that pass the audit can be placed on the scene resource platform for use; the file storage service platform saves the resource files of the scene resource platform and caches them on the server nearest to the user, so that the user can access the files faster.
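The disclosure does not fix a concrete format for the scene description file; as a minimal sketch, it could be a JSON document listing each model reference together with its relative transform (all field names below are hypothetical illustrations, not the patented format):

```python
import json

# Hypothetical scene description file: the text says the file "describes
# a scene model and the relative position between the models".
scene_description = {
    "scene_id": "demo-scene",
    "base_template": "builtin/hall",
    "models": [
        {"model_id": "tree_01", "position": [1.0, 0.0, -2.5],
         "rotation": [0, 90, 0], "scale": 1.0},
        {"model_id": "lantern_03", "position": [0.5, 2.0, -2.5],
         "rotation": [0, 0, 0], "scale": 0.4},
    ],
}

# Round-trip through JSON, as a file stored on the scene resource
# platform and cached by the file storage service would be.
serialized = json.dumps(scene_description)
restored = json.loads(serialized)
print(restored["models"][0]["model_id"])  # tree_01
```

A plain-data format like this would let the construction platform, auditing platform, and CDN all exchange the same file without sharing code.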
The method for constructing a visual scene based on three-dimensional reconstruction according to an exemplary embodiment of the present application will be described below with reference to a specific application scene. It should be noted that the above application scenario is only shown for the convenience of understanding the spirit and principle of the present application, and the embodiments of the present application are not limited in any way. Rather, embodiments of the application may be applied to any scenario where applicable.
The method for constructing the visual scene based on the three-dimensional reconstruction provided by the embodiment of the application is specifically described by a specific embodiment.
Fig. 2 shows a flow diagram of a method for constructing a visual scene based on three-dimensional reconstruction according to an embodiment of the present application.
Referring to fig. 2, the method for constructing a visual scene based on three-dimensional reconstruction provided by the embodiment of the application specifically includes the following steps:
step S201, acquiring a two-dimensional material, and constructing a sub-scene model according to the two-dimensional material and a pre-constructed three-dimensional construction model.
In a specific implementation, a user can enter the three-dimensional reconstruction platform and provide two-dimensional material. The two-dimensional material may be a video of the target scene shot slowly and at uniform speed; a group of pictures of the target scene, shot in a surrounding manner from more than three angles in continuous-shooting mode; or video or picture material of a 3D scene model captured with a virtual camera.
As an optional embodiment, two-dimensional material of the target scene determined from at least three viewing angles can be used as two-dimensional scenery-dividing images. Scene data of each two-dimensional scenery-dividing image and shooting data for the two-dimensional image are determined: the scene data may describe the position, number, coordinates, size, and other attributes of the target scene shown in the image, and the shooting data may describe the shooting angle of the image or the parameters of the virtual camera. Relations among all the two-dimensional scenery-dividing images are then established according to the scene data to determine a two-dimensional panoramic image, which can describe the target scene completely. Finally, image feature data of the two-dimensional panoramic image is determined, and the image feature data and the shooting data are input into the pre-constructed three-dimensional construction model to obtain a sub-scene model.
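As a rough sketch of organizing the per-view inputs described above, one might pair each image with its scene data and shooting data and order the views by shooting angle so that adjacent views overlap, which simplifies relating the scenery-dividing images into a panorama. The container and field names here are hypothetical, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class ViewMaterial:
    image_path: str
    shooting_angle_deg: float  # shooting data: camera angle for this view
    subject_bbox: tuple        # scene data: (x, y, w, h) of the target in the image

def order_views(views):
    """Sort views by shooting angle so neighbouring frames overlap,
    easing the step of relating part-scene images into a panorama."""
    return sorted(views, key=lambda v: v.shooting_angle_deg)

views = [
    ViewMaterial("front.jpg", 0.0, (10, 10, 200, 300)),
    ViewMaterial("back.jpg", 180.0, (12, 8, 190, 310)),
    ViewMaterial("side.jpg", 90.0, (8, 12, 210, 290)),
]
ordered = order_views(views)
print([v.image_path for v in ordered])  # ['front.jpg', 'side.jpg', 'back.jpg']
```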
AI-based three-dimensional reconstruction mainly includes the following approaches:
Deep learning: a neural network is trained to perform three-dimensional reconstruction, for example using a convolutional neural network (CNN) for image semantic segmentation and feature extraction, or a generative adversarial network (GAN) for three-dimensional shape generation and optimization.
Visual SLAM: visual SLAM (Simultaneous Localization and Mapping) combined with a deep learning algorithm enables real-time three-dimensional reconstruction and scene understanding, for example deep-learning-based visual SLAM for three-dimensional reconstruction and navigation of indoor scenes.
Multi-sensor data fusion: data from multiple sensors, such as RGB images, depth images, and lidar, are fused and integrated to achieve more accurate and complete three-dimensional reconstruction results.
Augmented reality: the three-dimensional reconstruction result is fused with the real scene to achieve more realistic and interactive scene display and application, for example real-time augmented-reality scene reconstruction using a deep learning algorithm.
The sub-scene model, namely the 3D model, can be an independent physical 3D model, can be a combination of physical 3D models, and can be a combination scene of physical 3D models depending on a specific scene.
As an alternative embodiment, the sub-scene model may be optimized. The optimization may include denoising: some noise may exist in the sub-scene model, and a denoising operation can effectively remove the interfering noise.
The optimization may also include gap filling: in the model construction process, the sub-scene model may have defects due to problems such as the shooting angle, shooting completeness, or mismatched parameters of the two-dimensional images, so the missing parts need to be filled.
Finally, smoothing can make the lines of the sub-scene model smoother and improve the visual appearance of the model.
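The smoothing step could, for instance, be a Laplacian pass that nudges each vertex toward the centroid of its neighbours; this is a generic mesh-smoothing sketch under assumed data structures, not the specific optimization of the disclosure:

```python
def laplacian_smooth(vertices, neighbors, alpha=0.5, iterations=1):
    """One illustrative smoothing pass over a mesh. `vertices` is a
    list of (x, y, z) tuples; `neighbors` maps a vertex index to the
    indices of adjacent vertices. Each vertex moves a fraction `alpha`
    toward its neighbours' centroid."""
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            ns = neighbors.get(i, [])
            if not ns:
                new.append(v)  # isolated vertex: leave unchanged
                continue
            centroid = [sum(verts[j][k] for j in ns) / len(ns) for k in range(3)]
            new.append([v[k] + alpha * (centroid[k] - v[k]) for k in range(3)])
        verts = new
    return verts

# A noisy vertex (index 1) sitting above two fixed neighbours.
vs = [(0.0, 0.0, 0.0), (0.5, 1.0, 0.0), (1.0, 0.0, 0.0)]
adj = {1: [0, 2]}
smoothed = laplacian_smooth(vs, adj)
print(smoothed[1])  # [0.5, 0.5, 0.0]
```

Production pipelines would use a mesh library with feature-preserving filters, since plain Laplacian smoothing also shrinks sharp detail.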
As an optional embodiment, after the sub-scene model is built, it may be submitted to the auditing platform, which audits the sub-scene model according to preset requirements. If the audit passes, the sub-scene model can be published to the target resource platform; if not, publication to the target resource platform is prohibited.
Step S202, determining a scene construction task, and determining an initial scene model according to the scene construction task and the sub-scene model.
As an optional embodiment, a user can create a scene construction task on the visual scene construction platform. The scene construction task can include a basic scene template, which lays out the scene style of the target scene, and can also specify permission accounts for the scene, i.e. the constructors who build the scene and the auditors who review it.
The scene construction task can be a description statement for the target scene, or a construction sketch for the target scene.
As an alternative embodiment, the basic scene template indicated by the scene construction task is first determined, and the scene description file of the basic scene template is then pulled from the CDN. A CDN (Content Delivery Network) is a set of servers distributed across various regions; these servers store copies of data so that requests can be served by the server closest to the user. After the scene description file of the basic scene template is acquired, the file can be cached on a storage medium of the current device.
Further, the scene description file of the sub-scene model is pulled at the same time, and corresponding scene resources are pulled according to the scene description file of the basic scene template and the scene description file of the sub-scene model, so that an initial scene model is determined; the initial scene model comprises a basic scene template and a sub scene model.
Step S203, adjusting the initial scene model, determining a target scene model, and generating a target scene description file of the target scene model.
As an alternative embodiment, the constructor may adjust the types, numbers, and positions of the basic scene template or the sub-scene models. In response to an adjustment operation on the basic scene template and/or the sub-scene model, adjustment parameters of the operation are determined; a third scene description file is then determined from the adjustment parameters, the first scene description file, and the second scene description file, where the third scene description file is used to determine the positions between the basic scene template and the sub-scene models; finally, the target scene model is determined according to the third scene description file.
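One way to picture producing the third description file is merging the two source files and overlaying the adjustment parameters; the dictionary layout and field names here are hypothetical, matching no specific format in the disclosure:

```python
import copy

def apply_adjustments(base_desc, sub_desc, adjustments):
    """Merge the base template's and sub-scene model's description
    files, then apply position adjustments keyed by model id,
    yielding a combined 'third' description file."""
    merged = copy.deepcopy(base_desc)
    merged["models"] = copy.deepcopy(base_desc.get("models", [])) + \
                       copy.deepcopy(sub_desc.get("models", []))
    for model_id, new_position in adjustments.items():
        for m in merged["models"]:
            if m["model_id"] == model_id:
                m["position"] = new_position  # constructor moved this model
    return merged

base = {"models": [{"model_id": "hall", "position": [0, 0, 0]}]}
sub = {"models": [{"model_id": "tree_01", "position": [1, 0, 1]}]}
third = apply_adjustments(base, sub, {"tree_01": [2, 0, 1]})
print(third["models"][1]["position"])  # [2, 0, 1]
```

Keeping the originals untouched (deep copies) means the base template file stays reusable for the next construction task.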
As an alternative embodiment, the scene description file of the target scene model may be submitted to the origin server of the storage service, which will cache the scene description file to the CDN under its URL.
Step S204, in response to the target scene description file conforming to a preset rule, the target scene model is published to a target resource platform.
As an optional embodiment, an auditor can enter the audit portal of the visual scene construction platform, load the target scene model according to the scene description file of the target scene model, and audit it against the description and/or scene sketch of the scene task provided by the user, for example checking which models exist and where they are positioned, so as to prevent anomalies such as overlapping models or clipping through the scene. If the audit passes, the target scene model is published to the scene resource platform; if not, the process returns to step S203 for readjustment.
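An automated part of such an audit could flag overlapping models with an axis-aligned bounding-box test; this is a generic sketch of one plausible check, not the audit rule defined by the disclosure:

```python
def aabb_overlap(a, b):
    """True when two models' axis-aligned bounding boxes intersect on
    all three axes. Each box is (min_x, min_y, min_z, max_x, max_y, max_z)."""
    return all(a[i] < b[i + 3] and b[i] < a[i + 3] for i in range(3))

table = (0, 0, 0, 2, 1, 2)
chair = (1, 0, 1, 3, 1, 3)   # intrudes into the table's region
lamp  = (5, 0, 5, 6, 2, 6)   # well away from both
print(aabb_overlap(table, chair))  # True  -> flag for the auditor
print(aabb_overlap(table, lamp))   # False -> no conflict
```

An AABB test is conservative (boxes can touch when meshes do not), so flagged pairs would still go to a human auditor rather than being rejected outright.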
It should be noted that the auditing steps in the application may be performed by the auditing platform according to a preset rule, or by manual review.
As an optional embodiment, a user selects a target scene model on the scene resource platform and deploys it to the immersive activity platform. After the target scene model is loaded, media resources such as pictures and videos can be placed into texture maps on the model surface and then synchronized to live activities for rendering, realizing hot updates of the media resources.
Specifically, the target rendering resources may be mapped to a model surface of the target scene model, resulting in a texture rendered target scene model.
From the above, it can be seen that the visual scene construction method based on three-dimensional reconstruction and the related equipment provided by the application first acquire two-dimensional material and construct sub-scene models according to the two-dimensional material and a pre-constructed three-dimensional construction model; then determine a scene construction task and determine an initial scene model according to the scene construction task and the sub-scene models; then adjust the initial scene model to determine a target scene model and generate a target scene description file of the target scene model; and finally, in response to the target scene description file conforming to a preset rule, publish the target scene model to the target resource platform. In this way, a mature three-dimensional model can be constructed from two-dimensional material, the corresponding three-dimensional model can be applied according to the scene description file and adjusted visually during application until an ideal target scene model is determined, and the model meeting the preset rule can finally be published to the target resource platform for users. This shortens the generation time of the three-dimensional model, reduces the production cycle of the three-dimensional scene, and improves the efficiency of building three-dimensional scenes.
It should be noted that, the method of the embodiment of the present application may be performed by a single device, for example, a computer or a server. The method of the embodiment can also be applied to a distributed scene, and is completed by mutually matching a plurality of devices. In the case of such a distributed scenario, one of the devices may perform only one or more steps of the method of an embodiment of the present application, the devices interacting with each other to accomplish the method.
It should be noted that the foregoing describes some embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Fig. 3 shows a schematic diagram of a visual scene construction device based on three-dimensional reconstruction provided by the embodiment of the application.
Based on the same inventive concept, the application also provides a visual scene constructing device based on three-dimensional reconstruction, which corresponds to the method of any embodiment.
Referring to fig. 3, the visual scene construction device based on three-dimensional reconstruction includes: an acquisition module, a construction module, an adjustment module, and an application module; wherein:
the acquisition module 301 is configured to acquire a two-dimensional material, and construct a sub-scene model according to the two-dimensional material and a pre-constructed three-dimensional construction model;
a build module 302 configured to determine a scene build task, and determine an initial scene model according to the scene build task and the sub-scene model;
an adjustment module 303 configured to adjust the initial scene model, determine a target scene model, and generate a target scene description file of the target scene model;
and the application module 304 is configured to publish the target scene model to a target resource platform in response to the target scene description file conforming to a preset rule.
As an alternative embodiment, the two-dimensional material includes: a two-dimensional scenery-dividing image of the target scene determined with at least three viewing angles;
the acquisition module 301 is further configured to:
determining scene data of the two-dimensional scenery-dividing images and shooting data for the two-dimensional images, and establishing relations among all the two-dimensional scenery-dividing images according to the scene data to determine a two-dimensional panoramic image;
and determining image characteristic data of the two-dimensional panoramic image, and inputting the image characteristic data and the shooting data into the three-dimensional construction model to obtain the sub-scene model.
As an alternative embodiment, the obtaining module 301 is further configured to:
performing an optimization operation on the sub-scene model to obtain an optimized sub-scene model; the optimization operation includes any one of denoising, hole filling and smoothing of the sub-scene model;
and in response to the optimized sub-scene model meeting a preset requirement, deploying the sub-scene model to a target resource platform.
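A minimal sketch of this optimize-then-gate step is shown below. The thresholds and mesh fields are assumed for illustration, and the sketch chains all three operations even though the embodiment requires only any one of them:

```python
# Hedged sketch of the optimization gate; field names and thresholds
# are hypothetical, not from the patent.
def denoise(m):    return dict(m, noise=0.0)
def fill_holes(m): return dict(m, holes=0)
def smooth(m):     return dict(m, roughness=min(m["roughness"], 0.1))

def meets_requirement(m):
    # Assumed preset requirement on the optimized sub-scene model.
    return m["noise"] <= 0.01 and m["holes"] == 0 and m["roughness"] <= 0.1

def optimize_and_deploy(mesh, platform):
    for op in (denoise, fill_holes, smooth):
        mesh = op(mesh)
    if meets_requirement(mesh):   # deploy only once the preset
        platform.append(mesh)     # requirement is reached
        return True
    return False

platform = []
ok = optimize_and_deploy({"noise": 0.3, "holes": 4, "roughness": 0.5},
                         platform)
```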
As an alternative embodiment, the building module 302 is further configured to:
determining a basic scene template indicated by the scene construction task;
acquiring a first scene description file of the basic scene template and a second scene description file of the sub-scene model;
determining the initial scene model according to the first scene description file and the second scene description file; wherein the initial scene model includes the base scene template and the sub-scene model.
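The merge of the two scene description files can be illustrated with a JSON sketch; the schema (template name, anchors, bounds) is an assumption for illustration, not the patent's file format:

```python
import json

# Illustrative merge of a first (template) and second (sub-scene)
# description file into one initial scene model; schema is hypothetical.
first = json.loads('{"template": "indoor_hall", "anchors": ["floor", "wall"]}')
second = json.loads('{"sub_scene": "scanned_statue", "bounds": [2, 2, 3]}')

def build_initial_scene(template_desc, sub_desc):
    # The initial scene model contains both the basic scene template
    # and the sub-scene model, here placed at the template's first anchor.
    return {
        "template": template_desc["template"],
        "parts": [{"name": sub_desc["sub_scene"],
                   "bounds": sub_desc["bounds"],
                   "anchor": template_desc["anchors"][0]}],
    }

initial = build_initial_scene(first, second)
```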
As an alternative embodiment, the adjusting module 303 is further configured to:
in response to an adjustment operation for the basic scene template and/or the sub-scene model, determining adjustment parameters of the adjustment operation; wherein the adjustment operation includes adjusting the type, the number or the position of the basic scene template and/or the sub-scene model;
determining a third scene description file according to the adjustment parameters, the first scene description file and the second scene description file, and determining the target scene model according to the third scene description file; wherein the third scene description file is used for determining the relative positions of the basic scene template and the sub-scene model.
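Folding the adjustment parameters into a third description file might look like the sketch below; the keys (`position`, `count`) are assumed examples of the type/number/position adjustments, not the patent's schema:

```python
# Sketch: combine adjustment parameters with the first and second
# description files into a third file recording relative placement.
def apply_adjustment(first, second, params):
    return {
        "template": first["template"],
        "sub_scene": second["name"],
        # Relative position of the sub-scene within the template.
        "position": params.get("position", [0, 0, 0]),
        # Number of instances of the sub-scene model.
        "count": params.get("count", 1),
    }

first = {"template": "indoor_hall"}
second = {"name": "scanned_statue"}
third = apply_adjustment(first, second, {"position": [1.5, 0.0, -2.0]})
```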
As an alternative embodiment, application module 304 is further configured to:
and mapping target rendering resources onto the model surface of the target scene model to obtain a texture-rendered target scene model.
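A toy version of this texture-mapping step is sketched below; deriving UV coordinates by wrapping vertex x/y positions is an illustrative simplification, not the patent's mapping scheme:

```python
# Minimal sketch: attach a rendering resource (texture) to a model by
# computing per-vertex UV coordinates; names are hypothetical.
def apply_texture(model, texture):
    textured = dict(model)
    textured["material"] = {
        "texture": texture,
        # Wrap vertex x/y into [0, 1) as stand-in UV coordinates.
        "uvs": [(v[0] % 1.0, v[1] % 1.0) for v in model["vertices"]],
    }
    return textured

model = {"vertices": [(0.0, 0.0, 0.0), (1.5, 0.25, 0.0)]}
out = apply_texture(model, "stone_wall.png")
```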
As an alternative embodiment, application module 304 is further configured to:
and in response to the target scene description file not conforming to the preset rule, performing a secondary adjustment on the target scene model to obtain an adjusted target scene model.
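The deploy-or-readjust decision can be sketched as below; the particular rule checked (all part positions inside scene bounds) and the clamp-based secondary adjustment are assumed examples:

```python
# Hedged sketch of the rule check and secondary adjustment; the bound
# of 10 units is an arbitrary illustrative preset rule.
def conforms(description, bound=10.0):
    return all(abs(c) <= bound
               for part in description["parts"]
               for c in part["position"])

def deploy_or_adjust(model, description):
    if conforms(description):
        return "deployed", model
    # Secondary adjustment: clamp offending positions back into bounds.
    for part in description["parts"]:
        part["position"] = [max(-10.0, min(10.0, c))
                            for c in part["position"]]
    return "readjusted", model

desc = {"parts": [{"position": [12.0, 0.0, 3.0]}]}
status, _ = deploy_or_adjust({"id": "scene-1"}, desc)
```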
For convenience of description, the above device is described as being divided into various modules by function. Of course, when implementing the present application, the functions of the modules may be implemented in one or more pieces of software and/or hardware.
The device of the foregoing embodiment is configured to implement the corresponding three-dimensional reconstruction-based visual scene construction method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not repeated here.
Fig. 4 shows an exemplary structural diagram of an electronic device according to an embodiment of the present application.
Based on the same inventive concept, the present application further provides an electronic device corresponding to the method of any of the above embodiments. The electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor; when executing the program, the processor implements the method for constructing a visual scene based on three-dimensional reconstruction according to any of the above embodiments.
Fig. 4 shows a more specific hardware architecture of an electronic device according to this embodiment, where the device may include: processor 410, memory 420, input/output interface 430, communication interface 440, and bus 450. Wherein processor 410, memory 420, input/output interface 430, and communication interface 440 enable communication connections within the device between each other via bus 450.
The processor 410 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, and is configured to execute relevant programs to implement the technical solutions provided in the embodiments of the present disclosure.
The memory 420 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 420 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present specification are implemented in software or firmware, the relevant program code is stored in the memory 420 and invoked for execution by the processor 410.
The input/output interface 430 is used to connect with an input/output module to realize information input and output. The input/output module may be configured as a component in a device (not shown in the figure) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
The communication interface 440 is used to connect communication modules (not shown) to enable communication interactions of the device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 450 includes a path to transfer information between components of the device (e.g., processor 410, memory 420, input/output interface 430, and communication interface 440).
It should be noted that although the above device only shows the processor 410, the memory 420, the input/output interface 430, the communication interface 440, and the bus 450, in the implementation, the device may further include other components necessary to achieve normal operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The electronic device of the foregoing embodiment is configured to implement the corresponding three-dimensional reconstruction-based visual scene construction method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not repeated here.
Based on the same inventive concept, the present application also provides a non-transitory computer readable storage medium corresponding to the method of any embodiment, wherein the non-transitory computer readable storage medium stores computer instructions for causing the computer to execute the method for constructing a visual scene based on three-dimensional reconstruction according to any embodiment.
The computer readable medium of the present embodiment includes permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The storage medium of the foregoing embodiments stores computer instructions for causing the computer to execute the method for building a visual scene based on three-dimensional reconstruction according to any one of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein.
Based on the same inventive concept, corresponding to the method for constructing a visual scene based on three-dimensional reconstruction according to any of the above embodiments, the present disclosure further provides a computer program product comprising computer program instructions. In some embodiments, the computer program instructions may be executed by one or more processors of a computer to cause the computer and/or the processor to perform the three-dimensional reconstruction-based visual scene construction method. Corresponding to the execution subject of each step in the method embodiments, the processor executing a given step may belong to the corresponding execution subject.
The computer program product of the above embodiment is configured to enable the computer and/or the processor to perform the method for building a visual scene based on three-dimensional reconstruction according to any one of the above embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein.
Those of ordinary skill in the art will appreciate that the discussion of any of the above embodiments is merely exemplary and is not intended to suggest that the scope of the application (including the claims) is limited to these examples. Within the idea of the application, the technical features of the above embodiments, or of different embodiments, may also be combined, the steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the application as described above, which are not provided in detail for the sake of brevity.
Additionally, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures, in order to simplify the illustration and discussion, and so as not to obscure the embodiments of the present application. Furthermore, the devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present application, and also in view of the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the embodiments of the present application are to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the application, it should be apparent to one skilled in the art that embodiments of the application can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative in nature and not as restrictive.
While the application has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of those embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, the embodiments discussed may be used with other memory architectures (e.g., dynamic RAM (DRAM)).
The present embodiments are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Therefore, any omissions, modifications, equivalent substitutions, improvements, and the like, which are within the spirit and principles of the embodiments of the application, are intended to be included within the scope of the application.

Claims (10)

1. A method for building a visual scene based on three-dimensional reconstruction, the method comprising:
acquiring a two-dimensional material, and constructing a sub-scene model according to the two-dimensional material and a pre-constructed three-dimensional construction model;
determining a scene construction task, and determining an initial scene model according to the scene construction task and the sub-scene model;
adjusting the initial scene model, determining a target scene model, and generating a target scene description file of the target scene model;
and in response to the target scene description file conforming to a preset rule, deploying the target scene model to a target resource platform.
2. The method of claim 1, wherein the two-dimensional material comprises: two-dimensional partial-view images of the target scene captured from at least three viewing angles;
the constructing a sub-scene model according to the two-dimensional material and a pre-constructed three-dimensional construction model comprises the following steps:
determining scene data of the two-dimensional partial-view images and shooting data of the two-dimensional images, and establishing relations among all the two-dimensional partial-view images according to the scene data, so as to determine a two-dimensional panoramic image;
and determining image characteristic data of the two-dimensional panoramic image, and inputting the image characteristic data and the shooting data into the three-dimensional construction model to obtain the sub-scene model.
3. The method according to claim 1, wherein the method further comprises:
performing an optimization operation on the sub-scene model to obtain an optimized sub-scene model; the optimization operation includes any one of denoising, hole filling and smoothing of the sub-scene model;
and in response to the optimized sub-scene model meeting a preset requirement, deploying the sub-scene model to a target resource platform.
4. The method of claim 1, wherein determining a scene construction task and determining an initial scene model according to the scene construction task and the sub-scene model comprises:
determining a basic scene template indicated by the scene construction task;
acquiring a first scene description file of the basic scene template and a second scene description file of the sub-scene model;
determining the initial scene model according to the first scene description file and the second scene description file; wherein the initial scene model includes the base scene template and the sub-scene model.
5. The method of claim 4, wherein said adjusting the initial scene model to determine a target scene model comprises:
in response to an adjustment operation for the basic scene template and/or the sub-scene model, determining adjustment parameters of the adjustment operation; wherein the adjustment operation includes adjusting the type, the number or the position of the basic scene template and/or the sub-scene model;
determining a third scene description file according to the adjustment parameters, the first scene description file and the second scene description file, and determining the target scene model according to the third scene description file; wherein the third scene description file is used for determining the relative positions of the basic scene template and the sub-scene model.
6. The method according to claim 1, wherein the method further comprises:
and mapping the target rendering resources to the model surface of the target scene model to obtain the target scene model after texture rendering.
7. The method according to claim 1, wherein the method further comprises:
and in response to the target scene description file not conforming to the preset rule, performing a secondary adjustment on the target scene model to obtain an adjusted target scene model.
8. A visual scene building device based on three-dimensional reconstruction, the device comprising:
the acquisition module is configured to acquire two-dimensional materials, and construct a sub-scene model according to the two-dimensional materials and a pre-constructed three-dimensional construction model;
the construction module is configured to determine a scene construction task, and determine an initial scene model according to the scene construction task and the sub-scene model;
the adjusting module is configured to adjust the initial scene model, determine a target scene model and generate a target scene description file of the target scene model;
and the application module is configured to input the target scene model into a target resource platform in response to the target scene description file conforming to a preset rule.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when executing the program.
10. A non-transitory computer readable storage medium, wherein the non-transitory computer readable storage medium stores computer instructions for causing a computer to perform the method of any one of claims 1 to 7.
CN202311008766.XA 2023-08-10 2023-08-10 Visual scene construction method based on three-dimensional reconstruction and related equipment Pending CN116912425A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311008766.XA CN116912425A (en) 2023-08-10 2023-08-10 Visual scene construction method based on three-dimensional reconstruction and related equipment


Publications (1)

Publication Number Publication Date
CN116912425A true CN116912425A (en) 2023-10-20

Family

ID=88362975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311008766.XA Pending CN116912425A (en) 2023-08-10 2023-08-10 Visual scene construction method based on three-dimensional reconstruction and related equipment

Country Status (1)

Country Link
CN (1) CN116912425A (en)

Similar Documents

Publication Publication Date Title
US9852544B2 (en) Methods and systems for providing a preloader animation for image viewers
US9240070B2 (en) Methods and systems for viewing dynamic high-resolution 3D imagery over a network
US20180276882A1 (en) Systems and methods for augmented reality art creation
CN112884875A (en) Image rendering method and device, computer equipment and storage medium
KR102433857B1 (en) Device and method for creating dynamic virtual content in mixed reality
JP7432005B2 (en) Methods, devices, equipment and computer programs for converting two-dimensional images into three-dimensional images
US11663775B2 (en) Generating physically-based material maps
US20140184596A1 (en) Image based rendering
Tasse et al. Enhanced texture‐based terrain synthesis on graphics hardware
US20170213394A1 (en) Environmentally mapped virtualization mechanism
US8854392B2 (en) Circular scratch shader
JP2019516202A (en) Generate arbitrary view
Okura et al. Mixed-reality world exploration using image-based rendering
Martin et al. MaterIA: Single Image High‐Resolution Material Capture in the Wild
JP2023504609A (en) hybrid streaming
CN115496845A (en) Image rendering method and device, electronic equipment and storage medium
Zhao et al. LITAR: Visually coherent lighting for mobile augmented reality
Fanini et al. Interactive 3D landscapes on line
Komianos et al. Efficient and realistic cultural heritage representation in large scale virtual environments
CN110038302B (en) Unity 3D-based grid generation method and device
Spini et al. Web 3d indoor authoring and vr exploration via texture baking service
CN111950057A (en) Loading method and device of Building Information Model (BIM)
CN116912425A (en) Visual scene construction method based on three-dimensional reconstruction and related equipment
CN113419806B (en) Image processing method, device, computer equipment and storage medium
Belhi et al. An integrated framework for the interaction and 3D visualization of cultural heritage

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination