CN116503550A - Method, device, storage medium and system for generating three-dimensional virtual scene - Google Patents


Info

Publication number
CN116503550A
CN116503550A (application CN202310379738.2A)
Authority
CN
China
Prior art keywords
information, scene, dimensional virtual, determining, peripheral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310379738.2A
Other languages
Chinese (zh)
Inventor
黄明杨
马菲莹
蒋佳忆
王雨桐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Damo Institute Hangzhou Technology Co Ltd
Original Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority to CN202310379738.2A
Publication of CN116503550A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/13 Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Civil Engineering (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Structural Engineering (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method, a device, a storage medium and a system for generating a three-dimensional virtual scene. The method comprises the following steps: performing scene analysis on scene description information and an image to be referenced to obtain peripheral structure information, wherein the scene description information is used for determining a plurality of areas into which the three-dimensional virtual scene is divided, the image to be referenced is used for providing texture reference information, and the peripheral structure information is used for determining a plurality of building components to be used; determining container placement information from the peripheral structure information; and rendering the three-dimensional virtual scene based on the peripheral structure information and the container placement information. The method and the device solve the technical problems of low generation efficiency, poor reusability and poor style diversity of the three-dimensional virtual scene that arise because the prior art relies on manual operation to generate three-dimensional scenes of different scene styles.

Description

Method, device, storage medium and system for generating three-dimensional virtual scene
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, a storage medium, and a system for generating a three-dimensional virtual scene.
Background
With the improvement of internet technology and intelligent device performance, the production of three-dimensional virtual scenes has become an important task in computer technology, for example the production of three-dimensional virtual exhibition halls, three-dimensional virtual stores and three-dimensional virtual game scenes. In various application scenarios, the generation requirements for three-dimensional virtual scenes are complex and diverse, and three-dimensional virtual scenes of many different styles must be generated. The prior art generally relies on building a three-dimensional virtual scene customized to the required scene style; this process of manually producing a three-dimensional virtual scene has poor reusability across different scenes and low generation efficiency, and the generated three-dimensional virtual scenes have poor style diversity.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the present application provide a method, a device, a storage medium and a system for generating a three-dimensional virtual scene, which at least solve the technical problems of low generation efficiency, poor reusability and poor style diversity of the three-dimensional virtual scene caused by the prior art's reliance on manually generating three-dimensional scenes of different scene styles.
According to an aspect of the embodiments of the present application, there is provided a method for generating a three-dimensional virtual scene, including: performing scene analysis on scene description information and an image to be referenced to obtain peripheral structure information, wherein the scene description information is used for determining a plurality of areas into which a three-dimensional virtual scene is divided, the image to be referenced is used for providing texture reference information, the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of the plurality of areas, and texture information of the plurality of building components is matched with the texture reference information; determining container placement information from the peripheral structure information, wherein the container placement information is used for determining placement modes of a plurality of container components in the peripheral building structures; and rendering the three-dimensional virtual scene based on the peripheral structure information and the container placement information.
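Purely as a non-limiting illustration (all class names, field shapes and the stub logic below are invented for this sketch and are not part of the disclosed method), the three stages described above, scene analysis, container placement, and rendering, might be wired together as follows:

```python
from dataclasses import dataclass

@dataclass
class SceneRequest:
    description: str       # scene description information (comma-separated region list here)
    reference_image: str   # identifier of the image to be referenced (texture reference)

def parse_scene(req: SceneRequest) -> dict:
    """Stage 1: scene analysis yielding peripheral structure information."""
    regions = [r.strip() for r in req.description.split(",")]
    # Each region is assigned wall/floor/ceiling building components whose
    # textures would be matched against the reference image (stubbed as a label).
    components = {r: ["wall", "floor", "ceiling"] for r in regions}
    return {"regions": regions, "components": components,
            "texture_ref": req.reference_image}

def place_containers(peripheral: dict) -> dict:
    """Stage 2: derive container placement information from the peripheral structure."""
    return {r: ["container_0", "container_1"] for r in peripheral["regions"]}

def render(peripheral: dict, placement: dict) -> dict:
    """Stage 3: render the scene (stubbed as a summary dictionary)."""
    return {"regions": peripheral["regions"], "placement": placement}

req = SceneRequest("entrance, showroom", "ref.png")
peripheral = parse_scene(req)
scene = render(peripheral, place_containers(peripheral))
```

Each stage consumes only the output of the previous one, which mirrors the two-stage "peripheral structure, then internal placement" decomposition claimed by the method.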
According to another aspect of the embodiments of the present application, there is further provided a method for generating a three-dimensional virtual scene, in which a graphical user interface is provided by a terminal device and the content displayed on the graphical user interface includes at least one information input control. The method includes: in response to a first touch operation acting on the graphical user interface, inputting scene description information and an image to be referenced through the information input control, wherein the scene description information is used for determining a plurality of areas into which the three-dimensional virtual scene is divided, and the image to be referenced is used for providing texture reference information; and in response to a second touch operation acting on the graphical user interface, performing scene analysis on the scene description information and the image to be referenced to obtain peripheral structure information, determining container placement information from the peripheral structure information, and rendering the three-dimensional virtual scene based on the peripheral structure information and the container placement information, wherein the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of the plurality of areas, texture information of the plurality of building components is matched with the texture reference information, and the container placement information is used for determining placement modes of the plurality of container components in the peripheral building structures.
According to another aspect of the embodiments of the present application, there is also provided a method for generating a three-dimensional virtual scene, including: performing scene analysis on e-commerce scene description information and an image to be referenced to obtain e-commerce scene peripheral structure information, wherein the e-commerce scene description information is used for determining a plurality of areas into which the three-dimensional virtual e-commerce scene is divided, the image to be referenced is used for providing texture reference information, the e-commerce scene peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of the plurality of areas, and texture information of the plurality of building components is matched with the texture reference information; determining e-commerce scene container placement information from the e-commerce scene peripheral structure information, wherein the e-commerce scene container placement information is used for determining placement modes of a plurality of container components in the peripheral building structures; and rendering the three-dimensional virtual e-commerce scene based on the e-commerce scene peripheral structure information and the e-commerce scene container placement information.
According to another aspect of the embodiments of the present application, there is also provided an apparatus for generating a three-dimensional virtual scene, including: an analysis module, configured to perform scene analysis on scene description information and an image to be referenced to obtain peripheral structure information, wherein the scene description information is used for determining a plurality of areas into which the three-dimensional virtual scene is divided, the image to be referenced is used for providing texture reference information, the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of the plurality of areas, and texture information of the plurality of building components is matched with the texture reference information; a determining module, configured to determine container placement information from the peripheral structure information, wherein the container placement information is used for determining placement modes of a plurality of container components in the peripheral building structures; and a rendering module, configured to render the three-dimensional virtual scene based on the peripheral structure information and the container placement information.
According to another aspect of the embodiments of the present application, there is further provided a computer readable storage medium that includes a stored program, wherein, when the program runs, a device in which the computer readable storage medium is located is controlled to execute any one of the methods for generating a three-dimensional virtual scene described above.
According to another aspect of the embodiments of the present application, there is also provided a system for generating a three-dimensional virtual scene, including: a processor; and a memory, coupled to the processor, for providing the processor with instructions for processing the following steps: performing scene analysis on scene description information and an image to be referenced to obtain peripheral structure information, wherein the scene description information is used for determining a plurality of areas into which the three-dimensional virtual scene is divided, the image to be referenced is used for providing texture reference information, the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of the plurality of areas, and texture information of the plurality of building components is matched with the texture reference information; determining container placement information from the peripheral structure information, wherein the container placement information is used for determining placement modes of a plurality of container components in the peripheral building structures; and rendering the three-dimensional virtual scene based on the peripheral structure information and the container placement information.
According to the method for generating a three-dimensional virtual scene provided by the present application, scene analysis is performed on scene description information and an image to be referenced to obtain peripheral structure information, wherein the scene description information is used for determining a plurality of areas into which the three-dimensional virtual scene is divided, the image to be referenced is used for providing texture reference information, the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of the plurality of areas, and texture information of the plurality of building components is matched with the texture reference information; container placement information is determined from the peripheral structure information, wherein the container placement information is used for determining placement modes of a plurality of container components in the peripheral building structures; and the three-dimensional virtual scene is obtained by rendering based on the peripheral structure information and the container placement information.
The scene description information and the image to be referenced together represent the style requirement for scene generation, and the method adopts a two-stage construction approach of "peripheral structure first, internal placement second", thereby achieving the purpose of automatically generating a three-dimensional virtual scene from the scene style requirement. This improves the style diversity of the generated three-dimensional virtual scenes as well as the efficiency and reusability of three-dimensional scene generation, and solves the technical problems of low generation efficiency, poor reusability and poor style diversity caused by the prior art's reliance on manually producing three-dimensional scenes of different scene styles.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 shows a hardware block diagram of a computer terminal (or mobile device) for implementing a method of generating a three-dimensional virtual scene;
FIG. 2 is a flow chart of a method of generating a three-dimensional virtual scene according to embodiment 1 of the present application;
FIG. 3 is a schematic diagram of an alternative process of generating a three-dimensional virtual scene according to embodiment 1 of the present application;
FIG. 4 is a schematic diagram of an alternative peripheral scene generation process according to embodiment 1 of the present application;
FIG. 5 is a schematic diagram of an alternative recognition result according to embodiment 1 of the present application;
FIG. 6 is a schematic diagram of an alternative texture generation extension process according to embodiment 1 of the present application;
FIG. 7 is a schematic illustration of alternative container placement information according to embodiment 1 of the present application;
FIG. 8 is a flow chart of a method of generating a three-dimensional virtual scene according to embodiment 2 of the present application;
FIG. 9 is a flow chart of a method of generating a three-dimensional virtual scene according to embodiment 3 of the present application;
FIG. 10 is a schematic structural diagram of an apparatus for generating a three-dimensional virtual scene according to embodiment 4 of the present application;
FIG. 11 is a block diagram of a computer terminal according to embodiment 5 of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the solution of the present application, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms and terminology used in describing the embodiments of the present application are explained as follows:
Three-dimensional virtual store: in the e-commerce field, a virtual structured area generated by a rendering engine. It includes virtual components such as wall surfaces, floors, ceilings, and container assemblies for placing commodity models. A virtual character controlled by a user can enter the three-dimensional virtual store to walk around, view commodity models, and select and purchase commodities.
Scene analysis: refers to the process of identifying scene components in a scene image by means of semantic segmentation.
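As a toy illustration of this definition (the class ids, class names and label map below are invented; a real system would obtain the per-pixel class map from a semantic segmentation network), scene components can be recovered from a per-pixel label map as follows:

```python
# Invented class ids for the example; a segmentation model defines its own.
CLASS_NAMES = {0: "floor", 1: "wall", 2: "ceiling"}

# A tiny per-pixel class map, standing in for a segmentation network's output.
label_map = [
    [2, 2, 2, 2],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
]

def parse_components(label_map):
    """Count pixels per semantic class, i.e. identify scene components."""
    counts = {}
    for row in label_map:
        for cls in row:
            name = CLASS_NAMES[cls]
            counts[name] = counts.get(name, 0) + 1
    return counts

components = parse_components(label_map)
```

The resulting per-component pixel counts are one simple form the identified scene components could take before being mapped to building components.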
Generative adversarial network (Generative Adversarial Network, GAN): comprises a generator, which generates new samples imitating the features of a data set, and a discriminator, which distinguishes the data samples of the real data set from the generated (fake) samples.
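A minimal numerical sketch of this generator/discriminator pairing (single linear layers with invented sizes, and a one-off loss evaluation rather than a full adversarial training loop) could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generator: maps a latent vector (4,) to a fake sample (2,).
W_g = rng.normal(size=(2, 4))
# Discriminator: maps a sample (2,) to a probability of being "real".
w_d = rng.normal(size=2)

def generator(z):
    return W_g @ z

def discriminator(x):
    return 1.0 / (1.0 + np.exp(-(w_d @ x)))   # sigmoid over a linear score

real = np.array([1.0, 0.5])                   # a stand-in "real" data sample
fake = generator(rng.normal(size=4))          # a generated (fake) sample

# Standard GAN losses evaluated on one real/fake pair.
d_loss = -np.log(discriminator(real)) - np.log(1.0 - discriminator(fake))
g_loss = -np.log(discriminator(fake))
```

Training would alternate gradient steps that lower `d_loss` for the discriminator and `g_loss` for the generator, which is the adversarial game the definition describes.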
Graph neural network (Graph Neural Network, GNN): an unsupervised learning method for identifying signals hidden in graph data. GNNs use multiple hidden layers to recursively compute over the graph, learning representations of the implicit signals encapsulated in complex connected graphs.
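The recursive computation over the graph can be illustrated by a single mean-aggregation layer on a tiny invented graph (a real GNN would interleave learned weight matrices and nonlinearities with several such hidden layers):

```python
# Adjacency lists of a small undirected graph and scalar node features.
adj = {0: [1, 2], 1: [0], 2: [0]}
feats = {0: 0.0, 1: 3.0, 2: 6.0}

def propagate(adj, feats):
    """One message-passing layer: new feature = mean of neighbour features."""
    return {node: sum(feats[nbr] for nbr in nbrs) / len(nbrs)
            for node, nbrs in adj.items()}

updated = propagate(adj, feats)
```

Stacking this step recursively lets information from distant nodes reach each node, which is how the hidden signals in the connected graph are characterised.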
Transformer architecture: a sequence-to-sequence (Sequence-to-Sequence) neural network based on the attention mechanism. The Transformer architecture combines the attention mechanism, multi-head attention and positional encoding, performing end-to-end sampling and transformation without reducing depth.
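The attention mechanism at the core of this architecture can be sketched as scaled dot-product attention (toy 2x2 inputs; multi-head attention and positional encoding are omitted from the sketch):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

Q = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[1.0, 2.0], [3.0, 4.0]])
out, w = attention(Q, Q, V)   # self-attention: queries and keys coincide
```

Each output row is a weighted mixture of the value rows, with weights that sum to one per query.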
With the improvement of internet technology and intelligent device performance, the production of three-dimensional virtual scenes has become an important task in computer technology, for example the production of three-dimensional virtual exhibition halls, three-dimensional virtual stores and three-dimensional virtual game scenes. In contrast to the prior-art approach of customizing and generating a three-dimensional virtual scene manually according to the required scene style, an algorithm for assisting in generating three-dimensional virtual scenes has been proposed in the related technical field: a plane-to-scene (Plane-to-Scene) algorithm reconstructs a three-dimensional model of the scene interior from images. Specifically, feature points are first extracted from captured multi-view images; the feature points are then matched across the different views; finally, the three-dimensional coordinates of the feature points in the real world under the corresponding poses are estimated by triangulation, yielding the three-dimensional virtual scene corresponding to the images. However, this algorithm is only suitable for scenes in which the structural relations of the scene components of the three-dimensional virtual scene are well defined, and it has the following technical defects: poor reusability across different scenes, low generation efficiency, and poor style diversity of the generated three-dimensional virtual scenes.
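The triangulation step of such a plane-to-scene pipeline can be sketched as a linear (DLT) two-view triangulation on synthetic data (the camera projection matrices and the 3D point below are invented, and feature extraction and cross-view matching are assumed already done):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.
    P1, P2 are 3x4 projection matrices; x1, x2 are matched 2-D image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector = homogeneous 3D point
    return X[:3] / X[3]        # dehomogenise

def project(P, X):
    """Project a 3D point with a 3x4 camera matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic two-view setup: identity camera and a camera shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 5.0])

X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With exact (noise-free) correspondences, the recovered point matches the ground-truth point, illustrating why such reconstruction only works when the scene structure is clearly observable in the images.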
Example 1
In accordance with the embodiments of the present application, there is also provided a method embodiment for generating a three-dimensional virtual scene. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that described herein.
The method embodiment provided in Embodiment 1 of the present application may be executed in a mobile terminal, a computer terminal or a similar computing device. Fig. 1 shows a hardware block diagram of a computer terminal (or mobile device) for implementing the method of generating a three-dimensional virtual scene. As shown in Fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, …, 102n in the figure; the processors 102 may include, but are not limited to, a processing means such as a microcontroller unit (MCU) or a field-programmable gate array (FPGA)), a memory 104 for storing data, and a transmission means 106 for communication functions. In addition, the computer terminal 10 may further include: a display, an input/output (I/O) interface, a universal serial bus (USB) port (which may be included as one of the ports of the bus), a network interface, a cursor control device (e.g., a mouse or a touch pad), a keyboard, a power supply, and/or a camera.
It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 1 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, the computer terminal 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
It should be noted that the one or more processors 102 and/or other data processing circuits described above may be referred to generally herein as "data processing circuits". The data processing circuit may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuit may be a single stand-alone processing module, or incorporated, in whole or in part, into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the present application, the data processing circuit acts as a kind of processor control (e.g., the selection of a variable resistor termination path to interface with).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the method for generating a three-dimensional virtual scene in the embodiments of the present application, and the processor 102 executes the software programs and modules stored in the memory 104, thereby performing various functional applications and data processing, that is, implementing the method for generating a three-dimensional virtual scene described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to connect to a network via a network interface for receiving or transmitting data. Specific examples of the network described above may include wired and/or wireless networks provided by the communication provider of the computer terminal 10. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
The display shown in fig. 1 may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
It should be noted here that, in some alternative embodiments, the computer device (or mobile device) shown in fig. 1 described above may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. It should be noted that fig. 1 is only one example of a specific example, and is intended to illustrate the types of components that may be present in the computer device (or mobile device) described above.
It should be noted herein that, in some embodiments, the computer device (or mobile device) shown in Fig. 1 has a touch display (also referred to as a "touch screen"). In some embodiments, the computer device (or mobile device) shown in Fig. 1 has a graphical user interface (GUI), and a user may interact with the GUI through finger contacts and/or gestures on a touch-sensitive surface. The human-machine interaction functionality optionally includes interactions such as creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, sending and receiving electronic mail, call interfaces, playing digital video, playing digital music, and/or web browsing; the executable instructions for performing these human-machine interaction functions are configured/stored in a computer program product or readable storage medium executable by the one or more processors.
In the above-described operating environment, the present application provides a method for generating a three-dimensional virtual scene as shown in fig. 2. Fig. 2 is a flowchart of a method for generating a three-dimensional virtual scene according to embodiment 1 of the present application, as shown in fig. 2, the method for generating a three-dimensional virtual scene includes:
Step S21, scene analysis is carried out on scene description information and images to be referred to, so as to obtain peripheral structure information, wherein the scene description information is used for determining a plurality of areas which are obtained by dividing in a three-dimensional virtual scene, the images to be referred to are used for providing texture reference information, the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of the plurality of areas, and the texture information of the plurality of building components is matched with the texture reference information;
step S22, determining container placement information through peripheral structure information, wherein the container placement information is used for determining placement modes of a plurality of container components in a peripheral building structure;
and S23, rendering based on the peripheral structure information and the container placement information to obtain a three-dimensional virtual scene.
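As an illustrative aid (not part of the claimed method), steps S21 to S23 can be sketched as a two-stage pipeline. All function names and data shapes below are hypothetical simplifications introduced for explanation only:

```python
def analyze_scene(description, reference_images):
    # Step S21 sketch: derive peripheral structure information from the
    # scene description (region list) and the images to be referenced
    # (texture reference information; texture extraction itself omitted).
    regions = description["regions"]
    return {
        "components": [{"id": f"wall_{r}", "region": r} for r in regions]
                      + [{"id": f"floor_{r}", "region": r} for r in regions],
        "textures": [f"texture_{i}" for i in range(len(reference_images))],
    }

def place_containers(peripheral):
    # Step S22 sketch: derive one container placement per wall component.
    walls = [c for c in peripheral["components"] if c["id"].startswith("wall_")]
    return [{"against": w["id"], "offset": i} for i, w in enumerate(walls)]

def render(peripheral, placements):
    # Step S23 sketch: combine both stages into one renderable scene record.
    return {"structure": peripheral, "placements": placements}

peripheral = analyze_scene({"regions": ["womens", "mens", "kids"]}, ["ref.jpg"])
scene = render(peripheral, place_containers(peripheral))
```

The two-stage split mirrors the "peripheral structure first, internal placement second" order of the method steps.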
The method for generating the three-dimensional virtual scene can be applied to the following scenes: generating a three-dimensional virtual shop in the electronic commerce field, generating a three-dimensional virtual game scene in the electronic game field, generating a three-dimensional virtual factory for simulation analysis in the industrial field, generating a three-dimensional virtual classroom in the teaching/training/scientific research field, generating a three-dimensional virtual urban area in the urban planning field, generating a three-dimensional virtual exhibition hall in the science and technology or art field, and the like. The scene description information is used for determining a plurality of scene areas corresponding to a three-dimensional virtual scene in the application scene, for example, a plurality of shop areas in a three-dimensional virtual shop and a plurality of game terrain areas in a three-dimensional virtual game scene. The image to be referenced is used for providing texture information corresponding to peripheral building structures (such as floors, walls, ceilings and the like) of the plurality of scene areas, and the texture information is obtained by adapting the image to be referenced. That is, the above-mentioned scene description information and the image to be referenced can jointly determine the scene style of the three-dimensional virtual scene. For example, if the scene style of the three-dimensional virtual scene is determined to be "simple and fresh" according to the scene description information, the three-dimensional virtual scene corresponds to a smaller number of divided areas, and the texture information corresponding to the peripheral building structure of the three-dimensional virtual scene is determined from the image to be referenced to be a solid texture of a preset color.
The container placement information is used for determining a placement mode of a plurality of container components (such as container components for placing commodity models in three-dimensional virtual shops) in the peripheral building structure in the application scene, and the placement mode of the plurality of container components can be represented by structural relations among the plurality of container components.
The method for generating the three-dimensional virtual scene provided by the embodiment of the present application may be executed at the client corresponding to the application scene. The client determines the scene description information and the image to be referenced in the application scene from a preset database or from data input by a user in real time, obtains peripheral structure information by utilizing the scene description information and the image to be referenced, further determines container placement information corresponding to a plurality of container components in the peripheral building structure based on the peripheral structure information, and renders the peripheral structure information and the container placement information to obtain the three-dimensional virtual scene. Further, the client displays the three-dimensional virtual scene on a corresponding graphical user interface, and provides the three-dimensional virtual scene to the user through an output device (such as a display screen, VR glasses, etc.).
The method for generating the three-dimensional virtual scene provided by the embodiment of the present application may also be executed at the server corresponding to the application scene. The server may be an independent server or a server cluster, and generates the three-dimensional virtual scene according to the scene description information and the image to be referenced given by the client. The server may also be a cloud server that interacts with the client in real time through software as a service (Software as a Service, SaaS): it obtains peripheral structure information according to the scene description information and the image to be referenced given by the client, then determines container placement information corresponding to the plurality of container components in the peripheral building structure based on the peripheral structure information, and renders the peripheral structure information and the container placement information to obtain the three-dimensional virtual scene. Further, the server returns the three-dimensional virtual scene to the client to be provided to the user.
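In the SaaS deployment described above, the client must transmit both inputs to the server. A minimal sketch of a hypothetical request payload follows; the field names and the base64 encoding of the image are illustrative assumptions, as the patent does not specify a wire format:

```python
import base64
import json

def build_generation_request(description, image_bytes):
    # Hypothetical client-side payload: the scene description plus a
    # base64-encoded reference image, serialized as JSON for the server.
    return json.dumps({
        "scene_description": description,
        "reference_image": base64.b64encode(image_bytes).decode("ascii"),
    })
```

The server would decode this payload, run the scene analysis and placement stages, and return the rendered scene to the client.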
According to the method for generating the three-dimensional virtual scene, scene description information and images to be referred are subjected to scene analysis to obtain peripheral structure information, wherein the scene description information is used for determining a plurality of areas which are obtained by dividing in the three-dimensional virtual scene, the images to be referred are used for providing texture reference information, the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of the plurality of areas, and the texture information of the plurality of building components is matched with the texture reference information; determining container placement information through the peripheral structure information, wherein the container placement information is used for determining placement modes of a plurality of container components in the peripheral building structure; and rendering based on the peripheral structure information and the container placement information to obtain a three-dimensional virtual scene. 
The scene description information and the image to be referenced can together represent the scene generation style requirement of the three-dimensional virtual scene, and the method adopts a two-stage construction processing mode of "peripheral structure - internal placement", so that the purpose of automatically generating the three-dimensional virtual scene based on the scene generation style requirement is achieved. The technical effects of improving the style diversity of the three-dimensional virtual scene and improving the efficiency and reusability of three-dimensional scene generation are thereby achieved, and the technical problems of low generation efficiency, poor reusability and poor style diversity of the three-dimensional virtual scene, caused by the prior art's reliance on manual generation of three-dimensional scenes of different scene styles, are solved.
The method for generating the three-dimensional virtual scene is particularly suitable for generating the scene of the three-dimensional virtual shop in the field of electronic commerce, and the method provided by the embodiment of the application is specifically described by taking the scene as an example. Fig. 3 is a schematic diagram of an alternative process for generating a three-dimensional virtual scene according to embodiment 1 of the present application, and as shown in fig. 3, the three-dimensional virtual scene generation process includes three stages: a first stage, a peripheral scene generation stage; a second stage, an internal component placement stage; and a third stage, namely a scene style adjustment stage.
In an alternative embodiment, in step S21, scene analysis is performed on the scene description information and the image to be referred to, to obtain peripheral structure information, including the following method steps:
step S211, determining the number of the areas of the plurality of areas and the connection relation between the different areas by using the scene description information;
step S212, determining the component identifiers and the component numbers of the plurality of building components according to the area numbers, and determining the structural relation among the plurality of building components according to the connection relation;
step S213, extracting texture reference information from the image to be referenced;
in step S214, peripheral structure information is generated based on the component identification, the number of components, the structural relationship, and the texture reference information.
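Steps S211 and S212 above can be sketched as a simple derivation from the region list and the connection relation. The component naming scheme and the "one floor and one wall set per region" assumption below are hypothetical illustrations, not the patent's actual rules:

```python
def derive_components(region_names, connections):
    # Step S211/S212 sketch: derive component identifiers and counts from
    # the number of regions, and a structural relation from the connections.
    components = {
        "floors": [f"floor_{r}" for r in region_names],
        "walls": [f"wall_{r}" for r in region_names],
    }
    structural_relation = [
        (f"wall_{a}", "shares_opening_with", f"wall_{b}") for a, b in connections
    ]
    return components, structural_relation

# Region layout of the fig. 4 example: women's and men's clothing regions
# each connect to the children's clothing region.
components, relation = derive_components(
    ["womens", "mens", "kids"],
    [("womens", "kids"), ("mens", "kids")],
)
```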
As shown in fig. 3, when the peripheral scene is generated in the first stage, scene description information is acquired; determining the connection relation between the number of the areas and different areas of the plurality of areas by using scene description information; acquiring an image to be referred; texture reference information is extracted from an image to be referenced.
Fig. 4 is a schematic diagram of an alternative peripheral scene generation process according to embodiment 1 of the present application, as shown in fig. 4, in which three shop areas (such as women's clothing, men's clothing, and children's clothing) and connection relations between the three shop areas are determined using scene description information when generating a three-dimensional virtual shop. For example, a female garment region, a male garment region, and a child garment region, the female garment region being connected to the child garment region, the male garment region being connected to the child garment region. Still as shown in fig. 4, an image to be referred to is acquired, which may be a picture of a store in a real scene (2 images to be referred to are exemplified in fig. 4).
As also shown in fig. 4, a plurality of building components are determined by the number of store areas, the plurality of building components including a first number of wall components and a second number of floor components; further, a structural relationship between the first number of wall components and the second number of floor components is determined. Further, peripheral structure information of the three-dimensional virtual store is generated based on the wall components, the floor components, the structural relationship, and the 2 images to be referenced. The peripheral structure information may include peripheral structures of a plurality of different patterns (one is exemplarily shown in the drawings), and the different patterns may be different texture patterns, different connection patterns, and the like.
In an alternative embodiment, in step S213, texture reference information is extracted from the image to be referenced, comprising the following method steps:
step S2131, performing scene recognition on the image to be referred to obtain a recognition result, wherein the recognition result is used for determining corresponding display areas of a plurality of building components in the image to be referred to;
step S2132, performing texture generation expansion on the identification result to obtain texture reference information.
As an exemplary embodiment, scene recognition is performed on the image to be referenced by using a preset recognition model to obtain a recognition result, where the recognition result is used to determine the display areas corresponding to the plurality of building components in the image to be referenced.
Fig. 5 is a schematic diagram of an alternative recognition result according to embodiment 1 of the present application, where, as shown in fig. 5, an image to be recognized corresponding to a three-dimensional virtual shop is recognized, and the obtained recognition result includes a first display area corresponding to a wall component and a second display area corresponding to a floor component.
As an exemplary embodiment, texture generation expansion is performed on the original texture information in the display areas corresponding to the plurality of building components in the recognition result to obtain texture reference information, where the original texture information may be a first texture map and the texture reference information may be a second texture map; the first texture map is similar to the second texture map in texture features, and the resolution of the second texture map is higher than that of the first texture map. Fig. 6 is a schematic diagram of an optional texture generation expansion process according to embodiment 1 of the present application. As shown in fig. 6, the first texture map is the smaller texture map on the left, which is the original texture information corresponding to the wall component in the above recognition result; the second texture map is the larger texture map on the right, obtained by performing texture generation expansion based on the first texture map.
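The actual texture generation expansion is a generative process that synthesizes a higher-resolution map with similar texture features. As a rough stand-in to illustrate the input/output relationship only, the sketch below simply tiles the small patch; this is an assumption for demonstration, not the patented technique:

```python
def expand_texture(tile, out_w, out_h):
    # Naive stand-in for texture generation expansion (step S2132): tile
    # the low-resolution first texture map to the requested resolution to
    # produce a larger second texture map with the same local features.
    h, w = len(tile), len(tile[0])
    return [[tile[y % h][x % w] for x in range(out_w)] for y in range(out_h)]

small = [[1, 2], [3, 4]]             # 2x2 "first texture map"
large = expand_texture(small, 4, 4)  # 4x4 "second texture map"
```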
In an alternative embodiment, in step S214, peripheral structure information is generated based on the component identification, the number of components, the structural relationship, and the texture reference information, comprising the method steps of:
step S2141, virtual three-dimensional grid information corresponding to a plurality of areas is determined based on component identifications, component numbers and structural relations;
in step S2142, the virtual three-dimensional mesh information and the texture reference information are used to generate peripheral structure information.
As an exemplary embodiment, a rendering engine is utilized to determine virtual three-dimensional grid information corresponding to a plurality of areas based on component identifications, component numbers and structural relationships, and peripheral structure information is generated by adopting the virtual three-dimensional grid information and texture reference information, wherein the virtual three-dimensional grid information is used for determining coordinate position information of the plurality of areas in a three-dimensional virtual scene, and the peripheral structure information is stored in a peripheral data exchange format file.
As shown in fig. 3, when the peripheral scene generation is performed in the first stage, peripheral structure information of the virtual scene is generated based on the determined number of scene areas, the scene area connection relation, and the texture reference map, and the peripheral structure information is stored in a JSON (JavaScript Object Notation) data exchange format file, denoted as the peripheral JSON file.
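The patent does not specify the schema of the peripheral JSON file; the shape below is a hypothetical illustration of how the region connection relation and component texture bindings might be serialized:

```python
import json

# Illustrative peripheral structure record; all field names are assumptions.
peripheral = {
    "regions": [
        {"name": "womens", "connects_to": ["kids"]},
        {"name": "mens", "connects_to": ["kids"]},
        {"name": "kids", "connects_to": ["womens", "mens"]},
    ],
    "components": [
        {"id": "wall_01", "type": "wall", "texture": "wall_tex.png"},
        {"id": "floor_01", "type": "floor", "texture": "floor_tex.png"},
    ],
}
peripheral_json = json.dumps(peripheral, indent=2)  # contents of the peripheral JSON file
```

The second stage then reads this file back to configure container placement inside the peripheral building structure.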
In an alternative embodiment, in step S22, the container placement information is determined from the peripheral structure information, comprising the following method steps:
step S221, obtaining container materials, wherein the container materials are used for determining display patterns of a plurality of container components;
step S222, selecting a plurality of container components by utilizing container materials;
step S223, configuring the placement positions of the plurality of container components based on the peripheral structure information to obtain container placement information.
Still as shown in fig. 3, when the internal components are placed in the second stage, container materials are acquired, where the container materials are used to determine display patterns of a plurality of container components for placing commodity models in the three-dimensional virtual store, and the display patterns include: display shape, display size, etc.; selecting a plurality of container components from a preset container component library by utilizing the container materials; and configuring the placement positions of the container components in the three-dimensional virtual store based on the peripheral JSON file obtained in the first stage to obtain container placement information. Further, the container placement information is stored in the JSON file, and an internal JSON file is obtained.
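Step S222 — selecting container components from a preset library by the display pattern in the container material — can be sketched as a simple filter. The library contents and match criteria below are hypothetical:

```python
# Hypothetical preset container component library.
CONTAINER_LIBRARY = [
    {"id": "shelf_a", "shape": "shelf", "size": "large"},
    {"id": "rack_b",  "shape": "rack",  "size": "medium"},
    {"id": "table_c", "shape": "table", "size": "small"},
]

def select_containers(material):
    # Step S222 sketch: filter the library by the display shape and/or
    # display size named in the container material.
    return [
        c for c in CONTAINER_LIBRARY
        if c["shape"] == material.get("shape") or c["size"] == material.get("size")
    ]
```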
In an alternative embodiment, in step S223, the placement positions of the plurality of container components are configured based on the peripheral structure information to obtain container placement information, including the following method steps:
Step S2231, generating a plurality of peripheral building structure styles based on the peripheral structure information;
step S2232, selecting a target peripheral building structure pattern from a plurality of peripheral building structure patterns;
step S2233, configuring the placement positions of the plurality of container assemblies according to the target peripheral building structure pattern to obtain container placement information.
In the alternative embodiment, the plurality of peripheral building structure patterns may be different texture patterns, different linking patterns of peripheral building structures. The process of creating the various peripheral building structures described above may be as shown in fig. 4.
As an exemplary embodiment, when the placement positions of the plurality of container components are configured, a target peripheral building structure pattern is selected from a plurality of peripheral building structure patterns according to scene generation style requirements, and then the placement positions of the plurality of container components are configured according to the target peripheral building structure pattern, so that container placement information is obtained. The container placement information includes placement positions, placement directions and the like of each container assembly in the plurality of container assemblies in the three-dimensional virtual scene.
It should be noted that, in the embodiment of the present application, the implementation scheme of configuring the placement positions of the plurality of container components may be one of the following: a placement scheme based on preset placement rules, a placement algorithm scheme based on a Transformer structure, and a placement algorithm scheme based on a convolutional neural network (Convolutional Neural Network, CNN).
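Of the three schemes above, the rule-based one is the simplest to illustrate. The sketch below is one hypothetical placement rule (fixed spacing along a single wall), not the patent's actual rule set:

```python
def place_along_wall(wall_length, container_width, gap):
    # Rule-based placement sketch: pack containers along one wall with a
    # fixed gap between them, returning each container's near-edge x offset.
    positions, x = [], gap
    while x + container_width + gap <= wall_length:
        positions.append(x)
        x += container_width + gap
    return positions
```

A Transformer- or CNN-based scheme would instead learn the placement distribution from example layouts rather than apply fixed spacing.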
In an alternative embodiment, in step S23, a three-dimensional virtual scene is rendered based on the peripheral structure information and the container placement information, including the following method steps:
step S231, determining target scene structure information based on the peripheral structure information and the container placement information;
step S232, determining a plurality of three-dimensional virtual component models to be displayed in a preset rendering engine through target scene structure information;
step S233, rendering the three-dimensional virtual component model in a graphical user interface of a preset rendering engine, and displaying the three-dimensional virtual scene.
As shown in fig. 3, after the peripheral JSON file is obtained in the first stage, generating a three-dimensional scene structure of the peripheral building structure, and determining a plurality of virtual component models corresponding to the three-dimensional virtual scene; based on the three-dimensional scene structure and the plurality of virtual component models, generating a model loading file, and further importing the model loading file into preset software (such as a preset rendering tool) to render and generate a three-dimensional virtual scene.
Fig. 7 is a schematic diagram of alternative container placement information according to embodiment 1 of the present application. As shown in fig. 7, when generating a three-dimensional store, a preset rendering Engine (such as the Unity Engine, the Unreal Engine (UE), etc.) is used to determine a plurality of three-dimensional virtual component models to be exhibited based on the target scene structure information, so as to render and generate the three-dimensional virtual store.
In an alternative embodiment, in step S231, the target scene structure information is determined based on the peripheral structure information and the container placement information, comprising the following method steps:
step S2311, determining initial scene structure information based on the peripheral structure information and the container placement information;
in step S2312, texture-style conversion is performed on the initial scene structure information to obtain the target scene structure information.
As an exemplary embodiment, texture style conversion is performed on the initial scene structure information according to a preset style adjustment rule to obtain target scene structure information, where the preset style adjustment rule includes a texture adjustment rule, a shape adjustment rule, a layout adjustment rule, and a constraint adjustment rule, and the constraint adjustment rule is used to ensure that component structure errors (for example, wall surfaces failing to connect into a closed loop, or a gap between a wall surface and the floor) are eliminated from the target scene structure information. The target scene structure information may be a target scene JSON file obtained by style-adjusting the initial scene JSON file. By converting the texture style of the initial scene structure information, the diversity of the scene style and the scene attractiveness in the target scene structure information can be improved.
It should be noted that, in the embodiment of the present application, the implementation scheme for performing texture style conversion on the initial scene structure information may be one of the following: a texture color style conversion scheme based on an image processing mode of RGB values and a CNN-based integral style adjustment scheme.
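The RGB-value image processing scheme can be sketched as a per-pixel blend toward a target tint. This minimal sketch assumes an 8-bit RGB tuple representation; the blend rule is an illustrative assumption, not the patent's specific conversion:

```python
def shift_texture_style(pixels, target_tint, strength=0.5):
    # RGB-value style conversion sketch: blend every pixel toward a target
    # tint color; strength 0 keeps the original, 1 replaces it entirely.
    return [
        tuple(round(p * (1 - strength) + t * strength)
              for p, t in zip(px, target_tint))
        for px in pixels
    ]

# Warm a neutral gray texture toward an amber tint at half strength.
warmed = shift_texture_style([(100, 100, 100)], (200, 150, 50), strength=0.5)
```

A CNN-based整体 adjustment scheme would instead restyle the whole texture map jointly rather than pixel by pixel.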
In an alternative embodiment, in step S232, a plurality of three-dimensional virtual component models to be exhibited are determined in a preset rendering engine through the target scene structure information, including the following method steps:
step S2321, importing target scene structure information into a preset rendering engine;
step S2322, obtaining model loading information corresponding to the target scene structure information in a preset rendering engine;
step S2323, determining a plurality of three-dimensional virtual component models to be exhibited based on the model loading information.
As an exemplary embodiment, the preset rendering engine is configured to determine model loading information, and the model loading information may be stored in a model loading file (FBX file). The preset rendering engine may be a Unity engine, UE, etc. For example, a target scene JSON file corresponding to the three-dimensional virtual scene is imported into a Unity engine, an FBX file corresponding to the target scene JSON file is obtained in the Unity engine, and a plurality of three-dimensional virtual component models corresponding to the three-dimensional virtual scene are determined in the Unity engine according to the FBX file, wherein the plurality of three-dimensional virtual component models can be target virtual component models selected from a preset component model library according to the FBX file.
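Step S2323 — selecting target virtual component models from a preset component model library according to the loading file — amounts to a lookup from component types to model assets. The library contents and the JSON shape below are hypothetical illustrations:

```python
# Hypothetical preset component model library mapping component types
# to model asset paths (e.g. FBX files).
MODEL_LIBRARY = {
    "wall":  "models/wall.fbx",
    "floor": "models/floor.fbx",
    "shelf": "models/shelf.fbx",
}

def resolve_models(scene_structure):
    # Step S2323 sketch: map each component type recorded in the target
    # scene structure to its model asset in the preset library.
    return [
        MODEL_LIBRARY[c["type"]]
        for c in scene_structure["components"]
        if c["type"] in MODEL_LIBRARY
    ]
```

Inside an actual engine such as Unity, the resolved assets would then be instantiated and spliced according to the structural relation.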
In an optional embodiment, in step S233, rendering a plurality of three-dimensional virtual component models within a graphical user interface of a preset rendering engine, and displaying a three-dimensional virtual scene, includes the following method steps:
step S2331, splicing the plurality of three-dimensional virtual component models to obtain a virtual three-dimensional scene model corresponding to the three-dimensional virtual scene;
step S2332, rendering the virtual three-dimensional scene model in a graphical user interface of a preset rendering engine, and displaying the three-dimensional virtual scene.
As an exemplary embodiment, a preset rendering engine is utilized to splice a plurality of three-dimensional virtual component models according to the connection relation among the plurality of three-dimensional virtual component models, so as to obtain a virtual three-dimensional scene model corresponding to the three-dimensional virtual scene; further, rendering the virtual three-dimensional scene model in the graphical user interface by using a rendering tool in a preset rendering engine, and displaying the three-dimensional virtual scene. The three-dimensional virtual scene may be a scene picture or a scene video of the three-dimensional virtual scene.
It is easy to understand that, according to the technical scheme provided by the embodiment of the present application, without manually building a scene, the scene description information and the image to be referenced are used to represent the scene style generation requirement of the three-dimensional virtual scene, and the three-dimensional virtual scene is then automatically generated in a full-flow manner based on that requirement, so that the style diversity of the three-dimensional virtual scene is improved, and the efficiency and reusability of three-dimensional scene generation are improved.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide corresponding operation entries for the user to select authorization or rejection.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus a necessary general hardware platform, but that it may also be implemented by means of hardware. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk, optical disk), comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
Example 2
In the operating environment as in example 1, the present application provides another method of generating a three-dimensional virtual scene as shown in fig. 8. Fig. 8 is a flowchart of a method for generating a three-dimensional virtual scene according to embodiment 2 of the present application, where a graphical user interface is provided by a terminal device, and content displayed by the graphical user interface includes at least one information input control, as shown in fig. 8, and the method for generating a three-dimensional virtual scene includes:
step S801, in response to a first touch operation applied to a graphical user interface, inputting scene description information and an image to be referenced through an information input control, wherein the scene description information is used for determining a plurality of areas obtained by dividing in a three-dimensional virtual scene, and the image to be referenced is used for providing texture reference information;
step S802, responding to a second touch operation acting on a graphical user interface, carrying out scene analysis on scene description information and an image to be referred to obtain peripheral structure information, determining container placement information through the peripheral structure information, and rendering to obtain a three-dimensional virtual scene based on the peripheral structure information and the container placement information, wherein the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of a plurality of areas, texture information of the plurality of building components is matched with the texture reference information, and the container placement information is used for determining placement modes of the plurality of container components in the peripheral building structures.
The graphical user interface may be a graphical user interface of a client performing the method of generating a three-dimensional virtual scene described above. The information input control displayed by the graphical user interface is used for supporting input behaviors realized by a user through a first touch operation, and the input behaviors are used for determining the scene description information and the image to be referenced. When the second touch operation acting on the graphical user interface is detected, the client is triggered to execute the following method steps: performing scene analysis on the scene description information and the image to be referenced to obtain peripheral structure information, determining container placement information through the peripheral structure information, and rendering based on the peripheral structure information and the container placement information to obtain the three-dimensional virtual scene. The first touch operation may be a touch input operation. The second touch operation may be an operation of touch-clicking a confirm button or a submit button.
The method for generating the three-dimensional virtual scene can be applied to the following scenes: generating a three-dimensional virtual shop in the electronic commerce field, generating a three-dimensional virtual game scene in the electronic game field, generating a three-dimensional virtual factory for simulation analysis in the industrial field, generating a three-dimensional virtual classroom in the teaching/training/scientific research field, generating a three-dimensional virtual urban area in the urban planning field, generating a three-dimensional virtual exhibition hall in the science and technology or art field, and the like. The scene description information is used for determining a plurality of scene areas corresponding to a three-dimensional virtual scene in the application scene, for example, a plurality of shop areas in a three-dimensional virtual shop and a plurality of game terrain areas in a three-dimensional virtual game scene. The image to be referenced is used for providing texture information corresponding to peripheral building structures (such as floors, walls, ceilings and the like) of the plurality of scene areas, and the texture information is obtained by adapting the image to be referenced. That is, the above-described scene description information and the image to be referenced can be used to determine the scene style of the three-dimensional virtual scene. The container placement information is used for determining a placement mode of a plurality of container components (such as container components for placing commodity models in three-dimensional virtual shops) in the peripheral building structure in the application scene, and the placement mode of the plurality of container components can be represented by structural relations among the plurality of container components.
Specifically, performing scene analysis on the scene description information and the image to be referenced to obtain the peripheral structure information, determining the container placement information from the peripheral structure information, and rendering the three-dimensional virtual scene based on the peripheral structure information and the container placement information may further include other method steps; reference may be made to the related description in embodiment 1 of the present application, which will not be repeated here.
Thus, according to embodiment 2 of the present application, a user can specify the scene description information and the image to be referenced through touch operations on the graphical user interface, thereby determining the scene-style generation requirement of the three-dimensional virtual scene, and trigger the client to generate the three-dimensional virtual scene automatically; a scene picture or scene video of the generated three-dimensional virtual scene is then displayed on the graphical user interface. In this process, the user has a high degree of freedom and flexibility, and the three-dimensional virtual scene can be generated in a fully automated, end-to-end manner according to the scene-style generation requirement of the application scenario. This improves the style diversity of three-dimensional virtual scenes as well as the efficiency and reusability of three-dimensional scene generation, and facilitates application of the method provided by this embodiment in practical scenarios.
It should be noted that, the preferred implementation manner of this embodiment may be referred to the related description in embodiment 1, and will not be repeated here.
Example 3
In the operating environment of embodiment 1, the present application provides another method of generating a three-dimensional virtual scene, as shown in fig. 9. Fig. 9 is a flowchart of a method for generating a three-dimensional virtual scene according to embodiment 3 of the present application. As shown in fig. 9, the method includes:
Step S901: performing scene analysis on e-commerce scene description information and an image to be referenced to obtain e-commerce scene peripheral structure information, wherein the e-commerce scene description information is used to determine a plurality of areas divided within a three-dimensional virtual e-commerce scene, the image to be referenced is used to provide texture reference information, the e-commerce scene peripheral structure information is used to determine a plurality of building components to be used, the plurality of building components are used to build peripheral building structures of the plurality of areas, and the texture information of the plurality of building components is adapted to the texture reference information;
Step S902: determining e-commerce scene container placement information from the e-commerce scene peripheral structure information, wherein the e-commerce scene container placement information is used to determine how a plurality of container components are placed within the peripheral building structure;
Step S903: rendering a three-dimensional virtual e-commerce scene based on the e-commerce scene peripheral structure information and the e-commerce scene container placement information.
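Steps S901 to S903 can be sketched as a minimal Python pipeline. This is an illustrative assumption, not the patented implementation: texture extraction and placement logic are stubbed out, and every function and field name here is hypothetical.

```python
def parse_scene(description: dict, reference_image: bytes) -> dict:
    """Step S901 (sketch): derive peripheral structure info from the scene
    description and the image to be referenced. Texture extraction is stubbed."""
    return {
        "areas": description["areas"],
        "components": [f"wall_{a}" for a in description["areas"]] + ["floor", "ceiling"],
        "texture": {"source": "reference_image", "size": len(reference_image)},
    }

def place_containers(peripheral: dict) -> list:
    """Step S902 (sketch): one shelf container per area, anchored to its wall."""
    return [{"container": f"shelf_{a}", "anchor": f"wall_{a}"}
            for a in peripheral["areas"]]

def render(peripheral: dict, placements: list) -> dict:
    """Step S903 (sketch): combine both stages into a renderable scene record."""
    return {"components": peripheral["components"], "placements": placements}

peripheral = parse_scene({"areas": ["a", "b"]}, b"\x89PNG")
scene = render(peripheral, place_containers(peripheral))
```

The sketch makes the two-stage "peripheral structure first, internal placement second" order explicit: `place_containers` consumes only the output of `parse_scene`, never the raw inputs.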
In the e-commerce scenario, the image to be referenced is used to provide texture information corresponding to the peripheral building structure of the e-commerce scene (such as a shop floor, shop walls, and a shop ceiling), the texture information being adapted from the image to be referenced. That is, the e-commerce scene description information and the image to be referenced together determine the scene style of the three-dimensional virtual e-commerce scene. For example, if the scene style is "minimalist and fresh", a smaller number of area divisions is determined from the e-commerce scene description information, and the texture corresponding to the peripheral building structure is determined from the image to be referenced as a solid texture of a preset color. The e-commerce scene container placement information is used to determine how the container components used for placing commodity models in the three-dimensional virtual shop are placed, and the placement may be characterized by the structural relations among the plurality of container components.
The method for generating a three-dimensional virtual e-commerce scene provided by this embodiment may run on a client corresponding to the application scenario. The client determines the e-commerce scene description information and the image to be referenced from a preset database or from data entered by the user in real time, obtains the e-commerce scene peripheral structure information using the e-commerce scene description information and the image to be referenced, determines the e-commerce scene container placement information corresponding to a plurality of container components within the peripheral building structure based on the e-commerce scene peripheral structure information, and renders the three-dimensional virtual e-commerce scene from the e-commerce scene peripheral structure information and the e-commerce scene container placement information. The client then displays the three-dimensional virtual e-commerce scene on a corresponding graphical user interface and provides it to the user through an output device (such as a display screen or VR glasses).
The method for generating a three-dimensional virtual scene provided by this embodiment may also run on a server corresponding to the application scenario. The server may be an independent server or a server cluster that generates the three-dimensional virtual e-commerce scene according to the e-commerce scene description information and the image to be referenced supplied by the client. The server may also be a cloud server that interacts with the client in real time in a SaaS mode: it obtains the e-commerce scene peripheral structure information from the e-commerce scene description information and the image to be referenced supplied by the client, determines the e-commerce scene container placement information corresponding to a plurality of container components within the peripheral building structure based on the e-commerce scene peripheral structure information, and renders the three-dimensional virtual e-commerce scene from the e-commerce scene peripheral structure information and the e-commerce scene container placement information. The server then returns the three-dimensional virtual e-commerce scene to the client to be provided to the user.
According to the method for generating a three-dimensional virtual e-commerce scene described above, scene analysis is performed on the e-commerce scene description information and the image to be referenced to obtain the e-commerce scene peripheral structure information, wherein the e-commerce scene description information is used to determine a plurality of areas divided within the three-dimensional virtual e-commerce scene, the image to be referenced is used to provide texture reference information, the e-commerce scene peripheral structure information is used to determine a plurality of building components to be used, the plurality of building components are used to build peripheral building structures of the plurality of areas, and the texture information of the plurality of building components is adapted to the texture reference information; the e-commerce scene container placement information, which determines how a plurality of container components are placed within the peripheral building structure, is determined from the e-commerce scene peripheral structure information; and the three-dimensional virtual e-commerce scene is rendered based on the e-commerce scene peripheral structure information and the e-commerce scene container placement information.
Because the e-commerce scene description information and the reference image can represent the scene-style generation requirement of the three-dimensional virtual e-commerce scene, and the method adopts a two-stage construction process of "peripheral structure first, internal placement second", the three-dimensional virtual e-commerce scene is generated automatically from that scene-style generation requirement. This achieves the technical effects of improving the style diversity of three-dimensional virtual e-commerce scenes and improving the efficiency and reusability of three-dimensional scene generation, and solves the technical problems of low generation efficiency, poor reusability, and poor style diversity caused by the prior art's reliance on manual generation of three-dimensional scenes of different styles.
It should be noted that, the preferred implementation manner of this embodiment may be referred to the related description in embodiment 1, and will not be repeated here.
Example 4
According to the embodiment of the application, an embodiment of a device for implementing the method for generating the three-dimensional virtual scene is also provided. Fig. 10 is a schematic structural diagram of an apparatus for generating a three-dimensional virtual scene according to embodiment 4 of the present application, as shown in fig. 10, the apparatus includes: the analyzing module 1001 is configured to perform scene analysis on scene description information and an image to be referred to obtain peripheral structure information, where the scene description information is used to determine a plurality of areas obtained by dividing in a three-dimensional virtual scene, the image to be referred to is used to provide texture reference information, the peripheral structure information is used to determine a plurality of building components to be used, the plurality of building components are used to build peripheral building structures of the plurality of areas, and the texture information of the plurality of building components is adapted to the texture reference information; a determining module 1002, configured to determine, according to the peripheral structure information, container placement information, where the container placement information is used to determine a placement manner of a plurality of container components in the peripheral building structure; and a rendering module 1003, configured to render the three-dimensional virtual scene based on the peripheral structure information and the container placement information.
Optionally, the parsing module 1001 is further configured to: determining the number of the areas of the plurality of areas and the connection relation between the different areas by using the scene description information; determining component identifications and component numbers of a plurality of building components according to the number of the areas, and determining structural relations among the plurality of building components according to the connection relations; extracting texture reference information from an image to be referenced; peripheral structure information is generated based on the component identification, the number of components, the structural relationship, and the texture reference information.
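The parsing flow described above — deriving component identifiers, component counts, and structural relations from the area count and connection relations, then bundling them with the texture reference — might look roughly like the following Python sketch. The mapping from areas to component counts is an assumption made purely for illustration, not the patented rule.

```python
def build_peripheral_structure(area_count: int, connections: list, texture_ref: str) -> dict:
    """Sketch of the parsing module: area count -> component identifiers and
    counts; connection relations -> structural relations between components."""
    # Assumed mapping: one floor and ceiling per area, four walls per area,
    # with each connection turning two shared walls into openings.
    component_counts = {
        "floor": area_count,
        "ceiling": area_count,
        "wall": area_count * 4 - 2 * len(connections),
    }
    structural_relations = [
        {"between": pair, "relation": "shared_wall"} for pair in connections
    ]
    # Bundle identifiers, counts, relations, and texture reference together.
    return {
        "components": component_counts,
        "relations": structural_relations,
        "texture_reference": texture_ref,
    }

info = build_peripheral_structure(3, [(0, 1), (1, 2)], "reference.png")
```

Here the returned dictionary plays the role of the peripheral structure information handed to the determining module.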
Optionally, the parsing module 1001 is further configured to: perform scene recognition on the image to be referenced to obtain a recognition result, wherein the recognition result is used to determine the display areas corresponding to the plurality of building components in the image to be referenced; and perform texture generation and expansion on the recognition result to obtain the texture reference information.
Optionally, the parsing module 1001 is further configured to: determining virtual three-dimensional grid information corresponding to a plurality of areas based on the component identifications, the component numbers and the structural relation; and generating peripheral structure information by adopting the virtual three-dimensional grid information and the texture reference information.
Optionally, the determining module 1002 is further configured to: acquiring container materials, wherein the container materials are used for determining display patterns of a plurality of container components; selecting a plurality of container components by utilizing container materials; and configuring the placement positions of the plurality of container components based on the peripheral structure information to obtain container placement information.
Optionally, the determining module 1002 is further configured to: generate a plurality of peripheral building structure styles based on the peripheral structure information; select a target peripheral building structure style from the plurality of peripheral building structure styles; and configure the placement positions of the plurality of container components according to the target peripheral building structure style to obtain the container placement information.
Optionally, the rendering module 1003 is further configured to: determining target scene structure information based on the peripheral structure information and the container placement information; determining a plurality of three-dimensional virtual component models to be displayed in a preset rendering engine through the target scene structure information; rendering the three-dimensional virtual component model in a graphical user interface of a preset rendering engine, and displaying the three-dimensional virtual scene.
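The model-determination step of the rendering module — resolving the target scene structure information to a set of three-dimensional virtual component models — can be sketched as follows. The asset paths and model-library lookup are assumptions; a real system would pass the loading information to a preset rendering engine rather than returning it.

```python
def load_component_models(scene_structure: dict, model_library: dict) -> list:
    """Sketch: resolve each component in the target scene structure to a
    3D model asset, producing the model loading information."""
    loading_info = []
    for comp in scene_structure["components"]:
        asset = model_library.get(comp["type"])
        if asset is None:
            continue  # skip components with no registered model
        loading_info.append({"asset": asset, "transform": comp["transform"]})
    return loading_info

library = {"wall": "assets/wall.glb", "shelf": "assets/shelf.glb"}  # hypothetical assets
structure = {"components": [
    {"type": "wall", "transform": (0, 0, 0)},
    {"type": "plant", "transform": (1, 0, 0)},  # no model registered
    {"type": "shelf", "transform": (2, 0, 0)},
]}
loading_info = load_component_models(structure, library)
```

Each entry of `loading_info` corresponds to one three-dimensional virtual component model to be displayed.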
Optionally, the rendering module 1003 is further configured to: determining initial scene structure information based on the peripheral structure information and the container placement information; and performing texture style conversion on the initial scene structure information to obtain target scene structure information.
Optionally, the rendering module 1003 is further configured to: importing the target scene structure information into a preset rendering engine; obtaining model loading information corresponding to the target scene structure information in a preset rendering engine; a plurality of three-dimensional virtual component models to be exposed are determined based on the model loading information.
Optionally, the rendering module 1003 is further configured to: splicing the plurality of three-dimensional virtual component models to obtain a virtual three-dimensional scene model corresponding to the three-dimensional virtual scene; rendering the virtual three-dimensional scene model in a graphical user interface of a preset rendering engine, and displaying the three-dimensional virtual scene.
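Splicing the plurality of three-dimensional virtual component models into one virtual three-dimensional scene model can be illustrated by merging vertex and face lists while offsetting face indices per component. This is a simplified mesh-merge sketch under assumed data layouts, not the patented implementation.

```python
def splice_models(component_models: list) -> dict:
    """Sketch: merge per-component meshes into a single scene model by
    translating each component's vertices by its placement offset and
    re-indexing its faces into the combined vertex list."""
    vertices, faces = [], []
    for model in component_models:
        base = len(vertices)               # index offset for this component
        ox, oy, oz = model["offset"]
        vertices += [(x + ox, y + oy, z + oz) for x, y, z in model["vertices"]]
        faces += [tuple(i + base for i in f) for f in model["faces"]]
    return {"vertices": vertices, "faces": faces}

tri = {"vertices": [(0, 0, 0), (1, 0, 0), (0, 1, 0)], "faces": [(0, 1, 2)]}
merged = splice_models([
    {**tri, "offset": (0, 0, 0)},
    {**tri, "offset": (5, 0, 0)},   # second component placed 5 units away
])
```

The merged model would then be handed to the rendering engine's graphical user interface for display.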
According to the apparatus for generating a three-dimensional virtual scene provided by this embodiment, scene analysis is performed on the scene description information and the image to be referenced to obtain the peripheral structure information, wherein the scene description information is used to determine a plurality of areas divided within the three-dimensional virtual scene, the image to be referenced is used to provide texture reference information, the peripheral structure information is used to determine a plurality of building components to be used, the plurality of building components are used to build peripheral building structures of the plurality of areas, and the texture information of the plurality of building components is adapted to the texture reference information; the container placement information, which determines how a plurality of container components are placed within the peripheral building structure, is determined from the peripheral structure information; and the three-dimensional virtual scene is rendered based on the peripheral structure information and the container placement information.
Because the scene description information and the reference image can represent the scene-style generation requirement of the three-dimensional virtual scene, and the apparatus adopts a two-stage construction process of "peripheral structure first, internal placement second", the three-dimensional virtual scene is generated automatically from that scene-style generation requirement. This achieves the technical effects of improving the style diversity of three-dimensional virtual scenes and improving the efficiency and reusability of three-dimensional scene generation, and solves the technical problems of low generation efficiency, poor reusability, and poor style diversity caused by the prior art's reliance on manual generation of three-dimensional scenes of different styles.
Here, the parsing module 1001, the determining module 1002, and the rendering module 1003 correspond to steps S21 to S23 in embodiment 1; the three modules implement the same examples and application scenarios as their corresponding steps, but are not limited to the disclosure of embodiment 1. It should be noted that the above modules or units may be hardware components, or software components stored in a memory (for example, the memory 104) and executed by one or more processors (for example, the processors 102a, 102b, ..., 102n); the above modules may also run as part of the apparatus in the computer terminal 10 provided in embodiment 1.
It should be noted that, the preferred implementation manner of this embodiment may be referred to the related description in embodiment 1 or embodiment 2, and will not be described herein.
Example 5
According to the embodiment of the application, there is further provided a computer terminal, which may be any one of the computer terminal devices in the computer terminal group. Alternatively, in the present embodiment, the above-described computer terminal may be replaced with a terminal device such as a mobile terminal.
Alternatively, in this embodiment, the above-mentioned computer terminal may be located in at least one network device among a plurality of network devices of the computer network.
In this embodiment, the above-mentioned computer terminal may execute program codes of the following steps in the method for generating a three-dimensional virtual scene: scene description information and images to be referred are subjected to scene analysis to obtain peripheral structure information, wherein the scene description information is used for determining a plurality of areas which are obtained by dividing in a three-dimensional virtual scene, the images to be referred are used for providing texture reference information, the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of the plurality of areas, and the texture information of the plurality of building components is matched with the texture reference information; determining container placement information through the peripheral structure information, wherein the container placement information is used for determining placement modes of a plurality of container components in the peripheral building structure; rendering based on the peripheral structure information and the container placement information to obtain a three-dimensional virtual scene.
Alternatively, fig. 11 is a block diagram of a computer terminal according to embodiment 5 of the present application, and as shown in fig. 11, the computer terminal 110 may include: one or more (only one is shown) processors 1102, memory 1104, a memory controller 1106, and a peripheral interface 1108, wherein the peripheral interface 1108 connects to a radio frequency module, an audio module, and a display.
The memory 1104 may be used to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for generating a three-dimensional virtual scene in the embodiments of the present application, and the processor executes the software programs and modules stored in the memory, thereby performing various functional applications and data processing, that is, implementing the method for generating a three-dimensional virtual scene described above. Memory 1104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 1104 may further include memory located remotely from the processor, which may be connected to computer terminal 110 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 1102 may call the information stored in the memory and the application program through the transmission device to perform the following steps: scene description information and images to be referred are subjected to scene analysis to obtain peripheral structure information, wherein the scene description information is used for determining a plurality of areas which are obtained by dividing in a three-dimensional virtual scene, the images to be referred are used for providing texture reference information, the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of the plurality of areas, and the texture information of the plurality of building components is matched with the texture reference information; determining container placement information through the peripheral structure information, wherein the container placement information is used for determining placement modes of a plurality of container components in the peripheral building structure; rendering based on the peripheral structure information and the container placement information to obtain a three-dimensional virtual scene.
Optionally, the processor 1102 may further execute program code for: determining the number of the areas of the plurality of areas and the connection relation between the different areas by using the scene description information; determining component identifications and component numbers of a plurality of building components according to the number of the areas, and determining structural relations among the plurality of building components according to the connection relations; extracting texture reference information from an image to be referenced; peripheral structure information is generated based on the component identification, the number of components, the structural relationship, and the texture reference information.
Optionally, the processor 1102 may further execute program code for: performing scene recognition on the image to be referred to obtain a recognition result, wherein the recognition result is used for determining corresponding display areas of the building components in the image to be referred to; and generating and expanding the texture of the identification result to obtain texture reference information.
Optionally, the processor 1102 may further execute program code for: determining virtual three-dimensional grid information corresponding to a plurality of areas based on the component identifications, the component numbers and the structural relation; and generating peripheral structure information by adopting the virtual three-dimensional grid information and the texture reference information.
Optionally, the processor 1102 may further execute program code for: acquiring container materials, wherein the container materials are used for determining display patterns of a plurality of container components; selecting a plurality of container components by utilizing container materials; and configuring the placement positions of the plurality of container components based on the peripheral structure information to obtain container placement information.
Optionally, the processor 1102 may further execute program code for: generating a plurality of peripheral building structure styles based on the peripheral structure information; selecting a target peripheral building structure pattern from a plurality of peripheral building structure patterns; and configuring the placement positions of the plurality of container assemblies according to the target peripheral building structure style to obtain container placement information.
Optionally, the processor 1102 may further execute program code for: determining target scene structure information based on the peripheral structure information and the container placement information; determining a plurality of three-dimensional virtual component models to be displayed in a preset rendering engine through the target scene structure information; rendering the three-dimensional virtual component model in a graphical user interface of a preset rendering engine, and displaying the three-dimensional virtual scene.
Optionally, the processor 1102 may further execute program code for: determining initial scene structure information based on the peripheral structure information and the container placement information; and performing texture style conversion on the initial scene structure information to obtain target scene structure information.
Optionally, the processor 1102 may further execute program code for: importing the target scene structure information into a preset rendering engine; obtaining model loading information corresponding to the target scene structure information in a preset rendering engine; a plurality of three-dimensional virtual component models to be exposed are determined based on the model loading information.
Optionally, the processor 1102 may further execute program code for: splicing the plurality of three-dimensional virtual component models to obtain a virtual three-dimensional scene model corresponding to the three-dimensional virtual scene; rendering the virtual three-dimensional scene model in a graphical user interface of a preset rendering engine, and displaying the three-dimensional virtual scene.
The processor 1102 may call the information stored in the memory and the application program through the transmission device to perform the following steps: responding to a first touch operation acting on a graphical user interface, and inputting scene description information and an image to be referenced through an information input control, wherein the scene description information is used for determining a plurality of areas obtained by dividing in a three-dimensional virtual scene, and the image to be referenced is used for providing texture reference information; responding to a second touch operation acting on the graphical user interface, carrying out scene analysis on scene description information and an image to be referred to obtain peripheral structure information, determining container placement information through the peripheral structure information, and rendering to obtain a three-dimensional virtual scene based on the peripheral structure information and the container placement information, wherein the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of a plurality of areas, texture information of the plurality of building components is matched with texture reference information, and the container placement information is used for determining placement modes of the plurality of container components in the peripheral building structures.
The processor 1102 may call the information stored in the memory and the application program through the transmission device to perform the following steps: performing scene analysis on e-commerce scene description information and an image to be referenced to obtain e-commerce scene peripheral structure information, wherein the e-commerce scene description information is used to determine a plurality of areas divided within a three-dimensional virtual e-commerce scene, the image to be referenced is used to provide texture reference information, the e-commerce scene peripheral structure information is used to determine a plurality of building components to be used, the plurality of building components are used to build peripheral building structures of the plurality of areas, and the texture information of the plurality of building components is adapted to the texture reference information; determining e-commerce scene container placement information from the e-commerce scene peripheral structure information, wherein the e-commerce scene container placement information is used to determine how a plurality of container components are placed within the peripheral building structure; and rendering a three-dimensional virtual e-commerce scene based on the e-commerce scene peripheral structure information and the e-commerce scene container placement information.
By adopting the embodiment of the application, the computer terminal for generating the three-dimensional virtual scene is provided, and the peripheral structure information is obtained by carrying out scene analysis on scene description information and images to be referred, wherein the scene description information is used for determining a plurality of areas obtained by dividing in the three-dimensional virtual scene, the images to be referred are used for providing texture reference information, the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of the plurality of areas, and the texture information of the plurality of building components is matched with the texture reference information; determining container placement information through the peripheral structure information, wherein the container placement information is used for determining placement modes of a plurality of container components in the peripheral building structure; and rendering based on the peripheral structure information and the container placement information to obtain a three-dimensional virtual scene. 
Because the scene description information and the reference image can represent the scene-style generation requirement of the three-dimensional virtual scene, and a two-stage construction process of "peripheral structure first, internal placement second" is adopted, the three-dimensional virtual scene is generated automatically from that scene-style generation requirement. This achieves the technical effects of improving the style diversity of three-dimensional virtual scenes and improving the efficiency and reusability of three-dimensional scene generation, and solves the technical problems of low generation efficiency, poor reusability, and poor style diversity caused by the prior art's reliance on manual generation of three-dimensional scenes of different styles.
It will be appreciated by those skilled in the art that the configuration shown in fig. 11 is only illustrative. The computer terminal may be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, or a mobile internet device (Mobile Internet Device, MID). Fig. 11 does not limit the structure of the computer terminal; for example, the computer terminal 110 may include more or fewer components (e.g., network interfaces, display devices) than shown in fig. 11, or have a configuration different from that shown in fig. 11.
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments may be completed by a program instructing the relevant hardware of a terminal device. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
Example 6
According to an embodiment of the present application, there is also provided a computer-readable storage medium. Alternatively, in this embodiment, the storage medium may be used to store program codes executed by the method for generating a three-dimensional virtual scene provided in embodiment 1, embodiment 2, or embodiment 3.
Alternatively, in this embodiment, the storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: performing scene parsing on scene description information and an image to be referenced to obtain peripheral structure information, wherein the scene description information is used for determining a plurality of areas obtained by division in a three-dimensional virtual scene, the image to be referenced is used for providing texture reference information, the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building a peripheral building structure of the plurality of areas, and the texture information of the plurality of building components matches the texture reference information; determining container placement information through the peripheral structure information, wherein the container placement information is used for determining placement modes of a plurality of container components in the peripheral building structure; and rendering based on the peripheral structure information and the container placement information to obtain a three-dimensional virtual scene.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: determining the number of the plurality of areas and the connection relations between different areas by using the scene description information; determining component identifications and component numbers of the plurality of building components according to the number of areas, and determining structural relations among the plurality of building components according to the connection relations; extracting the texture reference information from the image to be referenced; and generating the peripheral structure information based on the component identifications, the component numbers, the structural relations, and the texture reference information.
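A minimal sketch of this parsing step, assuming regions and their connection relations arrive as plain lists (all names and the wall/door counting rule are illustrative assumptions, not the embodiment's implementation):

```python
# Illustrative mapping from the parsed scene description -- region count
# and region connections -- onto building component identifications,
# component numbers, and structural relations.

def derive_components(region_names, connections):
    """Each region contributes a floor and four walls; each connection
    between two regions merges their two shared walls into one doorway."""
    counts = {
        "floor": len(region_names),
        "wall": 4 * len(region_names) - 2 * len(connections),
        "door": len(connections),
    }
    identifications = [f"floor:{r}" for r in region_names]
    identifications += [f"door:{a}-{b}" for a, b in connections]
    structural_relations = [("adjacent", a, b) for a, b in connections]
    return identifications, counts, structural_relations

ids, counts, rels = derive_components(
    ["entrance", "hall", "checkout"],
    [("entrance", "hall"), ("hall", "checkout")],
)
print(counts)  # {'floor': 3, 'wall': 8, 'door': 2}
```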
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: performing scene recognition on the image to be referenced to obtain a recognition result, wherein the recognition result is used for determining the display areas corresponding to the plurality of building components in the image to be referenced; and performing texture generation and expansion on the recognition result to obtain the texture reference information.
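A hedged sketch of the "texture generation and expansion" idea: once scene recognition has located the display area of a building component in the reference image, the cropped patch must be expanded into a texture large enough to cover the component. Here expansion is done by naive tiling; a real system would use a more sophisticated texture-synthesis method:

```python
# Purely illustrative: tile a small recognized patch (a 2-D list of pixel
# values) out to the requested texture size.

def expand_texture(patch, out_h, out_w):
    """Expand a patch to out_h x out_w by wrapping indices (tiling)."""
    ph, pw = len(patch), len(patch[0])
    return [[patch[y % ph][x % pw] for x in range(out_w)]
            for y in range(out_h)]

patch = [[1, 2], [3, 4]]
texture = expand_texture(patch, 4, 4)
print(texture[3][3])  # 4
```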
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: determining virtual three-dimensional grid information corresponding to a plurality of areas based on the component identifications, the component numbers and the structural relation; and generating peripheral structure information by adopting the virtual three-dimensional grid information and the texture reference information.
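One way to picture the virtual three-dimensional grid information mentioned above (the cell layout and all names are assumptions made for illustration): regions are assigned cells on a grid, and the peripheral structure information pairs each occupied cell with the texture reference information.

```python
# Assumed representation: each region occupies one (x, y, z) cell of a
# virtual 3-D grid; peripheral structure information is generated by
# combining the grid with texture reference information.

def layout_grid(region_names, cols=2):
    grid = {}
    for i, name in enumerate(region_names):
        grid[name] = (i % cols, i // cols, 0)  # (x, y, z) cell
    return grid

def peripheral_from_grid(grid, texture):
    return [{"region": r, "cell": cell, "texture": texture}
            for r, cell in sorted(grid.items())]

g = layout_grid(["a", "b", "c"])
info = peripheral_from_grid(g, "brick")
print(g["c"])  # (0, 1, 0)
```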
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: acquiring container materials, wherein the container materials are used for determining display styles of the plurality of container components; selecting the plurality of container components by using the container materials; and configuring the placement positions of the plurality of container components based on the peripheral structure information to obtain the container placement information.
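The selection-then-placement step could look like the following sketch (the material-library layout and the slot names are hypothetical, chosen only to make the flow concrete):

```python
# Illustrative only: select container components from a material library
# by display style, then assign each one a placement slot derived from
# the peripheral structure.

def select_containers(material_library, wanted_style, n):
    pool = [m for m in material_library if m["style"] == wanted_style]
    return (pool * n)[:n]  # cycle through the matching materials

def configure_placement(peripheral_slots, containers):
    return [{"slot": s, "container": c["name"]}
            for s, c in zip(peripheral_slots, containers)]

library = [
    {"name": "shelf_a", "style": "industrial"},
    {"name": "rack_b", "style": "industrial"},
    {"name": "table_c", "style": "rustic"},
]
chosen = select_containers(library, "industrial", 3)
placement = configure_placement(["north_wall", "east_wall", "center"], chosen)
print([p["container"] for p in placement])  # ['shelf_a', 'rack_b', 'shelf_a']
```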
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: generating a plurality of peripheral building structure styles based on the peripheral structure information; selecting a target peripheral building structure style from the plurality of peripheral building structure styles; and configuring the placement positions of the plurality of container components according to the target peripheral building structure style to obtain the container placement information.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: determining target scene structure information based on the peripheral structure information and the container placement information; determining a plurality of three-dimensional virtual component models to be displayed in a preset rendering engine through the target scene structure information; rendering the three-dimensional virtual component model in a graphical user interface of a preset rendering engine, and displaying the three-dimensional virtual scene.
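The rendering stage described above reduces to three steps: merge the two kinds of structure information into a target scene description, resolve that description to a list of component models, and hand the models to the rendering engine. A stubbed sketch (all names are assumptions; a real implementation would call an actual engine's import and draw APIs):

```python
# Hypothetical three-step rendering stage, with the engine replaced by a
# stub that just reports what it would draw.

def build_target_scene(peripheral_info, placement_info):
    return {"structure": peripheral_info, "placements": placement_info}

def load_models(target_scene):
    """Resolve the scene description to three-dimensional component models."""
    models = [f"model/{c}" for c in target_scene["structure"]]
    models += [f"model/{p}" for p in target_scene["placements"]]
    return models

def render_models(models):
    # A real implementation would import the models into the preset
    # rendering engine and draw them in its graphical user interface.
    return f"rendered {len(models)} models"

scene = build_target_scene(["wall_a", "wall_b"], ["shelf_1"])
print(render_models(load_models(scene)))  # rendered 3 models
```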
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: determining initial scene structure information based on the peripheral structure information and the container placement information; and performing texture style conversion on the initial scene structure information to obtain target scene structure information.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: importing the target scene structure information into a preset rendering engine; obtaining model loading information corresponding to the target scene structure information in a preset rendering engine; a plurality of three-dimensional virtual component models to be exposed are determined based on the model loading information.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: splicing the plurality of three-dimensional virtual component models to obtain a virtual three-dimensional scene model corresponding to the three-dimensional virtual scene; rendering the virtual three-dimensional scene model in a graphical user interface of a preset rendering engine, and displaying the three-dimensional virtual scene.
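The splicing step can be sketched as follows, under the assumption that each component model is a vertex list with its own local origin plus a placement offset (this data layout is illustrative, not the embodiment's actual model format): the component models are offset into a shared coordinate frame and concatenated into one scene model before rendering.

```python
# Illustrative splicing of component models into a single virtual
# three-dimensional scene model.

def splice(models):
    """Each model is (vertices, offset): a list of (x, y, z) vertices in
    local coordinates and an (ox, oy, oz) placement offset."""
    scene_vertices = []
    for verts, (ox, oy, oz) in models:
        scene_vertices.extend((x + ox, y + oy, z + oz) for x, y, z in verts)
    return scene_vertices

wall = ([(0, 0, 0), (1, 0, 0)], (0, 0, 0))
shelf = ([(0, 0, 0)], (5, 0, 0))
scene_model = splice([wall, shelf])
print(scene_model)  # [(0, 0, 0), (1, 0, 0), (5, 0, 0)]
```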
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: responding to a first touch operation acting on a graphical user interface, and inputting scene description information and an image to be referenced through an information input control, wherein the scene description information is used for determining a plurality of areas obtained by dividing in a three-dimensional virtual scene, and the image to be referenced is used for providing texture reference information; responding to a second touch operation acting on the graphical user interface, carrying out scene analysis on scene description information and an image to be referred to obtain peripheral structure information, determining container placement information through the peripheral structure information, and rendering to obtain a three-dimensional virtual scene based on the peripheral structure information and the container placement information, wherein the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of a plurality of areas, texture information of the plurality of building components is matched with texture reference information, and the container placement information is used for determining placement modes of the plurality of container components in the peripheral building structures.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: performing scene parsing on e-commerce scene description information and an image to be referenced to obtain e-commerce scene peripheral structure information, wherein the e-commerce scene description information is used for determining a plurality of areas obtained by division in a three-dimensional virtual e-commerce scene, the image to be referenced is used for providing texture reference information, the e-commerce scene peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building a peripheral building structure of the plurality of areas, and the texture information of the plurality of building components matches the texture reference information; determining e-commerce scene container placement information through the e-commerce scene peripheral structure information, wherein the e-commerce scene container placement information is used for determining placement modes of a plurality of container components in the peripheral building structure; and rendering based on the e-commerce scene peripheral structure information and the e-commerce scene container placement information to obtain the three-dimensional virtual e-commerce scene.
The foregoing embodiment numbers of the present application are merely for description and do not represent the relative merits of the embodiments.
In the foregoing embodiments of the present application, each embodiment is described with its own emphasis. For any part not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The apparatus embodiments described above are merely exemplary. For example, the division of units is merely a logical function division; in actual implementation there may be other division manners, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a ROM, a RAM, a removable hard disk, a magnetic disk, or an optical disk.
The foregoing descriptions are merely preferred embodiments of the present application. It should be noted that those of ordinary skill in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also be regarded as falling within the protection scope of the present application.

Claims (15)

1. A method of generating a three-dimensional virtual scene, comprising:
performing scene parsing on scene description information and an image to be referenced to obtain peripheral structure information, wherein the scene description information is used for determining a plurality of areas obtained by division in the three-dimensional virtual scene, the image to be referenced is used for providing texture reference information, the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building a peripheral building structure of the plurality of areas, and texture information of the plurality of building components matches the texture reference information;
determining container placement information according to the peripheral structure information, wherein the container placement information is used for determining placement modes of a plurality of container components in the peripheral building structure;
and rendering the three-dimensional virtual scene based on the peripheral structure information and the container placement information.
2. The method of claim 1, wherein performing scene parsing on the scene description information and the image to be referenced to obtain the peripheral structure information comprises:
determining the number of the areas of the plurality of areas and the connection relation between different areas by using the scene description information;
determining component identifications and component numbers of the plurality of building components according to the area number, and determining structural relations among the plurality of building components according to the connection relations;
extracting the texture reference information from the image to be referenced;
generating the peripheral structure information based on the component identification, the number of components, the structural relationship, and the texture reference information.
3. The method of claim 2, wherein extracting the texture reference information from the image to be referenced comprises:
performing scene recognition on the image to be referenced to obtain a recognition result, wherein the recognition result is used for determining display areas corresponding to the plurality of building components in the image to be referenced;
and performing texture generation and expansion on the recognition result to obtain the texture reference information.
4. The method of claim 2, wherein generating the peripheral structure information based on the component identification, the number of components, the structural relationship, and the texture reference information comprises:
determining virtual three-dimensional grid information corresponding to the plurality of areas based on the component identifications, the component numbers and the structural relation;
and generating the peripheral structure information by adopting the virtual three-dimensional grid information and the texture reference information.
5. The method of claim 1, wherein determining the container placement information from the peripheral structure information comprises:
acquiring container materials, wherein the container materials are used for determining display styles of the plurality of container components;
selecting the plurality of container components using the container material;
and configuring the placement positions of the plurality of container components based on the peripheral structure information to obtain the container placement information.
6. The method of claim 5, wherein configuring the placement positions of the plurality of container components based on the peripheral structure information to obtain the container placement information comprises:
generating a plurality of peripheral building structure styles based on the peripheral structure information;
selecting a target peripheral building structure style from the plurality of peripheral building structure styles;
and configuring the placement positions of the plurality of container components according to the target peripheral building structure style to obtain the container placement information.
7. The method of claim 1, wherein rendering the three-dimensional virtual scene based on the peripheral structure information and the container placement information comprises:
determining target scene structure information based on the peripheral structure information and the container placement information;
determining a plurality of three-dimensional virtual component models to be displayed in a preset rendering engine through the target scene structure information;
rendering the plurality of three-dimensional virtual component models in a graphical user interface of the preset rendering engine, and displaying the three-dimensional virtual scene.
8. The method of claim 7, wherein determining the target scene structure information based on the peripheral structure information and the container placement information comprises:
determining initial scene structure information based on the peripheral structure information and the container placement information;
and performing texture style conversion on the initial scene structure information to obtain the target scene structure information.
9. The method of claim 7, wherein determining the plurality of three-dimensional virtual component models to be exposed in the preset rendering engine from the target scene structure information comprises:
importing the target scene structure information into the preset rendering engine;
obtaining model loading information corresponding to the target scene structure information in the preset rendering engine;
and determining the plurality of three-dimensional virtual component models to be displayed based on the model loading information.
10. The method of claim 9, wherein rendering the plurality of three-dimensional virtual component models within the graphical user interface of the preset rendering engine and displaying the three-dimensional virtual scene comprises:
splicing the plurality of three-dimensional virtual component models to obtain a virtual three-dimensional scene model corresponding to the three-dimensional virtual scene;
rendering the virtual three-dimensional scene model in a graphical user interface of the preset rendering engine, and displaying the three-dimensional virtual scene.
11. A method for generating a three-dimensional virtual scene, wherein a graphical user interface is provided by a terminal device, and the content displayed by the graphical user interface at least comprises an information input control, the method comprising:
responding to a first touch operation acting on the graphical user interface, and inputting scene description information and an image to be referenced through the information input control, wherein the scene description information is used for determining a plurality of areas obtained by dividing in a three-dimensional virtual scene, and the image to be referenced is used for providing texture reference information;
responding to a second touch operation acting on the graphical user interface, carrying out scene analysis on the scene description information and the image to be referred to obtain peripheral structure information, determining container placement information through the peripheral structure information, and rendering to obtain the three-dimensional virtual scene based on the peripheral structure information and the container placement information, wherein the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building peripheral building structures of the plurality of areas, texture information of the plurality of building components is matched with the texture reference information, and the container placement information is used for determining placement modes of the plurality of container components in the peripheral building structures.
12. A method of generating a three-dimensional virtual scene, comprising:
performing scene parsing on e-commerce scene description information and an image to be referenced to obtain e-commerce scene peripheral structure information, wherein the e-commerce scene description information is used for determining a plurality of areas obtained by division in a three-dimensional virtual e-commerce scene, the image to be referenced is used for providing texture reference information, the e-commerce scene peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building a peripheral building structure of the plurality of areas, and texture information of the plurality of building components matches the texture reference information;
determining e-commerce scene container placement information through the e-commerce scene peripheral structure information, wherein the e-commerce scene container placement information is used for determining placement modes of a plurality of container components in the peripheral building structure;
and rendering based on the e-commerce scene peripheral structure information and the e-commerce scene container placement information to obtain the three-dimensional virtual e-commerce scene.
13. An apparatus for generating a three-dimensional virtual scene, comprising:
the parsing module, configured to perform scene parsing on scene description information and an image to be referenced to obtain peripheral structure information, wherein the scene description information is used for determining a plurality of areas obtained by division in a three-dimensional virtual scene, the image to be referenced is used for providing texture reference information, the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building a peripheral building structure of the plurality of areas, and texture information of the plurality of building components matches the texture reference information;
The determining module is used for determining container placement information according to the peripheral structure information, wherein the container placement information is used for determining placement modes of a plurality of container components in the peripheral building structure;
and the rendering module is used for rendering the three-dimensional virtual scene based on the peripheral structure information and the container placement information.
14. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored executable program, wherein the executable program when run controls a device in which the computer readable storage medium is located to perform the method of generating a three-dimensional virtual scene according to any one of claims 1 to 12.
15. A system for generating a three-dimensional virtual scene, comprising:
a processor;
a memory, coupled to the processor, for providing instructions to the processor to process the following processing steps:
performing scene parsing on scene description information and an image to be referenced to obtain peripheral structure information, wherein the scene description information is used for determining a plurality of areas obtained by division in a three-dimensional virtual scene, the image to be referenced is used for providing texture reference information, the peripheral structure information is used for determining a plurality of building components to be used, the plurality of building components are used for building a peripheral building structure of the plurality of areas, and texture information of the plurality of building components matches the texture reference information;
determining container placement information according to the peripheral structure information, wherein the container placement information is used for determining placement modes of a plurality of container components in the peripheral building structure;
and rendering the three-dimensional virtual scene based on the peripheral structure information and the container placement information.
CN202310379738.2A 2023-04-06 2023-04-06 Method, device, storage medium and system for generating three-dimensional virtual scene Pending CN116503550A (en)


Publications (1)

Publication Number Publication Date
CN116503550A true CN116503550A (en) 2023-07-28

Family

ID=87315943



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination