CN115564916A - Editing method and device of virtual scene, computer equipment and storage medium


Info

Publication number: CN115564916A
Application number: CN202211401315.8A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: scene, virtual scene, virtual, processed, determining
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Inventor: 李家辉 (Li Jiahui)
Current and original assignee: Netease Hangzhou Network Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202211401315.8A
Publication of CN115564916A


Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects > G06T17/05 Geographic models
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T19/00 Manipulating 3D models or images for computer graphics > G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The embodiment of the application discloses a method and an apparatus for editing a virtual scene, a computer device, and a storage medium. In the editing process of the virtual scene, when a position-specifying event for the virtual scene is detected, a specified scene position is determined in the virtual scene, wherein the virtual scene is configured with matching conditions for the virtual scene models that need to be set at different scene positions. A to-be-processed scene area is then determined from the virtual scene according to the specified scene position, and a plurality of target virtual scene models that satisfy the matching condition for the to-be-processed scene area are determined from a plurality of preset virtual scene models. A virtual scene module is formed based on the plurality of target virtual scene models and set in the to-be-processed area, which completes the scene editing of the to-be-processed area and can improve the editing efficiency of the virtual scene.

Description

Editing method and device of virtual scene, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for editing a virtual scene, a computer device, and a storage medium.
Background
When a virtual scene is edited and created, it is usually built from a complete map designed in advance: the complete map is divided into a plurality of map areas, a corresponding scene module is generated for each map area, and the plurality of scene modules together form the virtual scene.
In the related art, each scene module is composed of at least one scene model, and the scene modules are then spliced together manually to produce the virtual scene. Because the scene models in each scene module are combined into a fixed whole, each scene module can only express a single scene, which cannot satisfy the requirement for rich expression effects in virtual scenes.
Disclosure of Invention
The embodiment of the application provides a method and a device for editing a virtual scene, a computer device and a storage medium, which can improve the editing efficiency of the virtual scene.
The embodiment of the application provides a method for editing a virtual scene, which comprises the following steps:
in response to a position specification event for a virtual scene, determining a specified scene position in the virtual scene, wherein matching conditions of the virtual scene models required to be set at different scene positions are configured in the virtual scene;
determining a scene area to be processed from the virtual scene according to the designated scene position and a preset distance;
determining at least one target virtual scene model meeting the matching condition with the scene area to be processed from a plurality of preset virtual scene models;
and forming a virtual scene module based on the at least one target virtual scene model, setting the virtual scene module in the region to be processed, and generating a virtual scene corresponding to the region to be processed.
Correspondingly, an embodiment of the present application further provides an editing apparatus for a virtual scene, including:
a first determination unit, configured to determine a designated scene position in a virtual scene in response to a position designation event for the virtual scene, where matching conditions of virtual scene models that need to be set for different scene positions are configured in the virtual scene;
the second determining unit is used for determining a scene area to be processed from the virtual scene according to the designated scene position and a preset distance;
a third determining unit, configured to determine, from a plurality of preset virtual scene models, at least one target virtual scene model that satisfies the matching condition with the to-be-processed scene region;
and the generating unit is used for forming a virtual scene module based on the at least one target virtual scene model, setting the virtual scene module in the to-be-processed area and generating a virtual scene corresponding to the to-be-processed area.
Correspondingly, an embodiment of the present application further provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for editing a virtual scene provided in any embodiment of the present application.
Correspondingly, the embodiment of the application also provides a storage medium, wherein the storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the editing method of the virtual scene.
In the editing process of the virtual scene, when a position-specifying event for the virtual scene is detected, a specified scene position is determined in the virtual scene, wherein matching conditions for the virtual scene models that need to be set at different scene positions are configured in the virtual scene. A to-be-processed scene area is then determined from the virtual scene according to the specified scene position, and a plurality of target virtual scene models that satisfy the matching condition for the to-be-processed scene area are determined from a plurality of preset virtual scene models. A virtual scene module is formed based on the plurality of target virtual scene models and set in the to-be-processed area, completing the scene editing of the to-be-processed area, so that the editing efficiency of the virtual scene can be improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a scene schematic diagram of an editing system of a virtual scene according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of an editing method for a virtual scene according to an embodiment of the present disclosure.
Fig. 3 is an application scenario diagram of an editing method for a virtual scenario provided in an embodiment of the present application.
Fig. 4 is a schematic flowchart of another editing method for a virtual scene according to an embodiment of the present application.
Fig. 5 is an application scenario diagram of another editing method for a virtual scenario provided in an embodiment of the present application.
Fig. 6 is an application scenario diagram of another editing method for a virtual scenario provided in an embodiment of the present application.
Fig. 7 is a flowchart illustrating another editing method for a virtual scene according to an embodiment of the present application.
Fig. 8 is a block diagram of a virtual scene editing apparatus according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a method and an apparatus for editing a virtual scene, a storage medium, and a computer device. Specifically, the editing method of the virtual scene in the embodiment of the present application may be executed by a computer device, where the computer device may be a terminal or a server. The terminal can be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen device, a personal computer (PC), or a personal digital assistant (PDA). The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
For example, when the editing method of the virtual scene runs on the terminal, the terminal device stores a game application program and is used to present the virtual scene in a game screen. The terminal device interacts with the user through a graphical user interface, for example by downloading, installing, and running the game application program. The terminal device may provide the graphical user interface to the user in a variety of ways; for example, the graphical user interface may be rendered for display on a display screen of the terminal device or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting the graphical user interface, which includes a game screen, and for receiving operation instructions generated by the user acting on the graphical user interface, and a processor for running the game, generating the graphical user interface, responding to the operation instructions, and controlling the display of the graphical user interface on the touch display screen.
For example, when the editing method of the virtual scene runs on a server, the game can be a cloud game. Cloud gaming refers to a gaming mode based on cloud computing. In the running mode of a cloud game, the running body of the game application program is separated from the body presenting the game screen: the storage and running of the game application are performed on the cloud game server, while the game screen is presented at a cloud game client. The cloud game client is mainly used for receiving and sending game data and presenting the game screen; for example, it may be a display device with a data transmission function near the user side, such as a mobile terminal, a television, a computer, a palm computer, or a personal digital assistant, but the terminal device performing the game data processing is the cloud game server in the cloud. When a game is played, the user operates the cloud game client to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as the game screen, returns the data to the cloud game client through the network, and finally the cloud game client decodes the data and outputs the game screen.
Referring to fig. 1, fig. 1 is a schematic view of a virtual scene editing system according to an embodiment of the present disclosure. The system may include at least one terminal, at least one server, at least one database, and a network. The terminal held by the user can be connected to servers of different games through a network. A terminal is any device having computing hardware capable of supporting and executing a software product corresponding to a game. In addition, the terminal has one or more multi-touch sensitive screens for sensing and obtaining input of a user through a touch or slide operation performed at a plurality of points of one or more touch display screens. In addition, when the system includes a plurality of terminals, a plurality of servers, and a plurality of networks, different terminals may be connected to each other through different networks and through different servers. The network may be a wireless network or a wired network, such as a Wireless Local Area Network (WLAN), a Local Area Network (LAN), a cellular network, a 2G network, a 3G network, a 4G network, a 5G network, etc. In addition, different terminals may be connected to other terminals or to a server using their own bluetooth network or hotspot network. For example, multiple users may be online through different terminals to connect and synchronize with each other over a suitable network to support multiplayer gaming. Additionally, the system may include a plurality of databases coupled to different servers and in which information relating to the gaming environment may be stored continuously as different users play the multiplayer game online.
The embodiment of the application provides a method for editing a virtual scene, which can be executed by a terminal or a server. The embodiment of the present application is described by taking as an example the case where the editing method of the virtual scene is executed by a terminal. The terminal comprises a touch display screen and a processor, wherein the touch display screen is used for presenting a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. When the user operates the graphical user interface through the touch display screen, the graphical user interface can control the local content of the terminal by responding to the received operation instructions, and can also control the content of the opposite-end server by responding to the received operation instructions. For example, the operation instructions generated by the user acting on the graphical user interface include an instruction for starting a game application, and the processor is configured to start the game application after receiving the instruction provided by the user. Further, the processor is configured to render and draw a graphical user interface associated with the game on the touch display screen. The touch display screen is a multi-touch-sensitive screen capable of sensing touch or slide operations performed simultaneously at a plurality of points on the screen. The user performs touch operations on the graphical user interface with a finger, and when the graphical user interface detects a touch operation, different virtual objects in the graphical user interface of the game are controlled to perform actions corresponding to that operation. For example, the game may be any one of a leisure game, an action game, a role-playing game, a strategy game, a sports game, a game of chance, and the like. The game may include a virtual scene of the game drawn on the graphical user interface. Further, one or more virtual objects, such as virtual characters, controlled by the user (or player) may be included in the virtual scene of the game. Additionally, one or more obstacles, such as railings, ravines, and walls, may be included in the virtual scene of the game to limit the movement of the virtual objects, for example, to limit the movement of one or more objects to a particular area within the virtual scene. Optionally, the virtual scene of the game also includes one or more elements, such as skills, points, character health, and energy, to provide assistance to the player, provide virtual services, increase points related to the player's performance, and so on. In addition, the graphical user interface may also present one or more indicators to provide instructional information to the player. For example, a game may include a player-controlled virtual object and one or more other virtual objects (such as an enemy character). In one embodiment, the one or more other virtual objects are controlled by other players of the game. For example, the one or more other virtual objects may be computer controlled, such as robots using Artificial Intelligence (AI) algorithms, implementing a human-machine fight mode. For example, the virtual objects possess various skills or capabilities that a game player uses to achieve a goal. For example, a virtual object possesses one or more weapons, props, or tools that may be used to eliminate other objects from the game.
Such skills or capabilities may be activated by a player of the game using one of a plurality of preset touch operations with a touch display screen of the terminal. The processor may be configured to present a corresponding game screen in response to an operation instruction generated by a touch operation of a user.
It should be noted that the scene schematic diagram of the editing system for a virtual scene shown in fig. 1 is merely an example. The editing system and scene described in the embodiment of the present application are intended to illustrate the technical solution of the embodiment more clearly and do not limit the technical solution provided herein. As the editing system for a virtual scene evolves and new service scenarios appear, a person of ordinary skill in the art will understand that the technical solution provided in the embodiment of the present application is also applicable to similar technical problems.
Based on the above problems, embodiments of the present application provide a method and an apparatus for editing a virtual scene, a computer device, and a storage medium, which can improve the efficiency of editing a virtual scene. Detailed descriptions are given below. It should be noted that the order in which the following embodiments are described is not intended to limit a preferred order of the embodiments.
The embodiment of the present application provides a method for editing a virtual scene, where the method may be executed by a terminal or a server.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating an editing method for a virtual scene according to an embodiment of the present disclosure. The specific flow of the editing method for the virtual scene may be as follows:
101. in response to a location-specifying event for the virtual scene, a specified scene location in the virtual scene is determined.
In the embodiment of the present application, the virtual scene may be a virtual scene in a game, and the virtual scene may include a complete game scene map. The virtual scene may be created or edited by a scene editor.
Wherein the location-specific event can be used to select a scene location from the virtual scene. The location specifying event may be triggered by a user, and specifically, the user triggers the location specifying event for the virtual scene through the scene editor.
In some embodiments, the terminal device provides a graphical user interface, which may be a scene editing interface of a scene editor, which may display a virtual scene. The step "determining a specified scene location in the virtual scene in response to the location-specifying event for the virtual scene" may comprise the operations of:
responding to the click operation aiming at the graphical user interface, and acquiring the click position on the graphical user interface;
and carrying out position collision detection on the virtual scene and the click position, and determining a scene position corresponding to the click position in the virtual scene to obtain a specified scene position.
Specifically, when the user's click operation on the graphical user interface is detected, the position of the click operation on the graphical user interface, that is, the click position, is acquired. Since the graphical user interface is a two-dimensional plane, the click position is a two-dimensional coordinate.
The position collision detection between the virtual scene and the click position includes: emitting a ray from the virtual camera of the virtual scene toward the click position, and taking the position where the ray collides with the virtual scene as the specified scene position. The specified scene position is a position in the virtual scene; since the virtual scene is three-dimensional, the specified scene position is a three-dimensional coordinate.
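For illustration, the following is a minimal Python sketch of this click-to-scene mapping. The `camera.deproject_screen_to_world` and `scene.line_trace` calls are hypothetical placeholders for the engine's deprojection and ray-trace facilities (engines such as Unreal expose equivalent operations on the player controller and world); only the overall logic follows the description above.

```python
def specified_scene_position(camera, scene, click_x, click_y):
    """Map a 2D click on the graphical user interface to a 3D scene position."""
    # Deproject the 2D click into a world-space ray starting at the virtual camera
    # (hypothetical helper; the click position is a two-dimensional coordinate).
    origin, direction = camera.deproject_screen_to_world(click_x, click_y)
    # Position collision detection: trace the ray against the scene geometry
    # (hypothetical helper returning the first hit or None).
    hit = scene.line_trace(origin, direction)
    # The collision point is the specified scene position, a three-dimensional coordinate.
    return hit.location if hit is not None else None
```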
In the embodiment of the application, the same or different virtual scene models can be set at different scene positions in a virtual scene, and in order to quickly set the virtual scene models at the different scene positions, matching conditions of the virtual scene models, which need to be set at the different scene positions, are configured for the virtual scene. The virtual scene model refers to scene elements in a virtual scene, i.e., virtual objects included in the virtual scene. For example, the virtual scene model may be a mountain, a river, a tree model, etc.
In order to ensure the splicing effect between different virtual scene models, a virtual scene model may include one or more virtual objects. For example, a virtual scene model containing both a beach and water can be spliced between the beach and the water in the virtual scene, ensuring the splicing effect between the different scenes in the virtual scene.
102. And determining a scene area to be processed from the virtual scene according to the designated scene position and the preset distance.
The preset distance refers to a distance set based on the virtual scene, for example, the preset distance may be a distance of 5 meters in the virtual scene.
In some embodiments, in order to improve the editing efficiency of the virtual scene, the step "determining a scene area to be processed from the virtual scene according to the specified scene position and the preset distance" may include the following processes:
determining an edge scene position with a distance to a specified scene position not greater than a preset distance from a virtual scene;
and obtaining a scene area to be processed based on the scene area formed by the appointed scene position and the edge scene position in the virtual scene.
Specifically, a plurality of scene positions at the preset distance from the specified scene position are determined from the virtual scene, obtaining the edge scene positions. The plurality of edge scene positions then serve as the edge of an area range, and the area they enclose can be the to-be-processed scene area.
In some embodiments, in order to meet the editing requirements of different virtual scenes, the to-be-processed areas may have different shapes. For example, please refer to fig. 3, which is a schematic view of an application scenario of an editing method for a virtual scene according to an embodiment of the present application. In the virtual scene shown in fig. 3, the specified scene position is position A and the preset distance is L; a circular area centered on position A with the preset distance L as its radius is determined from the virtual scene, obtaining the to-be-processed scene area.
For another example, please refer to fig. 4, which is a schematic view of an application scenario of another editing method for a virtual scene provided in the embodiment of the present application. In the virtual scene shown in fig. 4, the specified scene position is position A and the preset distance is L; a rectangular area centered on position A and determined by the preset distance L is determined from the virtual scene, obtaining the to-be-processed scene area.
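A small sketch of both area shapes, assuming scene positions are 2D ground-plane coordinates (an illustrative simplification; the embodiment's positions are three-dimensional):

```python
import math

def in_circular_area(pos, position_a, preset_distance_l):
    """Fig. 3 style: circular to-be-processed area centered on position A, radius L."""
    return math.dist(pos, position_a) <= preset_distance_l

def in_rectangular_area(pos, position_a, preset_distance_l):
    """Fig. 4 style: rectangular area centered on position A, extending L per axis."""
    return all(abs(p - a) <= preset_distance_l for p, a in zip(pos, position_a))
```

An edge scene position is then simply a position whose distance from position A equals the preset distance, and the positions enclosed by the edge form the to-be-processed scene area.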
In the embodiment of the present application, in order to facilitate editing operations on different scene positions of a virtual scene, the virtual scene may be divided into a plurality of scene modules, and the size of each scene module may be the same.
For example, referring to fig. 5, fig. 5 is a schematic view of an application scenario of another editing method for a virtual scenario provided in the embodiment of the present application. In the virtual scene shown in fig. 5, the virtual scene is divided into a plurality of scene modules.
In some embodiments, in order to improve the processing efficiency of the virtual scene, the step "determining an edge scene position having a distance from the specified scene position not greater than a preset distance from the virtual scene" may include the following operations:
determining a first scene module corresponding to a designated scene position from a plurality of scene modules;
and determining a second scene module, the distance between which and the first scene module is not more than the preset distance, from the plurality of scene modules.
Specifically, a scene module in which the designated scene position is located in the plurality of scene modules of the virtual scene is determined, and a first scene module is obtained.
In the embodiment of the present application, when the virtual scene includes a plurality of scene modules, in order to reduce the amount of calculation, the scene modules may be regarded as a unit distance, that is, one scene module is regarded as one unit distance. The preset distance may also be determined according to a scene module, for example, the preset distance may be one scene module.
Further, after the first scene module is determined, the second scene modules whose distance from the first scene module is within one scene module may be determined from the virtual scene.
For example, please refer to fig. 6, which is a schematic view of an application scenario of another editing method for a virtual scene provided in the embodiment of the present application. The virtual scene shown in fig. 6 may include scene modules 1 to 12. The designated position may be position A, which lies in scene module 6 of the virtual scene, so the first scene module is determined to be scene module 6. The preset distance may be one scene module; the scene modules whose distance from scene module 6 is not greater than one scene module are scene modules 1, 2, 3, 5, 7, 9, 10, and 11, which are obtained as the plurality of second scene modules.
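The module selection can be expressed compactly if the scene modules are assumed to form a row-major grid of equal-sized cells, as fig. 6 suggests; the grid layout and 1-based ids are assumptions made for illustration:

```python
def second_scene_modules(grid_w, grid_h, first_module_id, preset_modules=1):
    """Ids of the scene modules whose grid distance from the first scene module
    is at most `preset_modules` (one scene module as the unit distance)."""
    row, col = divmod(first_module_id - 1, grid_w)
    neighbours = []
    for r in range(max(0, row - preset_modules), min(grid_h, row + preset_modules + 1)):
        for c in range(max(0, col - preset_modules), min(grid_w, col + preset_modules + 1)):
            module_id = r * grid_w + c + 1
            if module_id != first_module_id:
                neighbours.append(module_id)
    return neighbours

# For a 4 x 3 grid with scene module 6 as the first scene module, this yields
# [1, 2, 3, 5, 7, 9, 10, 11], matching the example above.
```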
In some embodiments, the step "obtaining a scene region to be processed based on a scene region formed in the virtual scene by the specified scene position and the edge scene position" may include the following operations:
and obtaining a scene area to be processed based on the first scene module and the second scene module.
With continued reference to fig. 6, based on the specified scene position (position A), the obtained to-be-processed scene area is the region consisting of scene module 1, scene module 2, scene module 3, scene module 5, scene module 6, scene module 7, scene module 9, scene module 10, and scene module 11.
103. And determining at least one target virtual scene model meeting the matching condition with the scene area to be processed from a plurality of preset virtual scene models.
The preset virtual scene models are scene models required by a pre-created virtual scene. In the embodiment of the present application, for a plurality of virtual scene models created in advance, the plurality of virtual scene models may be stored in a data table of a scene editor to facilitate the invocation.
In the embodiment of the present application, the virtual scene may include a plurality of different scene types, and each scene position may correspond to a scene type, where virtual scene models matched with different scene types are predefined. The matching condition may comprise a virtual scene model matching a scene type corresponding to a scene position in the virtual scene.
For example, the preset plurality of virtual scene models may include a first scene model through a tenth scene model, and the scene types included in the virtual scene may be scene type a, scene type b, scene type c, scene type d, and so on. The virtual scene models defined as matching scene type a may be the first, second, and third scene models; those matching scene type b may be the fourth, fifth, and sixth scene models; those matching scene type c may be the seventh and eighth scene models; and those matching scene type d may be the ninth and tenth scene models.
For example, scene type a may be a river type; the virtual scene models matching scene type a may then include a water model, a ship model, an aquatic creature model, and the like. A river-type virtual scene can be formed by a water model, a ship model, a beach model, and the like.
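In code, such a matching condition is naturally a mapping from scene type to the permitted virtual scene models. The river entries follow the example above; the other types and model names are illustrative assumptions, not the patent's actual data table:

```python
MATCHING_CONDITIONS = {
    "river": ["water_model", "ship_model", "aquatic_creature_model"],  # from the example
    "beach": ["sand_model", "palm_tree_model"],     # assumed entries
    "land": ["grass_model", "house_model"],         # assumed entries
    "mountain": ["rock_model", "pine_tree_model"],  # assumed entries
}

def models_matching(scene_type):
    """Virtual scene models satisfying the matching condition for a scene type."""
    return MATCHING_CONDITIONS.get(scene_type, [])
```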
In some embodiments, in order to quickly acquire a virtual scene model required by a to-be-processed area, the step "determining at least one target virtual scene model satisfying a matching condition with the to-be-processed scene area from a preset plurality of virtual scene models" may include the following operations:
acquiring a first scene type of a specified scene position;
determining a second scene type of scene positions except the appointed scene position in the scene area to be processed according to the first scene type;
and selecting a virtual scene model matched with the first scene type and the second scene type from the plurality of virtual scene models to obtain a target virtual scene model.
Specifically, a scene type corresponding to a designated scene position in the virtual scene is obtained, and a first scene type is obtained. Then, the scene type corresponding to the other scene position connected with the specified scene position in the region to be processed can be determined according to the first scene type corresponding to the specified scene position.
In some embodiments, in order to obtain an accurate virtual scene model required for the region to be processed, the step of "determining a second scene type of scene positions other than the designated scene position in the region to be processed according to the first scene type" may include the following operations:
acquiring association relations between scene types in the virtual scene;
and determining the scene types associated with the first scene type based on the association relations, to obtain the second scene types of the scene positions other than the specified scene position in the to-be-processed scene area.
An association relation indicates which scene types can be connected to each other in the virtual scene.
For example, scene types of a virtual scene may include: river type, land type, beach type, mountain type, etc. Wherein the river type may be connected with the beach type, the land type may be connected with the mountain type, and the like. An association between the river type and the beach type, an association between the land type and the beach type, and an association between the land type and the mountain type may be established.
Further, the scene types having an association relation with the first scene type are determined according to the association relations between scene types in the virtual scene, so that the second scene type corresponding to each scene position in the to-be-processed area can be obtained. Determining the scene types of the scene positions adjacent to the specified scene position from these association relations yields the exact scene types included in the to-be-processed area.
For example, with continued reference to fig. 6, the scene type corresponding to scene module 6 may be the river type. If, in the virtual scene, the scene type associated with the river type is the beach type, the scene types of scene module 1, scene module 2, scene module 3, scene module 5, scene module 7, scene module 9, scene module 10, and scene module 11 may be determined to be the beach type. Alternatively, the scene type corresponding to each scene module in the to-be-processed scene area can be determined by also taking into account the sizes of the area ranges of the different scene types.
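A sketch of the association relation as an adjacency mapping over scene types; the river-beach, land-beach, and land-mountain pairs come from the example above, and the symmetric-dictionary representation is an implementation assumption:

```python
SCENE_TYPE_ASSOCIATIONS = {
    "river": {"beach"},
    "beach": {"river", "land"},
    "land": {"beach", "mountain"},
    "mountain": {"land"},
}

def second_scene_type_candidates(first_scene_type):
    """Scene types associated with the first scene type, e.g. 'river' -> {'beach'}."""
    return SCENE_TYPE_ASSOCIATIONS.get(first_scene_type, set())
```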
In some embodiments, the step of selecting a virtual scene model matching the first scene type and the second scene type from a plurality of virtual scene models to obtain the target virtual scene model may include the following operations:
determining a first candidate virtual scene model matching the first scene type from the plurality of virtual scene models;
determining a second candidate virtual scene model matching the second scene type from the plurality of virtual scene models;
and randomly selecting one virtual scene model from the first candidate virtual scene models and one from the second candidate virtual scene models, respectively, to obtain the target scene models.
Specifically, a plurality of virtual scene models matching the first scene type are determined from the preset plurality of virtual scene models, yielding a plurality of first candidate virtual scene models; and a plurality of virtual scene models matching the second scene type are determined from the preset plurality of virtual scene models, yielding a plurality of second candidate virtual scene models.
Because the screened first candidate virtual scene models are all virtual scene models that can be placed at the specified scene position, one first candidate virtual scene model can be randomly selected from the plurality as the target scene model placed at the specified scene position, avoiding placing multiple virtual scene models there.
Similarly, since the screened second candidate virtual scene models are all virtual scene models that can be placed at the scene positions in the to-be-processed area other than the specified scene position, one second candidate virtual scene model can be randomly selected from them as the target scene model placed at those positions. This way of obtaining the virtual scene models ensures that varied virtual scene models can be placed at the same scene position across edits, enriching the representation of the virtual scene.
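A minimal sketch of the random selection, assuming the two candidate lists are the non-empty results of the matching step above:

```python
import random

def pick_target_models(first_candidates, second_candidates):
    """Randomly pick one model for the specified scene position and one for the
    remaining positions of the to-be-processed area, so no position receives
    several stacked models while repeated edits still vary the scene."""
    model_for_specified_position = random.choice(first_candidates)
    model_for_other_positions = random.choice(second_candidates)
    return model_for_specified_position, model_for_other_positions
```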
104. And forming a virtual scene module based on at least one target virtual scene model, setting the virtual scene module in the region to be processed, and generating a virtual scene corresponding to the region to be processed.
The virtual scene module refers to a virtual scene model combination formed by a plurality of virtual scene models.
In some embodiments, the step of "constructing a virtual scene module based on at least one target virtual scene model" may comprise the operations of:
and splicing at least one target virtual scene model based on the position relation of each scene position in the scene area to be processed to obtain a virtual scene module.
The preceding steps determine the target virtual scene model to be placed at each scene position in the to-be-processed scene area; the plurality of target virtual scene models are then spliced and combined according to the model placed at each scene position and the positional relations between the scene positions. The positional relation may include adjacency.
For example, the to-be-processed scene area may include a first scene position, a second scene position, and a third scene position, where the first scene position is adjacent to the second scene position, the second scene position is adjacent to the third scene position, and the first scene position is not adjacent to the third scene position, the target virtual scene model corresponding to the first scene position may be spliced with the target virtual scene model corresponding to the second scene position, and the target virtual scene model corresponding to the second scene position is spliced with the target virtual scene model corresponding to the third scene position, so as to obtain a virtual scene module group formed by combining a plurality of target virtual scene models.
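A sketch of the splicing step for the three-position example above; the data shapes (position ids, adjacency pairs) are illustrative assumptions:

```python
def splice_virtual_scene_module(placements, adjacency):
    """placements: {position_id: model_name}; adjacency: iterable of (pos_a, pos_b).
    Returns the combined module contents and the seams joining adjacent models."""
    seams = [(placements[a], placements[b])
             for a, b in adjacency
             if a in placements and b in placements]
    module = list(placements.values())  # the virtual scene module's model list
    return module, seams

# Positions 1-2 and 2-3 are adjacent, positions 1 and 3 are not, so exactly two
# seams are spliced, as in the example above.
module, seams = splice_virtual_scene_module(
    {1: "model_a", 2: "model_b", 3: "model_c"},
    [(1, 2), (2, 3)],
)
```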
In some embodiments, in order to obtain a plurality of virtual scene modules forming a virtual scene, a user may trigger a position specifying event for different scene positions of the virtual scene, so as to generate virtual scene modules corresponding to the different scene positions.
Further, the obtained virtual scene module is arranged in the corresponding to-be-processed scene area, and a partial virtual scene corresponding to the to-be-processed scene area is generated. The virtual scene modules obtained through different appointed scene positions are respectively arranged in different scene areas to be processed, so that a complete virtual scene can be generated.
In some embodiments, in order to ensure the splicing accuracy of the spliced virtual scene modules, the orientation directions of the target virtual scene models in the virtual scene module may be detected after splicing, so as to ensure that their orientation directions are consistent. If a target virtual scene model has a deviating orientation direction, its orientation direction can be adjusted.
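A sketch of that orientation check, representing each model's orientation as a single yaw angle in degrees (purely an illustrative assumption about how orientation is stored):

```python
def align_orientations(models, reference_yaw=0.0, tolerance=0.5):
    """Detect target models whose orientation deviates from the module's
    reference direction and adjust them back into alignment."""
    for model in models:
        delta = (model.yaw - reference_yaw) % 360.0
        if min(delta, 360.0 - delta) > tolerance:  # deviating orientation found
            model.yaw = reference_yaw
```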
In some embodiments, after the virtual scene is edited, the virtual scene module obtained by splicing may be stored in order to facilitate later modification by the user. When the virtual scene needs to be adjusted, it can be opened through the scene editor and the stored virtual scene module can be modified.
The embodiment of the application discloses a method for editing a virtual scene, which comprises the following steps: responding to a position specification event aiming at a virtual scene, and determining a specified scene position in the virtual scene, wherein matching conditions of virtual scene models required to be set at different scene positions are configured in the virtual scene; determining a scene area to be processed from a virtual scene according to the designated scene position and the preset distance; determining at least one target virtual scene model meeting matching conditions with a scene area to be processed from a plurality of preset virtual scene models; the virtual scene module is formed based on at least one target virtual scene model, and the virtual scene module is arranged in the region to be processed to generate a virtual scene corresponding to the region to be processed, so that the editing efficiency of the virtual scene can be improved.
Based on the above description, the method for editing a virtual scene will be further described below by way of example. Referring to fig. 7, fig. 7 is a schematic flowchart of another editing method for a virtual scene provided in the embodiment of the present application, and taking an example that the editing method for a virtual scene is applied to a terminal, a specific process may be as follows:
201. the terminal provides a scene editor interface.
In this embodiment of the present application, the terminal may be installed with a scene editor, and the scene editor may be configured to process a virtual scene, including creating, modifying, and updating it. The scene editor interface includes the virtual scene to be processed.
202. And the terminal receives the click operation of the user aiming at the scene editor interface and acquires the click position of the click operation on the scene editor interface.
Wherein, the user can click on the scene editor interface through the mouse.
Specifically, when it is detected that the user clicks the scene editor interface through the mouse, the click position of the mouse on the scene editor interface may be acquired.
203. And the terminal determines the position of the target scene from the virtual scene displayed on the editor interface according to the click position.
Specifically, a ray is emitted toward the click position of the mouse through a Get Mouse Position node in the scene editor, and the position in the virtual scene struck by the ray can be used as the target scene position, namely the position in the virtual scene selected by the user through the click operation.
204. The terminal determines a target scene module from a plurality of scene modules of the virtual scene based on the target scene position.
In the embodiment of the present application, a virtual scene may be divided into a plurality of scene modules.
Firstly, a central scene module is determined according to a scene module corresponding to the position of a target scene, then a scene module adjacent to the central scene module is determined from a plurality of scene modules, and the target scene module is obtained based on the central scene module and the adjacent scene modules.
205. And the terminal selects a target scene model from a plurality of preset virtual scene models according to the scene type of the target scene module.
In the embodiment of the application, a plurality of preset virtual scene models are created in advance, and these may be the virtual scene models required for constructing the virtual scene. The plurality of preset virtual scene models can be stored in a data table of the scene editor for convenient use.
Specifically, the virtual scene may include a plurality of scene types, and the corresponding virtual scene models are matched for different scene types in advance.
For example, the target scene modules may include scene module a, scene module b, scene module c, and scene module d. According to the scene type of each target scene module, a virtual scene model matching that scene type is selected from the preset plurality of virtual scene models, obtaining the target scene model corresponding to each target scene module.
206. And the terminal carries out splicing processing on the target scene model to obtain a scene module corresponding to the target scene module.
After the target scene model corresponding to each target scene module is obtained, the target scene models can be spliced into one module, i.e., a scene module, for example as an Actor Blueprint. The module is then placed in the area of the target scene modules in the virtual scene, obtaining a scene module spliced from a plurality of virtual scene models. Moreover, combining a plurality of scene models into a single scene module reduces the number of modules and thus the processing pressure on the virtual scene.
The embodiment of the application discloses a method for editing a virtual scene, which comprises the following steps: the terminal provides a scene editor interface; receives a click operation of the user on the scene editor interface and acquires the click position of the click operation on the interface; determines a target scene position from the virtual scene displayed on the scene editor interface according to the click position; determines target scene modules from the plurality of scene modules of the virtual scene based on the target scene position; selects target scene models from a plurality of preset virtual scene models according to the scene types of the target scene modules; and splices the target scene models to obtain the scene module corresponding to the target scene modules. In this way, the processing efficiency of the virtual scene can be improved.
In order to better implement the editing method for the virtual scene provided by the embodiment of the present application, the embodiment of the present application further provides an editing apparatus for the virtual scene based on that method. The terms used have the same meanings as in the editing method of the virtual scene above; for specific implementation details, refer to the description in the method embodiments.
Referring to fig. 8, fig. 8 is a block diagram of a virtual scene editing apparatus according to an embodiment of the present disclosure, where the apparatus includes:
a first determining unit 301, configured to determine a specified scene position in a virtual scene in response to a position specification event for the virtual scene, where matching conditions of virtual scene models that need to be set for different scene positions are configured in the virtual scene;
a second determining unit 302, configured to determine a scene area to be processed from the virtual scene according to the specified scene position and a preset distance;
a third determining unit 303, configured to determine, from a plurality of preset virtual scene models, at least one target virtual scene model that meets the matching condition with the to-be-processed scene region;
a generating unit 304, configured to form a virtual scene module based on the at least one target virtual scene model, set the virtual scene module in the to-be-processed area, and generate a virtual scene corresponding to the to-be-processed area.
In some embodiments, the third determining unit 303 may include:
a first obtaining subunit, configured to obtain a first scene type of the specified scene location;
a first determining subunit, configured to determine, according to the first scene type, a second scene type of a scene position in the to-be-processed scene area other than the designated scene position;
and the selecting subunit is used for selecting a virtual scene model matched with the first scene type and the second scene type from the plurality of virtual scene models to obtain the target virtual scene model.
In some embodiments, the first determining subunit may specifically be configured to:
acquiring an incidence relation between scene types in the virtual scene;
and determining the scene type associated with the first scene type based on the association relationship to obtain a second scene type of the scene position in the scene area to be processed except the appointed scene position.
In some embodiments, the selecting subunit may specifically be configured to:
determining a first candidate virtual scene model matching the first scene type from the plurality of virtual scene models;
determining a second candidate virtual scene model matching the second scene type from the plurality of virtual scene models;
and randomly select one virtual scene model from the first candidate virtual scene models and one from the second candidate virtual scene models, respectively, to obtain the target scene model.
In some embodiments, the second determining unit 302 may include:
a second determining subunit, configured to determine, from the virtual scene, an edge scene position where a distance from the specified scene position is not greater than the preset distance;
and the third determining subunit is configured to obtain the to-be-processed scene area based on a scene area formed by the specified scene position and the edge scene position in the virtual scene.
In some embodiments, the second determining subunit may be specifically configured to:
determining a first scene module corresponding to the designated scene position from the plurality of scene modules;
determining a second scene module, which is not more than the preset distance from the first scene module, from the plurality of scene modules.
In some embodiments, the third determining subunit may be specifically configured to:
and obtaining the scene area to be processed based on the first scene module and the second scene module.
In some embodiments, the generating unit 304 may include:
and the splicing subunit is used for splicing the at least one target virtual scene model based on the position relation of each scene position in the scene area to be processed to obtain the virtual scene module.
In some embodiments, the first determining unit 301 may include:
the second acquisition subunit is used for responding to the click operation aiming at the graphical user interface and acquiring the click position of the graphical user interface;
and the fourth determining subunit is configured to perform position collision detection on the virtual scene and the click position, and determine a scene position corresponding to the click position in the virtual scene to obtain the specified scene position.
The embodiment of the application discloses an editing device of a virtual scene, which determines a designated scene position in the virtual scene through a first determining unit 301 in response to a position designated event aiming at the virtual scene, wherein matching conditions of virtual scene models required to be set at different scene positions are configured in the virtual scene; the second determining unit 302 determines a scene area to be processed from the virtual scene according to the designated scene position and a preset distance; the third determining unit 303 determines at least one target virtual scene model satisfying the matching condition with the to-be-processed scene region from a plurality of preset virtual scene models; the generating unit 304 forms a virtual scene module based on the at least one target virtual scene model, sets the virtual scene module in the region to be processed, and generates a virtual scene corresponding to the region to be processed. Thus, the processing efficiency of the game scene can be improved.
Correspondingly, the embodiment of the application also provides a computer device, and the computer device can be a terminal. As shown in fig. 9, fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer apparatus 500 includes a processor 501 having one or more processing cores, a memory 502 having one or more computer-readable storage media, and a computer program stored on the memory 502 and executable on the processor. The processor 501 is electrically connected to the memory 502. Those skilled in the art will appreciate that the computer device configurations illustrated in the figures are not meant to be limiting of computer devices and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The processor 501 is a control center of the computer device 500, connects various parts of the entire computer device 500 using various interfaces and lines, performs various functions of the computer device 500 and processes data by running or loading software programs and/or modules stored in the memory 502, and calling data stored in the memory 502, thereby monitoring the computer device 500 as a whole.
In this embodiment of the application, the processor 501 in the computer device 500 loads instructions corresponding to processes of one or more applications into the memory 502, and the processor 501 runs the applications stored in the memory 502, so as to implement various functions as follows:
responding to a position specification event aiming at the virtual scene, and determining a specified scene position in the virtual scene, wherein matching conditions of virtual scene models required to be set at different scene positions are configured in the virtual scene;
determining a scene area to be processed from a virtual scene according to the designated scene position and the preset distance;
determining at least one target virtual scene model meeting matching conditions with a scene area to be processed from a plurality of preset virtual scene models;
and forming a virtual scene module based on at least one target virtual scene model, setting the virtual scene module in the region to be processed, and generating a virtual scene corresponding to the region to be processed.
In some embodiments, determining at least one target virtual scene model satisfying a matching condition with a scene area to be processed from a preset plurality of virtual scene models includes:
acquiring a first scene type of a specified scene position;
determining a second scene type of scene positions except the appointed scene position in the scene area to be processed according to the first scene type;
and selecting a virtual scene model matched with the first scene type and the second scene type from the plurality of virtual scene models to obtain a target virtual scene model.
In some embodiments, determining a second scene type of scene locations other than the designated scene location in the area of the scene to be processed from the first scene type comprises:
acquiring an incidence relation between scene types in a virtual scene;
and determining scene types associated with the first scene types based on the association relationship to obtain second scene types of scene positions except the appointed scene position in the scene area to be processed.
In some embodiments, selecting a virtual scene model matched with a first scene type and a second scene type from a plurality of virtual scene models to obtain a target virtual scene model includes:
determining a first candidate virtual scene model matching the first scene type from the plurality of virtual scene models;
determining a second candidate virtual scene model matching the second scene type from the plurality of virtual scene models;
and randomly selecting one virtual scene model from the first candidate virtual scene models and one from the second candidate virtual scene models to obtain the target scene model.
In some embodiments, determining a scene area to be processed from a virtual scene according to a specified scene position and a preset distance includes:
determining an edge scene position with a distance to a specified scene position not greater than a preset distance from a virtual scene;
and obtaining a scene area to be processed based on the scene area formed by the appointed scene position and the edge scene position in the virtual scene.
In some embodiments, the virtual scene includes a plurality of scene modules;
determining, from the virtual scene, the edge scene positions whose distance to the designated scene position is not greater than the preset distance then includes:
determining, from the plurality of scene modules, a first scene module corresponding to the designated scene position;
and determining, from the plurality of scene modules, a second scene module whose distance to the first scene module is not greater than the preset distance;
and obtaining the scene area to be processed based on the scene area formed in the virtual scene by the designated scene position and the edge scene positions includes:
obtaining the scene area to be processed based on the first scene module and the second scene module.
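When the scene is pre-divided into modules, the same bounding can run on module indices rather than individual positions; the 2D module indexing and Chebyshev metric below are assumptions:

```python
from typing import List, Tuple

ModuleIndex = Tuple[int, int]

def modules_to_process(first_module: ModuleIndex,
                       all_modules: List[ModuleIndex],
                       preset_distance: int) -> List[ModuleIndex]:
    """The first module plus every second module within the preset distance
    (in module units) together give the scene area to be processed."""
    mi, mj = first_module
    second = [(i, j) for (i, j) in all_modules
              if (i, j) != first_module
              and max(abs(i - mi), abs(j - mj)) <= preset_distance]
    return [first_module, *second]
```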
In some embodiments, forming the virtual scene module based on the at least one target virtual scene model includes:
splicing the at least one target virtual scene model based on the positional relation of the scene positions in the scene area to be processed, to obtain the virtual scene module.
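Splicing by positional relation can be as simple as offsetting each target model to the world-space origin of its slot; the fixed slot size is an assumption for illustration:

```python
from typing import Dict, List, Tuple

Position = Tuple[int, int]
SLOT_SIZE = 16.0  # assumed world-space extent of one scene position

def splice_models(targets: List[Dict], region: List[Position]) -> List[Dict]:
    """Pair each target model with a slot in the region and offset it there,
    so adjacent models line up seam to seam, forming one scene module."""
    module = []
    for model, (x, y) in zip(targets, region):  # zip truncates to the shorter list
        placed = dict(model)  # shallow copy; do not mutate the library entry
        placed["world_position"] = (x * SLOT_SIZE, y * SLOT_SIZE)
        module.append(placed)
    return module
```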
In some embodiments, the virtual scene is presented through a graphical user interface provided by a terminal device;
determining the designated scene position in the virtual scene in response to the position-specification event for the virtual scene then includes:
acquiring, in response to a click operation on the graphical user interface, the click position on the graphical user interface;
and performing position collision detection between the virtual scene and the click position, and determining the scene position in the virtual scene corresponding to the click position, to obtain the designated scene position.
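For a top-down view, the position collision detection reduces to projecting the click point from viewport coordinates onto the scene grid; a perspective camera would instead cast a ray, which this sketch omits. All parameter names below are assumptions:

```python
from typing import Optional, Tuple

Position = Tuple[int, int]

def click_to_scene_position(click_xy: Tuple[float, float],
                            viewport_wh: Tuple[float, float],
                            grid_wh: Tuple[int, int]) -> Optional[Position]:
    """Map a click on the graphical user interface to the scene position it
    hits; None means the click collided with nothing in the virtual scene."""
    (cx, cy), (vw, vh), (gw, gh) = click_xy, viewport_wh, grid_wh
    if not (0 <= cx < vw and 0 <= cy < vh):
        return None
    return int(cx / vw * gw), int(cy / vh * gh)
```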
In the editing process of the virtual scene, when a position-specification event for the virtual scene is detected, a designated scene position is determined in the virtual scene, the virtual scene being configured with matching conditions of the virtual scene models to be set at different scene positions. A scene area to be processed is then determined from the virtual scene according to the designated scene position, and target virtual scene models meeting the matching condition with that area are determined from the plurality of preset virtual scene models. A virtual scene module formed from these target virtual scene models is set in the region to be processed, completing the scene editing of that region and improving the editing efficiency of the virtual scene.
The above operations can be implemented with reference to the foregoing embodiments and are not described in detail here.
Optionally, as shown in fig. 9, the computer device 500 further includes: a touch display screen 503, a radio frequency circuit 504, an audio circuit 505, an input unit 506, and a power supply 507. The processor 501 is electrically connected to the touch display screen 503, the radio frequency circuit 504, the audio circuit 505, the input unit 506, and the power supply 507, respectively. Those skilled in the art will appreciate that the computer device configuration illustrated in fig. 9 does not constitute a limitation of the computer device, which may include more or fewer components than illustrated, combine some components, or arrange the components differently.
The touch display screen 503 can be used to display a graphical user interface and to receive operation instructions generated by the user acting on the graphical user interface. The touch display screen 503 may include a display panel and a touch panel. The display panel may be used to display information input by or provided to the user as well as the various graphical user interfaces of the computer device, which may be made up of graphics, guide information, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.

The touch panel may be used to collect touch operations of the user (for example, operations performed on or near the touch panel with a finger, a stylus, or any other suitable object or accessory) and to generate the corresponding operation instructions, according to which the corresponding program is executed. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends these to the processor 501, and it can also receive and execute commands sent by the processor 501. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it passes the operation to the processor 501 to determine the type of the touch event, and the processor 501 then provides a corresponding visual output on the display panel according to that type. In this embodiment of the application, the touch panel and the display panel may be integrated into the touch display screen 503 to implement the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions, in which case the touch display screen 503 can also serve as part of the input unit 506 to implement an input function.
The radio frequency circuit 504 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or another computer device, and to exchange signals with that network device or computer device.
The audio circuit 505 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. The audio circuit 505 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; conversely, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 505 receives and converts into audio data. The audio data is then processed by the processor 501 and transmitted, for example, to another computer device via the radio frequency circuit 504, or output to the memory 502 for further processing. The audio circuit 505 may also include an earbud jack to allow a peripheral headset to communicate with the computer device.
The input unit 506 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 507 is used to power the various components of the computer device 500. Optionally, the power supply 507 may be logically connected to the processor 501 through a power management system, so that charging, discharging, and power consumption management are handled by the power management system. The power supply 507 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other such component.
Although not shown in fig. 9, the computer device 500 may further include a camera, a sensor, a wireless fidelity module, a Bluetooth module, and the like, which are not described in detail here.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
As can be seen from the above, the computer device provided in this embodiment may, in response to a position-specification event for a virtual scene, determine a designated scene position in the virtual scene, where the virtual scene is configured with matching conditions of the virtual scene models to be set at different scene positions; determine a scene area to be processed from the virtual scene according to the designated scene position and a preset distance; determine, from a plurality of preset virtual scene models, at least one target virtual scene model meeting the matching condition with the scene area to be processed; and form a virtual scene module based on the at least one target virtual scene model, set the virtual scene module in the region to be processed, and generate the virtual scene corresponding to the region to be processed.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a computer-readable storage medium, in which a plurality of computer programs are stored, where the computer programs can be loaded by a processor to execute the steps in any one of the methods for editing a virtual scene provided in the present application. For example, the computer program may perform the steps of:
responding to a position-specification event for a virtual scene by determining a designated scene position in the virtual scene, wherein the virtual scene is configured with matching conditions of the virtual scene models to be set at different scene positions;
determining a scene area to be processed from the virtual scene according to the designated scene position and a preset distance;
determining, from a plurality of preset virtual scene models, at least one target virtual scene model meeting the matching condition with the scene area to be processed;
and forming a virtual scene module based on the at least one target virtual scene model, setting the virtual scene module in the region to be processed, and generating the virtual scene corresponding to the region to be processed.
In some embodiments, determining, from the plurality of preset virtual scene models, at least one target virtual scene model meeting the matching condition with the scene area to be processed includes:
acquiring a first scene type of the designated scene position;
determining, according to the first scene type, a second scene type for the scene positions in the scene area to be processed other than the designated scene position;
and selecting, from the plurality of virtual scene models, a virtual scene model matching the first scene type and the second scene type to obtain a target virtual scene model.
In some embodiments, determining, according to the first scene type, the second scene type for the scene positions in the scene area to be processed other than the designated scene position includes:
acquiring the association relation between scene types in the virtual scene;
and determining the scene types associated with the first scene type based on the association relation, to obtain the second scene type for the scene positions in the scene area to be processed other than the designated scene position.
In some embodiments, selecting, from the plurality of virtual scene models, a virtual scene model matching the first scene type and the second scene type to obtain the target virtual scene model includes:
determining, from the plurality of virtual scene models, first candidate virtual scene models matching the first scene type;
determining, from the plurality of virtual scene models, second candidate virtual scene models matching the second scene type;
and randomly selecting one virtual scene model from each of the first candidate virtual scene models and the second candidate virtual scene models to obtain the target virtual scene models.
In some embodiments, determining the scene area to be processed from the virtual scene according to the designated scene position and the preset distance includes:
determining, from the virtual scene, edge scene positions whose distance to the designated scene position is not greater than the preset distance;
and obtaining the scene area to be processed based on the scene area formed in the virtual scene by the designated scene position and the edge scene positions.
In some embodiments, the virtual scene includes a plurality of scene modules;
determining, from the virtual scene, the edge scene positions whose distance to the designated scene position is not greater than the preset distance then includes:
determining, from the plurality of scene modules, a first scene module corresponding to the designated scene position;
and determining, from the plurality of scene modules, a second scene module whose distance to the first scene module is not greater than the preset distance;
and obtaining the scene area to be processed based on the scene area formed in the virtual scene by the designated scene position and the edge scene positions includes:
obtaining the scene area to be processed based on the first scene module and the second scene module.
In some embodiments, forming the virtual scene module based on the at least one target virtual scene model includes:
splicing the at least one target virtual scene model based on the positional relation of the scene positions in the scene area to be processed, to obtain the virtual scene module.
In some embodiments, the virtual scene is presented through a graphical user interface provided by a terminal device;
determining the designated scene position in the virtual scene in response to the position-specification event for the virtual scene then includes:
acquiring, in response to a click operation on the graphical user interface, the click position on the graphical user interface;
and performing position collision detection between the virtual scene and the click position, and determining the scene position in the virtual scene corresponding to the click position, to obtain the designated scene position.
In the editing process of the virtual scene, when a position-specification event for the virtual scene is detected, a designated scene position is determined in the virtual scene, the virtual scene being configured with matching conditions of the virtual scene models to be set at different scene positions. A scene area to be processed is then determined from the virtual scene according to the designated scene position, and target virtual scene models meeting the matching condition with that area are determined from the plurality of preset virtual scene models. A virtual scene module formed from these target virtual scene models is set in the region to be processed, completing the scene editing of that region and improving the editing efficiency of the virtual scene.
The above operations can be implemented with reference to the foregoing embodiments and are not described in detail here.
Wherein the storage medium may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disks, optical disks, and the like.
Since the computer program stored in the storage medium can execute the steps of any editing method for a virtual scene provided in the embodiments of the present application, it can achieve the beneficial effects achievable by any such method; these are detailed in the foregoing embodiments and are not repeated here.
The method, apparatus, storage medium, and computer device for editing a virtual scene provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principle and implementation of the present application, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may, following the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (11)

1. A method for editing a virtual scene, the method comprising:
in response to a position-specification event for a virtual scene, determining a designated scene position in the virtual scene, wherein matching conditions of virtual scene models to be set at different scene positions are configured in the virtual scene;
determining a scene area to be processed from the virtual scene according to the designated scene position and a preset distance;
determining at least one target virtual scene model meeting the matching condition with the scene area to be processed from a plurality of preset virtual scene models;
and forming a virtual scene module based on the at least one target virtual scene model, setting the virtual scene module in the scene area to be processed, and generating a virtual scene corresponding to the scene area to be processed.
2. The method according to claim 1, wherein the determining, from a preset plurality of virtual scene models, at least one target virtual scene model satisfying the matching condition with the scene area to be processed comprises:
acquiring a first scene type of the designated scene position;
determining a second scene type of scene positions in the scene area to be processed, except the designated scene position, according to the first scene type;
and selecting a virtual scene model matched with the first scene type and the second scene type from the plurality of virtual scene models to obtain the target virtual scene model.
3. The method of claim 2, wherein the determining the second scene type of the scene positions other than the designated scene position in the scene area to be processed according to the first scene type comprises:
acquiring an association relation between scene types in the virtual scene;
and determining the scene type associated with the first scene type based on the association relation, to obtain the second scene type of the scene positions in the scene area to be processed other than the designated scene position.
4. The method of claim 2, wherein the selecting a virtual scene model from the plurality of virtual scene models that matches the first scene type and the second scene type to obtain the target virtual scene model comprises:
determining a first candidate virtual scene model matching the first scene type from the plurality of virtual scene models;
determining a second candidate virtual scene model matching the second scene type from the plurality of virtual scene models;
and randomly selecting a virtual scene model from each of the first candidate virtual scene model and the second candidate virtual scene model to obtain the target virtual scene model.
5. The method according to claim 1, wherein the determining a scene area to be processed from the virtual scene according to the designated scene position and a preset distance comprises:
determining an edge scene position with a distance to the designated scene position not greater than the preset distance from the virtual scene;
and obtaining the scene area to be processed based on the scene area formed by the specified scene position and the edge scene position in the virtual scene.
6. The method of claim 5, wherein the virtual scene comprises a plurality of scene modules;
determining an edge scene position with a distance to the designated scene position not greater than the preset distance from the virtual scene, including:
determining a first scene module corresponding to the designated scene position from the plurality of scene modules;
determining, from the plurality of scene modules, a second scene module whose distance to the first scene module is not greater than the preset distance;
the obtaining the scene area to be processed based on the scene area formed by the specified scene position and the edge scene position in the virtual scene includes:
and obtaining the scene area to be processed based on the first scene module and the second scene module.
7. The method according to claim 1, wherein the forming a virtual scene module based on the at least one target virtual scene model comprises:
and splicing the at least one target virtual scene model based on the position relation of each scene position in the scene area to be processed to obtain the virtual scene module.
8. The method according to claim 1, characterized in that the virtual scene is presented through a graphical user interface provided by a terminal device;
the determining a specified scene location in a virtual scene in response to a location-specifying event for the virtual scene comprises:
in response to a click operation on the graphical user interface, acquiring the click position on the graphical user interface;
and carrying out position collision detection on the virtual scene and the click position, determining a scene position corresponding to the click position in the virtual scene, and obtaining the specified scene position.
9. An apparatus for editing a virtual scene, the apparatus comprising:
a first determining unit, configured to determine a designated scene position in a virtual scene in response to a position-specification event for the virtual scene, wherein matching conditions of virtual scene models to be set at different scene positions are configured in the virtual scene;
a second determining unit, configured to determine a scene area to be processed from the virtual scene according to the designated scene position and a preset distance;
a third determining unit, configured to determine, from a plurality of preset virtual scene models, at least one target virtual scene model meeting the matching condition with the scene area to be processed;
and a generating unit, configured to form a virtual scene module based on the at least one target virtual scene model, set the virtual scene module in the scene area to be processed, and generate a virtual scene corresponding to the scene area to be processed.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and running on the processor, wherein the processor implements the method of editing a virtual scene as claimed in any one of claims 1 to 8 when executing the program.
11. A storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the method for editing a virtual scene according to any one of claims 1 to 8.
CN202211401315.8A 2022-11-09 2022-11-09 Editing method and device of virtual scene, computer equipment and storage medium Pending CN115564916A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211401315.8A CN115564916A (en) 2022-11-09 2022-11-09 Editing method and device of virtual scene, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115564916A true CN115564916A (en) 2023-01-03

Family

ID=84769547

Country Status (1)

Country Link
CN (1) CN115564916A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination