CN112090071B - Virtual environment loading method and device, electronic equipment and computer storage medium
- Publication number: CN112090071B (application CN202010989121.9A)
- Authority: CN (China)
- Prior art keywords: virtual environment, virtual, page, data, preset
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A: HUMAN NECESSITIES
- A63: SPORTS; GAMES; AMUSEMENTS
- A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50: Controlling the output signals based on the game progress
- A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
- A63F13/80: Special adaptations for executing a specific game genre or game mode
- A63F13/822: Strategy games; Role-playing games
- A63F13/837: Shooting of targets
- A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/80: Features of games using an electronically generated display having two or more dimensions, specially adapted for executing a specific type of game
- A63F2300/8076: Shooting
Abstract
The application provides a virtual environment loading method and apparatus, an electronic device, and a computer storage medium, and relates to the field of virtual environments. The method includes: acquiring corresponding virtual environment scene data based on a selected virtual environment scene identifier; generating a first virtual environment page from the virtual environment scene data, the first virtual environment page including at least one preset virtual object; when a confirmation instruction for any one of the at least one virtual object is received, acquiring object data of that virtual object and displaying a preset second virtual environment page based on the virtual environment scene data and the object data; and when display of the second virtual environment page finishes, determining that loading of the virtual environment is complete. The method and apparatus provide the user with an immersive experience and improve the user experience.
Description
Technical Field
The present application relates to the field of virtual environment technologies, and in particular, to a method and an apparatus for loading a virtual environment, an electronic device, and a computer-readable storage medium.
Background
Modern games place increasing emphasis on immersion and the player's sense of identification with the game character, shooting games in particular. In an existing shooting game, the user first selects a game character; after the character is selected, the game loads for a period of time before entering the battle scene. However, by the time the user has selected a character, the user has already begun identifying with the role, and the loading wait breaks this immersion and sense of identification, harming the user experience.
Disclosure of Invention
The application provides a virtual environment loading method and apparatus, an electronic device, and a computer-readable storage medium, which can solve the problem that loading in existing games breaks the user's sense of immersion and identification and harms the user experience. The technical solution is as follows:
in a first aspect, a method for loading a virtual environment is provided, and the method includes:
acquiring corresponding virtual environment scene data based on the selected virtual environment scene identification;
generating a first virtual environment page according to the virtual environment scene data, the first virtual environment page comprising at least one preset virtual object;
when a confirmation instruction for any one of the at least one virtual object is received, acquiring object data of that virtual object, and displaying a preset second virtual environment page based on the virtual environment scene data and the object data;
and when display of the second virtual environment page finishes, determining that loading of the virtual environment is complete.
Preferably, the virtual environment scene data includes at least one preset object coordinate;
the generating a first virtual environment page according to the virtual environment scene data includes:
determining, based on any target object coordinate among the at least one object coordinate, a first virtual environment scene picture observed from a preset first perspective; the first perspective being a perspective from which at least two of the virtual objects can be observed in the first virtual environment scene picture;
acquiring, from the virtual environment scene data, first virtual environment scene data corresponding to the first virtual environment scene picture;
rendering the first virtual environment scene data to obtain the first virtual environment scene picture;
and displaying the first virtual environment scene picture in a preset first virtual environment page, and displaying the at least two virtual objects in the first virtual environment scene picture.
Preferably, when a confirmation instruction for any one of the at least one virtual object is received, acquiring object data of that virtual object includes:
receiving a selection instruction for any one of the at least one virtual object;
displaying that virtual object in the first virtual environment page from a preset second perspective, the second perspective being a front perspective from which that virtual object is observed on its own;
and when a confirmation instruction for that virtual object is received, acquiring the object data of that virtual object.
Preferably, the displaying a preset second virtual environment page based on the virtual environment scene data and the object data includes:
determining, based on the target object coordinate, a second virtual environment scene picture observed from a preset third perspective, the third perspective being a first-person perspective or a third-person perspective;
acquiring, from the virtual environment scene data, second virtual environment scene data corresponding to the second virtual environment scene picture;
rendering the second virtual environment scene data to obtain the second virtual environment scene picture, and rendering the object data to obtain a virtual object picture of the selected virtual object;
and displaying the second virtual environment scene picture in the second virtual environment page, and displaying the virtual object picture at the object coordinate of the second virtual environment scene picture from the third perspective.
Preferably, when display of the second virtual environment page finishes, the determining that loading of the virtual environment is complete includes:
when the display duration of the second virtual environment page reaches a preset display threshold, determining that display of the second virtual environment page has finished and that loading of the virtual environment is complete.
In a second aspect, an apparatus for loading a virtual environment is provided, the apparatus comprising:
a first processing module, configured to acquire corresponding virtual environment scene data based on a selected virtual environment scene identifier;
a display module, configured to generate a first virtual environment page according to the virtual environment scene data, the first virtual environment page comprising at least one preset virtual object;
a second processing module, configured to, when a confirmation instruction for any one of the at least one virtual object is received, acquire object data of that virtual object and display a preset second virtual environment page based on the virtual environment scene data and the object data;
and a determining module, configured to determine, when display of the second virtual environment page finishes, that loading of the virtual environment is complete.
Preferably, the virtual environment scene data includes at least one preset object coordinate;
the display module comprises:
a first determining submodule, configured to determine, based on any target object coordinate among the at least one object coordinate, a first virtual environment scene picture observed from a preset first perspective; the first perspective being a perspective from which at least two of the virtual objects can be observed in the first virtual environment scene picture;
a first obtaining submodule, configured to obtain, from the virtual environment scene data, first virtual environment scene data corresponding to the first virtual environment scene picture;
a first rendering submodule, configured to render the first virtual environment scene data to obtain the first virtual environment scene picture;
and a first display submodule, configured to display the first virtual environment scene picture in a preset first virtual environment page and to display the at least two virtual objects in the first virtual environment scene picture.
Preferably, the second processing module includes:
a receiving submodule, configured to receive a selection instruction for any one of the at least one virtual object;
a second display submodule, configured to display that virtual object in the first virtual environment page from a preset second perspective, the second perspective being a front perspective from which that virtual object is observed on its own;
the receiving submodule being further configured to receive a confirmation instruction for that virtual object;
and a second acquisition submodule, configured to acquire the object data of that virtual object.
Preferably, the second processing module further comprises:
a second determining submodule, configured to determine, based on the target object coordinate, a second virtual environment scene picture observed from a preset third perspective, the third perspective being a first-person perspective or a third-person perspective;
a second obtaining submodule, configured to obtain, from the virtual environment scene data, second virtual environment scene data corresponding to the second virtual environment scene picture;
a second rendering submodule, configured to render the second virtual environment scene data to obtain the second virtual environment scene picture and to render the object data to obtain a virtual object picture of the selected virtual object;
and a second display submodule, configured to display the second virtual environment scene picture in the second virtual environment page and to display the virtual object picture at the object coordinate of the second virtual environment scene picture from the third perspective.
Preferably, the determining module is specifically configured to:
when the display duration of the second virtual environment page reaches a preset display threshold, determine that display of the second virtual environment page has finished and that loading of the virtual environment is complete.
In a third aspect, an electronic device is provided, which includes:
a processor, a memory, and a bus;
the bus is used for connecting the processor and the memory;
the memory is used for storing operation instructions;
the processor is configured to call the operation instructions, the operation instructions causing the processor to perform operations corresponding to the virtual environment loading method shown in the first aspect of the present application.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor, implements the loading method of the virtual environment shown in the first aspect of the present application.
The technical solution provided by the present application brings the following beneficial effects:
In the embodiment of the invention, when the application program detects a trigger instruction for displaying a preset first virtual environment page, it acquires corresponding virtual environment scene data based on a preset virtual environment scene identifier and then displays the first virtual environment page based on that data; the first virtual environment page includes at least one preset virtual object. When a confirmation instruction for any one of the at least one virtual object is received, object data of that virtual object is acquired, a preset second virtual environment page is displayed based on the virtual environment scene data and the object data, and when display of the second virtual environment page finishes, loading of the virtual environment is determined to be complete. In this way, when the first virtual environment page containing the virtual objects is to be displayed, the corresponding virtual environment scene data is acquired directly based on the preset virtual environment scene identifier, the first virtual environment page is constructed from that scene data, the second virtual environment page is then constructed from the same scene data together with the determined virtual object, and once the second virtual environment page has been displayed the loading of the virtual environment is complete. Because the first and second virtual environment pages are built in the same virtual environment, switching to the second virtual environment page takes place within that same environment. This avoids the problem in the prior art that different pages correspond to different scenes, so that switching between them feels jarring to the user; an immersive experience is provided and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a block diagram of a terminal device according to an embodiment of the present application;
FIG. 2 is a block diagram of a computer system provided in one embodiment of the present application;
fig. 3 is a flowchart illustrating a loading method for a virtual environment according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating an effect of a first virtual environment page based on a first perspective according to the present application;
FIGS. 5A-5B are schematic diagrams illustrating the effect of a first virtual environment page based on a second perspective according to the present application;
FIG. 6 is a schematic diagram illustrating an effect of a second virtual environment page based on a third perspective according to the present application;
FIG. 7 is a schematic diagram illustrating a page effect of the present application entering a next stage based on FIG. 6;
fig. 8 is a schematic structural diagram of a loading apparatus of a virtual environment according to another embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device for loading a virtual environment according to yet another embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms referred to in this application will first be introduced and explained:
virtual environment: is a virtual environment that is displayed (or provided) by an application when running on a device. The virtual environment may be a simulation environment of a real world, a semi-simulation semi-fictional environment, or a pure fictional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment. The following embodiments illustrate the virtual environment as a three-dimensional virtual environment, but are not limited thereto.
Virtual object: a movable object in the virtual environment. The movable object may be a virtual character, a virtual animal, an animation character, etc., such as the characters, animals, plants, oil drums, walls, and stones displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional volumetric model created based on skeletal animation techniques. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies a portion of the space in the three-dimensional virtual environment.
Camera model: a three-dimensional model located on or around a virtual object in the three-dimensional virtual environment. When the first-person perspective is used, the camera model is located near the head, at the head, or at the chest of the virtual object and is bound to the three-dimensional model of the virtual object; it is referred to as the first-person camera for short. When the third-person perspective is used, the camera model may be located behind the virtual object (behind the head or behind the back) and bound to the virtual object, or may be located at any position a preset distance away from the virtual object; it is referred to as the third-person camera for short. Through the camera model, the virtual object located in the three-dimensional virtual environment can be observed from different angles. Optionally, the camera model is not actually displayed in the virtual environment picture presented by the application, i.e., the camera model is not visible in the three-dimensional virtual environment shown on the user interface.
First-person perspective: the perspective of the virtual environment as viewed by a first-person camera disposed on or around the virtual object. Optionally, the first-person perspective is the perspective when the virtual environment is observed by a first-person camera disposed at the chest, the head, or the neck of the virtual object. In the virtual environment picture corresponding to the first-person perspective, the head or trunk of the three-dimensional model of the virtual object cannot be seen, but the arms or feet of the virtual object may be seen. Optionally, the first-person camera makes a first binding with the head, neck, or chest of the virtual object. This first binding means that in most cases the relative position of the first-person camera and the head, neck, or chest of the virtual object does not change: when the head, neck, or chest of the virtual object turns, the first-person camera rotates correspondingly; when the head, neck, or chest of the virtual object is displaced, the first-person camera is displaced accordingly.
Third-person perspective: the perspective of the virtual environment as viewed by a camera model disposed behind the head of the virtual object or behind the virtual object. In the virtual environment picture corresponding to the third-person perspective, the head or trunk of the three-dimensional model of the virtual object can be seen. Optionally, the third-person camera makes a second binding with the head or back of the virtual object. This second binding means that in most cases the relative position of the third-person camera and the head or back of the virtual object does not change: when the head of the virtual object turns, the third-person camera rotates correspondingly; when the head or back of the virtual object is displaced, the third-person camera is displaced accordingly.
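As an illustration of the camera bindings described above, the following minimal Python sketch shows how a first-person and a third-person camera might rigidly follow an anchor point on the virtual object; the class names and offset values are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class Vector3:
    x: float
    y: float
    z: float

    def __add__(self, other: "Vector3") -> "Vector3":
        return Vector3(self.x + other.x, self.y + other.y, self.z + other.z)

@dataclass
class CameraModel:
    """Invisible camera bound to a body part of the virtual object; its offset decides the perspective."""
    offset: Vector3  # fixed position relative to the bound anchor (head, chest, or back)

    def world_position(self, anchor: Vector3) -> Vector3:
        # The binding is rigid: when the anchor moves or turns, the camera follows,
        # which is what the "first binding" and "second binding" above describe.
        return anchor + self.offset

# Illustrative bindings (offsets are assumptions, not values from the patent):
first_person_camera = CameraModel(offset=Vector3(0.0, 0.0, 0.0))   # at the head
third_person_camera = CameraModel(offset=Vector3(0.0, 0.5, -2.0))  # behind and slightly above

head_position = Vector3(12.0, 1.7, 8.0)
print(first_person_camera.world_position(head_position))
print(third_person_camera.world_position(head_position))
```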
Fig. 1 shows a block diagram of a terminal device according to an exemplary embodiment of the present application. The terminal device 100 includes: an application program 110 and an operating system 120.
The operating system is the base software that provides applications with secure access to the computer hardware.
An application program is an application that supports a virtual environment. Optionally, the application supports a three-dimensional virtual environment. The application may be any of a virtual reality application, a three-dimensional map program, a military simulation program, a third-person shooting (TPS) game, a first-person shooting (FPS) game, a multiplayer online battle arena (MOBA) game, and a multi-player gunfight survival game. The application may be a stand-alone application, such as a stand-alone 3D game program.
Fig. 2 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 200 includes: terminal device 210 and server 220.
The terminal device has installed and running on it an application program that supports a virtual environment. The application may be any of a virtual reality application, a three-dimensional map program, a military simulation program, a TPS game, an FPS game, a MOBA game, and a multi-player gunfight survival game. The terminal device is a device used by a user, and the user uses the terminal device to control a virtual object located in the virtual environment to perform activities, including but not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the virtual object is a virtual character, such as a simulated character or an animated character.
The terminal equipment is connected with the server through a wireless network or a wired network.
The server includes at least one of a single server, a plurality of servers, a cloud computing platform, and a virtualization center. The server provides background services for the application programs that support the three-dimensional virtual environment. Optionally, the server undertakes the primary computing work and the terminal device undertakes the secondary computing work; or the server undertakes the secondary computing work and the terminal device undertakes the primary computing work; or the server and the terminal device cooperate using a distributed computing architecture.
The terminal device types include: at least one of a game console, a desktop computer, a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, and a laptop portable computer. The following embodiments are illustrated where the device is a desktop computer.
Those skilled in the art will appreciate that the number of terminal devices described above may be greater or fewer. For example, the number of the terminal devices may be only one, or the number of the terminal devices may be tens or hundreds, or more. The number and the type of the terminal devices are not limited in the embodiments of the present application.
The application provides a loading method and device of a virtual environment, an electronic device and a computer-readable storage medium, which aim to solve the above technical problems in the prior art.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
In one embodiment, a method for loading a virtual environment is provided. As shown in fig. 3, the method includes:
step S301, acquiring corresponding virtual environment scene data based on the selected virtual environment scene identification;
specifically, after the application is installed in the terminal device, the terminal device stores a plurality of virtual environment scene identifiers, and virtual environment scene data corresponding to each virtual environment scene identifier. When the application program receives a virtual environment scene identifier determined by a user, corresponding virtual environment scene data can be obtained based on the virtual environment scene identifier; or, the application program may also obtain corresponding virtual environment scene data based on a preset default virtual environment scene identifier. Wherein the virtual environment scene data may be map data.
Further, the application program may also include a preset trigger page in which a default virtual environment scene identifier is preset. The user may also change the virtual environment scene identifier in the trigger page; after the identifier is selected, the user triggers a confirmation instruction, and on receiving the confirmation instruction the application can obtain the corresponding virtual environment data based on the selected virtual environment scene identifier.
Step S302, generating a first virtual environment page according to the virtual environment scene data; the first virtual environment page comprises at least one preset virtual object;
specifically, the first virtual environment page may be a designated page preset in the application program, where a background of the designated page is a first virtual environment picture, and the first virtual environment picture is obtained based on the virtual environment scene data. For example, the first virtual environment page may be a selection page of a virtual object, where a background of the selection page is a scene observed when a map is observed from a first view angle and a second view angle, and the map is determined by a user or is a default map preset for an application program.
After the application program obtains the virtual environment scene data, a first virtual environment page can be generated and displayed based on the virtual environment scene data, and the first virtual environment page comprises at least one preset virtual object.
Step S303, when a confirmation instruction aiming at any virtual object in at least one virtual object is received, object data of any virtual object is obtained, and a preset second virtual environment page is displayed based on the virtual environment scene data and the object data;
when the user determines a target virtual object from the virtual objects, the application program may obtain object data of the target virtual object, and display a preset second virtual environment page based on the virtual environment scene data and the object data. The second virtual environment page may be a transition page entering a next stage, and the virtual environment scene data required by the next stage is the same as the virtual environment scene data.
Further, the second virtual environment page may be a page that displays the virtual object from a third perspective, its background being the scene picture observed when the map is viewed from the third perspective, the map being the same as the map corresponding to the first virtual environment page.
Step S304, when the display of the second virtual environment page is finished, the loading of the virtual environment is judged to be finished.
And when the display duration is over, the whole preset loading stage is completed, and the next stage is entered.
In the embodiment of the invention, the application program acquires corresponding virtual environment scene data based on the selected virtual environment scene identifier and then generates a first virtual environment page from that data; the first virtual environment page includes at least one preset virtual object. When a confirmation instruction for any one of the at least one virtual object is received, object data of that virtual object is acquired, a preset second virtual environment page is displayed based on the virtual environment scene data and the object data, and when display of the second virtual environment page finishes, loading of the virtual environment is determined to be complete. In this way, when the first virtual environment page containing the virtual objects is to be displayed, the corresponding virtual environment scene data is acquired directly based on the preset virtual environment scene identifier, the first virtual environment page is constructed from that scene data, the second virtual environment page is then constructed from the same scene data together with the determined virtual object, and once the second virtual environment page has been displayed the loading of the virtual environment is complete. Because the first and second virtual environment pages are built in the same virtual environment, switching to the second virtual environment page takes place within that same environment. This avoids the problem in the prior art that different pages correspond to different scenes, so that switching between them feels jarring to the user; an immersive experience is provided and the user experience is improved.
In another embodiment, a detailed description of a loading method of a virtual environment as shown in fig. 3 is continued.
Step S301, acquiring corresponding virtual environment scene data based on the selected virtual environment scene identification;
specifically, after the application is installed in the terminal device, the terminal device stores a plurality of virtual environment scene identifiers, and virtual environment scene data corresponding to each virtual environment scene identifier. When the application program receives a virtual environment scene identifier determined by a user, corresponding virtual environment scene data can be obtained based on the virtual environment scene identifier; or, the application program may also obtain corresponding virtual environment scene data based on a preset default virtual environment scene identifier. Wherein the virtual environment scene data may be map data.
Further, the application program may also include a preset trigger page in which a default virtual environment scene identifier is preset. The user may also change the virtual environment scene identifier in the trigger page; after the identifier is selected, the user triggers a confirmation instruction, and on receiving the confirmation instruction the application can obtain the corresponding virtual environment data based on the selected virtual environment scene identifier.
As an example, the application program is a shooting game and the virtual environment is a game map. After the user opens the shooting game, the game shows an initial page whose default game map identifier is "map one". The user changes "map one" to "map two" in the initial page and then clicks "start game"; the application acquires the map data of map two based on the instruction corresponding to "start game" and "map two".
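A minimal sketch of this identifier-based lookup, assuming a simple in-memory store; the dictionary layout, identifiers, and function name are illustrative assumptions, not part of the patent.

```python
# Hypothetical local store of virtual environment scene (map) data, keyed by scene identifier.
VIRTUAL_ENVIRONMENT_SCENES: dict[str, dict] = {
    "map_one": {"terrain": "desert", "object_coordinates": [(10.0, 0.0, 5.0)]},
    "map_two": {"terrain": "harbor", "object_coordinates": [(3.0, 0.0, 8.0)]},
}

DEFAULT_SCENE_ID = "map_one"  # default identifier preset in the trigger page

def get_scene_data(selected_scene_id: str | None) -> dict:
    """Return the scene data for the selected identifier, falling back to the preset default."""
    scene_id = selected_scene_id or DEFAULT_SCENE_ID
    return VIRTUAL_ENVIRONMENT_SCENES[scene_id]

# The user changed "map one" to "map two" and clicked "start game":
scene_data = get_scene_data("map_two")
```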
Step S302, generating a first virtual environment page according to the virtual environment scene data; the first virtual environment page comprises at least one preset virtual object;
specifically, the first virtual environment page may be a designated page preset in the application program, where a background of the designated page is a first virtual environment picture, and the first virtual environment picture is obtained based on the virtual environment scene data. For example, the first virtual environment page may be a selection page of a virtual object, where a background of the selection page is a scene observed when a map is observed from a first view angle and a second view angle, and the map is determined by a user or is a default map preset for an application program.
After the application program obtains the virtual environment scene data, a first virtual environment page can be generated and displayed based on the virtual environment scene data, and the first virtual environment page comprises at least one preset virtual object.
As an example, the virtual object may be a game character in a shooting game, and the first virtual environment page may be a selection page of the game character, and the background of the page is generated based on the virtual environment scene data. For example, after the user clicks "start game", the application program obtains map data corresponding to "map two", then displays a selection page of the game character, the background of the selection page is generated based on the map data, and at least one preset game character is displayed in the selection page.
In a preferred embodiment of the present invention, the virtual environment scene data includes at least one preset object coordinate;
generating the first virtual environment page according to the virtual environment scene data includes:
determining, based on any target object coordinate among the at least one object coordinate, a first virtual environment scene picture observed from a preset first perspective; the first perspective being a perspective from which at least two of the virtual objects can be observed in the first virtual environment scene picture;
acquiring, from the virtual environment scene data, first virtual environment scene data corresponding to the first virtual environment scene picture;
rendering the first virtual environment scene data to obtain the first virtual environment scene picture;
and displaying the first virtual environment scene picture in the first virtual environment page, and displaying the at least two virtual objects in the first virtual environment scene picture.
Specifically, because the virtual environment is large and the pictures observed by the camera model at different coordinates of the virtual environment differ, at least one object coordinate may be preset. Each virtual object is displayed in the area corresponding to one of the object coordinates, the camera model then observes at least two of the virtual objects from the preset first perspective, and the observed virtual environment scene picture is taken as the first virtual environment scene picture; the first virtual environment scene picture is thereby determined.
The first perspective is a perspective from which at least two of the virtual objects can be observed in the first virtual environment scene picture, and may also be called a global perspective. In practice, if the number of preset virtual objects is small, all of them can be shown from the first perspective; if the number is large, at least two of them may be shown. Besides the virtual objects, other properties may of course also be displayed to add interest, which can be set according to actual requirements; the embodiment of the present invention does not limit this.
Further, since the first virtual environment scene is part of the virtual environment scene, after determining the first virtual environment scene picture the application can acquire, from the virtual environment scene data, the first virtual environment scene data corresponding to that picture, render it to obtain the first virtual environment scene picture, use that picture as the background of the first virtual environment page, and display the at least two virtual objects in the first virtual environment page from the first perspective.
For example, after the user clicks "start game", the application obtains the corresponding map data based on "map two", determines the first map picture observed when the camera model looks from the first perspective at a preset coordinate, obtains the map data corresponding to that first map picture from the complete map data, renders it to obtain the first map picture, uses the first map picture as the background of the game character selection page, and displays at least two game characters in the selection page, as shown in FIG. 4.
It should be noted that the first virtual environment page may further include any necessary UI (User Interface), such as the "please select a game character" prompt and the countdown information in FIG. 4. In practice, the UI can be set according to actual requirements; the embodiment of the present invention does not limit this.
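The page-construction steps above can be sketched roughly as follows; the Page dataclass, the string stand-in for rendering, and the function names are illustrative assumptions (a real engine would rasterize the scene data where the string is built), not the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Page:
    background: str
    visible_objects: list = field(default_factory=list)
    ui: list = field(default_factory=list)

def build_first_page(scene_data: dict, virtual_objects: list) -> Page:
    """Build the character-selection page whose background is a slice of the chosen scene."""
    # Use a preset object coordinate and a wide "first perspective" (global) camera aimed at it,
    # so that at least two of the preset virtual objects fall into view.
    target = scene_data["object_coordinates"][0]
    first_scene_data = {"terrain": scene_data["terrain"], "focus": target}

    # Stand-in for rendering the first virtual environment scene data into a picture.
    background = f"first-scene-picture@{first_scene_data['focus']}"

    # Compose the page: the rendered picture as background, the selectable objects, plus UI.
    return Page(background=background,
                visible_objects=virtual_objects[:2],
                ui=["please select a game character", "countdown"])

scene_data = {"terrain": "harbor", "object_coordinates": [(3.0, 0.0, 8.0)]}  # as in the lookup sketch
selection_page = build_first_page(scene_data, ["character_a", "character_b", "character_c"])
```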
Step S303, when a confirmation instruction aiming at any virtual object in at least one virtual object is received, object data of any virtual object is obtained, and a preset second virtual environment page is displayed based on the virtual environment scene data and the object data;
when the user determines a target virtual object from the virtual objects, the application program may obtain object data of the target virtual object, and display a preset second virtual environment page based on the virtual environment scene data and the object data. The second virtual environment page may be a transition page entering a next stage, and the virtual environment scene data required by the next stage is the same as the virtual environment scene data.
Further, the second virtual environment page may be a page that displays the virtual object from a third perspective, its background being the scene picture observed when the map is viewed from the third perspective, the map being the same as the map corresponding to the first virtual environment page.
For example, the next stage may be a combat stage in which the user actually operates the shooting game, and the second virtual environment page is a transition page into the combat stage.
In a preferred embodiment of the present invention, when a confirmation instruction for any one of the at least one virtual object is received, acquiring object data of that virtual object includes:
receiving a selection instruction for any one of the at least one virtual object;
displaying that virtual object in the first virtual environment page from a preset second perspective, the second perspective being a front perspective from which that virtual object is observed on its own;
and when a confirmation instruction for that virtual object is received, acquiring the object data of that virtual object.
Specifically, each virtual object may be displayed in the first virtual environment page. When the user clicks any one of the virtual objects, a selection instruction is initiated; on receiving the selection instruction, the application switches from the first perspective to the preset second perspective and displays the virtual object selected by the user in the first virtual environment page from the second perspective. The second perspective may be a front perspective from which the camera model observes that virtual object on its own, and may also be called a close-up perspective. While the selected virtual object is displayed, a virtual confirmation button is also displayed; when the user clicks the button, a confirmation instruction for that virtual object is initiated, and on receiving the confirmation instruction the application can acquire the object data of that virtual object.
The object data may be the data required to render the virtual object. When switching from the first perspective to the second perspective, a perspective-switching transition animation may be played, so as to avoid the jarring feeling that an abrupt picture switch would give the user if the first perspective were cut directly to the second perspective.
Further, when the virtual object selected by the user is displayed from the second perspective after the transition animation finishes, the virtual object can be displayed on its own, and the relevant virtual attributes of the virtual object can also be displayed.
For example, in the page shown in FIG. 4, after the user clicks a certain game character, the selected game character is shown from the close-up perspective together with its equipment and equipment attributes, including "damage", "range", "rate of fire", "magazine capacity", and "recoil", and a "confirm" button is also shown, as in FIG. 5A. When the user clicks "confirm", the object data of that game character can be acquired, and at the same time the virtual button becomes non-interactive, as shown in FIG. 5B.
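As a rough sketch of this select-then-confirm interaction, the class and method names below are assumptions, and the transition animation and close-up rendering are stood in for by strings.

```python
class CharacterSelection:
    """Tracks which virtual object is selected and confirmed on the first virtual environment page."""

    def __init__(self, objects: list, default: str):
        self.objects = objects
        self.default = default
        self.selected = None
        self.confirmed = None

    def select(self, obj: str) -> str:
        # Clicking a character switches the global (first) perspective to the close-up
        # (second) perspective; a transition animation would be played here to avoid
        # a jarring cut, and the character's attributes are shown alongside it.
        self.selected = obj
        return f"close-up view of {obj} with its equipment attributes and a confirm button"

    def confirm(self) -> str:
        # Confirming locks the choice (the confirm button becomes non-interactive)
        # and allows the object data of the chosen character to be fetched.
        self.confirmed = self.selected if self.selected is not None else self.default
        return self.confirmed

selection = CharacterSelection(["character_a", "character_b"], default="character_a")
selection.select("character_b")
chosen = selection.confirm()  # -> "character_b"
```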
In a preferred embodiment of the present invention, displaying the preset second virtual environment page based on the virtual environment scene data and the object data includes:
determining, based on the target object coordinate, a second virtual environment scene picture observed from a preset third perspective, the third perspective being a first-person perspective or a third-person perspective;
acquiring, from the virtual environment scene data, second virtual environment scene data corresponding to the second virtual environment scene picture;
rendering the second virtual environment scene data to obtain the second virtual environment scene picture, and rendering the object data to obtain a virtual object picture of the selected virtual object;
and displaying the second virtual environment scene picture in the second virtual environment page, and displaying the virtual object picture at the object coordinate of the second virtual environment scene picture from the third perspective.
Specifically, after the virtual object is determined, the second virtual environment scene picture observed when observing from the third perspective at the target object coordinate may be determined; the second virtual environment scene data corresponding to that picture is then acquired from the virtual environment scene data and rendered to obtain the second virtual environment scene picture, while the object data of the determined virtual object is rendered to obtain the virtual object picture of that virtual object. The second virtual environment scene picture is used as the background of the second virtual environment page, the second perspective is switched to the third perspective, and the virtual object picture is displayed at the target object coordinate of the second virtual environment scene picture from the third perspective.
The third perspective may be the first-person perspective or the third-person perspective. When switching from the second perspective to the third perspective, a perspective-switching transition animation may be played, so as to avoid the jarring feeling that an abrupt picture switch would give the user if the second perspective were cut directly to the third perspective.
For example, for the page shown in FIG. 5B, the second map picture observed when the camera model looks from the third perspective at the target object coordinate is determined; the map data corresponding to that second map picture is then obtained from the complete map data and rendered to obtain the second map picture, while the object data of the game character determined by the user is rendered to obtain a game character picture. A transition animation switching from the second perspective to the third perspective is played, and after it finishes the second map picture and the game character picture are displayed together from the third perspective, as shown in FIG. 6 (taking the third-person perspective as an example).
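Continuing the earlier page sketch, the key point of this step is that the second page re-uses the same scene data with a different camera. The following rough illustration assumes the Page dataclass from the sketch after the first page and again stands in for rendering with strings; it is not the patent's implementation.

```python
def build_second_page(scene_data: dict, object_data: str) -> Page:
    """Build the transition page by re-using the SAME scene data from a new perspective."""
    target = scene_data["object_coordinates"][0]

    # Same scene, different camera: a first- or third-person view instead of the global view,
    # so switching pages never leaves the virtual environment the user is already in.
    background = f"second-scene-picture@{target} (first/third person)"
    character_picture = f"rendered picture of {object_data}"

    return Page(background=background,
                visible_objects=[character_picture],
                ui=["battle preparation countdown"])

transition_page = build_second_page(scene_data, object_data="character_b")
```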
Since the page shown in FIG. 4 includes a countdown UI, the user may still not have confirmed a virtual object when the countdown ends. For this case, it can be detected at the end of the countdown whether there is a virtual object that has been selected but not confirmed, i.e., the situation shown in FIG. 5A; if so, the selected virtual object is taken as the confirmed virtual object. If not, the default virtual object is used as the confirmed virtual object, or one of the virtual objects is selected at random as the confirmed virtual object.
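A small sketch of this fallback decision, under the assumption that the selection state is tracked as in the earlier sketch; the function and parameter names are illustrative.

```python
import random

def resolve_on_countdown_end(selected, confirmed, all_objects, default=None):
    """Decide which virtual object to use when the selection countdown ends."""
    if confirmed is not None:
        return confirmed               # already confirmed in time
    if selected is not None:
        return selected                # selected but not confirmed: auto-confirm it
    if default is not None:
        return default                 # otherwise fall back to the preset default ...
    return random.choice(all_objects)  # ... or pick one of the virtual objects at random

target = resolve_on_countdown_end(selected="character_b", confirmed=None,
                                  all_objects=["character_a", "character_b"])
```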
Step S304, when the display of the second virtual environment page is finished, the loading of the virtual environment is judged to be finished.
And when the display duration is over, the whole preset loading stage is completed, and the next stage is entered.
In a preferred embodiment of the present invention, when display of the second virtual environment page finishes, determining that loading of the virtual environment is complete includes:
when the display duration of the second virtual environment page reaches a preset display threshold, determining that display of the second virtual environment page has finished and that loading of the virtual environment is complete.
Specifically, a timing UI and a preset display threshold may be set for the second virtual environment page. Timing starts when the second virtual environment page is displayed, and when the timed duration reaches the display threshold, display of the second virtual environment page is determined to have finished.
For example, the page shown in FIG. 6 contains a countdown UI "battle preparation" and a countdown number "3"; when the countdown number reaches "0", the whole loading stage is complete and the combat stage is entered. In the combat stage, the corresponding UI, such as the user's operation UI, can be loaded, as shown in FIG. 7.
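A minimal sketch of the timing check follows; the threshold value and function name are assumptions, and a real client would refresh the countdown UI on its render loop rather than sleeping.

```python
import time

def show_second_page_and_wait(display_threshold_s: float = 3.0) -> bool:
    """Time the display of the second virtual environment page and report loading complete at the threshold."""
    start = time.monotonic()
    while time.monotonic() - start < display_threshold_s:
        # The countdown UI ("battle preparation: 3, 2, 1 ...") would be refreshed here.
        time.sleep(0.1)
    # Reaching the display threshold is what the method treats as the end of the page
    # display, i.e., loading of the virtual environment is complete and the next
    # (combat) stage can be entered immediately on the same page.
    return True

loading_complete = show_second_page_and_wait(display_threshold_s=3.0)
```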
In the embodiment of the invention, the application program acquires corresponding virtual environment scene data based on the selected virtual environment scene identifier and then generates a first virtual environment page from that data; the first virtual environment page includes at least one preset virtual object. When a confirmation instruction for any one of the at least one virtual object is received, object data of that virtual object is acquired, a preset second virtual environment page is displayed based on the virtual environment scene data and the object data, and when display of the second virtual environment page finishes, loading of the virtual environment is determined to be complete. In this way, when the first virtual environment page containing the virtual objects is to be displayed, the corresponding virtual environment scene data is acquired directly based on the preset virtual environment scene identifier, the first virtual environment page is constructed from that scene data, the second virtual environment page is then constructed from the same scene data together with the determined virtual object, and once the second virtual environment page has been displayed the loading of the virtual environment is complete. Because the first and second virtual environment pages are built in the same virtual environment, switching to the second virtual environment page takes place within that same environment. This avoids the problem in the prior art that different pages correspond to different scenes, so that switching between them feels jarring to the user; an immersive experience is provided and the user experience is improved.
Furthermore, the next stage can be entered once loading of the virtual environment is complete. The virtual environment scene data required by the next stage is the same as the data acquired while loading the virtual environment, and the UI required by the next stage is loaded onto the second virtual environment page so that the next stage is entered directly, achieving a seamless transition between stages and further improving the user's immersive experience.
In the embodiment of the present invention, the loading of the virtual environment is fully exemplified below by taking the application as a shooting game, the virtual environment as a game map, and the virtual object as a game character.
The user clicks "start game". The application acquires game map data based on the game map identifier determined by the user, determines the first map picture observed when the camera model observes from the global perspective, acquires the map data corresponding to that first map picture from the map data, renders it to obtain the first map picture, displays the first map picture and the preset game characters in the game character selection page, and displays the UI "please select a game character".
When the user clicks any game character, the global perspective is switched to the close-up perspective, and a perspective-switching transition animation is played during the switch. After it finishes, a picture of that game character and its related UI are displayed in the page from the close-up perspective; the UI shows the relevant virtual attributes of the game character, and the close-up page also contains a confirmation button for confirming the game character and a countdown.
While the user has not clicked the confirmation button and the countdown has not ended, the user may continue to select other game characters.
If the user has not clicked the confirmation button when the countdown ends, it is detected whether a selected but unconfirmed game character exists in the close-up page; if so, that game character is automatically confirmed as the target game character. If not, the default game character is used, or one game character is randomly selected from the game characters as the target game character.
When the user clicks the confirmation button, the second map picture observed when the camera model observes the target game character from the first-person or third-person perspective is determined; the map data corresponding to that second map picture is then obtained from the map data and rendered to obtain the second map picture, and the character data corresponding to the target game character is obtained and rendered to obtain the target game character. Meanwhile, for the target game character, the close-up perspective is switched to the first-person or third-person perspective and a perspective-switching transition animation is played. After it finishes, the second map picture, the target game character seen from the first-person or third-person perspective, and a countdown are displayed.
When the countdown ends, the loading stage of the shooting game is complete, and the UI required by the combat stage is loaded directly onto the page that shows the second map picture and the target game character from the first-person or third-person perspective, so as to enter the combat stage.
Fig. 8 is a schematic structural diagram of a loading apparatus of a virtual environment according to another embodiment of the present application, and as shown in fig. 8, the apparatus of this embodiment may include:
a first processing module 801, configured to acquire corresponding virtual environment scene data based on the selected virtual environment scene identifier;
a presentation module 802, configured to generate a first virtual environment page according to the virtual environment scene data, the first virtual environment page comprising at least one preset virtual object;
a second processing module 803, configured to, when a confirmation instruction for any one of the at least one virtual object is received, acquire object data of that virtual object and display a preset second virtual environment page based on the virtual environment scene data and the object data;
and a determining module, configured to determine, when display of the second virtual environment page finishes, that loading of the virtual environment is complete.
In a preferred embodiment of the present invention, the virtual environment scene data includes preset at least one object coordinate;
the display module includes:
the first determining submodule is used for determining a first virtual environment scene picture observed when observation is carried out under a preset first visual angle based on any target object coordinate in at least one object coordinate; the first visual angle is a visual angle for observing at least two virtual objects in each virtual object in a first virtual environment scene picture;
the first acquisition submodule is used for acquiring first virtual environment scene data corresponding to a first virtual environment scene picture from the virtual environment scene data;
the first rendering submodule is used for rendering the scene data of the first virtual environment to obtain a scene picture of the first virtual environment;
the first display submodule is used for displaying a first virtual environment scene picture in a preset first virtual environment page and displaying at least two virtual objects in the first virtual environment scene picture.
In a preferred embodiment of the present invention, the second processing module includes:
the receiving submodule is used for receiving a selection instruction aiming at any virtual object in at least one virtual object;
the second display sub-module is used for displaying any virtual object in the first virtual environment page at a preset second visual angle; the second visual angle is a front visual angle for independently observing any virtual object;
the receiving submodule is used for receiving a confirmation instruction aiming at any virtual object;
and the second acquisition submodule is used for acquiring the object data of any virtual object.
In a preferred embodiment of the present invention, the second processing module further includes:
the second determining submodule is used for determining a second virtual environment scene picture observed when observation is carried out under a preset third visual angle based on any target object coordinate;
the second obtaining submodule is used for obtaining second virtual environment scene data corresponding to a second virtual environment scene picture from the virtual environment scene data; the third visual angle is the first person visual angle or the third person visual angle;
the second rendering submodule is used for rendering the scene data of the second virtual environment to obtain a scene picture of the second virtual environment and rendering the object data to obtain a virtual object picture of any virtual object;
and the second display submodule is used for displaying a second virtual environment scene picture in the second virtual environment page and displaying a virtual object picture on any object coordinate of the second virtual environment scene picture at a third visual angle.
In a preferred embodiment of the present invention, the determining module is specifically configured to:
and when the display duration of the second virtual environment page reaches a preset display threshold, judging that the display of the second virtual environment page is finished and the loading of the virtual environment is finished.
The loading apparatus of the virtual environment in this embodiment can execute the loading methods of the virtual environment shown in the first and second embodiments of this application; the implementation principles are similar and are not described here again.
In the embodiment of the invention, the application program acquires the corresponding virtual environment scene data based on the selected virtual environment scene identifier and then generates a first virtual environment page according to the virtual environment scene data, where the first virtual environment page contains at least one preset virtual object; when a confirmation instruction for any one of the at least one virtual object is received, the object data of that virtual object is acquired and a preset second virtual environment page is displayed based on the virtual environment scene data and the object data; when the display of the second virtual environment page ends, loading of the virtual environment is judged to be complete. In this way, when a first virtual environment page containing a virtual object is to be displayed, the corresponding virtual environment scene data is acquired directly on the basis of the preset virtual environment scene identifier, the first virtual environment page is constructed from that scene data, a second virtual environment page is then constructed from the scene data and the determined virtual object, and once the second virtual environment page has been displayed, loading of the virtual environment is complete. Because the first virtual environment page and the second virtual environment page are constructed within the same virtual environment, switching from the first page to the second page takes place inside that single environment. This avoids the problem in the prior art that different pages correspond to different scenes, so that switching between pages gives the user a jarring sense of discontinuity; the embodiment therefore provides the user with an immersive experience and improves the user experience.
In another embodiment of the present application, an electronic device is provided, including a memory and a processor, where at least one program is stored in the memory and, when executed by the processor, implements the following: the application program acquires the corresponding virtual environment scene data based on the selected virtual environment scene identifier and then generates a first virtual environment page according to the virtual environment scene data, where the first virtual environment page contains at least one preset virtual object; when a confirmation instruction for any one of the at least one virtual object is received, the object data of that virtual object is acquired and a preset second virtual environment page is displayed based on the virtual environment scene data and the object data; when the display of the second virtual environment page ends, loading of the virtual environment is judged to be complete. In this way, when a first virtual environment page containing a virtual object is to be displayed, the corresponding virtual environment scene data is acquired directly on the basis of the preset virtual environment scene identifier, the first virtual environment page is constructed from that scene data, a second virtual environment page is then constructed from the scene data and the determined virtual object, and once the second virtual environment page has been displayed, loading of the virtual environment is complete. Because the first virtual environment page and the second virtual environment page are constructed within the same virtual environment, switching from the first page to the second page takes place inside that single environment. This avoids the problem in the prior art that different pages correspond to different scenes, so that switching between pages gives the user a jarring sense of discontinuity; the embodiment therefore provides the user with an immersive experience and improves the user experience.
In an alternative embodiment, an electronic device is provided. As shown in Fig. 9, the electronic device 9000 includes a processor 9001 and a memory 9003, which are coupled, for example via a bus 9002. Optionally, the electronic device 9000 may further include a transceiver 9004. Note that in practice the number of transceivers 9004 is not limited to one, and the structure of the electronic device 9000 does not limit the embodiments of the present application.
The processor 9001 may be a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and it may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 9001 may also be a combination of devices implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 9002 may include a pathway to transfer information between the aforementioned components. The bus 9002 may be a PCI bus or an EISA bus, etc. The bus 9002 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
The memory 9003 may be a ROM or other type of static storage device capable of storing static information and instructions, a RAM or other type of dynamic storage device capable of storing information and instructions, an EEPROM, a CD-ROM or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, and the like), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
The memory 9003 is used to store application code for performing aspects of the present application and is controlled by the processor 9001 for execution. The processor 9001 is configured to execute application program code stored in the memory 9003 to implement any of the method embodiments shown above.
The electronic devices include, but are not limited to: mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
Yet another embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the program runs on a computer, the computer is enabled to perform the corresponding content in the foregoing method embodiments. Compared with the prior art, in the embodiment of the invention, the application program acquires the corresponding virtual environment scene data based on the selected virtual environment scene identifier and then generates a first virtual environment page according to the virtual environment scene data, where the first virtual environment page contains at least one preset virtual object; when a confirmation instruction for any one of the at least one virtual object is received, the object data of that virtual object is acquired and a preset second virtual environment page is displayed based on the virtual environment scene data and the object data; when the display of the second virtual environment page ends, loading of the virtual environment is judged to be complete. In this way, when a first virtual environment page containing a virtual object is to be displayed, the corresponding virtual environment scene data is acquired directly on the basis of the preset virtual environment scene identifier, the first virtual environment page is constructed from that scene data, a second virtual environment page is then constructed from the scene data and the determined virtual object, and once the second virtual environment page has been displayed, loading of the virtual environment is complete. Because the first virtual environment page and the second virtual environment page are constructed within the same virtual environment, switching from the first page to the second page takes place inside that single environment. This avoids the problem in the prior art that different pages correspond to different scenes, so that switching between pages gives the user a jarring sense of discontinuity; the embodiment therefore provides the user with an immersive experience and improves the user experience.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict restriction on the order in which these steps are performed, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages; these are not necessarily performed at the same moment and may be performed at different moments, and their execution order is not necessarily sequential, so they may be performed in turns or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and such improvements and refinements shall also fall within the protection scope of the present invention.
Claims (10)
1. A loading method of a virtual environment is characterized by comprising the following steps:
acquiring corresponding virtual environment scene data based on the selected virtual environment scene identification;
generating a first virtual environment page according to the virtual environment scene data; the first virtual environment page comprises at least one preset virtual object;
when a confirmation instruction aiming at any virtual object in the at least one virtual object is received, acquiring object data of the any virtual object, and displaying a preset second virtual environment page based on the virtual environment scene data and the object data;
and when the page display of the second virtual environment is finished, judging that the loading of the virtual environment is finished.
2. The loading method of the virtual environment according to claim 1, wherein the virtual environment scene data includes at least one preset object coordinate;
the generating the first virtual environment page according to the virtual environment scene data includes:
determining a first virtual environment scene picture observed when observation is carried out under a preset first visual angle based on any target object coordinate in the at least one object coordinate; the first visual angle is a visual angle for observing at least two virtual objects in each virtual object in the scene picture of the first virtual environment;
acquiring first virtual environment scene data corresponding to the first virtual environment scene picture from the virtual environment scene data;
rendering the first virtual environment scene data to obtain a first virtual environment scene picture;
and displaying the first virtual environment scene picture in a preset first virtual environment page, and displaying the at least two virtual objects in the first virtual environment scene picture.
3. The loading method of the virtual environment according to claim 1, wherein when receiving a confirmation instruction for any virtual object in the at least one virtual object, acquiring object data of the any virtual object comprises:
receiving a selection instruction for any one of the at least one virtual object;
displaying any virtual object in the first virtual environment page at a preset second visual angle; the second view is a front view for independently observing any virtual object;
and when a confirmation instruction aiming at any virtual object is received, acquiring the object data of the any virtual object.
4. The loading method of the virtual environment according to claim 1 or 2, wherein said displaying a preset second virtual environment page based on the virtual environment scene data and the object data comprises:
determining a second virtual environment scene picture observed when observation is carried out under a preset third visual angle based on any target object coordinate;
acquiring second virtual environment scene data corresponding to the second virtual environment scene picture from the virtual environment scene data; the third visual angle is a first person visual angle or a third person visual angle;
rendering the second virtual environment scene data to obtain a second virtual environment scene picture, and rendering the object data to obtain a virtual object picture of any virtual object;
and displaying the second virtual environment scene picture in the second virtual environment page, and displaying the virtual object picture on any object coordinate of the second virtual environment scene picture at the third view angle.
5. The loading method of the virtual environment according to claim 1, wherein said determining that the loading of the virtual environment is completed when the presentation of the page of the second virtual environment is finished comprises:
and when the display duration of the second virtual environment page reaches a preset display threshold, judging that the display of the second virtual environment page is finished and the loading of the virtual environment is finished.
6. A loading apparatus of a virtual environment, comprising:
the first processing module is used for acquiring corresponding virtual environment scene data based on the selected virtual environment scene identification;
the display module is used for generating a first virtual environment page according to the virtual environment scene data; the first virtual environment page comprises at least one preset virtual object;
the second processing module is used for acquiring object data of any virtual object when a confirmation instruction aiming at any virtual object in the at least one virtual object is received, and displaying a preset second virtual environment page based on the virtual environment scene data and the object data;
and the judging module is used for judging that the loading of the virtual environment is finished when the page display of the second virtual environment is finished.
7. The loading apparatus of the virtual environment according to claim 6, wherein the virtual environment scene data includes at least one preset object coordinate;
the display module comprises:
the first determining submodule is used for determining a first virtual environment scene picture observed when observation is carried out under a preset first visual angle based on any target object coordinate in the at least one object coordinate; the first visual angle is a visual angle for observing at least two virtual objects in each virtual object in the scene picture of the first virtual environment;
a first obtaining sub-module, configured to obtain, from the virtual environment scene data, first virtual environment scene data corresponding to the first virtual environment scene picture;
the first rendering submodule is used for rendering the scene data of the first virtual environment to obtain a scene picture of the first virtual environment;
and the first display sub-module is used for displaying the first virtual environment scene picture in a preset first virtual environment page and displaying the at least two virtual objects in the first virtual environment scene picture.
8. The loading apparatus of the virtual environment according to claim 6, wherein the second processing module comprises:
the receiving submodule is used for receiving a selection instruction aiming at any virtual object in the at least one virtual object;
the second display sub-module is used for displaying any virtual object in the first virtual environment page at a preset second visual angle; the second view is a front view for independently observing any virtual object;
the receiving submodule is used for receiving a confirmation instruction aiming at any virtual object;
and the second acquisition submodule is used for acquiring the object data of any virtual object.
9. An electronic device, comprising:
a processor, a memory, and a bus;
the bus is used for connecting the processor and the memory;
the memory is used for storing operation instructions;
the processor is configured to execute the loading method of the virtual environment according to any one of claims 1 to 5 by calling the operation instruction.
10. A computer storage medium for storing computer instructions which, when run on a computer, cause the computer to perform the method of loading a virtual environment of any of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010989121.9A CN112090071B (en) | 2020-09-18 | 2020-09-18 | Virtual environment loading method and device, electronic equipment and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112090071A CN112090071A (en) | 2020-12-18 |
CN112090071B true CN112090071B (en) | 2022-02-11 |
Family
ID=73760402
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010989121.9A Active CN112090071B (en) | 2020-09-18 | 2020-09-18 | Virtual environment loading method and device, electronic equipment and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112090071B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112957726B (en) * | 2021-02-01 | 2024-05-03 | 北京海天维景科技有限公司 | Interactive control method and device for virtual motion scene |
CN114095686A (en) * | 2021-11-18 | 2022-02-25 | 平安普惠企业管理有限公司 | Virtual image switching method and device, electronic equipment and storage medium |
CN116820290A (en) * | 2022-03-22 | 2023-09-29 | 北京有竹居网络技术有限公司 | Display method, display device, terminal and storage medium for house three-dimensional model |
CN118118643A (en) * | 2024-04-15 | 2024-05-31 | 腾讯科技(深圳)有限公司 | Video data processing method and related device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106991713A (en) * | 2017-04-13 | 2017-07-28 | 网易(杭州)网络有限公司 | Method and apparatus, medium, processor and the terminal of scene in more new game |
CN108717733A (en) * | 2018-06-07 | 2018-10-30 | 腾讯科技(深圳)有限公司 | View angle switch method, equipment and the storage medium of virtual environment |
CN109952757A (en) * | 2017-08-24 | 2019-06-28 | 腾讯科技(深圳)有限公司 | Method, terminal device and storage medium based on virtual reality applications recorded video |
CN111068310A (en) * | 2019-11-21 | 2020-04-28 | 珠海剑心互动娱乐有限公司 | Method and system for realizing seamless loading of game map |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10748342B2 (en) * | 2018-06-19 | 2020-08-18 | Google Llc | Interaction system for augmented reality objects |
2020-09-18: application CN202010989121.9A filed in CN; published as patent CN112090071B (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112090071B (en) | Virtual environment loading method and device, electronic equipment and computer storage medium | |
CN111399639B (en) | Method, device and equipment for controlling motion state in virtual environment and readable medium | |
US11810234B2 (en) | Method and apparatus for processing avatar usage data, device, and storage medium | |
CN108434735B (en) | Virtual article display method, device, electronic device and storage medium | |
CN112090069A (en) | Information prompting method and device in virtual scene, electronic equipment and storage medium | |
JP7477640B2 (en) | Virtual environment screen display method, device, and computer program | |
EP3943175A1 (en) | Information display method and apparatus, and device and storage medium | |
CN111672111A (en) | Interface display method, device, equipment and storage medium | |
JP2022540277A (en) | VIRTUAL OBJECT CONTROL METHOD, APPARATUS, TERMINAL AND COMPUTER PROGRAM | |
CN112891931A (en) | Virtual role selection method, device, equipment and storage medium | |
CN110801629B (en) | Method, device, terminal and medium for displaying virtual object life value prompt graph | |
JP2022552752A (en) | Screen display method and device for virtual environment, and computer device and program | |
CN113633975A (en) | Virtual environment picture display method, device, terminal and storage medium | |
CN111651616B (en) | Multimedia resource generation method, device, equipment and medium | |
CN111599292A (en) | Historical scene presenting method and device, electronic equipment and storage medium | |
WO2023071808A1 (en) | Virtual scene-based graphic display method and apparatus, device, and medium | |
CN114612553B (en) | Control method and device for virtual object, computer equipment and storage medium | |
CN114307150B (en) | Method, device, equipment, medium and program product for interaction between virtual objects | |
CN112169321B (en) | Mode determination method, device, equipment and readable storage medium | |
CN115671734B (en) | Virtual object control method and device, electronic equipment and storage medium | |
Quek et al. | Obscura: A mobile game with camera based mechanics | |
CN111760283B (en) | Skill distribution method and device for virtual object, terminal and readable storage medium | |
CN114146413B (en) | Virtual object control method, device, equipment, storage medium and program product | |
CN117753004A (en) | Message display method, device, equipment, medium and program product | |
CN117753007A (en) | Interactive processing method and device for virtual scene, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||