WO2021258994A1 - Method, apparatus, device, and storage medium for displaying a virtual scene - Google Patents

Method, apparatus, device, and storage medium for displaying a virtual scene

Info

Publication number
WO2021258994A1
WO2021258994A1 (PCT/CN2021/096717)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
scene
texture map
rendering texture
camera
Prior art date
Application number
PCT/CN2021/096717
Other languages
English (en)
French (fr)
Inventor
周靖奇
岳宗元
林雪莹
陈楚
季明阳
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd. (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to EP21830213.1A (published as EP4070865A4)
Priority to KR1020227018077A (published as KR20220083839A)
Priority to JP2022554376A (published as JP2023517917A)
Publication of WO2021258994A1
Priority to US17/733,819 (published as US20220249949A1)

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A63F13/5252 Changing parameters of virtual cameras using two or more virtual cameras concurrently or sequentially, e.g. automatically switching between fixed virtual cameras when a character changes room or displaying a rear-mirror view in a car-driving game
    • A63F13/5255 Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/822 Strategy games; Role-playing games
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50 Features of games characterized by details of game servers
    • A63F2300/53 Features of games characterized by details of game servers, details of basic data processing
    • A63F2300/538 Features of games characterized by details of game servers, details of basic data processing for performing operations on behalf of the game client, e.g. rendering

Definitions

  • This application relates to the field of computer technology, and in particular to a method, apparatus, device, and storage medium for displaying a virtual scene.
  • Displaying game progress in the form of a scroll is a common form of expression.
  • Different textures of the virtual picture scroll are used to display different levels of the corresponding game application.
  • The textures are arranged in sequence, and when a texture receives an interactive operation triggered by the player, the corresponding game level can be entered.
  • In the related art, the virtual picture scroll is obtained by splicing multiple preset static texture images, with each static texture image arranged horizontally in the virtual environment.
  • The embodiments of the present application provide a method, apparatus, device, and storage medium for displaying a virtual scene, which can increase the richness of the content displayed in the virtual scene and enrich its display forms.
  • the technical solution is as follows:
  • an embodiment of the present application provides a method for displaying a virtual scene, which is executed by a terminal, and the method includes:
  • Drawing a first scene picture in a first rendering texture map, the first scene picture being obtained by shooting the virtual scene with a first virtual camera, where the first virtual camera is located at a first position in the virtual scene;
  • In response to receiving an activity advance signal, a second scene picture is drawn in a second rendering texture map.
  • The second scene picture is obtained by shooting the virtual scene with a second virtual camera, where the second virtual camera is located at a second position in the virtual scene, and the second position is different from the first position;
  • an embodiment of the present application provides a virtual scene display device, the device including:
  • the first drawing module is configured to draw a first scene picture in a first rendering texture map, the first scene picture being obtained by shooting the virtual scene with a first virtual camera, and the first virtual camera being located at a first position in the virtual scene;
  • a texture display module configured to display the first rendering texture map in a virtual scene, where the virtual scene is used for activity display;
  • the second drawing module is configured to draw a second scene picture in the second rendering texture map in response to receiving the activity advance signal, and the second scene picture is obtained by shooting the virtual scene by a second virtual camera, the A second virtual camera is located at a second position in the virtual scene, and the second position is different from the first position;
  • the texture replacement module is configured to replace the first rendering texture map in the virtual scene with the second rendering texture map; wherein, when the virtual objects within the viewing range corresponding to a virtual camera in the virtual scene change, the scene picture drawn in the rendering texture map changes.
  • A computer device comprising a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set that is loaded and executed by the processor to implement the method for displaying the virtual scene described in the above aspect.
  • A computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for displaying the virtual scene described in the above aspect.
  • a computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the virtual scene display method provided in the foregoing various implementation manners.
  • Fig. 1 shows a structural block diagram of a computer system provided by an exemplary embodiment of the present application.
  • Fig. 2 shows a flowchart of a method for displaying a virtual scene provided by an exemplary embodiment of the present application.
  • Fig. 3 is a schematic diagram showing the principle of generating a rendering texture map according to an exemplary embodiment of the present application.
  • Fig. 4 is a schematic diagram of a first rendering texture map generation interface shown in an exemplary embodiment of the present application.
  • Fig. 5 is a schematic diagram of a second rendering texture map generation interface shown in an exemplary embodiment of the present application.
  • Fig. 6 is a schematic diagram of a rendering texture map replacement interface shown in an exemplary embodiment of the present application.
  • Fig. 7 shows a flowchart of a method for displaying a virtual scene provided by another exemplary embodiment of the present application.
  • Fig. 8 is a schematic diagram showing the principle of movement of a virtual camera according to an exemplary embodiment of the present application.
  • Fig. 9 is a schematic diagram of an interface for moving a virtual camera according to an exemplary embodiment of the present application.
  • Fig. 10 is a schematic diagram showing the principle of a virtual camera moving along different lens movement trajectories according to an exemplary embodiment of the present application.
  • Fig. 11 shows a flowchart of a method for displaying a virtual scene provided by another exemplary embodiment of the present application.
  • Fig. 12 is a schematic diagram showing an interface for responding to interactive operations according to an exemplary embodiment of the present application.
  • Fig. 13 is a structural block diagram of a display device for a virtual scene provided by an exemplary embodiment of the present application.
  • Fig. 14 shows a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
  • Virtual environment: the virtual environment displayed (or provided) when the application is running on the terminal.
  • the virtual environment may be a simulation environment of the real world, a semi-simulation and semi-fictional environment, or a purely fictitious environment.
  • the virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application.
  • the virtual environment is a three-dimensional virtual environment as an example.
  • Virtual object: a movable object in the virtual environment.
  • The movable objects may be virtual characters, virtual animals, cartoon characters, and the like, such as characters, animals, plants, oil barrels, walls, and stones displayed in a three-dimensional virtual environment.
  • Optionally, the virtual object is a three-dimensional model created based on skeletal animation technology. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies part of the space in the three-dimensional virtual environment.
  • Picture scroll: a classic Chinese element and symbol.
  • In the embodiments of this application, the picture scroll element is applied to express the progress of game levels; in the related art, the virtual picture scroll is formed by splicing multiple static stickers.
  • Rendering texture map: a texture map that can be created and updated while the game is running; in the embodiments of this application it is used for the display of the virtual picture scroll.
  • The virtual picture scroll displays at least one rendering texture map and supports replacement of and switching between different rendering texture maps.
  • Fig. 1 shows a structural block diagram of a computer system provided by an exemplary embodiment of the present application.
  • the computer system 100 includes a terminal 110 and a server cluster 120.
  • the terminal 110 installs and runs a client supporting a virtual environment.
  • the terminal 110 displays an application program interface of the client on the screen.
  • the client can be an online application or an offline application.
  • In this application, a client that supports virtual picture scroll display in a virtual environment is taken as an example.
  • the terminal 110 can display the current game progress or all game levels through a virtual picture scroll.
  • The virtual scene displayed by the virtual picture scroll is a three-dimensional (3D) scene.
  • the 3D scene can be composed of multiple layers of 3D pictures, such as a foreground picture and a background picture.
  • The virtual object in the virtual environment forms the foreground picture, the scene outside the virtual object forms the background picture, and moving the foreground picture can simulate the effect of the virtual object moving in the background picture.
  • the device type of the terminal 110 includes at least one of a smart phone, a tablet computer, an e-book reader, a laptop portable computer, and a desktop computer.
  • Only one terminal 110 is shown in FIG. 1, but in different embodiments there are multiple other terminals 130 that can access the server cluster 120. In some embodiments, there is also at least one terminal 130 corresponding to the developer.
  • a development and editing platform for the client of the virtual environment is installed on the terminal 130, and the developer can edit and update the client on the terminal 130.
  • the updated client installation package is transmitted to the server cluster 120 via a wired or wireless network, and the terminal 110 can download the client installation package from the server cluster 120 to update the client.
  • the terminal 110 and other terminals 130 are connected to the server cluster 120 through a wireless network or a wired network.
  • the server cluster 120 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center.
  • the server cluster 120 is used to provide background services for clients supporting the virtual environment.
  • Optionally, the server cluster 120 is responsible for the main calculation work and the terminal for the secondary calculation work; or the server cluster 120 is responsible for the secondary calculation work and the terminal for the main calculation work; or the server cluster 120 and each terminal adopt a distributed computing architecture for collaborative calculation.
  • the aforementioned terminals and servers are all computer equipment.
  • the server cluster 120 includes a server 121 and a server 122.
  • The server 121 includes a processor 123, a user account database 124, and a user-oriented input/output interface (I/O interface) 126;
  • the server 122 includes a virtual picture scroll processing module 125.
  • the processor 123 is used to load instructions stored in the server 121, and to process data in the user account database 124 and the virtual scroll processing module 125;
  • the user account database 124 is used to store data of the user accounts used by the terminal 110 and the other terminals 130, such as the avatar of the user account, the nickname of the user account, the historical record of the user account's interactive operations on the virtual picture scroll, and the service area where the user account is located;
  • the virtual picture scroll processing module 125 is used to control the display of the virtual picture scroll and to switch and replace dynamic textures according to received interactive operations;
  • the user-oriented I/O interface 126 is used to establish communication and exchange data with the terminal 110 via a wireless network or a wired network.
  • The following describes the method for displaying the virtual picture scroll in the virtual environment provided by the embodiments of the present application, taking the terminal 110 shown in FIG. 1 as the execution subject of the method.
  • the terminal 110 runs an application program, and the application program is a program that supports a virtual environment.
  • FIG. 2 shows a flowchart of a method for displaying a virtual scene provided by an exemplary embodiment of the present application.
  • the method can be applied to the terminal 110 in the computer system as shown in FIG. 1.
  • the method includes the following steps:
  • Step 201 Draw a first scene picture in a first rendering texture map, the first scene picture is obtained by shooting a virtual scene by a first virtual camera, and the first virtual camera is located at a first position in the virtual scene.
  • In the related art, the virtual picture scroll is formed by splicing several static textures, where the static textures are preset textures that do not change with changes in the virtual scene.
  • In the embodiment of the present application, a first rendering texture map is obtained to display the virtual picture scroll.
  • a first scene picture is drawn in the first rendering texture map
  • the first scene picture is obtained by shooting a virtual scene by a first virtual camera
  • The first virtual camera is located at a first position in the virtual scene. Therefore, when the virtual scene or a virtual object in the virtual scene changes, the change can be tracked by the first virtual camera, and the captured first scene picture also changes dynamically, unlike the static virtual picture scroll display in the related art.
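  • The dynamic tracking described above can be sketched in code. The following is a minimal, hypothetical Python model; the patent names no engine or language, and `VirtualCamera`, `RenderTexture`, and the one-dimensional viewing range are all illustrative simplifications:

```python
class VirtualCamera:
    """A virtual camera placed at a fixed position in the virtual scene."""
    def __init__(self, position):
        self.position = position

    def shoot(self, scene_objects, view_range=5.0):
        """Return the scene picture: the objects inside this camera's
        viewing range (a 1-D stand-in for a camera frustum)."""
        return [o for o in scene_objects
                if abs(o["x"] - self.position) <= view_range]


class RenderTexture:
    """A texture map that can be created and updated at runtime."""
    def __init__(self):
        self.scene_picture = []

    def draw(self, scene_picture):
        self.scene_picture = scene_picture


# Step 201: the first virtual camera at the first position shoots the
# scene, and the resulting picture is drawn into the first texture map.
scene = [{"name": "girl", "x": 1.0}, {"name": "tree", "x": 12.0}]
first_camera = VirtualCamera(position=0.0)
first_texture = RenderTexture()
first_texture.draw(first_camera.shoot(scene))

# When a virtual object inside the viewing range changes, re-shooting
# updates the drawn picture dynamically, unlike a preset static texture.
scene[0]["x"] = 3.0
first_texture.draw(first_camera.shoot(scene))
```

  • Re-drawing after each scene change is what distinguishes this from the spliced static textures of the related art: the texture content follows the camera's view rather than being fixed at build time.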
  • A shooting angle of view is used when the first virtual camera shoots the virtual scene.
  • The shooting angle of view may be a preset angle of view, or it may simulate the observation angle of a user observing the virtual picture scroll in the virtual environment.
  • Optionally, the shooting angle of view of the virtual camera is the angle at which the virtual scene is observed from a fixed position in the virtual environment.
  • The first virtual camera is located at the first position in the virtual scene, and the first scene picture is obtained by shooting the virtual scene from the shooting angle of view at that position.
  • the shaded area in the virtual scroll is the scene screen displayed by the terminal.
  • The first virtual camera 410 is first set to shoot the virtual scene to obtain the first scene picture,
  • the first scene image is drawn in the first rendering texture map 411.
  • The first virtual camera 410 is set at the first position with the corresponding shooting angle of view.
  • Step 202 Display the first rendering texture map in the virtual scene, and the virtual scene is used for activity display.
  • the virtual scene screen displayed by the terminal is the first rendering texture map, where the virtual scene is used for activity display.
  • The virtual picture scroll screen displayed by the terminal is the first rendering texture map, and the virtual picture scroll is used for level display.
  • interactive controls are also displayed in the display screen of the virtual picture scroll, and the interactive controls are used to advance the current game progress or perform interactive operations on the current display picture of the virtual picture scroll.
  • The first rendering texture map 411 displays the scene picture as shown in the figure. Further, the user can slide the current scene picture, and the terminal generates a level advance signal according to the sliding operation.
  • the virtual scene is a three-dimensional 3D scene.
  • the 3D scene may be composed of multiple layers of 3D pictures, such as a foreground picture and a background picture.
  • the virtual object presented as the girl is the foreground picture,
  • the scene outside the virtual object is the background picture.
  • By moving the foreground picture, the effect of the virtual object moving in the background picture can be simulated; or, as shown in Figure 3(b), the virtual object in the virtual scene is a three-dimensional virtual object, and the effect can be achieved by controlling the three-dimensional virtual object to walk in the virtual scene.
  • Step 203: In response to receiving the activity advance signal, draw a second scene picture in the second rendering texture map, the second scene picture being obtained by shooting the virtual scene with the second virtual camera, where the second virtual camera is located at a second position in the virtual scene, and the second position is different from the first position.
  • the activity advance signal may be a level advance signal, and the image of the virtual scene may be displayed through a virtual scroll.
  • When the level advancing conditions are met, such as completing the game task or decryption task corresponding to the current scene picture,
  • the level advance signal is triggered, and accordingly the player can unlock the next scene picture in the virtual picture scroll.
  • The terminal generates the level advance signal according to the trigger operation, and upon receiving the level advance signal, draws the second scene picture in the second rendering texture map.
  • the second scene picture is obtained by shooting the virtual scene by the second virtual camera.
  • the second virtual camera is located at the second position in the virtual scene.
  • The virtual scene is preset. Owing to the limited display range of the terminal, the complete virtual scene cannot be displayed through the first virtual camera alone. Therefore, the above first scene picture and second scene picture are different scene pictures of the virtual scene, and the second position is different from the first position.
  • The second scene picture is drawn in the second rendering texture map 421, and the second scene picture is different from the first scene picture.
  • the second virtual camera 420 shoots the virtual object in the virtual scene at the shooting angle of the second position. After the shooting is completed, the current scene image displayed by the virtual scroll is the second rendering texture map 421 (that is, the shadow part).
  • the second rendering texture map 421 displays the scene picture as shown in the figure. After unlocking the current level, the player advances to the opening of the next level.
  • the terminal opens a new virtual camera, the second virtual camera, and draws the second scene image taken by the second virtual camera on the second rendering texture map 421.
  • the second rendering texture map 421 can be displayed on one side of the previous rendering texture map (ie, the first rendering texture map 411).
  • Step 204 Replace the first rendering texture map in the virtual scene with the second rendering texture map.
  • During the replacement, the virtual picture scroll screen displayed by the terminal transitions from the first rendering texture map to the second rendering texture map.
  • Optionally, the player can perform a sliding operation on the virtual picture scroll. For example, when the current scene picture is slid to the left, the scene displayed in the player's view changes dynamically during the sliding operation.
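  • Steps 203 and 204 together amount to a draw-then-swap operation. The following Python sketch is a hypothetical simplification (the `VirtualScroll` class and texture names are illustrative, not from the patent):

```python
class VirtualScroll:
    """The virtual picture scroll displays exactly one rendering
    texture map at a time (the shaded area shown to the player)."""
    def __init__(self, texture):
        self.displayed = texture

    def on_activity_advance(self, second_texture):
        """Steps 203/204: on receiving the activity advance signal, the
        newly drawn second rendering texture map replaces the first."""
        previous = self.displayed
        self.displayed = second_texture
        return previous


scroll = VirtualScroll("first_rendering_texture_411")
replaced = scroll.on_activity_advance("second_rendering_texture_421")
# The scroll now shows the second texture; the first is returned so its
# resources can be released, or kept to support scrolling back.
```

  • In a real engine the swap would be a transition (e.g. driven by the sliding operation) rather than an instant assignment; the sketch only captures the replacement step itself.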
  • FIG. 3 is a schematic diagram of the principle of rendering texture map generation,
  • and FIGS. 4 to 6 are schematic interface diagrams of rendering texture map generation in a visible virtual environment.
  • When the virtual objects within the viewing range corresponding to the virtual camera change, the scene picture drawn in the rendering texture map changes.
  • the virtual objects in the virtual scene include multiple display pictures, and different display pictures are triggered by different interactive operations. Therefore, when the background pictures remain unchanged, the screen changes of the virtual objects can be triggered by corresponding interactive operations.
  • the scene picture taken by the virtual camera will also change, that is, the scene picture drawn in the rendered texture map will change.
  • In summary, the first scene picture drawn in the first rendering texture map is used for display.
  • Unlike the static map display in the related art, the first scene picture is captured by the first virtual camera corresponding to the texture, which enables dynamic tracking of the scene picture. Further, based on the level display function of the virtual picture scroll, when the level advance signal is received, a second virtual camera is newly created, the captured second scene picture is drawn in the second rendering texture map, and the first rendering texture map in the virtual picture scroll is then replaced with the second rendering texture map. In addition, when the virtual objects in the viewfinder range corresponding to a virtual camera in the virtual scene change, the scene picture drawn in the rendering texture map also changes.
  • the solution provided in the embodiment of the present application can increase the content richness of the pictures displayed in the virtual picture scroll and enrich the display form.
  • FIG. 7 shows a flowchart of a method for displaying a virtual scene provided by an exemplary embodiment of the present application.
  • the method can be applied to the terminal 110 in the computer system as shown in FIG. 1.
  • the method includes the following steps:
  • Step 701 Draw a first scene picture in a first rendering texture map, the first scene picture is obtained by shooting a virtual scene by a first virtual camera, and the first virtual camera is located at a first position in the virtual scene.
  • For the implementation of this step, please refer to step 201; details are not repeated here in the embodiment of the present application.
  • Step 702 Display the first rendering texture map in the virtual scene, and the virtual scene is used for activity display.
  • For the implementation of this step, please refer to step 202; details are not repeated here in the embodiment of the present application.
  • Step 703: In response to receiving a viewfinder adjustment operation on the first rendering texture map, with the adjustment parameter indicated by the viewfinder adjustment operation being within the adjustment range, adjust the viewfinder parameters of the first virtual camera according to the adjustment parameter, where the adjustment parameter includes at least one of a viewfinder angle adjustment parameter or a viewfinder distance adjustment parameter.
  • each rendering texture map is set with adjustment parameters related to viewfinder, and the adjustment parameters include at least one of a viewfinder angle adjustment parameter or a viewfinder distance adjustment parameter.
  • If the adjustment parameter is outside the adjustment range, the framing parameters at the first position remain fixed;
  • otherwise, the viewfinder parameters of the first virtual camera are adjusted according to the adjustment parameter.
  • Step 704 Obtain the number of cameras that have created virtual cameras.
  • Step 705: In response to the number of cameras being greater than the number threshold, clear at least one created virtual camera and the rendering texture map corresponding to that virtual camera, according to the creation sequence of the created virtual cameras.
  • Since a new virtual camera and rendering texture map need to be created for each level advancement, too many of them would cause excessive memory usage and affect the smooth progress of the game. Therefore, a judging mechanism for the number of cameras is added: when the number of virtual cameras is large, the terminal deletes the virtual cameras created earliest, reducing the memory usage of the game application.
  • the number threshold is set to 8.
  • The terminal obtains the creation time of each created virtual camera, sorts the virtual cameras from earliest to latest by creation time to obtain a creation sequence table, and clears the first two created virtual cameras in the table. Further, after the elimination, if a new virtual camera is created, its information is directly recorded in the last position of the creation sequence table, so that the next time the number of cameras reaches the number threshold, there is no need to re-acquire the creation times of all the created virtual cameras, and the two virtual cameras ranked first in the creation sequence table can be deleted directly.
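  • The creation sequence table and eviction policy described above can be sketched with a queue. The threshold of 8 and the eviction of two cameras come from the text; the class and the exact moment of eviction (just before a create that would exceed the threshold) are illustrative choices:

```python
from collections import deque

NUMBER_THRESHOLD = 8  # the example number threshold given in the text


class CameraPool:
    """Maintains the creation sequence table of virtual cameras (each
    bound to its rendering texture map), earliest-created first."""

    def __init__(self, threshold=NUMBER_THRESHOLD, evict_count=2):
        self.threshold = threshold
        self.evict_count = evict_count
        self.sequence = deque()  # creation sequence table

    def create(self, camera_id):
        # Steps 704/705: when creating a camera would push the count
        # past the threshold, clear the earliest-created cameras first.
        if len(self.sequence) >= self.threshold:
            for _ in range(self.evict_count):
                self.sequence.popleft()  # drop camera and its texture map
        # A newly created camera is recorded at the end of the table, so
        # later evictions never need to re-sort by creation time.
        self.sequence.append(camera_id)


pool = CameraPool()
for level in range(9):          # nine level advances create nine cameras
    pool.create(f"camera_{level}")
```

  • After the ninth creation, `camera_0` and `camera_1` have been evicted and `camera_8` sits at the end of the table, exactly the append-at-the-tail behavior the text describes.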
  • Step 706 in response to receiving the activity advance signal, create a second virtual camera at the first position and create a second rendering texture map, and the second virtual camera is bound to the second rendering texture map.
  • the second virtual camera is created at the position of the previous virtual camera, that is, the second virtual camera is created at the first position.
  • the received activity advancing signal may be a level advancing signal.
  • the first virtual camera 410 is located at the first position as shown in the figure.
  • A second virtual camera 420 (not shown in the figure) is created at the first position, and the second virtual camera 420 then moves from the first position to the second position shown in the figure.
  • Step 707 Control the second virtual camera to move from the first position to the second position.
  • In one implementation, the terminal queries the second position from a camera position list according to the first position, where the camera position list includes the position information of the virtual cameras in different levels; further, the terminal controls the second virtual camera to move from the first position to the second position found, and the second position corresponds to the camera position of the next level.
  • In one implementation, the terminal queries a target lens movement trajectory between the first position and the second position from a lens movement trajectory list, and controls the second virtual camera to move from the first position to the second position along the target trajectory, where the trajectory list includes the lens movement trajectory information for the virtual camera moving between different positions.
  • This enables the terminal to move the second virtual camera quickly, and the preset lens movement trajectory further improves the natural transition effect when the rendering texture map is replaced.
  • The lens movement trajectory can be a straight trajectory, a curved trajectory, or another preset trajectory; game developers can preset the trajectories between different camera positions according to the requirements of the screen-switching effect.
  • In one example, the lens movement trajectory is a straight trajectory. FIG. 9 shows a camera-movement scene in which the second virtual camera moves from the first position to the second position: in response to receiving the level advance signal, the terminal creates a second virtual camera at the first position of the first virtual camera and controls it to move from the first position to the second position along the trajectory indicated by the arrow in the figure.
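A straight lens movement trajectory is just a linear interpolation between the two camera positions; a sketch (coordinates illustrative):

```python
def lerp(a, b, t):
    """Linear interpolation between two 3-D points at parameter t in [0, 1]."""
    return tuple(pa + (pb - pa) * t for pa, pb in zip(a, b))

def straight_trajectory(start, end, steps):
    """Positions the camera passes through on a straight trajectory,
    endpoints included."""
    return [lerp(start, end, i / steps) for i in range(steps + 1)]
```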
  • In another example, the lens movement trajectory list contains at least two types of trajectories according to the screen-switching requirements. As shown in FIG. 10, when multiple virtual cameras have been created, different trajectories achieve different screen-switching effects, improving the visual display.
  • For example, the trajectory between the first virtual camera 410 and the second virtual camera 420 is a straight trajectory, the trajectory between the second virtual camera 420 and the third virtual camera 430 is a polyline trajectory, and the trajectory between the third virtual camera 430 and the fourth virtual camera 440 is again a straight trajectory.
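One way to model such a lens movement trajectory list is a table keyed by the (from, to) camera pair, where a straight trajectory stores no intermediate waypoint and a polyline stores one or more. The reference numerals follow Figure 10; the coordinates are made up for illustration:

```python
# Hypothetical trajectory list keyed by (from_camera, to_camera). Each entry
# holds the trajectory's intermediate waypoints: an empty list means a
# straight trajectory, one or more waypoints form a polyline.
TRAJECTORIES = {
    (410, 420): [],                     # straight
    (420, 430): [(15.0, 6.0, -5.0)],    # polyline with one bend
    (430, 440): [],                     # straight again
}

def full_path(start, end, from_cam, to_cam):
    """Complete waypoint sequence for the move, endpoints included."""
    return [start] + TRAJECTORIES[(from_cam, to_cam)] + [end]
```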
  • The scene picture captured by the first virtual camera 410 is the first rendering texture map 411, the picture captured by the second virtual camera 420 is the second rendering texture map 421, the picture captured by the third virtual camera 430 is the third rendering texture map 431, and the picture captured by the fourth virtual camera 440 is the fourth rendering texture map 441.
  • Step 708 Draw a second scene image in the second rendering texture map according to the image captured by the second virtual camera.
  • Further, after the second virtual camera has moved to the second position, the terminal draws the second scene picture in the second rendering texture map according to the picture captured by the second virtual camera.
  • Step 709: Stitch the second rendering texture map and the first rendering texture map horizontally.
  • Step 710 Replace the first rendering texture map in the virtual scene with the second rendering texture map.
  • For this step, please refer to step 204; details are not repeated in this embodiment of the present application.
  • Step 711 In response to receiving the lateral sliding operation on the virtual scene, switch the second rendering texture map in the virtual picture scroll to the first rendering texture map.
  • step 709 is further included after step 708, and step 711 is further included after step 710.
  • Although the terminal interface displays only the scene picture corresponding to a single rendering texture map, each rendering texture map is part of the virtual scene. In one example, the picture of the virtual scene can be displayed through a virtual scroll. Therefore, after the second scene picture is drawn in the second rendering texture map according to the picture captured by the second virtual camera, the second rendering texture map and the first rendering texture map can be stitched horizontally to implement an example of step 711: after a horizontal sliding operation on the virtual scroll is received, the second rendering texture map in the virtual scroll is switched back to the first rendering texture map. The horizontal sliding operation may be a leftward or a rightward sliding operation.
  • As shown in FIG. 5 and FIG. 6, FIG. 5 may be a schematic diagram of the horizontal stitching of the second rendering texture map and the first rendering texture map, and FIG. 6 may be a schematic diagram of the process of switching rendering texture maps according to a horizontal sliding operation on the virtual scroll.
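Treating each rendering texture map as a grid of texels, the horizontal stitching and the sliding switch can be sketched as follows (a toy model; real texture maps live on the GPU):

```python
def stitch_horizontal(left, right):
    """Concatenate two equally tall texture maps row by row (left | right)."""
    assert len(left) == len(right), "texture maps must have the same height"
    return [lrow + rrow for lrow, rrow in zip(left, right)]

def visible_region(stitched, offset, width):
    """Columns currently in view after sliding the scroll by `offset`."""
    return [row[offset:offset + width] for row in stitched]
```

Sliding the scroll then just changes `offset`: offset 0 shows the first map, and an offset equal to the first map's width shows the second, which models the switch between the two maps.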
  • In this embodiment of the present application, when the level advance signal is received, a second virtual camera is created at the first position, and moving the second virtual camera from the first position to the second position achieves a natural transition effect when the first rendering texture map is replaced with the second rendering texture map.
  • In this embodiment, for different levels or game progress, the terminal presets a camera position list and a lens movement trajectory list to control scene switching effectively; because the trajectory list contains more than a single trajectory, the camera-movement effect when advancing through different levels is improved.
  • a judgment mechanism regarding the number of cameras is also provided, so that when the number of cameras is large, the terminal can delete the virtual camera created earlier, so as to reduce the memory usage of the game application.
  • In this embodiment, different rendering texture maps can be stitched to achieve the display effect of the virtual scroll, and in response to a user-triggered horizontal sliding operation on the scroll, the second rendering texture map in the virtual scroll can be switched back to the first rendering texture map, making the transition of the screen switch more natural.
  • each rendering texture map is set with adjustment parameters related to framing, and the adjustment parameters include at least one of a framing angle adjustment parameter or a framing distance adjustment parameter, so as to realize the user-defined adjustment of the virtual scroll display screen and improve The user's operability and visual experience of the virtual picture scroll.
  • the virtual environment is a 3D scene, that is, the 3D scene is composed of multiple layers of 3D pictures, including foreground pictures and background pictures. If the foreground picture is a two-dimensional picture, it can be set that when the two-dimensional picture is interactively operated, the virtual environment transforms the two-dimensional picture in response to the interactive operation.
  • FIG. 11 shows a flowchart of a method for displaying a virtual scene provided by another exemplary embodiment of the present application. The method can be applied to the terminal 110 in the computer system shown in FIG. 1 and further includes the following steps after step 202 or step 702:
  • Step 1101 In response to receiving an interactive operation on the first rendering texture map, determine an interactive virtual object corresponding to the interactive operation, and the interactive virtual object belongs to a virtual object in the virtual scene.
  • Step 1101 includes the following contents one to three.
  • Content 1 Determine the first interactive coordinate of the interactive operation in the first rendered texture map.
  • the first interaction coordinates are two-dimensional coordinates, that is, the user triggers the two-dimensional screen of the terminal display interface, and the triggered position can determine the two-dimensional coordinates corresponding to the interactive operation.
  • the first interaction coordinate is the coordinate of the touch point.
  • Content 2: Map the first interaction coordinate to the second interaction coordinate in the virtual scene according to the viewing angle of the first virtual camera.
  • Further, different virtual objects in the virtual scene correspond to different three-dimensional coordinates; therefore, the terminal can determine the three-dimensional coordinate corresponding to the interactive operation according to the first interaction coordinate and the viewing angle of the first virtual camera, that is, map the first interaction coordinate to the second interaction coordinate in the virtual scene.
  • In some embodiments, according to the first interaction coordinate and the viewing angle, the terminal determines a ray starting at the first interaction coordinate and pointing in the direction of the viewing angle, and determines the coordinate of the intersection of the ray with a virtual object in the virtual scene as the second interaction coordinate.
  • Content 3 Determine the virtual object located at the second interactive coordinate in the virtual scene as the interactive virtual object.
  • the terminal maps the two-dimensional coordinates (ie, the first interaction coordinates) under the interactive operation to the three-dimensional coordinates (ie, the second interaction coordinates) according to the viewing angle of the first virtual camera, thereby determining the interactive virtual object.
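The two-dimensional-to-three-dimensional mapping described in contents one to three amounts to casting a ray from the touch point along the viewing direction and taking the nearest object it hits. A sketch, with scene objects approximated as spheres (names and shapes are illustrative assumptions):

```python
import math

def pick_object(origin, direction, objects):
    """Cast a ray from the touch point (lifted into the scene at `origin`)
    along the camera's viewing direction; return the nearest object hit.

    `objects` maps a name to (center, radius); a hit is a ray/sphere
    intersection, standing in for the scene's real colliders.
    """
    norm = math.sqrt(sum(c * c for c in direction))
    d = tuple(c / norm for c in direction)        # normalized direction
    best = None
    for name, (center, radius) in objects.items():
        oc = tuple(c - o for c, o in zip(center, origin))
        t = sum(a * b for a, b in zip(oc, d))     # projection onto the ray
        if t < 0:
            continue                              # object is behind the camera
        closest = tuple(o + t * dc for o, dc in zip(origin, d))
        dist2 = sum((c - p) ** 2 for c, p in zip(center, closest))
        if dist2 <= radius ** 2 and (best is None or t < best[1]):
            best = (name, t)                      # nearest hit so far
    return best[0] if best else None
```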
  • Step 1102 Control the interactive virtual object to respond to the interactive operation, where the response screen of the interactive virtual object to the interactive operation is displayed in the first rendering texture map.
  • the terminal controls the interactive virtual object to perform interactive response according to the operation type of the interactive operation. For example, when the interactive operation is a drag operation, the interactive virtual object is controlled to move in the virtual scene; when the interactive operation is a drawing operation, it is controlled to generate virtual props corresponding to the drawing operation in the virtual scene.
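The operation-type dispatch described above can be sketched as follows; only the two operation types named in the text are modeled (drag moves the object, draw generates a prop), and the dict-based object is an illustrative stand-in for a real scene object:

```python
def respond(op_type, obj, payload):
    """Dispatch an interactive operation to the matching response."""
    if op_type == "drag":
        obj["position"] = payload                    # move the object in the scene
        return obj
    if op_type == "draw":
        return {"kind": "prop", "shape": payload}    # spawn a matching virtual prop
    raise ValueError(f"unsupported operation type: {op_type}")
```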
  • the virtual objects included in the virtual environment are divided into interactive objects and non-interactive objects.
  • each interactive object is preset with at least one interactive special effect or interactive picture.
  • the interactive object 1 corresponds to 3 kinds of interactive special effects, and various interactive special effects are triggered by different interactive operations.
  • For example, clicking interactive object 1 triggers the display of interactive special effect A, a long press on interactive object 1 triggers special effect B, and a double-click triggers special effect C; alternatively, interactive object 1 is provided with at least one interactive control, and different special effects are displayed by operating different controls.
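The per-object mapping from trigger gesture to special effect can be modeled as a nested table; the gestures and effects A/B/C follow the example of interactive object 1, while the key names are illustrative:

```python
# Gesture-to-effect table for each interactive object.
EFFECTS = {
    "interactive_object_1": {
        "click": "effect_A",
        "long_press": "effect_B",
        "double_click": "effect_C",
    },
}

def triggered_effect(obj, gesture):
    """Effect to display for a gesture, or None for unrecognized input."""
    return EFFECTS.get(obj, {}).get(gesture)
```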
  • As shown in FIG. 12, the interactive virtual object is the little girl in the first rendering texture map 411, and she is an interactive object; further, the interactive virtual object is provided with an interactive control 1210 for performing interactive operations on her in the first rendering texture map 411. For example, triggering the middle control in the interactive control 1210 changes the shape of the little girl (the interactive virtual object) in the first rendering texture map 411.
  • In this embodiment of the present application, considering that the user can interact with virtual objects when the virtual environment is a 3D scene, the terminal can control the interactive virtual objects to respond to the interactive operations. Compared with the static-map display method in the related art, this embodiment not only achieves the natural transition when converting or switching between different rendering texture maps in the above embodiments, but can also respond to the user's interactive operations on virtual objects, that is, achieve dynamic changes of the scene picture without switching or replacing the rendering texture map, further improving the user's visual experience and enriching the game operation content.
  • Fig. 13 is a structural block diagram of a virtual scene display device provided by an exemplary embodiment of the present application.
  • the device includes:
  • the first drawing module 1301 is configured to draw a first scene picture in a first rendering texture map, the first scene picture being obtained by shooting a virtual scene by a first virtual camera, and the first virtual camera is located in the virtual scene First position in
  • the texture display module 1302 is configured to display the first rendered texture map in a virtual scene, and the virtual scene is used for active display; in one example, the virtual scene can be displayed in the form of a virtual scroll, where the virtual The picture scroll is used for level display in the form of a picture scroll.
  • the second drawing module 1303 is configured to draw a second scene picture in the second rendering texture map in response to receiving the activity advance signal, and the second scene picture is obtained by shooting the virtual scene by the second virtual camera, so The second virtual camera is located at a second position in the virtual scene, and the second position is different from the first position;
  • the texture replacement module 1304 is configured to replace the first rendering texture map in the virtual scene with the second rendering texture map; wherein, when the virtual object in the viewing range corresponding to the virtual camera in the virtual scene changes , The scene picture drawn in the rendered texture map changes.
  • the second drawing module 1303 includes:
  • the first rendering unit is configured to create the second virtual camera at the first position and create the second rendering texture map in response to receiving the activity advancing signal, the second virtual camera and the The second rendering texture map binding;
  • a second drawing unit configured to control the second virtual camera to move from the first position to the second position
  • the third drawing unit is configured to draw the second scene image in the second rendering texture map according to the image captured by the second virtual camera.
  • the second drawing unit includes:
  • the first drawing subunit is configured to query the second position from a camera position list according to the first position, and the camera position list includes position information of virtual cameras in different activities;
  • the second drawing subunit is used to control the second virtual camera to move from the first position to the second position found.
  • The second drawing subunit is also used to query the target lens movement trajectory between the first position and the second position from the lens movement trajectory list, where the lens movement trajectory list includes the trajectory information for the virtual camera moving between different positions.
  • the device also includes:
  • the quantity acquisition module is used to acquire the number of cameras for which virtual cameras have been created
  • the camera clearing module is configured to clear at least one of the created virtual cameras and the rendering texture map corresponding to the created virtual cameras in response to the number of cameras being greater than the number threshold, according to the creation sequence of the created virtual cameras.
  • the device also includes:
  • a texture splicing module for horizontally splicing the second rendering texture map and the first rendering texture map
  • the texture switching module is configured to switch the second rendering texture map in the virtual scene to the first rendering texture map in response to receiving a lateral sliding operation on the virtual scene.
  • the device also includes:
  • An object determining module configured to determine an interactive virtual object corresponding to the interactive operation in response to receiving an interactive operation on the first rendering texture map, where the interactive virtual object belongs to a virtual object in the virtual scene;
  • the interactive response module is configured to control the interactive virtual object to respond to the interactive operation, wherein the response screen of the interactive virtual object to the interactive operation is displayed in the first rendering texture map.
  • the object determination module includes:
  • a first determining unit configured to determine the first interaction coordinate of the interaction operation in the first rendering texture map
  • a second determining unit configured to map the first interaction coordinate to the second interaction coordinate in the virtual scene according to the viewing angle of the first virtual camera
  • the third determining unit is configured to determine a virtual object located at the second interactive coordinate in the virtual scene as the interactive virtual object.
  • the device also includes:
  • The framing adjustment module is configured to, in response to receiving a framing adjustment operation on the first rendering texture map whose indicated adjustment parameter is within the adjustment range, adjust the framing parameter of the first virtual camera according to the adjustment parameter, the adjustment parameter including at least one of a framing angle adjustment parameter or a framing distance adjustment parameter.
  • In summary, when the virtual scroll is first displayed, the first scene picture drawn in the first rendering texture map is displayed. Unlike the static-map display in the related art, this first scene picture is captured by the first virtual camera corresponding to the texture map, enabling dynamic tracking of the scene picture. Further, based on the level-display function of the virtual scroll, when the level advance signal is received, a second virtual camera is set up and the second scene picture it captures is drawn into the second rendering texture map, which then replaces the first rendering texture map in the virtual scroll. In addition, when a virtual object within a virtual camera's framing range in the virtual scene changes, the scene picture drawn in the rendering texture map changes as well. The solution provided in this embodiment of the present application can therefore increase the richness of the content displayed in the virtual scroll and enrich its display forms.
  • FIG. 14 shows a structural block diagram of a terminal 1400 provided by an exemplary embodiment of the present application.
  • The terminal 1400 may be a portable mobile terminal, such as a smartphone, a tablet computer, a Moving Picture Experts Group Audio Layer III (MP3) player, or a Moving Picture Experts Group Audio Layer IV (MP4) player.
  • the terminal 1400 may also be called user equipment, portable terminal and other names.
  • the terminal 1400 includes a processor 1401 and a memory 1402.
  • the processor 1401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 1401 may adopt at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA).
  • the processor 1401 may also include a main processor and a coprocessor.
  • The main processor is a processor for processing data in the awake state, also called a central processing unit (CPU); the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 1401 may be integrated with a graphics processing unit (GPU), and the GPU is used for rendering and drawing content that needs to be displayed on the display screen.
  • the processor 1401 may further include an artificial intelligence (AI) processor, and the AI processor is used to process computing operations related to machine learning.
  • the memory 1402 may include one or more computer-readable storage media, which may be tangible and non-volatile.
  • the memory 1402 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-volatile computer-readable storage medium in the memory 1402 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 1401 to implement the method provided in the embodiment of the present application.
  • the terminal 1400 may further include: a peripheral device interface 1403 and at least one peripheral device.
  • the peripheral device includes: at least one of a radio frequency circuit 1404, a touch display screen 1405, a camera 1406, an audio circuit 1407, a positioning component 1408, and a power supply 1409.
  • the terminal 1400 further includes one or more sensors 1410.
  • the one or more sensors 1410 include, but are not limited to: an acceleration sensor 1411, a gyroscope sensor 1412, a pressure sensor 1413, a fingerprint sensor 1414, an optical sensor 1415, and a proximity sensor 1416.
  • The structure shown in FIG. 14 does not constitute a limitation on the terminal 1400; the terminal may include more or fewer components than shown in the figure, combine some components, or adopt a different component arrangement. Adopting the solution provided by the embodiments of the present application helps to increase the richness of the content displayed in the virtual scroll and enrich the display forms.
  • The embodiments of the present application also provide a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for displaying the virtual scroll described in any of the foregoing embodiments.
  • the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the virtual picture scroll display method provided in the foregoing embodiment.


Abstract

A display method, apparatus, device, and storage medium for a virtual scene. The method includes: drawing a first scene picture in a first rendering texture map (411), the first scene picture being obtained by a first virtual camera (410) shooting the virtual scene; displaying the first rendering texture map (411) in the virtual scene, the virtual scene being used for activity display; in response to receiving an activity advance signal, drawing a second scene picture in a second rendering texture map (421), the second scene picture being obtained by a second virtual camera (420) shooting the virtual scene; and replacing the first rendering texture map (411) in the virtual scene with the second rendering texture map (421), where, when a virtual object within the framing range corresponding to a virtual camera in the virtual scene changes, the scene picture drawn in the rendering texture map changes.

Description

Display method, apparatus, device, and storage medium for a virtual scene
This application claims priority to Chinese Patent Application No. 202010589591.6, entitled "Display method, apparatus, device and storage medium for a virtual scroll", filed with the China National Intellectual Property Administration on June 24, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computers, and in particular to a display method, apparatus, device, and storage medium for a virtual scene.
Background of the Invention
Displaying game progress in the form of a scroll is a common form of presentation: different texture maps of a virtual scroll show different levels of the game corresponding to an application, the maps are arranged in sequence, and when a map receives an interactive operation triggered by the player, the corresponding game level can be entered.
In the related art, the virtual scroll is presented by stitching together multiple preset static texture maps, which are arranged horizontally in the virtual environment.
The player can drag the currently displayed virtual scroll left and right to browse the different static maps. However, because the static maps in the virtual scroll cannot change, the content the virtual picture can present is limited and its display form is monotonous.
Summary of the Invention
Embodiments of this application provide a display method, apparatus, device, and storage medium for a virtual scene, which can increase the richness of the content displayed in the virtual scene and enrich its display forms. The technical solutions are as follows:
According to one aspect of this application, an embodiment of this application provides a display method for a virtual scene, executed by a terminal, the method including:
drawing a first scene picture in a first rendering texture map, the first scene picture being obtained by a first virtual camera shooting the virtual scene, the first virtual camera being located at a first position in the virtual scene;
displaying the first rendering texture map in the virtual scene, the virtual scene being used for activity display;
in response to receiving an activity advance signal, drawing a second scene picture in a second rendering texture map, the second scene picture being obtained by a second virtual camera shooting the virtual scene, the second virtual camera being located at a second position in the virtual scene, the second position being different from the first position;
replacing the first rendering texture map in the virtual scene with the second rendering texture map;
where, when a virtual object within the framing range corresponding to a virtual camera in the virtual scene changes, the scene picture drawn in the rendering texture map changes.
According to another aspect of this application, an embodiment of this application provides a display apparatus for a virtual scene, the apparatus including:
a first drawing module, configured to draw a first scene picture in a first rendering texture map, the first scene picture being obtained by a first virtual camera shooting the virtual scene, the first virtual camera being located at a first position in the virtual scene;
a map display module, configured to display the first rendering texture map in the virtual scene, the virtual scene being used for activity display;
a second drawing module, configured to, in response to receiving an activity advance signal, draw a second scene picture in a second rendering texture map, the second scene picture being obtained by a second virtual camera shooting the virtual scene, the second virtual camera being located at a second position in the virtual scene, the second position being different from the first position;
a map replacement module, configured to replace the first rendering texture map in the virtual scene with the second rendering texture map, where, when a virtual object within the framing range corresponding to a virtual camera in the virtual scene changes, the scene picture drawn in the rendering texture map changes.
According to another aspect of this application, a computer device is provided, including a processor and a memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the display method for a virtual scene described above.
According to another aspect of this application, a computer-readable storage medium is provided, storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the display method for a virtual scene described above.
According to one aspect of this application, a computer program product or computer program is provided, including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the display method for a virtual scene provided in the various implementations above.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Apparently, the drawings described below are only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 shows a structural block diagram of a computer system provided by an exemplary embodiment of this application;
FIG. 2 shows a flowchart of a display method for a virtual scene provided by an exemplary embodiment of this application;
FIG. 3 is a schematic diagram of the principle of generating a rendering texture map according to an exemplary embodiment of this application;
FIG. 4 is a schematic diagram of an interface for generating a first rendering texture map according to an exemplary embodiment of this application;
FIG. 5 is a schematic diagram of an interface for generating a second rendering texture map according to an exemplary embodiment of this application;
FIG. 6 is a schematic diagram of an interface for replacing a rendering texture map according to an exemplary embodiment of this application;
FIG. 7 shows a flowchart of a display method for a virtual scene provided by another exemplary embodiment of this application;
FIG. 8 is a schematic diagram of the principle of virtual camera movement according to an exemplary embodiment of this application;
FIG. 9 is a schematic interface diagram of virtual camera movement according to an exemplary embodiment of this application;
FIG. 10 is a schematic diagram of the principle of a virtual camera following different lens movement trajectories according to an exemplary embodiment of this application;
FIG. 11 shows a flowchart of a display method for a virtual scene provided by another exemplary embodiment of this application;
FIG. 12 is a schematic interface diagram of responding to an interactive operation according to an exemplary embodiment of this application;
FIG. 13 is a structural block diagram of a display apparatus for a virtual scene provided by an exemplary embodiment of this application;
FIG. 14 shows a structural block diagram of a terminal provided by an exemplary embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the implementations of this application are described in further detail below with reference to the accompanying drawings.
First, the terms involved in the embodiments of this application are introduced:
Virtual environment: the virtual environment displayed (or provided) when an application runs on a terminal. The virtual environment may be a simulation of the real world, a semi-simulated semi-fictional environment, or a purely fictional environment. It may be any of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, or a three-dimensional virtual environment, which is not limited in this application. The following embodiments take a three-dimensional virtual environment as an example.
Virtual object: a movable object in the virtual environment. The movable object may be a virtual character, a virtual animal, an anime character, and the like, for example, characters, animals, plants, oil drums, walls, and stones displayed in the three-dimensional virtual environment. A virtual object is a three-dimensional model created based on skeletal animation technology. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies part of its space.
Virtual scroll: the scroll is a classic Chinese element; in the embodiments of this application, the scroll element is applied to expressing the progress of game levels. In the related art, the virtual scroll is stitched together from multiple static texture maps.
Rendering texture map: a texture map that can be created and updated while the game is running, used for displaying the virtual scroll in the embodiments of this application. The virtual scroll displays at least one rendering texture map, and different rendering texture maps can be replaced and switched.
FIG. 1 shows a structural block diagram of a computer system provided by an exemplary embodiment of this application. The computer system 100 includes a terminal 110 and a server cluster 120.
The terminal 110 has installed and runs a client supporting the virtual environment; when the client runs, the application interface of the client is displayed on the screen of the terminal 110. The client may be an online application or an offline application. In this embodiment, a client supporting virtual scroll display in a virtual environment is used as an example. The terminal 110 can show the current game progress or all game levels through the virtual scroll.
The virtual scene displayed by the virtual scroll is a three-dimensional (3D) scene. Further, the 3D scene may consist of multiple layers of 3D pictures, for example, a foreground picture and a background picture: virtual objects in the virtual environment form the foreground picture, the scene beyond the virtual objects forms the background picture, and moving the foreground picture can simulate the effect of a virtual object moving in the background picture.
The device type of the terminal 110 includes at least one of a smartphone, a tablet computer, an e-book reader, a laptop computer, and a desktop computer.
Only one terminal is shown in FIG. 1, but in different embodiments multiple other terminals 130 can access the server cluster 120. In some embodiments, at least one terminal 130 is a terminal corresponding to a developer; a development and editing platform for the client of the virtual environment is installed on the terminal 130, on which the developer can edit and update the client and transmit the updated client installation package to the server cluster 120 via a wired or wireless network, and the terminal 110 can download the installation package from the server cluster 120 to update the client.
The terminal 110 and the other terminals 130 are connected to the server cluster 120 via a wireless or wired network.
The server cluster 120 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center. The server cluster 120 provides background services for clients supporting the virtual environment. The server cluster 120 may undertake the primary computing work while the terminals undertake the secondary work, or vice versa, or a distributed computing architecture may be adopted between the server cluster 120 and the terminals for collaborative computing.
Both the terminals and the servers above are computer devices.
In an illustrative example, the server cluster 120 includes a server 121 and a server 122. The server 121 includes a processor 123, a user account database 124, and a user-facing input/output interface (I/O interface) 126; the server 122 includes a virtual scroll processing module 125. The processor 123 loads instructions stored in the server 121 and processes data in the user account database 124 and the virtual scroll processing module 125; the user account database 124 stores data of the user accounts used by the terminal 110 and the other terminals 130, such as account avatars, account nicknames, records of the accounts' historical interactions with the virtual scroll, and the service area of each account; the virtual scroll processing module 125 controls the display of the virtual scroll and performs dynamic map switching and replacement according to received interactive operations; the user-facing I/O interface 126 establishes communication with the terminal 110 via a wireless or wired network to exchange data.
With reference to the above introduction to the virtual environment, the display method for a virtual scroll in a virtual environment provided by the embodiments of this application is described, taking the terminal 110 shown in FIG. 1 as the execution body. The terminal 110 runs an application that supports the virtual environment.
Referring to FIG. 2, it shows a flowchart of a display method for a virtual scene provided by an exemplary embodiment of this application. The method can be applied to the terminal 110 in the computer system shown in FIG. 1 and includes the following steps:
Step 201: Draw a first scene picture in a first rendering texture map, the first scene picture being obtained by a first virtual camera shooting the virtual scene, the first virtual camera being located at a first position in the virtual scene.
In the related art, the virtual scroll is stitched from several static texture maps, and these static maps are preset and do not change as the virtual scene changes.
In this embodiment of this application, when the user interface displayed by the terminal is a scene picture of the virtual scene, for example, a scene picture of the virtual scroll, the first rendering texture map is first obtained to display the virtual scroll. The first scene picture is drawn in the first rendering texture map; it is obtained by the first virtual camera shooting the virtual scene, and the first virtual camera is located at the first position in the virtual scene. Thus, when the virtual scene or a virtual object in it changes, the change can be tracked by the first virtual camera, and the captured first scene picture also changes dynamically, rather than the static display of the virtual scroll in the related art.
The first virtual camera shoots the virtual scene from a shooting angle, which may be preset or may simulate the viewing angle of a user observing the virtual scroll in the virtual environment. In the embodiments of this application, the shooting angle of a virtual camera is the angle at which the virtual scene is observed from a fixed position in the virtual environment; for example, the first virtual camera is located at the first position in the virtual scene and shoots the virtual scene from the shooting angle at that fixed position to obtain the first scene picture.
As shown in FIG. 3(a), the shaded area of the virtual scroll is the scene picture displayed by the terminal. At the start of the game, the first virtual camera 410 is first set to shoot the virtual scene to obtain the first scene picture, which is drawn in the first rendering texture map 411; as shown in the figure, the first virtual camera 410 is set at the first position under this viewing angle.
Step 202: Display the first rendering texture map in the virtual scene, the virtual scene being used for activity display.
In the initial stage of the game, the virtual scene picture displayed by the terminal is the first rendering texture map, where the virtual scene is used for activity display. In one example, the virtual scroll picture displayed by the terminal is the first rendering texture map, and the scroll is used for displaying levels in scroll form. To advance the game, interactive controls are also shown in the display picture of the virtual scroll, used to advance the current game progress or to interact with the current display picture of the scroll.
As shown in FIG. 4, the first rendering texture map 411 displays the scene picture shown in the figure; further, the user can slide the current scene picture, and the terminal generates a level advance signal according to the sliding operation.
In this embodiment of this application, the virtual scene is a three-dimensional 3D scene. Further, the 3D scene may consist of multiple layers of 3D pictures, for example, a foreground picture and a background picture. As shown in FIG. 4, the virtual object presented by the girl is the foreground picture, the scene beyond the virtual object is the background picture, and moving the foreground picture can simulate the effect of the virtual object moving in the background picture; alternatively, as shown in FIG. 3(b), the virtual object in the virtual scene is a three-dimensional virtual object, which can be controlled to walk in the virtual scene.
Step 203: In response to receiving an activity advance signal, draw a second scene picture in a second rendering texture map, the second scene picture being obtained by a second virtual camera shooting the virtual scene, the second virtual camera being located at a second position in the virtual scene, the second position being different from the first position.
In one example, the activity advance signal may be a level advance signal, and the picture of the virtual scene may be displayed through a virtual scroll. When the level advance condition is met (for example, the game task or puzzle task corresponding to the current scene picture is completed), the level advance signal is triggered, and accordingly the player can unlock the next scene picture in the virtual scroll. Upon receiving the level advance signal, the terminal draws the second scene picture in the second rendering texture map; similarly, the second scene picture is obtained by the second virtual camera shooting the virtual scene, with the second virtual camera located at the second position in the virtual scene.
The virtual scene is preset; because of the limits of the terminal display range, the complete virtual scene cannot be displayed through the first virtual camera alone. Therefore, the first scene picture and the second scene picture are different scene pictures in the virtual scene, and the second position is different from the first position.
As shown in FIG. 3(b), if the virtual object in the virtual scene is a three-dimensional virtual object, then after the level advance signal is received, the second scene picture is drawn in the second rendering texture map 421; it is obtained by the second virtual camera 420 shooting the virtual object in the virtual scene from the shooting angle at the second position, and after shooting, the current scene picture displayed by the virtual scroll is the second rendering texture map 421 (that is, the shaded part).
As shown in FIG. 5, the second rendering texture map 421 displays the scene picture shown in the figure. After unlocking the current level, the player advances the opening of the next level; the terminal starts a new virtual camera, that is, the second virtual camera, and draws the second scene picture it captures into the second rendering texture map 421, which may be displayed to one side of the previous rendering texture map (that is, the first rendering texture map 411).
Step 204: Replace the first rendering texture map in the virtual scene with the second rendering texture map.
Based on the received activity advance signal, for example a level advance signal, after the second rendering texture map is obtained, because the picture of the virtual scene can be displayed through the virtual scroll, the virtual scroll picture displayed by the terminal transitions from the first rendering texture map to the second rendering texture map.
In one example, as shown in FIG. 3(c), the player can slide the virtual scroll, for example sliding the current scene picture to the left; during the sliding operation, the scene picture presented to the player is the dynamically changing scene picture 422, and when the sliding stops, the scene picture 422 is determined.
As shown in FIG. 6, the actual map replacement presents a dynamic transition process; that is, when the first rendering texture map 411 in the virtual scroll is replaced with the second rendering texture map 421, the transition picture shown in the figure appears.
It should be noted that FIG. 3(a) to (c) are schematic diagrams of the principle of generating a rendering texture map, while FIG. 4 to FIG. 6 are schematic interface diagrams of rendering texture map generation with a visible virtual environment.
In addition, it should be noted that when a virtual object within the framing range corresponding to a virtual camera in the virtual scene changes, the scene picture drawn in the rendering texture map changes. In one example, a virtual object in the virtual scene contains multiple display pictures triggered by different interactive operations; therefore, with the background picture unchanged, the picture of the virtual object can be changed by the corresponding interactive operation, and accordingly the scene picture captured by the virtual camera, that is, the scene picture drawn in the rendering texture map, also changes.
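The note above — that a rendering texture map's scene picture changes when a virtual object within the bound camera's framing range changes — can be sketched as a change-detection loop. A minimal Python sketch with a one-dimensional framing range; all names are illustrative:

```python
class RenderTexture:
    """Sketch of a rendering texture map that redraws only when a virtual
    object inside the bound camera's framing range changes."""

    def __init__(self, min_x, max_x):
        self.min_x, self.max_x = min_x, max_x
        self.frame = None                 # last drawn scene picture

    def update(self, scene_objects):
        """Redraw if the visible objects changed; return whether we redrew."""
        visible = tuple(
            (o["name"], o["x"]) for o in scene_objects
            if self.min_x <= o["x"] <= self.max_x
        )
        if visible == self.frame:
            return False                  # nothing in the framing range changed
        self.frame = visible              # draw the new scene picture
        return True
```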
In summary, in this embodiment of this application, when the virtual scroll is first displayed, the first scene picture drawn in the first rendering texture map is displayed. Unlike the static-map display in the related art, this first scene picture is captured by the first virtual camera corresponding to the map, enabling dynamic tracking of the scene picture. Further, based on the level-display function of the virtual scroll, when the level advance signal is received, a second virtual camera is set up, the second scene picture it captures is drawn into the second rendering texture map, and the first rendering texture map in the virtual scroll is then replaced with the second. In addition, in response to a change of a virtual object within a virtual camera's framing range in the virtual scene, the scene picture drawn in the rendering texture map changes as well. Compared with the related-art method of displaying the virtual scroll through static maps, the solution provided by this embodiment can increase the richness of the content displayed in the virtual scroll and enrich its display forms.
请参考图7,其示出了本申请一个示例性实施例提供的虚拟场景的显示方法的流程图,该方法可应用于如图1所示的计算机系统中的终端110中。该方法包括如下步骤:
步骤701,在第一渲染纹理贴图中绘制第一场景画面,第一场景画面由第一虚拟相机对虚拟场景进行拍摄得到,第一虚拟相机位于虚拟场景中的第一位置。
本步骤请参考步骤201,本申请实施例在此不再赘述。
步骤702,在虚拟场景中显示第一渲染纹理贴图,虚拟场景用于进行活动展示。
本步骤请参考步骤202,本申请实施例在此不再赘述。
步骤703,响应于接收到对第一渲染纹理贴图的取景调整操作,且取景调整操作指示的调整参数位于调整范围内,根据调整参数调整第一虚拟相机的取景参数,调整参数包括取景角度调整参数或取景距离调整参数中的至少一种。
在一种实施方式中,各个渲染纹理贴图设置有取景相关的调整参数,调整参数包括取景角度调整参数或取景距离调整参数中的至少一种。
第一虚拟相机设置时以第一位置下的取景参数进行固定,响应于接收到对第一渲染纹理贴图的取景调整操作,且取景调整操作指示的调整参数位于调整范围内,根据调整参数调整第一虚拟相机的取景参数。
步骤704,获取已创建虚拟相机的相机数量。
步骤705,响应于相机数量大于数量阈值,根据已创建虚拟相机的创建顺序,清除至少一个已创建虚拟相机,以及已创建虚拟相机对应的渲染纹理贴图。
在一种实施方式中,考虑每次关卡推进都需要创建新的虚拟相机和渲染纹理贴图,数量过多会导致内存占用过大,影响游戏进程的顺利推进,因此增加关于相机数量的判断机制。通过引入上述机制,当虚拟相机数量较多时,终端可对较早创建的虚拟相机进行删除,以减少游戏应用程序的内存占用率。
在一个示例中,设置数量阈值为8,当已创建虚拟相机的相机数量达到8时,终端获取各个已创建虚拟相机的创建时间,根据创建时间从早到晚对各个虚拟相机进行排序得到虚拟相机的创建顺序表,并将前两个已创建虚拟相机进行消除;进一步的,在消除后,若创建有新的虚拟相机,则将该新的虚拟相机信息直接记录于创建顺序表的末位位置,使得下次相机数量达到数量阈值时,无需重新获取所有已创 建虚拟相机的创建时间,可直接将创建顺序表中排序靠前的两个已创建虚拟相机删除。
步骤706,响应于接收到活动推进信号,在第一位置创建第二虚拟相机,并创建第二渲染纹理贴图,第二虚拟相机与第二渲染纹理贴图绑定。
为了实现第一渲染纹理贴图替换为第二渲染纹理贴图时的自然过渡效果(即一种运镜效果),第二虚拟相机在上一个虚拟相机的位置处进行创建,即在第一位置创建第二虚拟相机,并创建第二渲染纹理贴图,且将第二虚拟相机与第二渲染纹理贴图绑定,从而当第二虚拟相机从第一位置移动至第二位置时,场景画面发生变化,从而达到一种过渡自然的运镜效果。在一个具体示例中,接收到的活动推进信号可为关卡推进信号。
如图8所示,第一虚拟相机410位于如图所示的第一位置,当接收到关卡推进信号,在第一位置创建第二虚拟相机420(图中未显示),进而将第一位置处的第二虚拟相机420进行移动,直至移动至如图所示的第二位置。
步骤707,控制第二虚拟相机从第一位置移动至第二位置。
在控制第二虚拟相机从第一位置移动至第二位置的过程中,终端根据第一位置从相机位置列表中查询第二位置,相机位置列表中包括不同关卡中虚拟相机的位置信息;进一步的,终端控制第二虚拟相机从第一位置移动至查询到的第二位置,第二位置对应下一关卡的相机位置。
由于上述从第一位置移动至第二位置的运镜效果呈现于终端界面,因此,为了实现良好的运镜效果,终端从运镜轨迹列表中查询第一位置和第二位置之间目标运镜轨迹,并根据目标运镜轨迹控制第二虚拟相机从第一位置移动至第二位置,其中,运镜轨迹列表中包括虚拟相机在不同位置之间移动时的运镜轨迹信息,从而实现终端对第二虚拟相机的快速移动,且基于预设的运镜轨迹,进一步提高了渲染纹理贴图替换时的自然过渡效果。
The movement trajectory may be a straight-line trajectory, a curved trajectory, or another preset trajectory; game developers can preset the trajectories between different camera positions according to the desired picture-switching effect.
In one example, the movement trajectory is a straight-line trajectory. FIG. 9 shows a scene in which the second virtual camera moves from the first position to the second position. If the first render texture map is as shown in the figure, then in response to receiving a level advancement signal, the terminal creates the second virtual camera at the first position of the first virtual camera and controls the second virtual camera to move from the first position to the second position along the trajectory indicated by the arrow in the figure.
In another example, the movement trajectories include at least two types of trajectory according to the picture-switching requirements. As shown in FIG. 10, when multiple virtual cameras have been created, different movement trajectories can produce different picture-switching effects, improving the visual display. For example, the trajectory between the first virtual camera 410 and the second virtual camera 420 is a straight-line trajectory, the trajectory between the second virtual camera 420 and the third virtual camera 430 is a polyline trajectory, and the trajectory between the third virtual camera 430 and the fourth virtual camera 440 is again a straight-line trajectory. The scene picture captured by the first virtual camera 410 is the first render texture map 411, the scene picture captured by the second virtual camera 420 is the second render texture map 421, the scene picture captured by the third virtual camera 430 is the third render texture map 431, and the scene picture captured by the fourth virtual camera 440 is the fourth render texture map 441.
Step 708: Draw the second scene picture in the second render texture map according to the picture captured by the second virtual camera.
Further, after the displacement of the second virtual camera is completed, at the second position the terminal draws the second scene picture in the second render texture map according to the picture captured by the second virtual camera.
Step 709: Stitch the second render texture map and the first render texture map horizontally.
Step 710: Replace the first render texture map in the virtual scene with the second render texture map.
For this step, refer to step 204; details are not repeated here in this embodiment of this application.
Step 711: In response to receiving a horizontal swipe operation on the virtual scene, switch the second render texture map in the virtual scroll to the first render texture map.
In one implementation, step 709 follows step 708, and step 711 follows step 710. Although the interface displayed on the terminal shows only the scene picture corresponding to a single render texture map, each render texture map is part of the virtual scene. In one example, the picture of the virtual scene may be presented through a virtual scroll. Therefore, after the second scene picture is drawn in the second render texture map according to the picture captured by the second virtual camera, the second render texture map and the first render texture map can be stitched horizontally to implement an example of step 711: for instance, after a horizontal swipe operation on the virtual scroll is received, the second render texture map in the virtual scroll is switched to the first render texture map.
The horizontal swipe operation may be either a leftward swipe or a rightward swipe. As shown in FIG. 5 and FIG. 6, FIG. 5 may be a schematic diagram of the horizontal stitching of the second render texture map and the first render texture map, and FIG. 6 may be a schematic diagram of the process of switching render texture maps according to a horizontal swipe operation on the virtual scroll.
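The stitching and swipe-based switching above can be sketched as follows, representing each render texture map as a grid of pixel rows. The window-offset model of swiping, the clamping behavior, and the convention that a rightward swipe reveals the earlier (left) map are illustrative assumptions for this sketch, not details specified by the source.

```python
def stitch_horizontally(left_map, right_map):
    """Concatenate two render texture maps row by row (equal height assumed)."""
    return [lrow + rrow for lrow, rrow in zip(left_map, right_map)]

def visible_region(stitched, width, offset):
    """The window of `width` columns currently displayed on the terminal."""
    return [row[offset:offset + width] for row in stitched]

def swipe(offset, direction, width, total_width):
    """Shift the visible window by one map width; rightward swipe reveals
    the earlier (left) map, leftward swipe the later one. Clamped to bounds."""
    step = width if direction == "left" else -width
    return max(0, min(total_width - width, offset + step))
```

After stitching the first and second maps and displaying the second (offset at the right half), a rightward swipe moves the window back over the first map, matching the switch described in step 711.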
In this embodiment of this application, when a level advancement signal is received, the second virtual camera is created at the first position, and during the movement of the second virtual camera from the first position to the second position, a natural transition effect is achieved when the first render texture map is replaced with the second render texture map.
In this embodiment of this application, based on the level-advancement role of the virtual scroll, the terminal is preset with a camera position list and a movement trajectory list for different levels or game progress, enabling effective control of scene switching. Moreover, the movement trajectory list contains more than a single type of trajectory, improving the camera-movement effect across different level advancements.
In this embodiment of this application, a judgment mechanism based on the camera count is also provided, so that when the number of cameras is large, the terminal can delete earlier-created virtual cameras to reduce the memory footprint of the game application.
In this embodiment of this application, different render texture maps can be stitched to achieve the display effect of a virtual scroll, and based on a user-triggered horizontal swipe operation on the virtual scroll, the second render texture map in the virtual scroll can be switched to the first render texture map, making the picture-switching transition more natural.
In this embodiment of this application, each render texture map is provided with framing-related adjustment parameters, including at least one of a framing angle adjustment parameter or a framing distance adjustment parameter, enabling users to customize the picture displayed in the virtual scroll and improving the operability and visual experience of the virtual scroll.
In one implementation, the virtual environment is a 3D scene composed of multiple layers of 3D picture cards, including foreground cards and background cards. If a foreground card is a two-dimensional card, it can be configured so that when an interaction operation is performed on the two-dimensional card, the virtual environment transforms the card in response to the interaction operation.
Refer to FIG. 11, which shows a flowchart of a method for displaying a virtual scene according to another exemplary embodiment of this application. The method may be applied to the terminal 110 in the computer system shown in FIG. 1. After step 202 or step 702, the method further includes the following steps:
Step 1101: In response to receiving an interaction operation on the first render texture map, determine an interactive virtual object corresponding to the interaction operation, the interactive virtual object belonging to the virtual objects in the virtual scene.
Step 1101 includes the following items one to three.
Item one: Determine a first interaction coordinate of the interaction operation in the first render texture map.
The first interaction coordinate is a two-dimensional coordinate; that is, the user triggers the two-dimensional picture on the terminal display interface, and the triggered position determines the two-dimensional coordinate corresponding to the interaction operation.
In some embodiments, the first interaction coordinate is the coordinate of the touch point.
Item two: Map the first interaction coordinate to a second interaction coordinate in the virtual scene according to the framing angle of the first virtual camera.
Further, in the virtual scene, different virtual objects correspond to different three-dimensional coordinates. Therefore, the terminal can determine the three-dimensional coordinate corresponding to the interaction operation according to the first interaction coordinate and the framing angle of the first virtual camera; that is, the first interaction coordinate is mapped to a second interaction coordinate in the virtual scene.
In some embodiments, based on the first interaction coordinate and the framing angle, the terminal determines a ray starting at the first interaction coordinate and pointing in the direction of the framing angle, and determines the coordinate of the intersection between the ray and a virtual object in the virtual scene as the second interaction coordinate.
Item three: Determine the virtual object located at the second interaction coordinate in the virtual scene as the interactive virtual object.
The terminal maps the two-dimensional coordinate of the interaction operation (the first interaction coordinate) to a three-dimensional coordinate (the second interaction coordinate) according to the framing angle of the first virtual camera, thereby determining the interactive virtual object.
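The ray-casting step in items one to three can be sketched as follows. Here the ray origin and direction are assumed to have already been derived from the touch point and the framing angle, and each virtual object is stood in for by a bounding sphere; the `pick_object` name and the sphere colliders are illustrative assumptions, not the source's actual collision geometry.

```python
import math

def pick_object(ray_origin, ray_dir, objects):
    """Cast a ray and return the id of the nearest intersected object.

    `objects` maps object id -> (sphere_center, radius); spheres are an
    illustrative stand-in for real collider geometry.
    """
    # Normalize the direction vector derived from the framing angle.
    length = math.sqrt(sum(c * c for c in ray_dir))
    d = tuple(c / length for c in ray_dir)

    best_id, best_t = None, math.inf
    for obj_id, (center, radius) in objects.items():
        # Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
        oc = tuple(o - c for o, c in zip(ray_origin, center))
        b = 2 * sum(oci * di for oci, di in zip(oc, d))
        c_term = sum(oci * oci for oci in oc) - radius * radius
        disc = b * b - 4 * c_term
        if disc < 0:
            continue  # the ray misses this object
        t = (-b - math.sqrt(disc)) / 2
        if 0 <= t < best_t:
            best_id, best_t = obj_id, t  # keep the nearest hit in front of the camera
    return best_id
```

The nearest intersection point along the ray plays the role of the second interaction coordinate, and the object it lies on is the interactive virtual object.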
Step 1102: Control the interactive virtual object to respond to the interaction operation, where the picture of the interactive virtual object responding to the interaction operation is displayed in the first render texture map.
In some embodiments, the terminal controls the interactive virtual object to respond according to the operation type of the interaction operation. For example, when the interaction operation is a drag operation, the interactive virtual object is controlled to move in the virtual scene; when the interaction operation is a drawing operation, a virtual prop corresponding to the drawing operation is generated in the virtual scene.
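The dispatch-by-operation-type behavior can be sketched as below. The operation-type strings, the payload shapes, and the `scene` dictionary are all illustrative assumptions for this sketch; a real implementation would route engine input events to the corresponding object behaviors.

```python
def handle_interaction(op_type, payload, scene):
    """Dispatch an interaction operation to the matching response.

    `scene` maps object/prop ids to their state (here, just a position).
    """
    if op_type == "drag":
        # Drag operation: move the interactive virtual object in the scene.
        obj_id, new_pos = payload
        scene[obj_id] = new_pos
        return ("moved", obj_id)
    if op_type == "draw":
        # Drawing operation: generate a virtual prop matching the drawn shape.
        prop_name = f"prop_{payload}"
        scene[prop_name] = None
        return ("spawned", prop_name)
    return ("ignored", op_type)
```

Because the response only mutates scene state, the next frame captured by the first virtual camera and drawn into the first render texture map shows the result automatically, without any map switching.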
In one implementation, the virtual objects included in the virtual environment are divided into interactable objects and non-interactable objects. Each interactable object is preset with at least one interaction effect or interaction card. For example, interactable object 1 corresponds to three interaction effects, each triggered by a different interaction operation: tapping interactable object 1 triggers the display of interaction effect A, long-pressing it triggers interaction effect B, and double-tapping it triggers interaction effect C. Alternatively, interactable object 1 is provided with at least one interaction control, and operating different interaction controls displays different interaction effects.
In one example, as shown in FIG. 12, the interactive virtual object is the little girl in the first render texture map 411, and this interactive virtual object is an interactable object. Further, the interactive virtual object is provided with an interaction control 1210 for performing interaction operations on the interactive virtual object in the first render texture map 411; for example, triggering the middle control of the interaction control 1210 can change the form of the little girl (the interactive virtual object) in the first render texture map 411.
In this embodiment of this application, considering the case where the virtual environment is a 3D scene and the user can perform interaction operations on virtual objects, the terminal can control the interactive virtual object to respond to the interaction operation. Compared with the static-map display methods in the related art, this embodiment not only achieves the natural transition when different render texture maps are converted or switched as in the foregoing embodiments, but can also respond to the user's interaction with virtual objects; that is, dynamic changes of the scene picture are achieved without switching or replacing render texture maps, further improving the user's visual experience and enriching the game's operational content.
FIG. 13 is a structural block diagram of an apparatus for displaying a virtual scene according to an exemplary embodiment of this application. The apparatus includes:
a first drawing module 1301, configured to draw a first scene picture in a first render texture map, the first scene picture being obtained by a first virtual camera photographing a virtual scene, the first virtual camera being located at a first position in the virtual scene;
a map display module 1302, configured to display the first render texture map in the virtual scene, the virtual scene being used for activity presentation; in one example, the virtual scene may be displayed in the form of a virtual scroll, the virtual scroll being used to present levels in scroll form;
a second drawing module 1303, configured to draw, in response to receiving an activity advancement signal, a second scene picture in a second render texture map, the second scene picture being obtained by a second virtual camera photographing the virtual scene, the second virtual camera being located at a second position in the virtual scene, the second position being different from the first position; and
a map replacement module 1304, configured to replace the first render texture map in the virtual scene with the second render texture map, where when a virtual object within the framing range of a virtual camera in the virtual scene changes, the scene picture drawn in the render texture map changes.
The second drawing module 1303 includes:
a first drawing unit, configured to create, in response to receiving the activity advancement signal, the second virtual camera at the first position and create the second render texture map, the second virtual camera being bound to the second render texture map;
a second drawing unit, configured to control the second virtual camera to move from the first position to the second position; and
a third drawing unit, configured to draw the second scene picture in the second render texture map according to the picture captured by the second virtual camera.
The second drawing unit includes:
a first drawing subunit, configured to query the second position from a camera position list according to the first position, the camera position list including position information of virtual cameras in different activities; and
a second drawing subunit, configured to control the second virtual camera to move from the first position to the queried second position.
The second drawing subunit is further configured to query a target movement trajectory between the first position and the second position from a movement trajectory list, the movement trajectory list including trajectory information of virtual cameras moving between different positions;
and control the second virtual camera to move from the first position to the second position according to the target movement trajectory.
The apparatus further includes:
a count obtaining module, configured to obtain a camera count of created virtual cameras; and
a camera clearing module, configured to clear, in response to the camera count being greater than a count threshold, at least one created virtual camera and the render texture map corresponding to the created virtual camera according to the creation order of the created virtual cameras.
The apparatus further includes:
a map stitching module, configured to stitch the second render texture map and the first render texture map horizontally; and
a map switching module, configured to switch, in response to receiving a horizontal swipe operation on the virtual scene, the second render texture map in the virtual scene to the first render texture map.
The apparatus further includes:
an object determining module, configured to determine, in response to receiving an interaction operation on the first render texture map, an interactive virtual object corresponding to the interaction operation, the interactive virtual object belonging to the virtual objects in the virtual scene; and
an interaction response module, configured to control the interactive virtual object to respond to the interaction operation, where the picture of the interactive virtual object responding to the interaction operation is displayed in the first render texture map.
The object determining module includes:
a first determining unit, configured to determine a first interaction coordinate of the interaction operation in the first render texture map;
a second determining unit, configured to map the first interaction coordinate to a second interaction coordinate in the virtual scene according to the framing angle of the first virtual camera; and
a third determining unit, configured to determine the virtual object located at the second interaction coordinate in the virtual scene as the interactive virtual object.
The apparatus further includes:
a framing adjustment module, configured to adjust, in response to receiving a framing adjustment operation on the first render texture map where the adjustment parameter indicated by the framing adjustment operation is within an adjustment range, the framing parameter of the first virtual camera according to the adjustment parameter, the adjustment parameter including at least one of a framing angle adjustment parameter or a framing distance adjustment parameter.
In this embodiment of this application, when the virtual scroll is first displayed, it is displayed with the first scene picture drawn in the first render texture map. Unlike the static-map display in the related art, this first scene picture is captured by the first virtual camera corresponding to the map, enabling dynamic tracking of the scene picture. Further, based on the level-presentation role of the virtual scroll, when a level advancement signal is received, a second virtual camera is set up and the captured second scene picture is drawn into a second render texture map, after which the first render texture map in the virtual scroll is replaced with the second render texture map. In addition, when a virtual object within the framing range of a virtual camera in the virtual scene changes, the scene picture drawn in the render texture map also changes. Compared with the related-art method of displaying a virtual scroll through static maps, the solution provided in this embodiment of this application enriches both the content and the display form of the pictures shown in the virtual scroll.
Refer to FIG. 14, which shows a structural block diagram of a terminal 1400 according to an exemplary embodiment of this application. The terminal 1400 may be a portable mobile terminal, such as a smartphone, a tablet computer, a Moving Picture Experts Group Audio Layer III (MP3) player, or a Moving Picture Experts Group Audio Layer IV (MP4) player. The terminal 1400 may also be referred to by other names such as user equipment or portable terminal.
Generally, the terminal 1400 includes a processor 1401 and a memory 1402.
The processor 1401 may include one or more processing cores, for example a 4-core or an 8-core processor. The processor 1401 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), or programmable logic array (PLA). The processor 1401 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also called a central processing unit (CPU); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1401 may be integrated with a graphics processing unit (GPU), the GPU being responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1401 may further include an artificial intelligence (AI) processor for processing computing operations related to machine learning.
The memory 1402 may include one or more computer-readable storage media, which may be tangible and non-volatile. The memory 1402 may further include a high-speed random access memory and a non-volatile memory, such as one or more magnetic disk storage devices or flash storage devices. In some embodiments, the non-volatile computer-readable storage medium in the memory 1402 stores at least one instruction, the at least one instruction being executed by the processor 1401 to implement the methods provided in the embodiments of this application.
In some embodiments, the terminal 1400 may further include a peripheral interface 1403 and at least one peripheral. Specifically, the peripheral includes at least one of a radio frequency circuit 1404, a touch display screen 1405, a camera 1406, an audio circuit 1407, a positioning component 1408, or a power supply 1409.
In some embodiments, the terminal 1400 further includes one or more sensors 1410, including but not limited to an acceleration sensor 1411, a gyroscope sensor 1412, a pressure sensor 1413, a fingerprint sensor 1414, an optical sensor 1415, and a proximity sensor 1416.
A person skilled in the art can understand that the structure shown in FIG. 14 does not constitute a limitation on the terminal 1400, which may include more or fewer components than shown, combine certain components, or use a different component arrangement. Adopting the solution provided in the embodiments of this application helps enrich both the content and the display form of the pictures shown in the virtual scroll.
An embodiment of this application further provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the method for displaying a virtual scroll described in any of the foregoing embodiments.
This application further provides a computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method for displaying a virtual scroll provided in the foregoing embodiments.
A person of ordinary skill in the art can understand that all or some of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of this application shall fall within the protection scope of this application.

Claims (20)

  1. A method for displaying a virtual scene, performed by a terminal, the method comprising:
    drawing a first scene picture in a first render texture map, the first scene picture being obtained by a first virtual camera photographing the virtual scene, the first virtual camera being located at a first position in the virtual scene;
    displaying the first render texture map in the virtual scene, the virtual scene being used for activity presentation;
    drawing, in response to receiving an activity advancement signal, a second scene picture in a second render texture map, the second scene picture being obtained by a second virtual camera photographing the virtual scene, the second virtual camera being located at a second position in the virtual scene, the second position being different from the first position; and
    replacing the first render texture map in the virtual scene with the second render texture map,
    wherein when a virtual object within a framing range of a virtual camera in the virtual scene changes, the scene picture drawn in the render texture map changes.
  2. The method according to claim 1, wherein the drawing, in response to receiving an activity advancement signal, a second scene picture in a second render texture map comprises:
    creating, in response to receiving the activity advancement signal, the second virtual camera at the first position, and creating the second render texture map, the second virtual camera being bound to the second render texture map;
    controlling the second virtual camera to move from the first position to the second position; and
    drawing the second scene picture in the second render texture map according to the picture captured by the second virtual camera.
  3. The method according to claim 2, wherein the controlling the second virtual camera to move from the first position to the second position comprises:
    querying the second position from a camera position list according to the first position, the camera position list comprising position information of virtual cameras in different activities; and
    controlling the second virtual camera to move from the first position to the queried second position.
  4. The method according to claim 3, wherein the controlling the second virtual camera to move from the first position to the queried second position comprises:
    querying a target movement trajectory between the first position and the second position from a movement trajectory list, the movement trajectory list comprising trajectory information of virtual cameras moving between different positions; and
    controlling the second virtual camera to move from the first position to the second position according to the target movement trajectory.
  5. The method according to claim 2, wherein before the creating the second virtual camera at the first position and creating the second render texture map, the method further comprises:
    obtaining a camera count of created virtual cameras; and
    clearing, in response to the camera count being greater than a count threshold, at least one created virtual camera and the render texture map corresponding to the created virtual camera according to a creation order of the created virtual cameras.
  6. The method according to any one of claims 1 to 5, wherein after the drawing, in response to receiving an activity advancement signal, a second scene picture in a second render texture map, the method further comprises:
    stitching the second render texture map and the first render texture map horizontally; and
    wherein after the replacing the first render texture map in the virtual scene with the second render texture map, the method further comprises:
    switching, in response to receiving a horizontal swipe operation on the virtual scene, the second render texture map in the virtual scene to the first render texture map.
  7. The method according to any one of claims 1 to 5, wherein after the displaying the first render texture map in the virtual scene, the method further comprises:
    determining, in response to receiving an interaction operation on the first render texture map, an interactive virtual object corresponding to the interaction operation, the interactive virtual object belonging to virtual objects in the virtual scene; and
    controlling the interactive virtual object to respond to the interaction operation, wherein a picture of the interactive virtual object responding to the interaction operation is displayed in the first render texture map.
  8. The method according to claim 7, wherein the determining an interactive virtual object corresponding to the interaction operation comprises:
    determining a first interaction coordinate of the interaction operation in the first render texture map;
    mapping the first interaction coordinate to a second interaction coordinate in the virtual scene according to a framing angle of the first virtual camera; and
    determining a virtual object located at the second interaction coordinate in the virtual scene as the interactive virtual object.
  9. The method according to any one of claims 1 to 5, wherein after the displaying the first render texture map in the virtual scene, the method further comprises:
    adjusting, in response to receiving a framing adjustment operation on the first render texture map where an adjustment parameter indicated by the framing adjustment operation is within an adjustment range, a framing parameter of the first virtual camera according to the adjustment parameter, the adjustment parameter comprising at least one of a framing angle adjustment parameter or a framing distance adjustment parameter.
  10. An apparatus for displaying a virtual scene, the apparatus comprising:
    a first drawing module, configured to draw a first scene picture in a first render texture map, the first scene picture being obtained by a first virtual camera photographing the virtual scene, the first virtual camera being located at a first position in the virtual scene;
    a map display module, configured to display the first render texture map in the virtual scene, the virtual scene being used for activity presentation;
    a second drawing module, configured to draw, in response to receiving an activity advancement signal, a second scene picture in a second render texture map, the second scene picture being obtained by a second virtual camera photographing the virtual scene, the second virtual camera being located at a second position in the virtual scene, the second position being different from the first position; and
    a map replacement module, configured to replace the first render texture map in the virtual scene with the second render texture map, wherein when a virtual object within a framing range of a virtual camera in the virtual scene changes, the scene picture drawn in the render texture map changes.
  11. The apparatus according to claim 10, wherein the second drawing module comprises:
    a first drawing unit, configured to create, in response to receiving the activity advancement signal, the second virtual camera at the first position, and create the second render texture map, the second virtual camera being bound to the second render texture map;
    a second drawing unit, configured to control the second virtual camera to move from the first position to the second position; and
    a third drawing unit, configured to draw the second scene picture in the second render texture map according to the picture captured by the second virtual camera.
  12. The apparatus according to claim 11, wherein the second drawing unit comprises:
    a first drawing subunit, configured to query the second position from a camera position list according to the first position, the camera position list comprising position information of virtual cameras in different activities; and
    a second drawing subunit, configured to control the second virtual camera to move from the first position to the queried second position.
  13. The apparatus according to claim 12, wherein the second drawing subunit is further configured to query a target movement trajectory between the first position and the second position from a movement trajectory list, the movement trajectory list comprising trajectory information of virtual cameras moving between different positions; and
    the second drawing subunit is further configured to control the second virtual camera to move from the first position to the second position according to the target movement trajectory.
  14. The apparatus according to claim 11, wherein the apparatus further comprises:
    a count obtaining module, configured to obtain a camera count of created virtual cameras; and
    a camera clearing module, configured to clear, in response to the camera count being greater than a count threshold, at least one created virtual camera and the render texture map corresponding to the created virtual camera according to a creation order of the created virtual cameras.
  15. The apparatus according to any one of claims 10 to 14, wherein the apparatus further comprises:
    a map stitching module, configured to stitch the second render texture map and the first render texture map horizontally; and
    a map switching module, configured to switch, in response to receiving a horizontal swipe operation on the virtual scene, the second render texture map in the virtual scene to the first render texture map.
  16. The apparatus according to any one of claims 10 to 14, wherein the apparatus further comprises:
    an object determining module, configured to determine, in response to receiving an interaction operation on the first render texture map, an interactive virtual object corresponding to the interaction operation, the interactive virtual object belonging to virtual objects in the virtual scene; and
    an interaction response module, configured to control the interactive virtual object to respond to the interaction operation, wherein a picture of the interactive virtual object responding to the interaction operation is displayed in the first render texture map.
  17. The apparatus according to claim 16, wherein the object determining module comprises:
    a first determining unit, configured to determine a first interaction coordinate of the interaction operation in the first render texture map;
    a second determining unit, configured to map the first interaction coordinate to a second interaction coordinate in the virtual scene according to a framing angle of the first virtual camera; and
    a third determining unit, configured to determine a virtual object located at the second interaction coordinate in the virtual scene as the interactive virtual object.
  18. The apparatus according to any one of claims 10 to 14, wherein the apparatus further comprises:
    a framing adjustment module, configured to adjust, in response to receiving a framing adjustment operation on the first render texture map where an adjustment parameter indicated by the framing adjustment operation is within an adjustment range, a framing parameter of the first virtual camera according to the adjustment parameter, wherein the adjustment parameter comprises at least one of a framing angle adjustment parameter or a framing distance adjustment parameter.
  19. A computer device, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by the processor to implement the method for displaying a virtual scene according to any one of claims 1 to 9.
  20. A computer-readable storage medium, storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the method for displaying a virtual scene according to any one of claims 1 to 9.
PCT/CN2021/096717 2020-06-24 2021-05-28 虚拟场景的显示方法、装置、设备及存储介质 WO2021258994A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP21830213.1A EP4070865A4 (en) 2020-06-24 2021-05-28 VIRTUAL SCENE DISPLAY METHOD AND APPARATUS, AND RECORDING DEVICE AND MEDIUM
KR1020227018077A KR20220083839A (ko) 2020-06-24 2021-05-28 가상 장면을 표시하는 방법 및 장치, 그리고 기기 및 저장 매체
JP2022554376A JP2023517917A (ja) 2020-06-24 2021-05-28 仮想シーンの表示方法、装置、機器、及びコンピュータープログラム
US17/733,819 US20220249949A1 (en) 2020-06-24 2022-04-29 Method and apparatus for displaying virtual scene, device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010589591.6 2020-06-24
CN202010589591.6A CN111701238B (zh) 2020-06-24 2020-06-24 虚拟画卷的显示方法、装置、设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/733,819 Continuation US20220249949A1 (en) 2020-06-24 2022-04-29 Method and apparatus for displaying virtual scene, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021258994A1 true WO2021258994A1 (zh) 2021-12-30

Family

ID=72543455

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/096717 WO2021258994A1 (zh) 2020-06-24 2021-05-28 虚拟场景的显示方法、装置、设备及存储介质

Country Status (6)

Country Link
US (1) US20220249949A1 (zh)
EP (1) EP4070865A4 (zh)
JP (1) JP2023517917A (zh)
KR (1) KR20220083839A (zh)
CN (1) CN111701238B (zh)
WO (1) WO2021258994A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884874A (zh) * 2021-03-18 2021-06-01 腾讯科技(深圳)有限公司 在虚拟模型上贴花的方法、装置、设备及介质
CN114640838A (zh) * 2022-03-15 2022-06-17 北京奇艺世纪科技有限公司 画面合成方法、装置、电子设备及可读存储介质
CN114949846A (zh) * 2022-05-17 2022-08-30 网易(杭州)网络有限公司 场景地形的生成方法、装置、电子设备及介质

Families Citing this family (10)

Publication number Priority date Publication date Assignee Title
CN111701238B (zh) * 2020-06-24 2022-04-26 腾讯科技(深圳)有限公司 虚拟画卷的显示方法、装置、设备及存储介质
CN112263837B (zh) * 2020-11-16 2021-12-21 腾讯科技(深圳)有限公司 虚拟环境中的天气渲染方法、装置、设备及存储介质
CN112587921A (zh) * 2020-12-16 2021-04-02 成都完美时空网络技术有限公司 模型处理方法和装置、电子设备和存储介质
CN112791409A (zh) * 2021-01-21 2021-05-14 北京字跳网络技术有限公司 数据生成控制方法、装置、电子设备及存储介质
US20220295040A1 (en) * 2021-03-11 2022-09-15 Quintar, Inc. Augmented reality system with remote presentation including 3d graphics extending beyond frame
CN113244616B (zh) * 2021-06-24 2023-09-26 腾讯科技(深圳)有限公司 基于虚拟场景的互动方法、装置、设备及可读存储介质
CN113487662A (zh) * 2021-07-02 2021-10-08 广州博冠信息科技有限公司 画面显示方法、装置、电子设备和存储介质
CN113821345B (zh) * 2021-09-24 2023-06-30 网易(杭州)网络有限公司 游戏中的移动轨迹渲染方法、装置及电子设备
CN116828131A (zh) * 2022-03-17 2023-09-29 北京字跳网络技术有限公司 基于虚拟现实的拍摄处理方法、装置及电子设备
CN115019019B (zh) * 2022-06-01 2024-04-30 大连东软信息学院 一种实现3d特效编辑器的方法

Citations (6)

Publication number Priority date Publication date Assignee Title
US20070057944A1 (en) * 2003-09-17 2007-03-15 Koninklijke Philips Electronics N.V. System and method for rendering 3-d images on a 3-d image display screen
US20170280133A1 (en) * 2014-09-09 2017-09-28 Nokia Technologies Oy Stereo image recording and playback
CN108399634A (zh) * 2018-01-16 2018-08-14 达闼科技(北京)有限公司 基于云端计算的rgb-d数据生成方法及装置
CN110740310A (zh) * 2019-09-29 2020-01-31 北京浪潮数据技术有限公司 一种虚拟场景漫游方法、系统、装置、设备及计算机介质
CN110889384A (zh) * 2019-11-30 2020-03-17 北京城市网邻信息技术有限公司 场景切换方法及装置、电子设备和存储介质
CN111701238A (zh) * 2020-06-24 2020-09-25 腾讯科技(深圳)有限公司 虚拟画卷的显示方法、装置、设备及存储介质

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
JP3653744B2 (ja) * 1994-05-18 2005-06-02 株式会社セガ 画面切替え方法及びこれを用いたゲーム装置
JP2000102670A (ja) * 1998-09-29 2000-04-11 Square Co Ltd ゲーム装置、ゲーム方法および情報記録媒体
JP4071571B2 (ja) * 2002-08-08 2008-04-02 Kpe株式会社 遊技機、遊技機の画像表示制御装置及び画像表示制御プログラム
US7515734B2 (en) * 2006-03-27 2009-04-07 Eyecue Vision Technologies Ltd. Device, system and method for determining compliance with a positioning instruction by a figure in an image
CN101308524A (zh) * 2007-05-15 2008-11-19 上海灵禅信息技术有限公司 一种通用的二维横版游戏地图编辑设计
CN104702936A (zh) * 2015-03-31 2015-06-10 王子强 一种基于裸眼3d显示的虚拟现实交互方法
JP6592481B2 (ja) * 2017-07-31 2019-10-16 任天堂株式会社 ゲームプログラム、情報処理システム、情報処理装置、および、ゲーム処理方法
CN108389245B (zh) * 2018-02-13 2022-11-04 鲸彩在线科技(大连)有限公司 动画场景的渲染方法、装置、电子设备和可读存储介质
CN109939440B (zh) * 2019-04-17 2023-04-25 网易(杭州)网络有限公司 三维游戏地图的生成方法、装置、处理器及终端

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US20070057944A1 (en) * 2003-09-17 2007-03-15 Koninklijke Philips Electronics N.V. System and method for rendering 3-d images on a 3-d image display screen
US20170280133A1 (en) * 2014-09-09 2017-09-28 Nokia Technologies Oy Stereo image recording and playback
CN108399634A (zh) * 2018-01-16 2018-08-14 达闼科技(北京)有限公司 基于云端计算的rgb-d数据生成方法及装置
CN110740310A (zh) * 2019-09-29 2020-01-31 北京浪潮数据技术有限公司 一种虚拟场景漫游方法、系统、装置、设备及计算机介质
CN110889384A (zh) * 2019-11-30 2020-03-17 北京城市网邻信息技术有限公司 场景切换方法及装置、电子设备和存储介质
CN111701238A (zh) * 2020-06-24 2020-09-25 腾讯科技(深圳)有限公司 虚拟画卷的显示方法、装置、设备及存储介质

Non-Patent Citations (1)

Title
See also references of EP4070865A4 *

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN112884874A (zh) * 2021-03-18 2021-06-01 腾讯科技(深圳)有限公司 在虚拟模型上贴花的方法、装置、设备及介质
CN112884874B (zh) * 2021-03-18 2023-06-16 腾讯科技(深圳)有限公司 在虚拟模型上贴花的方法、装置、设备及介质
CN114640838A (zh) * 2022-03-15 2022-06-17 北京奇艺世纪科技有限公司 画面合成方法、装置、电子设备及可读存储介质
CN114640838B (zh) * 2022-03-15 2023-08-25 北京奇艺世纪科技有限公司 画面合成方法、装置、电子设备及可读存储介质
CN114949846A (zh) * 2022-05-17 2022-08-30 网易(杭州)网络有限公司 场景地形的生成方法、装置、电子设备及介质

Also Published As

Publication number Publication date
EP4070865A4 (en) 2023-05-24
JP2023517917A (ja) 2023-04-27
EP4070865A1 (en) 2022-10-12
CN111701238A (zh) 2020-09-25
US20220249949A1 (en) 2022-08-11
KR20220083839A (ko) 2022-06-20
CN111701238B (zh) 2022-04-26

Similar Documents

Publication Publication Date Title
WO2021258994A1 (zh) 虚拟场景的显示方法、装置、设备及存储介质
CN110147231B (zh) 组合特效生成方法、装置及存储介质
KR101623288B1 (ko) 렌더링 시스템, 렌더링 서버, 그 제어 방법, 및 기록 매체
CN112156464B (zh) 虚拟对象的二维形象展示方法、装置、设备及存储介质
US20240078703A1 (en) Personalized scene image processing method, apparatus and storage medium
CN111773709A (zh) 场景地图的生成方法及装置、计算机存储介质、电子设备
CN112337091B (zh) 人机交互方法、装置及电子设备
WO2022247204A1 (zh) 游戏的显示控制方法、非易失性存储介质及电子装置
CN114615513A (zh) 视频数据生成方法、装置、电子设备及存储介质
CN111142967B (zh) 一种增强现实显示的方法、装置、电子设备和存储介质
CN110889384A (zh) 场景切换方法及装置、电子设备和存储介质
CN112206519B (zh) 实现游戏场景环境变化的方法、装置、存储介质及计算机设备
US20230347240A1 (en) Display method and apparatus of scene picture, terminal, and storage medium
US20230330532A1 (en) Methods, terminal device, and storage medium for picture display
US20230315246A1 (en) Computer program, method, and server device
US11983840B2 (en) Method and apparatus for adding map element, terminal, and storage medium
CN114742970A (zh) 虚拟三维模型的处理方法、非易失性存储介质及电子装置
CN110597392B (zh) 一种基于vr模拟世界的交互方法
CN111973984A (zh) 虚拟场景的坐标控制方法、装置、电子设备及存储介质
US11948257B2 (en) Systems and methods for augmented reality video generation
US20230405475A1 (en) Shooting method, apparatus, device and medium based on virtual reality space
WO2023246307A1 (zh) 虚拟环境中的信息处理方法、装置、设备及程序产品
KR102533209B1 (ko) 다이나믹 확장현실(xr) 콘텐츠 생성 방법 및 시스템
KR102575771B1 (ko) 3d 인터랙티브 동영상 생성 방법 및 시스템
US20230316617A1 (en) Computer program, method, and server device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21830213

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20227018077

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021830213

Country of ref document: EP

Effective date: 20220706

ENP Entry into the national phase

Ref document number: 2022554376

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE