CN111939567A - Game virtual scene transformation method and device and electronic terminal

Info

Publication number: CN111939567A
Application number: CN202010918866.6A
Authority: CN (China)
Prior art keywords: transformation area, target, area, transformation, initial
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 李籽良
Current assignee: Netease Hangzhou Network Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010918866.6A
Publication of CN111939567A

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42: Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55: Controlling game characters or game objects based on the game progress

Abstract

The application provides a game virtual scene transformation method and device and an electronic terminal, relates to the technical field of games, and addresses the technical problem of the low realism of virtual scene transformation in games. The method comprises the following steps: in response to a transformation operation triggered by controlling a virtual object in a starting transformation area, determining first movement information of the virtual object in the current game scene, wherein the current game scene is the game scene corresponding to the starting transformation area; determining, with the starting transformation area as the reference, a starting outgoing state of the first movement information relative to the starting transformation area; replacing the reference of the starting outgoing state with a target transformation area to obtain a target outgoing state, and converting the target outgoing state into second movement information in the target game scene corresponding to the target transformation area; and controlling the virtual object to be displayed in the target game scene corresponding to the target transformation area based on the second movement information.

Description

Game virtual scene transformation method and device and electronic terminal
Technical Field
The present application relates to the field of game technologies, and in particular, to a method and an apparatus for changing a virtual scene of a game, and an electronic terminal.
Background
Virtual scene transformation occurs frequently in games, and game scenes in different spaces can be connected through the transformation of the virtual scene. For example, when a virtual object enters a shuttle gate from the current game scene, it can rapidly reach another target game scene that is not connected with, or is even far away from, the current game scene through the shuttle gate, thereby achieving the effect of spatial shuttling.
At present, when a virtual object passes through a transformation area of a virtual scene, the displayed effect is that the virtual object moves directly and instantly from the current game scene to the target game scene, so the player's sense of realism when undergoing a virtual scene transformation in the game is low.
Disclosure of Invention
The aim of the invention is to provide a game virtual scene transformation method and device and an electronic terminal, so as to alleviate the technical problem of the low realism of virtual scene transformation in games.
In a first aspect, an embodiment of the present application provides a game virtual scene transformation method, where a three-dimensional game scene of a game includes a scene transformation area and a virtual object, and the scene transformation area includes a start transformation area and a target transformation area; the method comprises the following steps:
responding to a transformation operation triggered by controlling the virtual object in a starting transformation area, and determining first movement information of the virtual object in a current game scene, wherein the current game scene is a game scene corresponding to the starting transformation area, and the movement information comprises at least one of the following information: direction of movement, speed of movement and current orientation;
determining a starting outgoing state of the first movement information relative to the starting transformation area by taking the starting transformation area as a reference;
replacing the reference of the starting outgoing state with the target transformation area to obtain a target outgoing state, and converting the target outgoing state into second movement information in a target game scene corresponding to the target transformation area;
and controlling the virtual object to be displayed in a target game scene corresponding to the target transformation area based on the second movement information.
In one possible implementation, the movement information includes a direction of movement;
the first moving direction in the first moving information is on the same line with respect to the incident direction of the start transformation area and the second moving direction in the second moving information is on the same line with respect to the emission direction of the target transformation area.
In one possible implementation, the movement information further includes a movement speed;
the second moving speed in the second moving information is a result of multiplying the magnitude of the first moving speed in the first moving information by the second moving direction.
In one possible implementation, the movement information includes a current orientation;
the first current orientation in the first movement information is on the same line with respect to the incident direction of the initial transformation area and the second current orientation in the second movement information is on the same line with respect to the emergent direction of the target transformation area.
In one possible implementation, the movement information further includes a relative position;
a first relative position of the virtual object with respect to the start transformation area in the first movement information is the same as a second relative position of the virtual object with respect to the target transformation area in the second movement information.
In one possible implementation, the step of determining a starting outgoing state of the first movement information with respect to the starting transformation area with reference to the starting transformation area includes:
performing mirror image processing on the first moving direction relative to a first longitudinal plane to obtain a first sub-moving direction relative to the initial transformation area, wherein the first longitudinal plane is perpendicular to the plane of the initial transformation area; performing mirror image processing on the first sub-moving direction relative to a second longitudinal plane to obtain a second sub-moving direction relative to the initial transformation area, wherein the second longitudinal plane is parallel to the plane of the initial transformation area; and determining the second sub-moving direction as the starting outgoing movement direction of the first movement information relative to the starting transformation area; or,
performing mirror image processing on the first moving direction relative to the second longitudinal plane to obtain a third sub-moving direction relative to the initial transformation area; performing mirror image processing on the third sub-moving direction relative to the first longitudinal plane to obtain a fourth sub-moving direction relative to the initial transformation area; and determining the fourth sub-moving direction as the starting outgoing movement direction of the first movement information relative to the starting transformation area.
In one possible implementation, the step of determining a starting outgoing state of the first movement information with respect to the starting transformation area with reference to the starting transformation area includes:
performing mirror image processing on the first current orientation relative to a first longitudinal plane to obtain a first sub current orientation relative to the initial transformation area, wherein the first longitudinal plane is perpendicular to the plane of the initial transformation area; performing mirror image processing on the first sub current orientation relative to a second longitudinal plane to obtain a second sub current orientation relative to the initial transformation area, wherein the second longitudinal plane is parallel to the plane of the initial transformation area; and determining the second sub current orientation as the starting outgoing orientation of the first movement information relative to the starting transformation area; or,
performing mirror image processing on the first current orientation relative to the second longitudinal plane to obtain a third sub current orientation relative to the initial transformation area; performing mirror image processing on the third sub current orientation relative to the first longitudinal plane to obtain a fourth sub current orientation relative to the initial transformation area; and determining the fourth sub current orientation as the starting outgoing orientation of the first movement information relative to the starting transformation area.
In one possible implementation, the step of determining a starting outgoing state of the first movement information with respect to the starting transformation area with reference to the starting transformation area includes:
performing mirror image processing on a plane relative to the initial transformation area based on the first relative position to obtain a third relative position relative to the initial transformation area;
determining the third relative position as a starting outgoing relative position of the first movement information relative to the starting transformation area.
In one possible implementation, before the step of determining a starting outgoing state of the first movement information relative to the starting transformation area by taking the starting transformation area as a reference, the method further includes:
converting the first movement information into a starting incoming state of the virtual object in the current game scene relative to the starting transformation area.
In one possible implementation, the step of converting the target outgoing state into second movement information in a target game scene corresponding to the target transformation area includes:
and converting a target outgoing state relative to the target transformation area into second movement information of the virtual object in the target game scene relative to the target transformation area.
In one possible implementation, the three-dimensional game scene further comprises a transformation area virtual camera, the transformation area virtual camera is bound with the target transformation area, and the transformation area virtual camera faces to the direction of the target game scene; the method further comprises the following steps:
when the virtual object is in the current game scene, acquiring a target image corresponding to a three-dimensional game scene in the target game scene through the transformation area virtual camera;
rendering the target image to obtain target texture;
and pasting the target texture to a range formed by a frame of the initial transformation area, and displaying the initial transformation area after pasting the texture in a graphical user interface.
In one possible implementation, the step of capturing, by the transformation area virtual camera, a target image corresponding to a three-dimensional game scene in the target game scene includes:
acquiring a current starting position of the current virtual object in the current game scene, and determining a current target position relative to the target transformation area in the target game scene based on the current starting position; determining the current target position as a current position of a transformation area virtual camera;
acquiring a first sight direction of the virtual object facing the initial transformation area in the current game scene, and determining, based on the first sight direction, a second sight direction of the virtual object facing the target transformation area, wherein the incident direction of the first sight direction with respect to the initial transformation area and the incident direction of the second sight direction with respect to the target transformation area are on the same ray; and determining the emergent direction of the second sight direction as the current facing direction of the transformation area virtual camera;
and acquiring a target image corresponding to a three-dimensional game scene in the current target game scene through the transformation area virtual camera which faces the current direction and is located at the current position.
In one possible implementation, the step of rendering the target image to obtain the target texture includes:
when the sight line of the virtual object moves, rendering the target image to obtain target texture;
rendering the target image by using a camera view rendering frame rate of the virtual camera in the transformation area to obtain a target texture, wherein the camera view rendering frame rate is determined according to the distance between the virtual object and the initial transformation area, and the camera view rendering frame rate is less than or equal to the rendering frame rates of other areas except the target texture in the graphical user interface;
performing intersection test between the geometric model of the initial transformation area shape and the camera view cone of the virtual object view angle to obtain a test result, and if the test result is that the geometric model is not in the camera view cone, canceling the rendering process of the target image;
rendering the target image based on the camera resolution of the virtual camera in the transformation area to obtain a target texture, wherein the camera resolution is determined according to the distance between the virtual object and the initial transformation area, and the camera resolution is less than or equal to the resolution of the other areas except the target texture in the graphical user interface.
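For illustration, the distance-based quality scaling above might look like the following minimal Python sketch; the distance breakpoints are invented, and the min() clamps reflect the "less than or equal to" constraints on frame rate and resolution.

```python
# A minimal sketch of the distance-based quality scaling described above.
# The breakpoints (5.0, 20.0) and scale factors are illustrative assumptions.
def portal_render_settings(distance, main_fps, main_resolution):
    # farther transformation areas are refreshed less often and at lower
    # resolution, never exceeding the main view's settings
    scale = 1.0 if distance < 5.0 else (0.5 if distance < 20.0 else 0.25)
    fps = min(main_fps, max(1, int(main_fps * scale)))  # camera view rendering frame rate
    width, height = main_resolution
    resolution = (max(1, int(width * scale)), max(1, int(height * scale)))  # camera resolution
    return fps, resolution
```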
In one possible implementation, the step of rendering the target image to obtain the target texture includes:
performing the following steps in a loop with a recursive function, subject to a preset maximum recursion count, until it is judged that no initial transformation area other than the current initial transformation area exists within the view angle of the current transformation area virtual camera bound to the current target transformation area, and then rendering the target image based on the plurality of determined initial transformation areas to obtain the target texture:
performing intersection test between the geometric body model of the initial transformation area shape and the camera view cone of the current transformation area virtual camera view angle to obtain a test result;
judging whether other initial transformation areas except the current initial transformation area exist in the current transformation area virtual camera visual angle according to the test result;
if so, taking the other initial transformation areas as the current initial transformation area in the next recursion judgment process.
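As a minimal sketch (not the patent's implementation), the recursion might be organized as follows; frustum_intersects() and the portal fields (shape_bounds, bound_camera) are stand-ins for engine facilities, not real API names.

```python
# Hedged sketch of the recursion described above: starting from the virtual
# camera bound to the current target transformation area, keep collecting
# initial transformation areas whose shape geometry intersects that camera's
# view frustum, up to a preset maximum recursion count.
MAX_RECURSION = 4  # preset maximum recursion count (illustrative value)

def collect_visible_portals(camera, portals, depth=0, visible=None):
    if visible is None:
        visible = []
    if depth >= MAX_RECURSION:
        return visible
    for portal in portals:
        if portal in visible:
            continue
        # intersection test between the geometry model of the initial
        # transformation area's shape and the current camera's view frustum
        if frustum_intersects(camera.view_frustum, portal.shape_bounds):
            visible.append(portal)
            # the newly found area becomes the "current" one in the next pass
            collect_visible_portals(portal.bound_camera, portals, depth + 1, visible)
    return visible
```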
In one possible implementation, the step of controlling the virtual object to be displayed in the target game scene corresponding to the target transformation area based on the second movement information includes:
copying the object model of the virtual object transformed in the scene transformation area, wherein the appearance and the action of the two copied object models are consistent;
determining a first part of the object models on one side of the starting transformation area in one of the two object models and determining a second part of the object models on one side of the target transformation area in the other object model;
displaying the first partial object model in the start transformation area based on the first movement information and displaying the second partial object model in the target transformation area based on the second movement information in a graphical user interface.
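A hedged sketch of this straddling-object display; duplicate(), set_clip_plane() and map_transform() are invented stand-ins for engine facilities, since the patent names no API.

```python
# Illustrative sketch: the object model is duplicated, one copy clipped to the
# start area's side and the other to the target area's side of the portal pair.
def display_straddling_object(obj, start_area, target_area):
    clone = obj.duplicate()  # copy with identical appearance and animation
    # first partial model: keep only the part on the start area's side
    set_clip_plane(obj, plane=start_area.plane, keep_side="near")
    # second partial model: keep only the part on the target area's side
    set_clip_plane(clone, plane=target_area.plane, keep_side="far")
    # place the clone as if it had already passed through, per the second
    # movement information
    clone.set_transform(map_transform(obj.transform, start_area, target_area))
```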
In a second aspect, a game virtual scene transformation device is provided, wherein a three-dimensional game scene of a game comprises a scene transformation area and a virtual object, and the scene transformation area comprises a starting transformation area and a target transformation area; the device comprises:
a first determining module, configured to determine, in response to a transformation operation triggered by controlling the virtual object in a starting transformation area, first movement information of the virtual object in a current game scene, where the current game scene is a game scene corresponding to the starting transformation area, and the movement information includes at least one of the following information: direction of movement, speed of movement and current orientation;
a second determining module, configured to determine, with the initial transformation area as a reference, an initial outgoing state of the first movement information with respect to the initial transformation area;
the conversion module is used for replacing the reference of the starting outgoing state with the target transformation area to obtain a target outgoing state, and converting the target outgoing state into second movement information in a target game scene corresponding to the target transformation area;
and the control module is used for controlling the virtual object to be displayed in a target game scene corresponding to the target transformation area based on the second movement information.
In a third aspect, an embodiment of the present application further provides an electronic terminal, which includes a memory and a processor, where the memory stores a computer program that is executable on the processor, and the processor executes the computer program to implement the method in the first aspect.
In a fourth aspect, this embodiment of the present application further provides a computer-readable storage medium storing machine executable instructions, which, when invoked and executed by a processor, cause the processor to perform the method of the first aspect.
The embodiment of the application brings the following beneficial effects:
the embodiment of the application provides a game virtual scene transformation method, a game virtual scene transformation device and an electronic terminal, which can determine first movement information of a virtual object in a current game scene when a transformation operation is triggered in an initial transformation area, then determine an initial outgoing state of the first movement information relative to the initial transformation area based on the initial transformation area, then replace the reference of the initial outgoing state with a target transformation area to obtain a target outgoing state, convert the target outgoing state into second movement information in a target game scene corresponding to the target transformation area, and finally control the virtual object to be displayed in the target game scene based on the second movement information. And then the determined outgoing state of the virtual object relative to the initial transformation area after passing through the initial transformation area can be kept unchanged, the reference of the outgoing state is replaced by the target transformation area, and the outgoing state relative to the target transformation area is obtained, and the converted second movement information can still keep the original movement state unchanged, so that the displayed virtual object can achieve the effect of really passing through the scene transformation area, and the virtual object is not directly subjected to transient movement, so that the reality degree and game experience of passing through the scene transformation area are enhanced, and the technical problem that the real experience degree of a player in a game for virtual scene transformation is low is solved.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the detailed description of the present application or the technical solutions in the prior art, the drawings needed to be used in the detailed description of the present application or the prior art description will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic terminal in an application scenario provided in the embodiment of the present application;
fig. 3 is a schematic flowchart of a method for changing a game virtual scene according to an embodiment of the present disclosure;
fig. 4 is an example of corresponding incoming and outgoing states of a virtual object when the virtual object passes through a scene change area in the game virtual scene change method provided in the embodiment of the present application;
fig. 5 is a schematic diagram of a graphical user interface for displaying a scene change area according to an embodiment of the present application;
fig. 6 is a schematic view of a visual line direction of a virtual object and a transformation area virtual camera according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a game virtual scene transformation apparatus according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "comprising" and "having," and any variations thereof, as used in the embodiments of the present application, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.
Currently, virtual scene transformation is a game mechanism, such as a shuttle gate, that lets the virtual object reach the target area immediately upon passing through the gate. Some games use the shuttle gate mechanism as a core gameplay element, and players need to solve various spatial puzzles with the shuttle gates.
In existing games, when the virtual object passes through a shuttle gate, the displayed effect is that the virtual object moves instantly from the current game scene to the target game scene, so the player's sense of realism when using the shuttle gate in the game is low.
Based on this, the embodiments of the present application provide a game virtual scene transformation method and device and an electronic terminal, which can alleviate the technical problem of the player's low sense of realism during virtual scene transformation in a game.
The game virtual scene transformation method in the embodiment of the application can be applied to the electronic terminal. Wherein the electronic terminal comprises a display for presenting a graphical user interface, an input device and a processor. The input device may be a keyboard, mouse, touch screen, or the like for receiving operations directed to the graphical user interface.
In practical application, the electronic terminal may be a computer device, or may also be a touch terminal such as a touch screen mobile phone and a tablet computer. As an example, the electronic terminal is a touch terminal, and the display and the input device thereof may be integrated into a touch screen for presenting and receiving operations for a graphical user interface.
In some embodiments, when the electronic terminal operates the graphical user interface, the graphical user interface may be used to operate content local to the electronic terminal, and may also be used to operate content of the peer server.
For example, as shown in fig. 1, fig. 1 is a schematic view of an application scenario provided in the embodiment of the present application. The application scenario may include an electronic terminal (e.g., a cell phone 102) and a server 101, and the electronic terminal may communicate with the server 101 through a wired network or a wireless network. The electronic terminal is used for running a virtual desktop, and can interact with the server 101 through the virtual desktop to operate the content in the server 101.
The electronic terminal of this embodiment is described by taking the mobile phone 102 as an example. The handset 102 includes Radio Frequency (RF) circuitry 110, memory 120, a touch screen 130, a processor 140, and the like. Those skilled in the art will appreciate that the handset configuration shown in fig. 2 is not intended to be limiting: it may include more or fewer components than those shown, combine certain components, split certain components, or arrange the components differently. Those skilled in the art will also appreciate that the touch screen 130 is part of a User Interface (UI), and that the cell phone 102 may include fewer user interface elements than illustrated, or the same.
The RF circuitry 110 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 120 may be used to store software programs and modules, and the processor 140 executes the various functional applications and data processing of the handset 102 by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required for at least one function, and the like, and the data storage area may store data created through use of the handset 102, and the like. Further, the memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The touch screen 130 may be used to display a graphical user interface and receive user operations with respect to the graphical user interface. Specifically, the touch screen 130 may include a display panel and a touch panel. The display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The touch panel may collect contact or non-contact operations of the user on or near it (e.g., operations performed by the user on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and generate preset operation instructions. In addition, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and gesture, detects the signals produced by the touch operation, and transmits the signals to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into information that the processor can process, sends that information to the processor 140, and receives and executes commands sent by the processor 140. The touch panel may be implemented as a resistive, capacitive, infrared, or surface acoustic wave type, among others, or by any technology developed in the future. Further, the touch panel may cover the display panel; a user may operate on or near the touch panel covering the display panel according to the graphical user interface displayed by the display panel; the touch panel detects an operation on or near it and transmits it to the processor 140 to determine the user input, and the processor 140 provides a corresponding visual output on the display panel in response to that input. In addition, the touch panel and the display panel may be implemented as two independent components or may be integrated.
The processor 140 is the control center of the handset 102, connects various parts of the entire handset using various interfaces and lines, and performs various functions and processes of the handset 102 by running or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby performing overall monitoring of the handset.
Embodiments of the present invention are further described below with reference to the accompanying drawings.
Fig. 3 is a schematic flow chart of a game virtual scene transformation method according to an embodiment of the present application. The three-dimensional game scene of the game comprises a scene transformation area and a virtual object, and the scene transformation area comprises a starting transformation area and a target transformation area. As shown in fig. 3, the method includes:
in step S310, in response to a transformation operation triggered by controlling a virtual object in a start transformation area, first movement information of the virtual object in a current game scene is determined.
The current game scene is the game scene corresponding to the starting transformation area. Further, the movement information may include at least one of: direction of movement, speed of movement, and current orientation. The first movement information in this step refers to the movement state in the current game scene corresponding to the starting transformation area, for example, the moving direction, moving speed, current orientation and the like of the virtual object in the current game scene. Of course, the movement information in the embodiments of the present application may also include other attributes of the movement state, such as a relative position.
In the embodiment of the present application, the virtual object may be any object in the game, such as a virtual character, a virtual animal, a virtual weapon, a virtual tool, a virtual building, and the like.
In practical applications, scene transformation areas come in pairs: one transformation area corresponds to another, and the two can transform scenes into each other. For example, two transformation areas are placed in a game scene with orientations that differ by 180 degrees; when a virtual object passes through one of them, the area being passed through is the starting transformation area, and the paired area that the object is transformed to is the target transformation area. The starting transformation area and the target transformation area can be placed anywhere in a game scene, and the transformation operation can be triggered when the virtual object is located in the starting transformation area.
Step S320 is to determine a start outgoing state of the first movement information relative to the start transformation area based on the start transformation area.
The reference can be understood as a fixed coordinate system based on the starting transformation area. For example, the starting transformation area is a solid area in the shape of a regular quadrangle, similar to a door frame; one vertex of the quadrangle is used as the origin of the coordinate system, and the edges of the area's length, width and height extending from that origin are respectively the x axis, y axis and z axis of the coordinate system.
In this step, the moving state of the first movement information with respect to the starting transformation area is kept unchanged. The starting outgoing state with respect to the starting transformation area refers to the state relative to the starting transformation area when the virtual object has passed through it.
For example, while the virtual object passes through the starting transformation area, its moving direction, moving speed, current orientation and so on relative to the starting transformation area are kept unchanged, so that these quantities relative to the starting transformation area are the same before and after passing through it, similar to passing through a door frame: the movement state before and after crossing the frame is unchanged.
For example, as shown in fig. 4, the virtual object passes into the starting transformation area A with the movement vector v1 relative to A and passes out of it with the movement vector v2 relative to A; the direction that v1 had relative to the starting transformation area A is still maintained by v2.
Step S330, replacing the reference of the initial outgoing state with the target transformation area to obtain a target outgoing state, and converting the target outgoing state into second movement information in the target game scene corresponding to the target transformation area.
The second movement information refers to the moving state in the target game scene corresponding to the target transformation area, for example, the moving direction, moving speed, current orientation and the like of the virtual object in the target game scene.
In this step, the relative state between the initial outgoing state with respect to the initial transformation area and the target outgoing state with respect to the target transformation area is the same, but the relative reference is different between the two states. In step S320, the reference of the initial outgoing state with respect to the initial transformation area is the initial transformation area, and in this step, the reference is replaced with the target transformation area, so that the target outgoing state with respect to the target transformation area is obtained.
For example, as shown in fig. 4, the virtual object passes out of the starting transformation area A with the movement vector v2 relative to A, and the movement vector v2' of the virtual object relative to the target transformation area B when passing out of B has the same relative direction as v2 has with respect to the starting transformation area. It can also be understood that the direction a vector has with respect to the starting transformation area when passing into and out of it is maintained with respect to the target transformation area.
In step S340, the virtual object is controlled, based on the second movement information, to be displayed in the target game scene corresponding to the target transformation area.
The virtual object can appear in the target game scene corresponding to the target transformation area in the state of the second mobile information, so that the effect that the virtual object enters the scene transformation area from the initial transformation area and penetrates out of the target transformation area is achieved.
The method thus determines, when the transformation operation is triggered in the starting transformation area, the outgoing state of the first movement information relative to the starting transformation area while keeping the virtual object's original movement state relative to that area unchanged, so the determined outgoing state of the virtual object relative to the starting transformation area after passing through it is unchanged. The reference of that outgoing state is then replaced with the target transformation area to obtain the outgoing state relative to the target transformation area, and the converted second movement information still preserves the original movement state. The displayed virtual object therefore achieves the effect of really passing through the scene transformation area instead of being moved instantaneously, which enhances the realism and game experience of passing through a scene transformation area.
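To make the pipeline concrete, the following minimal Python sketch (an illustration under assumed representations, not the patent's code) models a transformation area as a world position plus a 3x3 rotation, so that "relative to an area" means "expressed in that area's local frame"; the names TransformArea and transfer_direction are invented for the example.

```python
import numpy as np

class TransformArea:
    """A scene transformation area: a world position plus a local frame."""
    def __init__(self, position, rotation):
        self.position = np.asarray(position, dtype=float)  # world-space position
        self.rotation = np.asarray(rotation, dtype=float)  # 3x3 local-to-world rotation

    def to_local_dir(self, v):
        # express a world-space direction relative to this area
        return self.rotation.T @ v

    def to_world_dir(self, v):
        # express an area-relative direction in world space
        return self.rotation @ v

def transfer_direction(v_world, start, target, outgoing_in_local):
    # S320: starting incoming state, relative to the start transformation area
    v_in = start.to_local_dir(v_world)
    # starting outgoing state relative to the start area (e.g. the double
    # mirror shown later); the relative state itself is kept unchanged
    v_out = outgoing_in_local(v_in)
    # S330: replace the reference with the target transformation area, i.e.
    # the same local vector re-expressed in the target scene's world space
    return target.to_world_dir(v_out)
```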
The above steps are described in detail below.
In some embodiments, the moving direction of the virtual object as it passes through the scene transformation areas may be kept unchanged, to make the player feel more realistically that a front-and-back connected pair of transformation areas has been passed through. As one example, the movement information includes a direction of movement; the incident direction of the first moving direction in the first movement information with respect to the starting transformation area and the emergent direction of the second moving direction in the second movement information with respect to the target transformation area are on the same line.
For example, as shown in fig. 4, the virtual object penetrates the starting transformation area A with the incident direction of the movement vector v1 relative to A. The incident direction of v1 with respect to the starting transformation area A is the same as the incident direction of the movement vector v1' with respect to the target transformation area B. The movement vector v2' of the virtual object relative to the target transformation area B when passing out of B is on the same line as the movement vector v1' of the virtual object relative to B when passing into B.
When the virtual object passes through the scene change area, the incident moving direction relative to the initial change area and the emergent moving direction relative to the target change area are on the same line, so that the moving direction of the virtual object when passing through the scene change area can be kept unchanged, and a player feels that the virtual object really passes through a front-back connected change area.
Based on this, the moving direction of the virtual object after the scene change can be calculated by the mirroring process, so that the moving direction of the virtual object when passing through the pair of scene change areas is kept unchanged. As an example, the step S320 may include any one of the following steps:
step a), carrying out mirror image processing on the first moving direction relative to a first longitudinal plane to obtain a first sub-moving direction relative to the initial transformation area, wherein the first longitudinal plane is perpendicular to the plane of the initial transformation area; carrying out mirror image processing on the first sub-moving direction relative to a second longitudinal plane to obtain a second sub-moving direction relative to the initial transformation area, wherein the second longitudinal plane is parallel to the plane of the initial transformation area; determining the second sub-moving direction as the starting outgoing moving direction of the first movement information relative to the starting transformation area;
step b), carrying out mirror image processing on the first moving direction relative to the second longitudinal plane to obtain a third sub-moving direction relative to the initial transformation area; carrying out mirror image processing on the third sub-moving direction relative to the first longitudinal plane to obtain a fourth sub-moving direction relative to the initial transformation area; the fourth sub-movement direction is determined as a starting outgoing movement direction of the first movement information relative to the starting transformation area.
For the above step a), for example, assuming that the movement vector of the virtual object entering the starting transformation area A relative to A is v1, the vector v1 is first mirrored across the vertical plane perpendicular to the plane of A, and the result is then mirrored across the plane of A, obtaining the movement vector v2.
For the above step b), for example, the vector v1 is instead first mirrored across the plane of A and then across the vertical plane perpendicular to it; since reflections across perpendicular planes commute, this yields the same movement vector v2.
The moving direction of the virtual object after the scene change is calculated by carrying out mirror image processing twice on the moving vector entering the initial change area, so that the moving direction of the virtual object when the virtual object passes through the scene change area is kept unchanged, and the calculating efficiency of the moving direction of the virtual object after the scene change can be improved.
In some embodiments, the moving speed of the virtual object through the scene transformation areas may also be kept unchanged, so that the player feels more realistically that the virtual object passes through a front-and-back connected pair of transformation areas. As one example, the movement information further includes a movement speed; the second moving speed in the second movement information is the product of the magnitude of the first moving speed in the first movement information and the unit vector of the second moving direction.
The virtual object has a moving speed when passing through the initial transformation area, the speed of the virtual object is kept unchanged when the virtual object comes out of the target transformation area, and the calculation method of the speed direction unit vector is the same as that of the moving direction unit vector.
The speed and the speed direction are processed separately, and the original speed is multiplied by the calculated unit vector of the moving direction after the scene is changed, so that the overall moving speed after the scene is changed is obtained, the relative speed of the speed after the scene is changed relative to the target changing area and the relative speed before the scene is changed relative to the initial changing area are kept unchanged, and the reality degree of the player passing through the changing area is improved.
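Building on transfer_direction() from the sketch above, the two mirror operations of steps a)/b) and this speed handling might look as follows. Taking the area's local x axis as the normal of the first longitudinal plane and its local z axis as the normal of the area's own plane is an illustrative convention, not something the patent fixes; since the two reflections commute, orders a) and b) give the same result.

```python
def mirror(v, n):
    # reflect v across the plane through the origin with unit normal n
    return v - 2.0 * np.dot(v, n) * n

def double_mirror(v_local):
    v1 = mirror(v_local, np.array([1.0, 0.0, 0.0]))  # first longitudinal plane
    return mirror(v1, np.array([0.0, 0.0, 1.0]))     # plane of the area itself

def transfer_velocity(vel_world, start, target):
    speed = np.linalg.norm(vel_world)  # the magnitude is preserved
    if speed == 0.0:
        return vel_world
    unit = transfer_direction(vel_world / speed, start, target, double_mirror)
    return speed * unit                # speed times the new unit direction
```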
In some embodiments, the current orientation of the virtual object as it passes through a pair of scene transformation areas may be maintained, to make the player feel more realistically that a front-and-back connected pair of transformation areas has been passed through. As one example, the movement information includes a current orientation; the incident direction of the first current orientation in the first movement information with respect to the starting transformation area and the emergent direction of the second current orientation in the second movement information with respect to the target transformation area are on the same line.
Here, the current orientation refers to the orientation of the virtual object's model body, i.e. the model's rotation angle in the game scene. As shown in fig. 4, when the virtual object penetrates into the starting transformation area A, the incident vector of its current orientation with respect to A is the vector v1. The incident direction of the orientation vector v1 with respect to the starting transformation area A is the same as the incident direction of the orientation vector v1' with respect to the target transformation area B. The orientation vector v2' of the virtual object with respect to the target transformation area B as it exits B is on the same ray as the orientation vector v1' of the virtual object with respect to B as it enters B.
When the virtual object passes through the scene change area pair, the incident orientation direction relative to the initial change area and the emergent orientation direction relative to the target change area are on the same line, so that the orientation of the model of the virtual object passing through the scene change area pair can be kept unchanged, and a player can feel that the virtual object really passes through a scene change area which is connected in front and back.
Based on this, the orientation of the virtual object after the scene change can be calculated by the mirroring process, so that the orientation of the virtual object when passing through the scene change area remains unchanged. As an example, the step S320 may include any one of the following steps:
step c), carrying out mirror image processing on the first current orientation relative to a first longitudinal plane to obtain a first sub current orientation relative to the initial transformation area, wherein the first longitudinal plane is perpendicular to the plane of the initial transformation area; carrying out mirror image processing on the first sub current orientation relative to a second longitudinal plane to obtain a second sub current orientation relative to the initial transformation area, wherein the second longitudinal plane is parallel to the plane of the initial transformation area; determining the second sub current orientation as the starting outgoing orientation of the first movement information relative to the starting transformation area;
step d), carrying out mirror image processing on the first current orientation relative to the second longitudinal plane to obtain a third sub current orientation relative to the initial transformation area; carrying out mirror image processing on the third sub current orientation relative to the first longitudinal plane to obtain a fourth sub current orientation relative to the initial transformation area; determining the fourth sub-current orientation as a starting outgoing orientation of the first movement information relative to the starting transformation area.
For the above step c), for example, assuming that the orientation vector of the virtual object entering the starting transformation area A relative to A is v1, the vector v1 is first mirrored across the vertical plane perpendicular to the plane of A, and the result is then mirrored across the plane of A, obtaining the orientation vector v2.
For the above step d), for example, the vector v1 is instead first mirrored across the plane of A and then across the vertical plane perpendicular to it, likewise obtaining the orientation vector v2.
The current orientation of the virtual object after the scene change is calculated by carrying out mirror image processing twice on the vector of the current orientation when the virtual object enters the initial change area, so that the orientation of the virtual object when the virtual object passes through the scene change area is kept unchanged, and the calculation efficiency of the orientation after the scene change can be improved.
In some embodiments, the relative position of the virtual object before and after passing through the scene transformation area may also be maintained, to make the player feel more realistically that a front-and-back connected scene transformation area has been passed through. As one example, the movement information further includes a relative position; a first relative position of the virtual object with respect to the starting transformation area in the first movement information is the same as a second relative position of the virtual object with respect to the target transformation area in the second movement information.
In the embodiment of the present application, the relative position of the virtual object with respect to the start transformation area when penetrating into the start transformation area is the same as the relative position of the virtual object with respect to the target transformation area when penetrating out of the target transformation area. For example, the virtual object passes through the start transformation area from the upper left position of the start transformation area, and when the virtual object passes through the target transformation area, the virtual object also passes through the target transformation area from the upper left position of the target transformation area.
By making the penetrating position of the virtual object relative to the starting transformation area the same as the penetrating position relative to the target transformation area when the virtual object passes through the scene transformation area, the relative position of the virtual object relative to the scene transformation area can be kept unchanged when the virtual object passes through the scene transformation area, and the player can feel that the virtual object actually passes through a scene transformation area which is connected in front and back.
Based on this, the position of the virtual object after the scene change with respect to the target change region can be calculated by the mirroring process, so that the relative position of the virtual object when passing through the scene change region is kept unchanged. As an example, the step S320 may include the steps of:
step e), carrying out mirror image processing relative to the plane of the initial transformation area based on the first relative position to obtain a third relative position relative to the initial transformation area;
step f), determining the third relative position as a starting outgoing relative position of the first movement information with respect to the starting transformation area.
For the calculation of the virtual object's position after the scene transformation: assuming the player's position relative to the starting transformation area when entering it is p1, p1 is mirrored across the plane of the starting transformation area to obtain p2, and the position of the virtual object relative to the target transformation area when penetrating out of it is then the same as the relative position of p2 with respect to the starting transformation area.
The relative position of the virtual object after the scene change is calculated by carrying out mirror image processing on the relative position when the virtual object enters the initial change area, so that the relative position of the virtual object before and after passing through the scene change area is kept unchanged, and the calculation efficiency of the position of the virtual object after the scene change can be improved.
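In the same assumed local frame, steps e)/f) reduce to a single sign flip along the area's normal; a sketch reusing the TransformArea fields from the earlier example:

```python
def transfer_position(p_world, start, target):
    # first relative position p1 of the object with respect to the start area
    p_local = start.rotation.T @ (p_world - start.position)
    # step e): mirror p1 across the plane of the start area (local z = normal),
    # giving the starting outgoing relative position p2
    p_local[2] = -p_local[2]
    # re-base the same relative position onto the target transformation area
    return target.position + target.rotation @ p_local
```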
In some embodiments, the first movement information relative to the current game scenario may be first converted to a start incoming state relative to the start transition region prior to determining a start outgoing state relative to the start transition region. As an example, before step S320, the method may further include the steps of:
and step g), converting the first movement information into a starting incoming state of the virtual object in the current game scene relative to the starting transformation area.
The first movement information is movement information that takes the current game scene corresponding to the starting transformation area as the reference. In the embodiment of the application, the first movement information relative to the current game scene is first converted into the starting incoming state relative to the starting transformation area, so that the starting outgoing state relative to the starting transformation area can be determined with the starting incoming state kept unchanged.
Based on the step g), the target outgoing state relative to the target transformation area may be first converted into second movement information relative to the target game scene, so as to display the virtual object in the target game scene based on the second movement information. As an example, the process of converting the target outgoing state into the second movement information in the target game scene corresponding to the target transformation area in step S330 may include the following steps:
and h), converting the target outgoing state relative to the target transformation area into second movement information of the virtual object relative to the target game scene in the target transformation area.
The second movement information is movement information referenced to the target game scene, i.e. the game scene in which the target transformation area is located. In this embodiment, the target outgoing state relative to the target transformation area may be converted into second movement information relative to the target game scene by applying the world transformation matrix of the target transformation area, so that the virtual object can subsequently be displayed based on the second movement information.
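As an illustration of this local-to-world step, a hedged sketch follows; the row-major 4x4 layout and the function names are assumptions of this illustration, since the application only states that a world transformation matrix of the target transformation area is used:

```python
from typing import List, Tuple

Matrix4 = List[List[float]]

def transform_point(world: Matrix4, p: Tuple[float, float, float]):
    """Transform point p (homogeneous w = 1) from the target area's local
    space into target-game-scene world space."""
    x, y, z = p
    return tuple(row[0] * x + row[1] * y + row[2] * z + row[3] for row in world[:3])

def transform_direction(world: Matrix4, v: Tuple[float, float, float]):
    """Directions use w = 0, so the translation column is ignored."""
    x, y, z = v
    return tuple(row[0] * x + row[1] * y + row[2] * z for row in world[:3])

# Target area sitting at world position (10, 0, 4), axis-aligned:
world = [[1, 0, 0, 10], [0, 1, 0, 0], [0, 0, 1, 4], [0, 0, 0, 1]]
print(transform_point(world, (0.3, 1.2, 0.5)))      # (10.3, 1.2, 4.5)
print(transform_direction(world, (0.0, 0.0, 1.0)))  # (0.0, 0.0, 1.0)
```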
In some embodiments, the player may observe a scene image of the target game scene corresponding to the target transformation area through the initial transformation area. As an example, the three-dimensional game scene further comprises a transformation area virtual camera, the transformation area virtual camera is bound to the target transformation area and faces the target game scene; the method may further comprise the following steps:
step i), when the virtual object is in the current game scene, acquiring, through the transformation area virtual camera, a target image of the three-dimensional game scene within the target game scene;
step j), rendering the target image to obtain a target texture;
step k), pasting the target texture into the area enclosed by the frame of the initial transformation area, and displaying the textured initial transformation area in the graphical user interface.
In this embodiment, a transformation area virtual camera can be bound to the target transformation area. The target game scene captured from the viewpoint of that camera is rendered to a texture, and the texture is then pasted onto the initial transformation area in front of the virtual object. As shown in fig. 5, the player can see the target game scene C corresponding to the target transformation area through the initial transformation area a, i.e. the player sees the scene on the far side of the scene transformation area.
By using an additional transformation area virtual camera together with render-to-texture, the target game scene on the far side of the scene transformation area is rendered even though it is not spatially connected to the current game scene; the player can see the target game scene through the initial transformation area, which improves the realism of the scene transformation area and the game experience.
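A structural sketch of steps i) to k) follows. The renderer is a stub standing in for whatever engine API actually captures a camera image and uploads a texture; every name is an assumption made for illustration:

```python
class StubRenderer:
    def capture(self, camera: str) -> str:
        return f"image-from-{camera}"        # stand-in for a framebuffer grab

    def to_texture(self, image: str) -> str:
        return f"texture({image})"           # stand-in for a GPU upload

class PortalSurface:
    """The quad filling the frame of the initial transformation area."""
    def __init__(self) -> None:
        self.texture = None

    def set_texture(self, tex: str) -> None:
        self.texture = tex                   # step k): paste into the frame

def update_portal_view(renderer: StubRenderer, portal_camera: str,
                       surface: PortalSurface) -> None:
    image = renderer.capture(portal_camera)          # step i): shoot the target scene
    surface.set_texture(renderer.to_texture(image))  # step j): render to a texture

surface = PortalSurface()
update_portal_view(StubRenderer(), "transformation-area-camera", surface)
print(surface.texture)  # texture(image-from-transformation-area-camera)
```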
Based on steps i), j) and k), the target game scene displayed in the initial transformation area can change as the virtual object's position and line of sight move in the current game scene, making the display dynamic and more realistic. As an example, step i) may include the following steps:
step l), acquiring the current initial position of the virtual object in the current game scene, and determining, based on the current initial position, the current target position relative to the target transformation area in the target game scene; determining the current target position as the current position of the transformation area virtual camera;
step m), acquiring a first sight direction of the virtual object facing the initial transformation area in the current game scene, and determining, based on the first sight direction, a second sight direction of the virtual object facing the target transformation area, wherein the first sight direction and the second sight direction are on the same ray; determining the exit direction of the second sight direction as the current facing direction of the transformation area virtual camera;
step n), acquiring, through the transformation area virtual camera with the current facing direction and at the current position, a target image of the three-dimensional game scene within the current target game scene.
For example, as shown in fig. 6, the virtual object's line of sight through the initial transformation area is arrow Va, and the capture direction of the transformation area virtual camera on the target transformation area, facing the target game scene, is arrow Vb. The direction of Va relative to the initial transformation area is the same as the direction of Vb relative to the target transformation area, so the transformation area virtual camera shoots the target game scene along the same line of sight as the virtual object.
As to the orientation of the transformation area virtual camera: its capture orientation may be determined from the gaze direction of the virtual object using the mirroring of the current orientation in steps c) and d) above. As to its position: its capture position may be determined from the virtual object's position relative to the initial transformation area using the mirroring of the relative position in steps e) and f) above.
The capture direction of the transformation area virtual camera and its position relative to the target transformation area are thus determined from the virtual object's line of sight in the current game scene and its position relative to the initial transformation area. This simulates the virtual object's viewpoint, so the target game scene displayed in the initial transformation area changes as the virtual object's position and line of sight move, giving a dynamic and more realistic display.
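For illustration, a hedged sketch of one plausible reading of steps l) and m): the transformation area virtual camera takes the player's pose expressed relative to the initial transformation area, mirrors it across the area's plane, and re-expresses it relative to the target transformation area. Treating the plane normal as each area's local unit z axis is an assumption of this sketch:

```python
from typing import Tuple

Vec = Tuple[float, float, float]

def reflect(v: Vec, n: Vec) -> Vec:
    d = 2.0 * (v[0] * n[0] + v[1] * n[1] + v[2] * n[2])
    return (v[0] - d * n[0], v[1] - d * n[1], v[2] - d * n[2])

def portal_camera_pose(player_pos: Vec, player_gaze: Vec,
                       n: Vec = (0.0, 0.0, 1.0)) -> Tuple[Vec, Vec]:
    """Return (position, capture direction) of the transformation area
    virtual camera in the target area's local space, keeping Va and Vb of
    fig. 6 on one line."""
    return reflect(player_pos, n), reflect(player_gaze, n)

pos, gaze = portal_camera_pose((0.3, 1.2, -2.0), (0.1, 0.0, 0.9))
print(pos, gaze)  # (0.3, 1.2, 2.0) (0.1, 0.0, -0.9)
```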
Based on steps i), j) and k), the rendering frame rate, the rendering resolution, and whether the target game scene image within the frame of the initial transformation area is rendered at all can each be optimized according to the specific situation of the virtual object in the current game scene. As an example, step j) may include the following steps:
step o), rendering the target image to obtain the target texture only when the line of sight of the virtual object moves;
step p), rendering the target image at the camera view rendering frame rate of the transformation area virtual camera to obtain the target texture, wherein the camera view rendering frame rate is determined according to the distance between the virtual object and the initial transformation area, and is less than or equal to the rendering frame rate of the areas of the graphical user interface other than the target texture;
step q), performing an intersection test between a geometric model of the initial transformation area's shape and the camera frustum of the virtual object's viewpoint to obtain a test result, and if the test result is that the geometric model is not within the camera frustum, cancelling the rendering of the target image;
step r), rendering the target image at the camera resolution of the transformation area virtual camera to obtain the target texture, wherein the camera resolution is determined according to the distance between the virtual object and the initial transformation area, and is less than or equal to the resolution of the areas of the graphical user interface other than the target texture.
For step p) above, the rendering frame rate (refresh rate) of the transformation area virtual camera may be adjusted according to the distance of the virtual object from the initial transformation area, using a rate lower than that of the virtual object's main view. For example, if the main view renders at 60 frames per second, the transformation area virtual camera may render at 50 frames per second when the virtual object is close to the initial transformation area, and at a much lower rate, such as 10 frames per second, when it is far away; as the virtual object approaches, the rate is raised again. It is also possible to update the transformation area virtual camera's render only when the virtual object's view moves, and otherwise reuse the previous result. In this way the player notices nothing, while the performance overhead of data processing is reduced.
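A minimal sketch of such a distance-based refresh policy follows; the thresholds and frame rates are illustrative values, not ones fixed by the present application:

```python
def portal_render_fps(distance: float, main_view_fps: int = 60) -> int:
    """Pick a refresh rate for the transformation area virtual camera that
    never exceeds the main view's rate and falls off with distance."""
    if distance < 5.0:
        return min(50, main_view_fps)   # close to the area: near-full rate
    if distance < 20.0:
        return 30
    return 10                           # far away: a coarse refresh suffices

def needs_rerender(view_moved: bool) -> bool:
    """Step o): only refresh the portal texture when the player's line of
    sight has moved; otherwise reuse the previous result."""
    return view_moved

print(portal_render_fps(3.0), portal_render_fps(50.0))  # 50 10
```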
For step q) above, if the initial transformation area is not visible to the virtual object, the image captured by the transformation area virtual camera need not be rendered at all. For example, the initial transformation area may be represented by a rectangle; an intersection test is performed between the rectangle and the camera frustum of the virtual object, and if the rectangle is not within the frustum, the camera render of the transformation area virtual camera can be skipped. When a scene contains many unseen initial transformation areas, this saves a large amount of CPU and GPU processing time.
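A hedged sketch of this culling test follows, approximating the initial transformation area by its four rectangle corners and the frustum by inward-facing planes; extracting the planes from a projection matrix is engine-specific and omitted. The test is conservative: it may keep a rectangle that merely straddles plane corners, which is safe for culling because a visible portal is never skipped:

```python
from typing import List, Tuple

Vec = Tuple[float, float, float]
Plane = Tuple[Vec, float]   # (unit normal n, offset d), inside when n.x + d >= 0

def outside_plane(corners: List[Vec], plane: Plane) -> bool:
    n, d = plane
    return all(c[0] * n[0] + c[1] * n[1] + c[2] * n[2] + d < 0.0 for c in corners)

def rect_in_frustum(corners: List[Vec], frustum: List[Plane]) -> bool:
    # If every corner is outside any single plane, the rectangle cannot
    # intersect the frustum and the portal camera's render can be skipped.
    return not any(outside_plane(corners, p) for p in frustum)

corners = [(-1.0, 0.0, 5.0), (1.0, 0.0, 5.0), (1.0, 2.0, 5.0), (-1.0, 2.0, 5.0)]
near_plane: Plane = ((0.0, 0.0, 1.0), -0.1)    # keeps points with z >= 0.1
print(rect_in_frustum(corners, [near_plane]))  # True
```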
For step r) above, rendering efficiency can be improved by moderately reducing the resolution of the rendered texture: since the initial transformation area never fills the whole screen, the image within its frame, i.e. the image captured by the transformation area virtual camera, can be rendered at a size below the screen resolution. For example, render textures of several sizes are pre-allocated; as the virtual object approaches the initial transformation area the texture is switched to a higher-resolution one, and as it moves away the texture is switched to a smaller size.
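An illustrative sketch of the pre-allocated texture switching follows; the sizes and cut-off distances are assumptions:

```python
PREALLOCATED = {256: "tex_256", 512: "tex_512", 1024: "tex_1024"}

def pick_portal_texture(distance: float) -> str:
    if distance < 5.0:
        size = 1024          # close: highest pre-allocated resolution
    elif distance < 20.0:
        size = 512
    else:
        size = 256           # far: a small texture is indistinguishable
    return PREALLOCATED[size]

print(pick_portal_texture(3.0), pick_portal_texture(30.0))  # tex_1024 tex_256
```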
Optimizing the rendering frame rate, rendering resolution, culling and the like of the transformation area virtual camera reduces the performance overhead of image processing, especially when the three-dimensional scene contains many initial transformation areas. It also mitigates the cost of the scene within an initial transformation area's frame being rendered twice whenever the virtual object looks at it (once from the normal viewpoint and once from the transformation area virtual camera's viewpoint), relieving significant performance pressure.
Based on the above steps i), j) and k), the initial transformation area can be rendered by means of recursive rendering. As an example, step j) may include the following steps:
step s), using a recursive function, and subject to a preset maximum recursion count, repeating the following steps until it is judged that no initial transformation area other than the current one exists within the view of the transformation area virtual camera bound to the current target transformation area, and then rendering the target image based on all the initial transformation areas so determined to obtain the target texture:
step t), performing an intersection test between a geometric model of the initial transformation area's shape and the camera frustum of the current transformation area virtual camera's viewpoint to obtain a test result;
step u), judging from the test result whether an initial transformation area other than the current one exists within the view of the current transformation area virtual camera;
step v), if so, taking that other initial transformation area as the current initial transformation area in the next round of the recursive judgment.
In practice, when the virtual object sees another initial transformation area through an initial transformation area, the inner area must be rendered recursively. This resembles two mirrors facing each other, in which an infinitely receding scene can be seen.
In this embodiment, when the transformation area virtual camera renders, it is judged whether another initial transformation area lies within its view; the judgment can be made by an intersection test between the view frustum and the area's rectangle, and if another initial transformation area lies within the frustum, that inner area is rendered as well. The whole judgment proceeds recursively, i.e. each initial transformation area found inside another must itself be checked in the same way, which yields an effective recursive rendering when initial transformation areas are nested within one another.
To prevent infinite recursion (as when two initial transformation areas face each other), the depth is bounded by a preset maximum recursion count. Once that count is reached, the check for further initial transformation areas inside the current one stops.
It should be noted that every level of the recursion uses the same calculation, but each level is evaluated at a different place: the current initial transformation area is replaced by the initial transformation area found inside it, so the position and angle in question change at each level. Note also that when rendering an initial transformation area seen inside another, the viewpoint used is not the virtual object's but that of the enclosing initial transformation area, i.e. of its transformation area virtual camera, because the view from an initial transformation area differs from the virtual object's own view.
Rendering the recursive effect through recursive processing determines more efficiently and accurately whether the virtual object can see further initial transformation areas through the frame of an initial transformation area.
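A sketch of the recursion in steps s) to v) follows. Portals seen inside portals are rendered from the portal camera's own viewpoint, innermost first, with a preset depth cap breaking mirror-facing-mirror loops; the visibility test and camera mirroring are stubbed, and all names are illustrative:

```python
MAX_RECURSION = 4   # preset maximum recursion count (the value is an assumption)

class Portal:
    def __init__(self, name: str) -> None:
        self.name = name

    def camera_for(self, viewpoint: str) -> str:
        return f"{viewpoint}->{self.name}"   # stand-in for the mirrored camera

    def sees(self, viewpoint: str, other: "Portal") -> bool:
        return True                          # stand-in for step t)'s frustum test

def render_portal(portal: Portal, viewpoint: str, portals: list, depth: int = 0) -> None:
    if depth >= MAX_RECURSION:
        return                               # step s): stop at the preset depth
    inner = portal.camera_for(viewpoint)     # the portal camera's viewpoint,
                                             # not the player's
    for other in portals:                    # steps t) to v)
        if other is not portal and portal.sees(inner, other):
            render_portal(other, inner, portals, depth + 1)
    print(f"render {portal.name} as seen from {viewpoint}")  # innermost first

a, b = Portal("A"), Portal("B")              # two areas facing each other
render_portal(a, "player", [a, b])
```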
In some embodiments, a seamless transition may also be achieved while the virtual object passes through the scene transformation area. As an example, step S340 may include the following steps:
step w), duplicating the object model of the virtual object passing through the scene transformation area, the two resulting object models being identical in appearance and motion;
step x), determining, in one of the two object models, a first partial object model on the initial transformation area side, and determining, in the other object model, a second partial object model on the target transformation area side;
step y), in the graphical user interface, displaying the first partial object model in the initial transformation area based on the first movement information, and displaying the second partial object model in the target transformation area based on the second movement information.
For the above step w), two identical object models can be controlled simultaneously using the same motion instruction, i.e. the appearance and motion of the two object models coincide.
For steps x) and y) above: one of the two object models displays, at one end of the initial transformation area (e.g. the incoming position), only the part of the model on that side of the initial transformation area (e.g. part of the body), while the part on the other side (e.g. the rest of the body) is hidden; the other object model displays, at one end of the target transformation area (e.g. the outgoing position), only the part of the model on that side of the target transformation area, while the part on the other side is hidden.
Displaying one part of the virtual object's body only on the initial transformation area side and the other part only on the target transformation area side, while keeping the two partial models identical in appearance and motion, produces a seamless display, avoids an abrupt, low-realism teleport effect, and makes the virtual object's passage through the scene transformation area look more real.
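For illustration, a minimal sketch of the two-model arrangement in steps w) to y); a real engine would perform the clipping with a shader clip plane, whereas here the visible side is merely recorded as state, and all names are assumptions:

```python
class HalfModel:
    def __init__(self, visible_side: str) -> None:
        self.visible_side = visible_side   # which side of the plane is drawn
        self.pose = None

    def apply(self, pose: str) -> None:
        self.pose = pose                   # the same motion drives both copies

def split_across_portal(pose_in_start_scene: str, pose_in_target_scene: str):
    near_half = HalfModel("initial transformation area side")
    far_half = HalfModel("target transformation area side")
    near_half.apply(pose_in_start_scene)   # placed by the first movement info
    far_half.apply(pose_in_target_scene)   # placed by the second movement info
    return near_half, far_half

near, far = split_across_portal("pose@start", "pose@target")
print(near.visible_side, far.visible_side)
```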
Fig. 7 provides a schematic structural diagram of a game virtual scene transformation device. The three-dimensional game scene of the game comprises a scene transformation area and a virtual object, wherein the scene transformation area comprises a starting transformation area and a target transformation area. As shown in fig. 7, the game virtual scene transformation device 700 includes:
a first determining module 701, configured to determine, in response to a transformation operation triggered by controlling a virtual object in a starting transformation area, first movement information of the virtual object in a current game scene, where the current game scene is a game scene corresponding to the starting transformation area, and the movement information includes at least one of the following information: direction of movement, speed of movement and current orientation;
a second determining module 702, configured to determine, by taking the initial transformation area as a reference, an initial outgoing state of the first movement information relative to the initial transformation area;
a first conversion module 703, configured to replace a reference of the initial outgoing state with a target conversion area to obtain a target outgoing state, and convert the target outgoing state into second movement information in a target game scene corresponding to the target conversion area;
a control module 704, configured to control the virtual object to be displayed in the target game scene corresponding to the target transformation area based on the second movement information.
In some embodiments, the movement information includes a direction of movement;
the first moving direction in the first moving information is on the same line with respect to the incident direction of the start transformation area and the second moving direction in the second moving information is on the same line with respect to the emission direction of the target transformation area.
In some embodiments, the movement information further includes a movement speed;
the second moving speed in the second moving information is a result of multiplying the magnitude of the first moving speed in the first moving information by the second moving direction.
In some embodiments, the movement information includes a current orientation;
the first current orientation in the first movement information is on the same line with respect to the incident direction of the initial transformation area and the second current orientation in the second movement information is on the same line with respect to the emergent direction of the target transformation area.
In some embodiments, the movement information further includes a relative position;
a first relative position of the virtual object with respect to the start transformation area in the first movement information is the same as a second relative position of the virtual object with respect to the target transformation area in the second movement information.
In some embodiments, the second determining module 702 is specifically configured to:
mirroring the first moving direction relative to a first longitudinal plane to obtain a first sub-moving direction relative to the initial transformation area, wherein the first longitudinal plane is perpendicular to the plane of the initial transformation area; mirroring the first sub-moving direction relative to a second longitudinal plane to obtain a second sub-moving direction relative to the initial transformation area, wherein the second longitudinal plane is parallel to the plane of the initial transformation area; and determining the second sub-moving direction as the start outgoing moving direction of the first movement information relative to the initial transformation area; or, alternatively,
mirroring the first moving direction relative to the second longitudinal plane to obtain a third sub-moving direction relative to the initial transformation area; mirroring the third sub-moving direction relative to the first longitudinal plane to obtain a fourth sub-moving direction relative to the initial transformation area; and determining the fourth sub-moving direction as the start outgoing moving direction of the first movement information relative to the initial transformation area.
In some embodiments, the second determining module 702 is specifically configured to:
mirroring the first current orientation relative to a first longitudinal plane to obtain a first sub current orientation relative to the initial transformation area, wherein the first longitudinal plane is perpendicular to the plane of the initial transformation area; mirroring the first sub current orientation relative to a second longitudinal plane to obtain a second sub current orientation relative to the initial transformation area, wherein the second longitudinal plane is parallel to the plane of the initial transformation area; and determining the second sub current orientation as the start outgoing orientation of the first movement information relative to the initial transformation area; or, alternatively,
mirroring the first current orientation relative to the second longitudinal plane to obtain a third sub current orientation relative to the initial transformation area; mirroring the third sub current orientation relative to the first longitudinal plane to obtain a fourth sub current orientation relative to the initial transformation area; and determining the fourth sub current orientation as the start outgoing orientation of the first movement information relative to the initial transformation area.
In some embodiments, the second determining module 702 is specifically configured to:
mirroring the first relative position across the plane of the initial transformation area to obtain a third relative position relative to the initial transformation area;
the third relative position is determined as a starting outgoing relative position of the first movement information with respect to the starting transformation area.
In some embodiments, the initial transformation area is located in the current game scene of the game; the device further includes:
a second conversion module, configured to convert the first movement information into a start incoming state of the virtual object in the current game scene relative to the initial transformation area, before the start outgoing state of the first movement information relative to the initial transformation area is determined with the initial transformation area as a reference.
In some embodiments, the first conversion module 703 is specifically configured to:
convert the target outgoing state relative to the target transformation area into second movement information of the virtual object relative to the target game scene in which the target transformation area is located.
In some embodiments, a transformation area virtual camera is further included in the three-dimensional game scene, the transformation area virtual camera is bound with the target transformation area, and the transformation area virtual camera faces the direction of the target game scene; the device also includes:
an acquisition module, configured to acquire, through the transformation area virtual camera, a target image of the three-dimensional game scene within the target game scene when the virtual object is in the current game scene;
a rendering module, configured to render the target image to obtain a target texture;
a texture module, configured to paste the target texture into the area enclosed by the frame of the initial transformation area, and to display the textured initial transformation area in the graphical user interface.
In some embodiments, the acquisition module is specifically configured to:
acquiring the current initial position of the virtual object in the current game scene, and determining, based on the current initial position, the current target position relative to the target transformation area in the target game scene; determining the current target position as the current position of the transformation area virtual camera;
acquiring a first sight direction of the virtual object facing the initial transformation area in the current game scene, and determining, based on the first sight direction, a second sight direction of the virtual object facing the target transformation area, wherein the first sight direction and the second sight direction are on the same ray; determining the exit direction of the second sight direction as the current facing direction of the transformation area virtual camera;
acquiring, through the transformation area virtual camera with the current facing direction and at the current position, a target image of the three-dimensional game scene within the current target game scene.
In some embodiments, the rendering module is specifically configured to:
rendering the target image to obtain the target texture only when the line of sight of the virtual object moves;
rendering the target image at the camera view rendering frame rate of the transformation area virtual camera to obtain the target texture, wherein the camera view rendering frame rate is determined according to the distance between the virtual object and the initial transformation area, and is less than or equal to the rendering frame rate of the areas of the graphical user interface other than the target texture;
performing an intersection test between a geometric model of the initial transformation area's shape and the camera frustum of the virtual object's viewpoint to obtain a test result, and if the test result is that the geometric model is not within the camera frustum, cancelling the rendering of the target image;
rendering the target image at the camera resolution of the transformation area virtual camera to obtain the target texture, wherein the camera resolution is determined according to the distance between the virtual object and the initial transformation area, and is less than or equal to the resolution of the areas of the graphical user interface other than the target texture.
In some embodiments, the rendering module is specifically configured to:
using a recursive function, and subject to a preset maximum recursion count, repeating the following steps until it is judged that no initial transformation area other than the current one exists within the view of the transformation area virtual camera bound to the current target transformation area, and then rendering the target image based on all the initial transformation areas so determined to obtain the target texture:
performing an intersection test between a geometric model of the initial transformation area's shape and the camera frustum of the current transformation area virtual camera's viewpoint to obtain a test result;
judging from the test result whether an initial transformation area other than the current one exists within the view of the current transformation area virtual camera;
if so, taking that other initial transformation area as the current initial transformation area in the next round of the recursive judgment.
In some embodiments, the control module 704 is specifically configured to:
duplicating the object model of the virtual object passing through the scene transformation area, the two resulting object models being identical in appearance and motion;
determining, in one of the two object models, a first partial object model on the initial transformation area side, and determining, in the other object model, a second partial object model on the target transformation area side;
in the graphical user interface, displaying the first partial object model in the initial transformation area based on the first movement information, and displaying the second partial object model in the target transformation area based on the second movement information.
The game virtual scene transformation device provided by the embodiment of the application has the same technical characteristics as the game virtual scene transformation method provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
Corresponding to the game virtual scene transformation method, an embodiment of the present application further provides a computer-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to perform the steps of the game virtual scene transformation method.
The game virtual scene transformation device provided by the embodiments of the present application can be specific hardware on a device, or software or firmware installed on a device. The device provided by the embodiments of the present application has the same implementation principle and technical effect as the foregoing method embodiments; for brevity, where the device embodiments are silent, reference may be made to the corresponding content in the foregoing method embodiments. Those skilled in the art will appreciate that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
For another example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the game virtual scene transformation method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are only specific embodiments of the present application, used to illustrate rather than limit its technical solutions, and the scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, or readily conceive of changes, or make equivalent substitutions of some technical features, within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the scope of the embodiments of the present application and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

1. A game virtual scene transformation method is characterized in that a three-dimensional game scene of a game comprises a scene transformation area and a virtual object, wherein the scene transformation area comprises a starting transformation area and a target transformation area; the method comprises the following steps:
responding to a transformation operation triggered by controlling the virtual object in a starting transformation area, and determining first movement information of the virtual object in a current game scene, wherein the current game scene is a game scene corresponding to the starting transformation area, and the movement information comprises at least one of the following information: direction of movement, speed of movement and current orientation;
determining a starting outgoing state of the first movement information relative to the starting transformation area by taking the starting transformation area as a reference;
replacing the reference of the starting outgoing state with the target transformation area to obtain a target outgoing state, and converting the target outgoing state into second movement information in a target game scene corresponding to the target transformation area;
and controlling the virtual object to be displayed in a target game scene corresponding to the target transformation area based on the second movement information.
2. The method of claim 1, wherein the movement information comprises a direction of movement;
the incident direction of the first moving direction in the first movement information relative to the starting transformation area and the exit direction of the second moving direction in the second movement information relative to the target transformation area lie on the same straight line.
3. The method of claim 2, wherein the movement information further comprises a movement speed;
the second moving speed in the second movement information is the magnitude of the first moving speed in the first movement information applied along the second moving direction.
4. The method of claim 1, wherein the movement information comprises a current orientation;
the incident direction of the first current orientation in the first movement information relative to the starting transformation area and the exit direction of the second current orientation in the second movement information relative to the target transformation area lie on the same straight line.
5. The method of claim 1, wherein the movement information further comprises a relative position;
a first relative position of the virtual object with respect to the start transformation area in the first movement information is the same as a second relative position of the virtual object with respect to the target transformation area in the second movement information.
6. The method of claim 2, wherein the step of determining the initial outgoing state of the first movement information relative to the initial transformation area with respect to the initial transformation area comprises:
performing mirror image processing on the first moving direction relative to a first longitudinal plane to obtain a first sub-moving direction relative to the initial transformation area, wherein the first longitudinal plane is perpendicular to the plane of the initial transformation area; performing mirror image processing on the first sub-moving direction relative to a second longitudinal plane to obtain a second sub-moving direction relative to the initial transformation area, wherein the second longitudinal plane is parallel to the plane of the initial transformation area; determining the second sub-moving direction as a starting outgoing movement direction of the first movement information relative to the starting transformation area; or, alternatively,
carrying out mirror image processing on the first moving direction relative to the second longitudinal plane to obtain a third sub-moving direction relative to the initial transformation area; carrying out mirror image processing on the third sub-moving direction relative to the first longitudinal plane to obtain a fourth sub-moving direction relative to the initial transformation area; determining the fourth sub-movement direction as a starting outgoing movement direction of the first movement information relative to the starting transformation area.
7. The method of claim 4, wherein the step of determining the initial outgoing state of the first movement information relative to the initial transformation area with respect to the initial transformation area comprises:
performing mirror image processing on the first current orientation relative to a first longitudinal plane to obtain a first sub current orientation relative to the initial transformation area, wherein the first longitudinal plane is perpendicular to the plane of the initial transformation area; performing mirror image processing on the first sub current orientation relative to a second longitudinal plane to obtain a second sub current orientation relative to the initial transformation area, wherein the second longitudinal plane is parallel to the plane of the initial transformation area; determining the second sub current orientation as a starting outgoing orientation of the first movement information relative to the starting transformation area; or, alternatively,
performing mirror image processing on the first current orientation relative to the second longitudinal plane to obtain a third sub current orientation relative to the initial transformation area; performing mirror image processing on the third sub current orientation relative to the first longitudinal plane to obtain a fourth sub current orientation relative to the initial transformation area; determining the fourth sub current orientation as a starting outgoing orientation of the first movement information relative to the starting transformation area.
8. The method of claim 5, wherein the step of determining the initial outgoing state of the first movement information relative to the initial transformation area with respect to the initial transformation area comprises:
performing mirror image processing on a plane relative to the initial transformation area based on the first relative position to obtain a third relative position relative to the initial transformation area;
determining the third relative position as a starting outgoing relative position of the first movement information relative to the starting transformation area.
9. The method according to any one of claims 1 to 8, wherein before the step of determining the initial outgoing state of the first movement information relative to the initial transformation area, with respect to the initial transformation area, further comprising:
converting the first movement information into a starting incoming state of the virtual object in the current game scene relative to the starting transformation area.
10. The method of claim 9, wherein the step of converting the target outgoing state into second movement information in the target game scene corresponding to the target transformation area comprises:
and converting a target outgoing state relative to the target transformation area into second movement information of the virtual object in the target game scene relative to the target transformation area.
11. The method of claim 10, further comprising a transform area virtual camera in the three-dimensional game scene, wherein the transform area virtual camera is bound to the target transform area and the transform area virtual camera faces the target game scene; the method further comprises the following steps:
when the virtual object is in the current game scene, acquiring a target image corresponding to a three-dimensional game scene in the target game scene through the transformation area virtual camera;
rendering the target image to obtain target texture;
and pasting the target texture to a range formed by a frame of the initial transformation area, and displaying the initial transformation area after pasting the texture in a graphical user interface.
12. The method of claim 11, wherein the step of capturing, by the transformation area virtual camera, a target image corresponding to a three-dimensional game scene in the target game scene comprises:
acquiring a current starting position of the current virtual object in the current game scene, and determining a current target position relative to the target transformation area in the target game scene based on the current starting position; determining the current target position as a current position of a transformation area virtual camera;
acquiring a first sight direction of the virtual object facing the initial transformation area in the current game scene, and determining, based on the first sight direction, a second sight direction of the virtual object facing the target transformation area, wherein the first sight direction and the second sight direction are on the same ray; determining the exit direction of the second sight direction as the current facing direction of the transformation area virtual camera;
and acquiring a target image corresponding to a three-dimensional game scene in the current target game scene through the transformation area virtual camera which faces the current direction and is located at the current position.
13. The method of claim 11, wherein the step of rendering the target image to obtain the target texture comprises:
when the sight line of the virtual object moves, rendering the target image to obtain target texture;
rendering the target image by using a camera view rendering frame rate of the virtual camera in the transformation area to obtain a target texture, wherein the camera view rendering frame rate is determined according to the distance between the virtual object and the initial transformation area, and the camera view rendering frame rate is less than or equal to the rendering frame rates of other areas except the target texture in the graphical user interface;
performing intersection test between the geometric model of the initial transformation area shape and the camera view cone of the virtual object view angle to obtain a test result, and if the test result is that the geometric model is not in the camera view cone, canceling the rendering process of the target image;
rendering the target image based on the camera resolution of the virtual camera in the transformation area to obtain a target texture, wherein the camera resolution is determined according to the distance between the virtual object and the initial transformation area, and the camera resolution is less than or equal to the resolution of the other areas except the target texture in the graphical user interface.
14. The method of claim 11, wherein the step of rendering the target image to obtain the target texture comprises:
using a recursive function, and subject to a preset maximum recursion count, repeating the following steps until it is judged that no initial transformation area other than the current one exists within the view of the transformation area virtual camera bound to the current target transformation area, and then rendering the target image based on all the initial transformation areas so determined to obtain a target texture:
performing intersection test between the geometric body model of the initial transformation area shape and the camera view cone of the current transformation area virtual camera view angle to obtain a test result;
judging whether other initial transformation areas except the current initial transformation area exist in the current transformation area virtual camera visual angle according to the test result;
if so, taking the other initial transformation areas as the current initial transformation area in the next recursion judgment process.
15. The method according to claim 1, wherein the step of controlling the virtual object to be displayed in the target game scene corresponding to the target transformation area based on the second movement information comprises:
copying the object model of the virtual object transformed in the scene transformation area, wherein the appearance and the action of the two copied object models are consistent;
determining a first part of the object models on one side of the starting transformation area in one of the two object models and determining a second part of the object models on one side of the target transformation area in the other object model;
displaying the first partial object model in the start transformation area based on the first movement information and displaying the second partial object model in the target transformation area based on the second movement information in a graphical user interface.
16. A game virtual scene transformation device is characterized in that a three-dimensional game scene of a game comprises a scene transformation area and a virtual object, wherein the scene transformation area comprises a starting transformation area and a target transformation area; the device comprises:
a first determining module, configured to determine, in response to a transformation operation triggered by controlling the virtual object in a starting transformation area, first movement information of the virtual object in a current game scene, where the current game scene is a game scene corresponding to the starting transformation area, and the movement information includes at least one of the following information: direction of movement, speed of movement and current orientation;
a second determining module, configured to determine, with the initial transformation area as a reference, an initial outgoing state of the first movement information with respect to the initial transformation area;
the conversion module is used for replacing the reference of the starting outgoing state with the target conversion area to obtain a target outgoing state, and converting the target outgoing state into second mobile information in a target game scene corresponding to the target conversion area;
and the control module is used for controlling the virtual object to be displayed in a target game scene corresponding to the target transformation area based on the second movement information.
17. An electronic terminal comprising a memory and a processor, the memory having stored thereon a computer program operable on the processor, wherein the processor, when executing the computer program, performs the steps of the method of any of claims 1 to 15.
18. A computer readable storage medium having stored thereon machine executable instructions which, when invoked and executed by a processor, cause the processor to execute the method of any of claims 1 to 15.
CN202010918866.6A 2020-09-03 2020-09-03 Game virtual scene transformation method and device and electronic terminal Pending CN111939567A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010918866.6A CN111939567A (en) 2020-09-03 2020-09-03 Game virtual scene transformation method and device and electronic terminal

Publications (1)

Publication Number Publication Date
CN111939567A true CN111939567A (en) 2020-11-17

Family

ID=73368039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010918866.6A Pending CN111939567A (en) 2020-09-03 2020-09-03 Game virtual scene transformation method and device and electronic terminal

Country Status (1)

Country Link
CN (1) CN111939567A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040229691A1 (en) * 2003-05-12 2004-11-18 Nintendo Co., Ltd. Game apparatus and storing medium that stores game program
CN108355354A (en) * 2018-02-11 2018-08-03 网易(杭州)网络有限公司 Information processing method, device, terminal and storage medium
CN108665553A (en) * 2018-04-28 2018-10-16 腾讯科技(深圳)有限公司 A kind of method and apparatus for realizing virtual scene conversion
CN110163976A (en) * 2018-07-05 2019-08-23 腾讯数码(天津)有限公司 A kind of method, apparatus, terminal device and the storage medium of virtual scene conversion
JP2019145161A (en) * 2019-04-25 2019-08-29 株式会社コロプラ Program, information processing device, and information processing method
CN110013670A (en) * 2019-04-26 2019-07-16 腾讯科技(深圳)有限公司 Map Switch method and apparatus, storage medium and electronic device in game application
CN110174950A (en) * 2019-05-28 2019-08-27 广州视革科技有限公司 A kind of method for changing scenes based on transmission gate

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112494945A (en) * 2020-12-03 2021-03-16 网易(杭州)网络有限公司 Game scene conversion method and device and electronic equipment
CN112494945B (en) * 2020-12-03 2024-05-10 网易(杭州)网络有限公司 Game scene conversion method and device and electronic equipment
CN112473138A (en) * 2020-12-10 2021-03-12 网易(杭州)网络有限公司 Game display control method and device, readable storage medium and electronic equipment
CN112473138B (en) * 2020-12-10 2023-11-17 网易(杭州)网络有限公司 Game display control method and device, readable storage medium and electronic equipment
CN113633991A (en) * 2021-08-13 2021-11-12 腾讯科技(深圳)有限公司 Virtual skill control method, device, equipment and computer readable storage medium
CN113633991B (en) * 2021-08-13 2023-11-03 腾讯科技(深圳)有限公司 Virtual skill control method, device, equipment and computer readable storage medium
CN116777730A (en) * 2023-08-25 2023-09-19 湖南马栏山视频先进技术研究院有限公司 GPU efficiency improvement method based on resource scheduling
CN116777730B (en) * 2023-08-25 2023-10-31 湖南马栏山视频先进技术研究院有限公司 GPU efficiency improvement method based on resource scheduling

Similar Documents

Publication Publication Date Title
CN111939567A (en) Game virtual scene transformation method and device and electronic terminal
CN106716302B (en) Method, apparatus, and computer-readable medium for displaying image
US9898844B2 (en) Augmented reality content adapted to changes in real world space geometry
US20180345144A1 (en) Multiple Frame Distributed Rendering of Interactive Content
US20170163958A1 (en) Method and device for image rendering processing
US11260300B2 (en) Image processing method and apparatus
CN108176049B (en) Information prompting method, device, terminal and computer readable storage medium
WO2012159392A1 (en) Interaction method for dynamic wallpaper and desktop component
CN107213636B (en) Lens moving method, device, storage medium and processor
US11430192B2 (en) Placement and manipulation of objects in augmented reality environment
US7391417B2 (en) Program and image processing system for rendering polygons distributed throughout a game space
US20230059116A1 (en) Mark processing method and apparatus, computer device, storage medium, and program product
CN111491208A (en) Video processing method and device, electronic equipment and computer readable medium
CN113168281B (en) Computer readable medium, electronic device and method
CN113469883B (en) Rendering method and device of dynamic resolution, electronic equipment and readable storage medium
CN109598672B (en) Map road rendering method and device
CN113426112A (en) Game picture display method and device, storage medium and electronic equipment
CN112669433A (en) Contour rendering method, apparatus, electronic device and computer-readable storage medium
CN112206519A (en) Method, device, storage medium and computer equipment for realizing game scene environment change
CN112973121B (en) Reflection effect generation method and device, storage medium and computer equipment
CN114742970A (en) Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device
CN109675312B (en) Game item list display method and device
WO2021021346A1 (en) Occlusion in mobile client rendered augmented reality environments
CN113813607B (en) Game view angle switching method and device, storage medium and electronic equipment
CN113709372B (en) Image generation method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination