CN108043027B - Storage medium, electronic device, game screen display method and device - Google Patents


Info

Publication number: CN108043027B (application number CN201711208105.6A)
Authority: CN (China)
Prior art keywords: target, three-dimensional image, three-dimensional space, game
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN108043027A
Inventors: 蔡广益, 王晗
Current assignee: Tencent Technology Shanghai Co Ltd (the listed assignee may be inaccurate)
Original assignee: Tencent Technology Shanghai Co Ltd
Application filed by Tencent Technology Shanghai Co Ltd; priority to CN201711208105.6A
Publication of CN108043027A; application granted; publication of CN108043027B

Classifications

    • A63F13/525: Video games; controlling the output signals based on the game progress, involving aspects of the displayed game scene; changing parameters of virtual cameras
    • A63F2300/663: Methods for processing data by generating or executing the game program for rendering three-dimensional images, for simulating liquid objects, e.g. water, gas, fog, snow, clouds
    • A63F2300/6661: Methods for processing data by generating or executing the game program for rendering three-dimensional images, for changing the position of the virtual camera

Abstract

The invention discloses a storage medium, an electronic device, and a game-screen display method and device. The method includes: acquiring a first position of a first object in a first three-dimensional space, where the first three-dimensional space is the three-dimensional space, among a plurality of three-dimensional spaces included in a game scene, in which the first object is located; obtaining a target three-dimensional image to be mapped from an original three-dimensional image of the first object; vertically mapping the target three-dimensional image onto a target plane in the game scene to obtain a first two-dimensional image; obtaining a target two-dimensional image to be displayed from the first two-dimensional image, where the size of the target two-dimensional image is related to the distance between the first position and the target plane and differs from the size of a second two-dimensional image obtained by vertically mapping the original three-dimensional image onto the target plane; and displaying the target two-dimensional image mapped on the target plane. The invention solves the technical problem in the related art that stars in a starry-sky scene occupy too large an area of the game animation picture.

Description

Storage medium, electronic device, game screen display method and device
Technical Field
The invention relates to the field of the internet, and in particular to a storage medium, an electronic device, and a game-screen display method and device.
Background
In games, the starry sky is a very typical and classic scene. It is mainly implemented with a sky box, a fixed art model, or frame animation.
(1) The sky-box implementation is shown in fig. 1: the game scene is enclosed in a box of six maps (corresponding to the positive and negative directions of the X, Y, and Z axes of a three-dimensional coordinate system).
(2) The implementation using fixed art models is shown in figs. 2 and 3. The stars in figs. 2 and 3 may take various shapes.
(3) The frame-animation implementation is shown in fig. 4: changes in the stars, such as their size and position, are produced by the differences between frames.
In the sky-box scheme, the camera cannot rotate under perspective projection, so orthographic projection is adopted and the camera can only translate, which cannot express a sense of depth. Fig. 5 shows the effect after the camera's projection is changed to orthographic: the cloud layer is significantly deformed, and no sense of depth is conveyed.
In the fixed-art-model scheme, artists must produce many resources (one model is needed for each star), and the result has a certain fixity: the picture is either monotonous or wasteful of art resources.
For twinkling stars, games generally simulate the shining with frame animation, which wastes considerable resources (both art resources and game resources) and yields a monotonous picture.
In addition, objects such as stars are often projected into the animation by orthographic projection. Because the stars are numerous, they occupy most of the picture area when projected, with no layering and poor realism, which harms the game experience.
No effective solution has yet been proposed for the technical problem in the related art that stars in a starry-sky scene occupy too large an area of the game animation picture.
Disclosure of Invention
The embodiments of the present invention provide a storage medium, an electronic device, and a game-screen display method and device, so as to at least solve the technical problem in the related art that stars in a starry-sky scene occupy too large an area of the game animation picture.
According to one aspect of the embodiments of the present invention, a game-screen display method is provided, including: acquiring a first position of a first object in a first three-dimensional space, where the first three-dimensional space is the three-dimensional space, among a plurality of three-dimensional spaces included in a game scene, in which the first object is located, and the first object is an object in the game scene; obtaining a target three-dimensional image to be mapped from an original three-dimensional image of the first object; vertically mapping the target three-dimensional image onto a target plane in the game scene to obtain a first two-dimensional image; obtaining a target two-dimensional image to be displayed from the first two-dimensional image, where the size of the target two-dimensional image is related to the distance between the first position and the target plane and differs from the size of a second two-dimensional image obtained by vertically mapping the original three-dimensional image onto the target plane; and displaying the target two-dimensional image mapped on the target plane.
According to another aspect of the embodiments of the present invention, a game-screen display apparatus is also provided, including: an acquisition unit configured to acquire a first position of a first object in a first three-dimensional space, where the first three-dimensional space is the three-dimensional space, among a plurality of three-dimensional spaces included in a game scene, in which the first object is located, and the first object is an object in the game scene; a processing unit configured to obtain a target three-dimensional image to be mapped from an original three-dimensional image of the first object, vertically map the target three-dimensional image onto a target plane in the game scene to obtain a first two-dimensional image, and obtain a target two-dimensional image to be displayed from the first two-dimensional image, where the size of the target two-dimensional image is related to the distance between the first position and the target plane and differs from the size of a second two-dimensional image obtained by vertically mapping the original three-dimensional image onto the target plane; and a display unit configured to display the target two-dimensional image mapped on the target plane.
In the embodiments of the present invention, a first position of a first object of a game scene in a first three-dimensional space is acquired; a target three-dimensional image to be mapped is obtained from an original three-dimensional image of the first object; the target three-dimensional image is vertically mapped onto a target plane in the game scene to obtain a first two-dimensional image; a target two-dimensional image to be displayed is obtained from the first two-dimensional image, its size being related to the distance between the first position and the target plane; and the target two-dimensional image mapped on the target plane is displayed. Because the size of the target two-dimensional image differs from that of a second two-dimensional image obtained by vertically mapping the original three-dimensional image onto the target plane (that is, the target two-dimensional image is displayed in a perspective manner), when the first object is a star, the technical problem that stars in a starry-sky scene occupy too large an area of the game animation picture can be solved, achieving the technical effects of reducing the area occupied by stars and improving the realism of the game animation picture.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic view of an alternative sky box in the related art;
FIG. 2 is a schematic diagram of an alternative starry sky in the related art;
FIG. 3 is a schematic diagram of an alternative starry sky in the related art;
FIG. 4 is a schematic illustration of an image frame of an alternative starry sky in the related art;
FIG. 5 is a schematic illustration of an alternative orthographic projection image of the related art;
FIG. 6 is a diagram of a hardware environment of a display method of a game screen according to an embodiment of the present invention;
FIG. 7 is a flow chart of an alternative method of displaying a game screen according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an orthogonal projection and a perspective projection according to an embodiment of the invention;
FIG. 9 is a schematic view of an alternative game image according to an embodiment of the present invention;
FIG. 10 is a schematic view of an alternative game configuration interface according to an embodiment of the present invention;
FIG. 11 is a schematic view of an alternative game configuration interface according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of an alternative sky map according to embodiments of the present invention;
FIG. 13 is a schematic illustration of an alternative star map according to an embodiment of the present invention;
FIG. 14 is a schematic illustration of an alternative star according to embodiments of the present invention;
FIG. 15 is a schematic illustration of an alternative starry sky in accordance with embodiments of the present invention;
FIG. 16 is a schematic illustration of an alternative configuration parameter variation according to an embodiment of the present invention;
FIG. 17 is a schematic illustration of an alternative configuration parameter variation according to an embodiment of the present invention;
FIG. 18 is a schematic view of an alternative game image according to an embodiment of the present invention;
FIG. 19 is a schematic view of an alternative game image according to an embodiment of the present invention;
FIG. 20 is a schematic view of an alternative game screen display device according to an embodiment of the present invention; and
fig. 21 is a block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms appearing in the description of the embodiments of the present invention are explained as follows:
Orthographic projection: a projection whose projection lines are perpendicular to the projection plane; it is a special case of parallel projection.
Alpha channel: an 8-bit grayscale channel that records the transparency information of an image in 256 levels of gray, defining transparent, opaque, and translucent areas, where white indicates opaque, black indicates transparent, and gray indicates translucent.
Sky box: can be mapped with a panoramic picture (panorama), which generally refers to a photograph taken at the normal effective viewing angle of the two human eyes (about 90° horizontally and 70° vertically), at a viewing angle that includes peripheral vision (about 180° horizontally and 90° vertically), or even covering a complete 360° scene.
View frustum: a three-dimensional volume whose position is relative to the camera; its shape determines how models are projected from camera space onto the screen. Under the most common type of projection, perspective projection, objects near the camera are projected larger, while objects farther from the camera are projected smaller. Perspective projection uses a pyramid as the view volume, with the camera located at the apex of the pyramid. The pyramid is truncated by a near plane and a far plane, forming a frustum called the view frustum; only models inside the frustum are visible.
Unity 3D: a comprehensive game-development tool developed by Unity Technologies for creating multi-platform interactive content such as three-dimensional video games, architectural visualizations, and real-time three-dimensional animations; it is a fully integrated professional game engine.
NPC: abbreviation of Non-Player Character (also called non-player character). The concept originated in single-player games and gradually extended to the whole game field; it generally refers to all characters in a game that are not controlled by players.
According to an embodiment of the present invention, there is provided a method embodiment of a display method of a game screen.
Optionally, in this embodiment, the above game-screen display method may be applied to a hardware environment constituted by a server 602 and a terminal 604 as shown in fig. 6. As shown in fig. 6, the server 602 is connected to the terminal 604 via a network; the terminal 604 includes, but is not limited to, a PC, a mobile phone, a tablet computer, and the like. The game-screen display method of the embodiment of the present invention may be executed by the server 602, by the terminal 604, or by the server 602 and the terminal 604 together. When the terminal 604 executes the method, it may do so through a client installed on it.
Fig. 7 is a flowchart of an alternative game screen display method according to an embodiment of the present invention, and as shown in fig. 7, the method may include the following steps:
Step S702: acquire a first position of a first object in a first three-dimensional space, where the first three-dimensional space is the three-dimensional space, among a plurality of three-dimensional spaces included in a game scene, in which the first object is located, and the first object is an object in the game scene.
The first object is an NPC object in the game scene, such as a player's prop in the game, a prop in the scene (such as a star, the moon, the sun, a mountain, water, a river, or a field), or an NPC character (such as a soldier, an auxiliary character, or a pet).
The game scene may be composed of a plurality of three-dimensional spaces. The game further includes a second object (i.e., an object controlled by a game account in the game). The first three-dimensional space is the three-dimensional space in which the first object needs to be displayed, and is also the three-dimensional space currently visible to the game account (player).
When the game runs in the client, the game engine (e.g., Unity) generates the game scene, and the camera captures a picture of the first three-dimensional space currently visible to the game account (step S704) and displays the picture.
Step S704: obtain a target three-dimensional image to be mapped from the original three-dimensional image of the first object; vertically map the target three-dimensional image onto a target plane in the game scene to obtain a first two-dimensional image; and obtain a target two-dimensional image to be displayed from the first two-dimensional image, where the size of the target two-dimensional image is related to the distance between the first position and the target plane and differs from the size of a second two-dimensional image obtained by vertically mapping the original three-dimensional image onto the target plane.
Optionally, "the size of the target two-dimensional image is different from that of the second two-dimensional image obtained by vertically mapping the original three-dimensional image onto the target plane" covers two cases: the target two-dimensional image is smaller than the second two-dimensional image, i.e., it is obtained by reducing the second two-dimensional image; or the target two-dimensional image is larger than the second two-dimensional image, i.e., it is obtained by enlarging the second two-dimensional image. The description below takes the first case (reduction) as an example; the second case is similar.
As shown in fig. 8, the present application uses orthographic mapping (orthographic mapping, i.e., vertical mapping, is what the game logic uses). However, to improve the realism of the display, when the target three-dimensional image of the first object (such as a sphere) is mapped onto the target plane to obtain a two-dimensional image, the effect of perspective mapping needs to be achieved without changing the orthographic mapping mode itself.
In a real environment, the farther an object is from the observer, the smaller its image; with orthographic mapping, however, the size of the first object mapped onto the target plane does not change with distance, so plain orthographic mapping differs from the real appearance of the observed object.
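The scheme compensates for exactly this gap: the orthographic mapping is kept, but each object is pre-scaled by a factor that shrinks with its distance to the target plane, so the orthographically mapped image still reads as "distant is small". A minimal sketch assuming a linear fall-off; the fall-off curve and the names `depth_range` and `min_scale` are illustrative choices, not taken from the patent:

```python
def fake_perspective_scale(distance_to_plane: float, depth_range: float,
                           min_scale: float = 0.2) -> float:
    """Scale factor applied to an object before orthographic (vertical) mapping.

    An object lying on the target plane (distance 0) keeps its full size;
    an object at the far end of the visible space (distance == depth_range)
    is reduced to min_scale. Orthographic projection preserves these sizes,
    so the final picture shows the 'distant-small, near-large' effect
    without changing the mapping mode of the game logic.
    """
    t = min(max(distance_to_plane / depth_range, 0.0), 1.0)  # normalized depth in [0, 1]
    return 1.0 - t * (1.0 - min_scale)
```

Because the scale is applied to the model (or its image) before mapping, the rest of the pipeline, including the camera's projection mode, stays untouched.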
The target plane is the plane of the first three-dimensional space that faces the camera; its distance from the camera is the preset closest distance at which the game account is allowed to observe. The opposite plane lies at the preset farthest distance at which the game account is allowed to observe, and the space between the two planes forms the first three-dimensional space.
Step S706: display the target two-dimensional image mapped on the target plane.
The game client may be installed on a mobile terminal (e.g., a mobile phone or tablet), a PC, a server, or the like, so that the target two-dimensional image is displayed in the client on the terminal.
In this technical scheme, the mapping mode of the game logic (orthographic mapping) is not changed, so the game logic does not need to be re-edited, and yet the effect of perspective mapping is achieved: the first object is displayed in a way that satisfies the perspective principle that distant objects appear small and near objects appear large. This avoids a picture filled with equally sized objects (such as stars and the moon), and the "distant-small, near-large" display conveys a sense of layering on a two-dimensional plane, which improves the player's game experience.
Through steps S702 to S706, the first position of the first object of the game scene in the first three-dimensional space is acquired; the target three-dimensional image to be mapped is obtained from the original three-dimensional image of the first object; the target three-dimensional image is vertically mapped onto the target plane in the game scene to obtain the first two-dimensional image; the target two-dimensional image to be displayed is obtained from the first two-dimensional image, its size being related to the distance between the first position and the target plane; and the target two-dimensional image mapped on the target plane is displayed. Because the size of the target two-dimensional image differs from that of the second two-dimensional image obtained by vertically mapping the original three-dimensional image onto the target plane (that is, the target two-dimensional image is displayed in the "distant-small, near-large" perspective manner), when the first object is a star, the technical problem that stars in a starry-sky scene occupy too large an area of the game animation picture can be solved, reducing the area occupied by stars and improving the realism of the game animation picture.
The technical solution of the present application is described in detail below with reference to the steps shown in fig. 7.
In the technical solution provided in step S702, the game scene may include a plurality of three-dimensional spaces; the first three-dimensional space in which the first object is located is determined among them, and the first position of the first object in the first three-dimensional space is detected.
For the first object there are two cases: it appears in the game scene for the first time at the current moment, or it already existed in the game scene before the current moment. The way its position is acquired may differ between the two cases. The configuration of the first object (when and in what form it appears in the game scene) and how its position is acquired are described in detail below.
(I) Configuration of the first object
Attribute information of each first object appearing in the first three-dimensional space is configured before the first position of the first object in the first three-dimensional space is acquired.
It should be noted that, in the present application, a small number of object models (for example, fewer than 10, or fewer than 20) may be predefined, along with a plurality of colors for rendering the object models. The role of the attribute information includes at least one of:
(1) indicating that the first object is one randomly selected from a plurality of preset objects;
(2) indicating that one color is randomly selected for the first object from a plurality of preset colors;
(3) indicating that the first object is randomly rotated by an angle at its original position (including angles about the X, Y, and Z axes), so that the first object presents a different silhouette toward the target plane; this is equivalent to randomly selecting one of a plurality of preset shapes for the first object;
(4) indicating the initial distance between the first object and the target plane;
(5) indicating the velocity vector of the first object in the game scene (including the direction of motion and the magnitude of the velocity).
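The five roles above can be sketched as a small configuration generator. This is an illustrative Python sketch: the field names (`model_id`, `palette`, and so on) and the value ranges are assumptions for the example, since the patent does not prescribe a data layout:

```python
import random
from dataclasses import dataclass

@dataclass
class StarConfig:
    model_id: int             # (1) one of a few preset object models
    color: str                # (2) one of a few preset colors
    rotation: tuple           # (3) random angles about X, Y, Z: varies the silhouette
    initial_distance: float   # (4) initial distance to the target plane
    velocity: tuple           # (5) velocity vector (direction and magnitude)

def configure_star(rng: random.Random, num_models: int = 10,
                   palette=("white", "yellow", "blue", "red")) -> StarConfig:
    """Randomize a handful of shared models instead of hand-crafting every star."""
    return StarConfig(
        model_id=rng.randrange(num_models),
        color=rng.choice(palette),
        rotation=tuple(rng.uniform(0.0, 360.0) for _ in range(3)),
        initial_distance=rng.uniform(0.0, 100.0),
        velocity=(0.0, 0.0, rng.uniform(-1.0, 1.0)),  # motion along the Z axis
    )
```

Calling `configure_star` once per star yields a varied starry sky from fewer than 20 models, which is the resource-saving point made later in the description.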
(II) Determining the position of the first object
(1) Case 1: the object appears in the game scene for the first time at the current moment
When the first object did not exist in the second three-dimensional space at the moment before the current moment, the first position configured for the first object in the first three-dimensional space is acquired; that is, the first position is determined from the initial distance to the target plane given in the attribute information. The first three-dimensional space includes the second object controlled by the game account, and the plane in which the second object is located can serve as the target plane; in other words, the second object is always kept at the nearest end, which facilitates the game's control interaction.
The second three-dimensional space is the three-dimensional space, among the plurality of three-dimensional spaces, in which the first object existed at the moment immediately before the current moment.
(2) Case 2: the object already appeared in the game scene before the current moment
When the first object existed in the second three-dimensional space at the moment before the current moment, a second position of the first object in the second three-dimensional space is acquired, and the first position is determined from the second position and the moving speed of the first object, the first object existing in the first three-dimensional space at the current moment.
As shown in fig. 8, if the velocity of the first object indicated by the attribute information points along the Z axis away from the camera with magnitude V, the first position differs from the second position only in its Z coordinate, and the changed coordinate is L = L0 + V·T, where L0 is the Z coordinate of the second position and T is the time difference between the current moment and the previous moment.
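The update L = L0 + V·T above is a single-axis step; as a sketch (variable names assumed):

```python
def update_z(l0: float, v: float, t: float) -> float:
    """New Z coordinate of the first object after a time difference t.

    Implements L = L0 + V*T: only the Z coordinate changes when the
    velocity points along the Z axis; the X and Y coordinates of the
    second position carry over to the first position unchanged.
    """
    return l0 + v * t
```

For example, an object at Z = 10.0 moving away from the camera at V = 2.0 sits at Z = 16.0 after T = 3.0.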
In the technical scheme of the present application, by varying the color, angle, initial distance, and velocity vector of a very small number of object models (such as star models), the effect of a starry sky with stars of different sizes and colors can be realized. This avoids making a model for each different star, saving art resources; it also avoids rendering a large number of object models during the game, saving game resources.
In the technical solution provided in step S704, the target three-dimensional image to be mapped is obtained from the original three-dimensional image of the first object; the target three-dimensional image is vertically mapped onto the target plane in the game scene to obtain the first two-dimensional image; and the target two-dimensional image to be displayed is obtained from the first two-dimensional image, its size being related to the distance between the first position and the target plane and differing from the size of the second two-dimensional image obtained by vertically mapping the original three-dimensional image onto the target plane.
(I) Shape and color of the mapping
In the above embodiment, vertically mapping the target three-dimensional image onto the target plane in the game scene to obtain the first two-dimensional image may include the following steps:
Step S11: acquire the attribute information configured for the first object, the attribute information indicating the shape (which can be determined from the rotation angle indicated by the attribute information) and the color of the target three-dimensional image as vertically mapped on the target plane.
Step S12: vertically map the target three-dimensional image onto the target plane according to the shape and color indicated by the attribute information, obtaining the first two-dimensional image.
(II) Size of the mapping
In the technical scheme of the application, through preprocessing before mapping and/or post-processing after mapping, the size of the finally obtained target two-dimensional image is related to the distance between the first position and the target plane, achieving a perspective display effect. These two implementations are described separately below:
(1) Preprocessing before mapping
Step S21, adjusting the size of the original three-dimensional image according to the distance between the first position and the target plane to obtain a target three-dimensional image, where the size of the target three-dimensional image is related to the distance between the first position and the target plane.
It should be noted that the adjustment of the size of the original three-dimensional image may be an adjustment of the image itself, that is, the original three-dimensional image is scaled down to obtain the target three-dimensional image; alternatively, the first object itself can be scaled down and then rendered to obtain the target three-dimensional image.
Step S22, vertically mapping the target three-dimensional image onto a target plane to obtain a first two-dimensional image, and taking the first two-dimensional image as the target two-dimensional image.
Alternatively, the step S21 of "adjusting the size of the original three-dimensional image according to the distance between the first position and the target plane to obtain the target three-dimensional image" may be implemented as follows:
step S211, a first distance between the first position and the target plane is acquired.
Step S212, a reduction scale corresponding to the first distance is obtained according to a target relationship, wherein the target relationship is used for indicating a relationship between the distance between the position of the first object in the three-dimensional space and the target plane and the reduction scale.
Alternatively, the above target relationship may be described by the following relationship:
K = 1 - Z/L, where K denotes the reduction scale, Z denotes the first distance, and L denotes the farthest distance that the camera is allowed to capture (i.e., the farthest distance visible to the game account). L can be represented by the distance between the target plane and the plane of the first three-dimensional space opposite the target plane.
Step S213, reduces the original three-dimensional image according to the reduction ratio corresponding to the first distance, to obtain the target three-dimensional image.
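Steps S211–S213 can be sketched as follows (a hedged illustration; applying K as a uniform scale to the model's vertices is an assumption about how the reduction is realized):

```python
def reduction_scale(z: float, l_max: float) -> float:
    """K = 1 - Z/L: reduction scale from the first distance Z and the
    farthest capturable distance L."""
    return 1.0 - z / l_max

def shrink_model(vertices, z, l_max):
    """Scale every vertex of the original 3D image by K to obtain the
    target 3D image (assumed uniform scaling about the origin)."""
    k = reduction_scale(z, l_max)
    return [(x * k, y * k, zz * k) for (x, y, zz) in vertices]

# An object halfway to the far plane is drawn at half size:
k = reduction_scale(50.0, 100.0)  # 0.5
```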
(2) Post-processing after mapping
In step S31, an original three-dimensional image of the first object is acquired as a target three-dimensional image.
Step S32, vertically mapping the target three-dimensional image onto the target plane to obtain a first two-dimensional image without changing the size of the first object.
Step S33, adjusting the size of the first two-dimensional image according to the distance between the first position and the target plane, to obtain a target two-dimensional image.
The step S33 of "adjusting the size of the first two-dimensional image according to the distance between the first position and the target plane to obtain the target two-dimensional image" may be implemented as follows:
in step S331, a first distance between the first position and the target plane is obtained.
Step S332, acquiring a reduction scale corresponding to the first distance according to a target relationship, where the target relationship is used to indicate a relationship between the distance between the position of the first object in the three-dimensional space and the target plane and the reduction scale.
Optionally, the target relationship may be described by the relationship K = 1 - Z/L; the specific calculation is similar to that described above and is not repeated here.
Step S333, the first two-dimensional image is reduced according to the reduction ratio corresponding to the first distance, and the target two-dimensional image is obtained.
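Steps S331–S333 mirror the pre-mapping variant, except that the same scale K = 1 - Z/L is applied to the already-mapped two-dimensional image instead of the three-dimensional model (a sketch under that assumption, with an illustrative sprite size):

```python
def scale_sprite(width: float, height: float, z: float, l_max: float):
    """Reduce the first 2D image according to K = 1 - Z/L to obtain the
    target 2D image; returns the scaled (width, height)."""
    k = 1.0 - z / l_max
    return width * k, height * k

# A 64x64 sprite three-quarters of the way to the far plane:
w, h = scale_sprite(64.0, 64.0, 75.0, 100.0)  # (16.0, 16.0)
```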
In the technical solution provided in step S706, a two-dimensional image of the target mapped on the target plane is displayed.
As shown in fig. 9, when the first object is a star: in a typical game, a starry-sky background is generally realized with a sky box or a real art model. However, due to the particularity of some games, orthogonal projection and a fixed-angle camera are used. With a sky box, the player only sees the starry background translating along with the character, without the sense of depth (i.e., near-fast, far-slow) that perspective projection produces in the player's subconscious. If full art resources are used instead, either the art production cost is very high, or the texture of the starry sky is very monotonous.
In the technical scheme of the application, simple art resources are used, perspective projection is simulated, and grid meshes are generated automatically and randomly, forming a starry-sky background with perspective.
For the glittering stars in the foreground, the common approach in games is frame animation, which suffers from insufficient randomness and excessive resource usage. In this scheme, using the Alpha value in the art resources together with random timing, random shapes and random flicker can be presented to the maximum extent with simple resources. Thus, under orthogonal projection, both the sense of depth of the background starry sky and the shining effect of the foreground stars are realized.
As an alternative example, the following detailed description will be made of an implementation manner of the present application taking the first object as a star:
the game engine can select Unity, can edit in an editor of Unity, and can finish the technical scheme of the application by configuring parameters and preparing two maps.
The starry sky is configured in the corresponding script, and the visual effect produced by the art can be adjusted by changing the script parameters. Parameters can also be adjusted at runtime, and the stars can be adjusted by configuring material parameters.
As shown in fig. 10, parameters such as the initial number of stars, the initial size, the size attenuation (the ratio of each attenuation), the near-end distance (allowing the closest distance), the far-end distance (allowing the farthest distance), the color attenuation (e.g., fading, color fading), the moving speed of the camera, the initial position, and the material of the stars may be set.
As shown in fig. 11, on the star selection interface, one or more required stars may be selected from the star models of the star map, and the flicker amplification scale, the flicker frequency, and the Alpha channel value are configured, so that stars of the same model are configured differently, realizing a sky full of varied stars.
The starry-sky and star maps can be made by the art team. An optional starry-sky map is shown in fig. 12; the Alpha channel in the map is used, and the color of the map can be configured in the script. The code randomly selects one of the 9 shapes in the map, adds a random rotation angle, and randomly applies one of the colors configured by the art team.
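The random selection just described — one of the 9 map shapes, a random rotation angle, and one of the art-configured colors — can be sketched as follows (all names and the example palette are illustrative assumptions):

```python
import random

# Hypothetical art-configured palette; the real colors live in the script/material.
PALETTE = [(255, 255, 255), (200, 220, 255), (255, 240, 180)]
NUM_SHAPES = 9  # shapes laid out in the starry-sky map

def random_star(rng: random.Random):
    """Pick a shape index, rotation and color for one star fragment."""
    return {
        "shape": rng.randrange(NUM_SHAPES),       # which of the 9 map shapes
        "rotation_deg": rng.uniform(0.0, 360.0),  # random rotation angle
        "color": rng.choice(PALETTE),             # one of the configured colors
    }

star = random_star(random.Random(42))
```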
An optional star map is shown in fig. 13, which has the shape of a star and can also carry the Alpha-value information needed for flickering. This is described in detail below.
(1) Random drift of the starry sky and random position in the view volume
As shown in fig. 8, the position is first tied to the camera, and in each frame a velocity opposite and relative to the camera is added. This produces the near-fast, far-slow effect. Since orthogonal projection is used in the game (the right side of the figure), it is difficult to show depth; a perspective projection matrix M can then be applied to the space, producing the effect on the left side. The M matrix is as follows:
M = [ 1/(aspect·tan(fov/2))   0              0                        0
      0                       1/tan(fov/2)   0                        0
      0                       0              -(far+near)/(far-near)   -2·far·near/(far-near)
      0                       0              -1                       0 ]

(the standard perspective projection matrix form, with parameters as defined below)
aspect denotes the aspect ratio of the projection plane (i.e., of the target plane), fov denotes the opening angle of the view volume in the Z-axis direction, far denotes the distance from the far plane of the view volume to the focal point, and near denotes the distance from the near plane of the view volume to the focal point.
Each fragment of the starry sky is assigned a random position in the view-volume space, with all three dimensions x, y and z randomized.
For example, if the coordinates of a star in the target space are (x1, y1, z1) and the mapped coordinates are (x2, y2, z2), the mapping relationship between the two is: (x2, y2, z2, t2) = M·(x1, y1, z1, t1), where M is the matrix above and t1 is an arbitrary number, such as 1. For the resulting (x2, y2, z2, t2), when mapping onto a plane, z2 and t2 can be dropped, and x2, y2 are the desired coordinates.
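The matrix M and the mapping (x2, y2, z2, t2) = M·(x1, y1, z1, t1) can be sketched as follows. This uses the standard perspective-matrix form; the final division by t2 (the homogeneous w component), which standard pipelines perform to obtain the near-large/far-small effect, is an assumption added here, since the text simply keeps x2, y2:

```python
import math

def perspective_matrix(fov_deg, aspect, near, far):
    """Standard perspective projection matrix built from fov, aspect, near, far."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(m, x1, y1, z1, t1=1.0):
    """(x2, y2, z2, t2) = M · (x1, y1, z1, t1); returns screen x, y."""
    v = (x1, y1, z1, t1)
    x2, y2, z2, t2 = (sum(row[i] * v[i] for i in range(4)) for row in m)
    # Dividing by t2 yields the near-large/far-small position (assumption;
    # the text keeps x2, y2 directly and drops z2 and t2).
    return x2 / t2, y2 / t2

m = perspective_matrix(90.0, 1.0, 1.0, 100.0)
near_x, _ = project(m, 1.0, 1.0, -2.0)  # a close star
far_x, _ = project(m, 1.0, 1.0, -8.0)   # the same offset, farther away
# The farther star lands closer to the screen centre: |far_x| < |near_x|
```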
(2) Space utilization of limited resources
As explained above, the starry sky not only moves in reverse relative to the camera when the camera moves, but also moves by itself, so fragments of the starry sky may move out of view.
If out-of-view fragments were simply culled, then in order to maintain the density of starry-sky fragments in screen space, a large number of fragments would have to be created across the whole game scene so that the camera always sees enough of them while moving. However, doing so creates significant overhead.
To reduce the number of starry-sky fragments, this scheme does not actually generate that many fragments. Although each fragment's logical position keeps accumulating, during actual rendering the position is converted to a position relative to the view volume and then taken modulo the size of the view volume, so that a fragment is reset into the view volume after moving out of it. As shown in fig. 14, star A is the fragment (partially in the view volume) obtained after conversion and modulo, while star B and star C are completely in the view volume. In other words, an infinite starry sky is simulated with finite starry-sky resources.
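The wrap-around can be sketched as follows: keep accumulating each fragment's logical position, and at render time take it modulo the view-volume extent so the fragment re-enters from the other side (the one-axis coordinate convention here is an assumption):

```python
def render_position(logical, volume_min, volume_size):
    """Map an ever-growing logical coordinate into the view volume.

    logical: accumulated coordinate of a starry-sky fragment (may drift
    far outside the volume); volume_min and volume_size describe the
    view volume along one axis.
    """
    return volume_min + (logical - volume_min) % volume_size

# Volume spans [0, 100); a fragment that drifted to 250 re-enters at 50:
p = render_position(250.0, 0.0, 100.0)  # 50.0
```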
(3) Fading effect of starry sky
To simulate the effect of the starry sky gradually fading as it moves into the distance, the Alpha value of each starry-sky fragment can be tied to its Z value: the farther a fragment is relative to the view volume, the lower its Alpha value, presenting the effect of the starry sky gradually fading out as it moves away.
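A minimal way to tie a fragment's Alpha to its depth (the linear falloff is an assumption; the scheme only requires that Alpha decrease with Z):

```python
def depth_alpha(z, far):
    """Alpha falls off linearly from 1 at the near side to 0 at the far
    plane of the view volume (clamped to [0, 1])."""
    return max(0.0, min(1.0, 1.0 - z / far))

# Fragments fade as they recede:
a_near, a_far = depth_alpha(10.0, 100.0), depth_alpha(90.0, 100.0)
```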
(4) Random algorithm of stars
As shown in fig. 15, to make the stars more realistic, this scheme randomly selects a star shape from the star map described above and then rotates it by a random angle, so that realistic stars can be generated with limited resources to the maximum extent, achieving varied star effects (the small triangles and quadrilaterals in fig. 15 represent different stars).
(5) Sparkle of stars
Not only do the images of the stars vary; the stars are also required to twinkle as in reality. The twinkle of a star comprises a linear change in color value and an exponential change in Alpha. The exponential change function Y of Alpha, with time on the X axis as the parameter, is shown in fig. 16.
The X axis represents time and the Y axis represents the Alpha value; the exponential function is flat for a period of time and then changes rapidly, matching the appearance of a star suddenly twinkling.
The change in color is linear, so the luminance changes linearly with time. The function is shown in fig. 17, where the X axis represents time and the Y axis represents luminance; the luminance gradually increases and then gradually decreases over time, producing a twinkling effect.
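The two curves — an exponential Alpha that stays flat then rises sharply within each period (fig. 16), and a luminance that rises and falls linearly (fig. 17) — can be sketched as follows; the exact functions are assumptions matching the described shapes, not the patent's own formulas:

```python
import math

def flicker_alpha(t, period, lo=0.2, hi=1.0, sharpness=8.0):
    """Exponential Alpha: nearly flat for most of the period, then a
    rapid rise near the end (the 'sudden twinkle' shape)."""
    phase = (t % period) / period
    curve = (math.exp(sharpness * phase) - 1.0) / (math.exp(sharpness) - 1.0)
    return lo + (hi - lo) * curve

def luminance(t, period, lo=0.0, hi=1.0):
    """Linear luminance: rises to hi at mid-period, then falls back."""
    phase = (t % period) / period
    tri = 1.0 - abs(2.0 * phase - 1.0)  # 0 -> 1 -> 0 triangle wave
    return lo + (hi - lo) * tri

# Early in the period Alpha sits near its floor; at mid-period the
# luminance peaks.
a_early = flicker_alpha(0.1, 1.0)
l_peak = luminance(0.5, 1.0)  # 1.0
```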
As shown in fig. 18 and fig. 19, the starry sky is a very important part of the whole game: the player is placed in the sky, and it is the only scene of the game, so the texture of the starry sky is crucial to the overall texture of the game; this starry sky can further improve the player's experience after the game is released.
It should be noted that although the game engine in the above embodiments is Unity and the first object is a star, the technical solution of the present application is not limited thereto; other types of first objects may be implemented with other game engines, and the process is basically similar to the above.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
According to the embodiment of the invention, the game picture display device for implementing the game picture display method is also provided. Fig. 20 is a schematic diagram of an alternative game screen display device according to an embodiment of the present invention, and as shown in fig. 20, the device may include: an acquisition unit 2001, a processing unit 2003, and a display unit 2005.
The obtaining unit 2001 is configured to obtain a first position of a first object in a first three-dimensional space, where the first three-dimensional space is a three-dimensional space where the first object is located in a plurality of three-dimensional spaces included in a game scene, and the first object is an object in the game scene.
The first object is a non-player object in the game scene, such as a prop of a player in the game, a prop in the scene (such as stars, the moon, the sun, mountains, water, rivers, fields, etc.), or an NPC character (such as soldiers, auxiliary characters, pets, etc.).
The game scene is a game scene of a game, the game scene may be composed of a plurality of three-dimensional spaces, the game further includes a second object (i.e., an object controlled by a game account in the game), and the first three-dimensional space is a three-dimensional space in which the first object needs to be displayed and is also a three-dimensional space currently visible to the game account (player).
When the game is run in the client, the game engine (such as Unity) generates the game scene, and the camera captures and displays the picture of the first three-dimensional space currently visible by the game account.
And the processing unit 2003 is configured to obtain a target three-dimensional image to be mapped according to the original three-dimensional image of the first object, vertically map the target three-dimensional image onto a target plane in the game scene to obtain a first two-dimensional image, and obtain a target two-dimensional image to be displayed according to the first two-dimensional image, where a size of the target two-dimensional image is related to a distance between the first position and the target plane, and a size of the target two-dimensional image is different from a size of a second two-dimensional image obtained by vertically mapping the original three-dimensional image onto the target plane.
As shown in fig. 8, orthogonal mapping (i.e., vertical mapping) is used in the game logic of the present application. However, to improve the realism of object display, when the target three-dimensional image of the first object (such as a sphere) is mapped onto the target plane to obtain the two-dimensional image, a perspective mapping effect needs to be achieved without changing the orthogonal mapping itself.
In a real environment, the farther an object is from the observer, the smaller its image. With orthogonal mapping, however, the size of the first object mapped onto the target plane does not change with distance, so plain orthogonal mapping differs from the real image an observer would see.
The target plane is the plane of the first three-dimensional space facing the camera, and the distance between this plane and the camera is the preset closest distance at which the game account is allowed to observe; the opposite plane is at the preset farthest distance at which the game account is allowed to observe, and the space between the two planes forms the first three-dimensional space.
And a display unit 2005 for displaying the target two-dimensional image mapped on the target plane.
The client of the game may be installed on a mobile terminal (e.g., a mobile phone, a tablet), a PC, a server, or the like, so that the target two-dimensional image is displayed on the client of the terminal.
In this technical scheme, the mapping mode of the game logic (orthogonal mapping) is not changed, so the game logic does not need to be re-edited, yet a perspective mapping effect is achieved: the first object is displayed in a way that satisfies the perspective principle of "far-small, near-large" in the game animation picture. This avoids the whole picture being filled with equally sized objects (such as stars and moons), and the "far-small, near-large" display shows a sense of layering on a two-dimensional plane, further improving the player's game experience.
It is to be noted that the acquiring unit 2001 in this embodiment may be configured to execute step S702 in embodiment 1 of this application, the processing unit 2003 in this embodiment may be configured to execute step S704 in embodiment 1 of this application, and the display unit 2005 in this embodiment may be configured to execute step S706 in embodiment 1 of this application.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of embodiment 1 described above. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 6, and may be implemented by software or hardware.
Through the above modules, a first position of a first object of a game scene in a first three-dimensional space is acquired; a target three-dimensional image to be mapped is obtained according to the original three-dimensional image of the first object; the target three-dimensional image is vertically mapped onto a target plane in the game scene to obtain a first two-dimensional image; a target two-dimensional image to be displayed is obtained according to the first two-dimensional image, its size being related to the distance between the first position and the target plane; and the target two-dimensional image mapped on the target plane is displayed. Because the size of the target two-dimensional image differs from that of the second two-dimensional image obtained by vertically mapping the original three-dimensional image onto the target plane (i.e., the target two-dimensional image is displayed in the "far-small, near-large" perspective manner), when the first object is a star, the technical problem in the related art that stars in a starry-sky scene occupy too large an area of the game animation picture can be solved, reducing the area occupied by stars and improving the realism of the game animation picture.
In an alternative embodiment, the processing unit may comprise: the first adjusting module is used for adjusting the size of the original three-dimensional image according to the distance between the first position and the target plane to obtain a target three-dimensional image, wherein the size of the target three-dimensional image is related to the distance between the first position and the target plane; and the first mapping module is used for vertically mapping the target three-dimensional image onto a target plane to obtain a first two-dimensional image, and taking the first two-dimensional image as the target two-dimensional image.
Optionally, the first adjusting module may include: the first obtaining submodule is used for obtaining a first distance between a first position and a target plane; the second obtaining submodule is used for obtaining a reduction scale corresponding to the first distance according to a target relation, wherein the target relation is used for indicating the relation between the distance between the position of the first object in the three-dimensional space and the target plane and the reduction scale; and the processing submodule is used for reducing the original three-dimensional image according to the reduction proportion corresponding to the first distance to obtain the target three-dimensional image.
In another alternative embodiment, the processing unit may comprise: the first acquisition module is used for acquiring an original three-dimensional image of a first object as a target three-dimensional image; the second mapping module is used for vertically mapping the target three-dimensional image to a target plane to obtain a first two-dimensional image; and the second adjusting module is used for adjusting the size of the first two-dimensional image according to the distance between the first position and the target plane to obtain a target two-dimensional image.
The second adjusting module is further configured to: acquiring a first distance between a first position and a target plane; acquiring a reduction scale corresponding to the first distance according to a target relation, wherein the target relation is used for indicating the relation between the distance between the position of the first object in the three-dimensional space and the target plane and the reduction scale; and reducing the first two-dimensional image according to a reduction proportion corresponding to the first distance to obtain a target two-dimensional image.
Alternatively, the processing unit may include: the second acquisition module is used for acquiring attribute information configured for the first object, wherein the attribute information is used for indicating the shape and the color of the target three-dimensional image vertically mapped on the target plane; and the third mapping module is used for vertically mapping the target three-dimensional image onto the target plane according to the shape and the color indicated by the attribute information to obtain the first two-dimensional image.
Optionally, the apparatus of the present application may further comprise: the configuration unit is used for configuring the attribute information of each first object appearing in the first three-dimensional space before acquiring the first position of the first object in the first three-dimensional space, wherein the first object is any one of a plurality of preset objects, the color indicated by the attribute information is any one of a plurality of preset colors, and the shape indicated by the attribute information is any one of a plurality of preset shapes.
The above-mentioned acquisition unit may be further configured to: when the first object exists in a second three-dimensional space at the moment before the current moment, acquiring a second position of the first object in the second three-dimensional space, and determining the first position according to the second position and the moving speed of the first object, wherein the first object exists in the first three-dimensional space at the current moment, and the second three-dimensional space is a three-dimensional space in which the first object exists at the moment before the current moment among a plurality of three-dimensional spaces; and when the first object does not exist in the second three-dimensional space at the moment before the current moment, acquiring a first position configured for the first object in the first three-dimensional space, wherein the first three-dimensional space comprises a second object controlled by the game account number, and the plane where the second object is located is a target plane.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of embodiment 1 described above. It should be noted that the modules described above as a part of the apparatus may be operated in a hardware environment as shown in fig. 6, may be implemented by software, and may also be implemented by hardware, where the hardware environment includes a network environment.
According to the embodiment of the invention, the server or the terminal for implementing the display method of the game picture is also provided.
Fig. 21 is a block diagram of a terminal according to an embodiment of the present invention. As shown in fig. 21, the terminal may include one or more processors 2101 (only one is shown in fig. 21), a memory 2103, and a transmission device 2105 (such as the transmission device in the above-described embodiment), and may also include an input-output device 2107.
The memory 2103 may be used to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for displaying a game screen in the embodiment of the present invention, and the processor 2101 executes various functional applications and data processing by running the software programs and modules stored in the memory 2103, that is, the method for displaying a game screen is implemented. The memory 2103 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 2103 can further include memory located remotely from the processor 2101, which can be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 2105 is used for receiving or transmitting data via a network, and can also be used for data transmission between the processor and the memory. Examples of the network may include wired and wireless networks. In one example, the transmission device 2105 includes a Network Interface Controller (NIC), which can be connected to a router and other network devices via a network cable so as to communicate with the internet or a local area network. In another example, the transmission device 2105 is a Radio Frequency (RF) module, which communicates with the internet wirelessly.
Specifically, the memory 2103 is used for storing an application program.
The processor 2101 may invoke the application program stored in the memory 2103 via the transmission device 2105 to perform the following steps:
acquiring a first position of a first object in a first three-dimensional space, wherein the first three-dimensional space is a three-dimensional space where the first object is located in a plurality of three-dimensional spaces included in a game scene, and the first object is an object in the game scene;
obtaining a target three-dimensional image to be mapped according to an original three-dimensional image of a first object, vertically mapping the target three-dimensional image to a target plane in a game scene to obtain a first two-dimensional image, obtaining a target two-dimensional image to be displayed according to the first two-dimensional image, wherein the size of the target two-dimensional image is related to the distance between a first position and the target plane, and the size of the target two-dimensional image is different from that of a second two-dimensional image obtained by vertically mapping the original three-dimensional image to the target plane;
and displaying the target two-dimensional image mapped on the target plane.
The processor 2101 is further configured to perform the following steps:
when the first object exists in a second three-dimensional space at the moment before the current moment, acquiring a second position of the first object in the second three-dimensional space, and determining the first position according to the second position and the moving speed of the first object, wherein the first object exists in the first three-dimensional space at the current moment, and the second three-dimensional space is a three-dimensional space in which the first object exists at the moment before the current moment among a plurality of three-dimensional spaces;
and when the first object does not exist in the second three-dimensional space at the moment before the current moment, acquiring a first position configured for the first object in the first three-dimensional space, wherein the first three-dimensional space comprises a second object controlled by the game account number, and the plane where the second object is located is a target plane.
By adopting the embodiment of the invention, a first position of a first object of a game scene in a first three-dimensional space is acquired; a target three-dimensional image to be mapped is obtained according to the original three-dimensional image of the first object; the target three-dimensional image is vertically mapped onto a target plane in the game scene to obtain a first two-dimensional image; a target two-dimensional image to be displayed is obtained according to the first two-dimensional image, its size being related to the distance between the first position and the target plane; and the target two-dimensional image mapped on the target plane is displayed. Because the size of the target two-dimensional image differs from that of the second two-dimensional image obtained by vertically mapping the original three-dimensional image onto the target plane (i.e., the target two-dimensional image is displayed in the "far-small, near-large" perspective manner), when the first object is a star, the technical problem in the related art that stars in a starry-sky scene occupy too large an area of the game animation picture can be solved, reducing the area occupied by stars and improving the realism of the game animation picture.
Optionally, the specific examples in this embodiment may refer to the examples described in embodiment 1 and embodiment 2, and this embodiment is not described herein again.
It can be understood by those skilled in the art that the structure shown in Fig. 21 is only an illustration, and the terminal may be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD. Fig. 21 does not limit the structure of the electronic device. For example, the terminal may also include more or fewer components (e.g., a network interface or a display device) than shown in Fig. 21, or have a configuration different from that shown in Fig. 21.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The embodiment of the invention also provides a storage medium. Optionally, in this embodiment, the storage medium may store program code for executing the display method of a game screen.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
S41, acquiring a first position of a first object in a first three-dimensional space, wherein the first three-dimensional space is the three-dimensional space, among a plurality of three-dimensional spaces included in a game scene, in which the first object is located, and the first object is an object in the game scene;
S42, obtaining a target three-dimensional image to be mapped according to the original three-dimensional image of the first object, vertically mapping the target three-dimensional image onto a target plane in the game scene to obtain a first two-dimensional image, and obtaining a target two-dimensional image to be displayed according to the first two-dimensional image, wherein the size of the target two-dimensional image is related to the distance between the first position and the target plane, and the size of the target two-dimensional image is different from that of a second two-dimensional image obtained by vertically mapping the original three-dimensional image onto the target plane;
and S43, displaying the target two-dimensional image mapped on the target plane.
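Steps S41–S43 can be illustrated with a minimal sketch of the vertical mapping and the size adjustment. Treating the target plane as horizontal (so projection simply discards the height coordinate) and reducing the mapped image toward a center point are illustrative assumptions, not details given in the patent.

```python
def vertical_projection(vertices):
    """S42, first part: vertically (orthogonally) map 3-D vertices onto a
    horizontal target plane by discarding the height coordinate."""
    return [(x, y) for (x, y, _z) in vertices]

def shrink_about(points, center, scale):
    """S42, second part: reduce the mapped 2-D image toward a center point,
    yielding the target two-dimensional image at the reduced size."""
    cx, cy = center
    return [(cx + (x - cx) * scale, cy + (y - cy) * scale) for (x, y) in points]
```

A renderer would then draw the returned 2-D points on the target plane (S43); the scale passed to `shrink_about` would come from the distance-to-scale target relation.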
Optionally, the storage medium is further arranged to store program code for performing the steps of:
S51, when the first object exists in a second three-dimensional space at the moment before the current moment, acquiring a second position of the first object in the second three-dimensional space, and determining the first position according to the second position and the moving speed of the first object, wherein the first object exists in the first three-dimensional space at the current moment, and the second three-dimensional space is the three-dimensional space, among the plurality of three-dimensional spaces, in which the first object exists at the moment before the current moment;
S52, when the first object does not exist in the second three-dimensional space at the moment before the current moment, acquiring a first position configured for the first object in the first three-dimensional space, wherein the first three-dimensional space comprises a second object controlled by the game account, and the plane where the second object is located is the target plane.
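The two branches S51/S52 amount to a simple fallback: extrapolate from the previous position when one exists, otherwise use the position configured for the object in the new space. A minimal sketch (the tick length `dt` and the tuple representation of positions and speed are assumptions):

```python
def first_position(prev_position, moving_speed, dt, configured_position):
    """S51: if the first object existed in the second three-dimensional space
    at the moment before the current moment, extrapolate the first position
    from the second position and the object's moving speed.
    S52: otherwise, fall back to the position configured for the object in
    the first three-dimensional space."""
    if prev_position is not None:  # S51
        return tuple(p + v * dt for p, v in zip(prev_position, moving_speed))
    return configured_position     # S52
```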
Optionally, the specific examples in this embodiment may refer to the examples described in embodiment 1 and embodiment 2, and this embodiment is not described herein again.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The serial numbers of the above embodiments of the present invention are merely for description and do not imply any preference among the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also fall within the protection scope of the present invention.

Claims (15)

1. A method for displaying a game screen, comprising:
acquiring a first position of a first object in a first three-dimensional space, wherein the first three-dimensional space is a three-dimensional space where the first object is located in a plurality of three-dimensional spaces included in a game scene, and the first object is an object in the game scene;
obtaining a first distance between the first position and a target plane within the game scene; acquiring a reduction scale corresponding to the first distance according to a target relation, wherein the target relation is used for indicating a relation between the distance from the position of the first object in the three-dimensional space to the target plane and the reduction scale;
reducing the original three-dimensional image of the first object according to a reduction scale corresponding to the first distance to obtain a target three-dimensional image of the first object, wherein the size of the target three-dimensional image is related to the distance between the first position and the target plane;
vertically mapping the target three-dimensional image onto the target plane to obtain a first two-dimensional image, and taking the first two-dimensional image as a target two-dimensional image, wherein the size of the target two-dimensional image is related to the distance between the first position and the target plane, and the size of the target two-dimensional image is different from that of a second two-dimensional image obtained by vertically mapping the original three-dimensional image onto the target plane;
displaying the target two-dimensional image mapped on the target plane.
2. The method of claim 1, wherein vertically mapping the target three-dimensional image onto a target plane within the game scene resulting in a first two-dimensional image comprises:
acquiring attribute information configured for the first object, wherein the attribute information is used for indicating the shape and color of the target three-dimensional image vertically mapped on the target plane;
and vertically mapping the target three-dimensional image onto the target plane according to the shape and the color indicated by the attribute information to obtain the first two-dimensional image.
3. The method of claim 1, wherein prior to acquiring the first location of the first object in the first three-dimensional space, the method further comprises:
configuring attribute information of each first object appearing in the first three-dimensional space, wherein the first object is any one of a plurality of preset objects, the color indicated by the attribute information is any one of a plurality of preset colors, and the shape indicated by the attribute information is any one of a plurality of preset shapes.
4. The method of claim 1, wherein obtaining a first position of a first object in a first three-dimensional space comprises:
when the first object exists in a second three-dimensional space at the moment before the current moment, acquiring a second position of the first object in the second three-dimensional space, and determining the first position according to the second position and the moving speed of the first object, wherein the first object exists in the first three-dimensional space at the current moment, and the second three-dimensional space is a three-dimensional space of the plurality of three-dimensional spaces in which the first object exists at the moment before the current moment;
and when the first object does not exist in the second three-dimensional space at the moment before the current moment, acquiring the first position configured for the first object in the first three-dimensional space, wherein the first three-dimensional space comprises a second object controlled by a game account, and the plane where the second object is located is the target plane.
5. A method for displaying a game screen, comprising:
acquiring a first position of a first object in a first three-dimensional space, wherein the first three-dimensional space is a three-dimensional space where the first object is located in a plurality of three-dimensional spaces included in a game scene, and the first object is an object in the game scene;
taking an original three-dimensional image of the first object as a target three-dimensional image of the first object;
vertically mapping the target three-dimensional image to a target plane in the game scene to obtain a first two-dimensional image;
acquiring a first distance between the first position and the target plane; acquiring a reduction scale corresponding to the first distance according to a target relation, wherein the target relation is used for indicating a relation between the distance from the position of the first object in the three-dimensional space to the target plane and the reduction scale; reducing the first two-dimensional image according to a reduction proportion corresponding to the first distance to obtain a target two-dimensional image;
and displaying the target two-dimensional image.
6. The method of claim 5, wherein vertically mapping the target three-dimensional image onto a target plane within the game scene resulting in a first two-dimensional image comprises:
acquiring attribute information configured for the first object, wherein the attribute information is used for indicating the shape and color of the target three-dimensional image vertically mapped on the target plane;
and vertically mapping the target three-dimensional image onto the target plane according to the shape and the color indicated by the attribute information to obtain the first two-dimensional image.
7. The method of claim 5, wherein prior to acquiring the first position of the first object in the first three-dimensional space, the method further comprises:
configuring attribute information of each first object appearing in the first three-dimensional space, wherein the first object is any one of a plurality of preset objects, the color indicated by the attribute information is any one of a plurality of preset colors, and the shape indicated by the attribute information is any one of a plurality of preset shapes.
8. The method of claim 5, wherein obtaining a first position of a first object in a first three-dimensional space comprises:
when the first object exists in a second three-dimensional space at the moment before the current moment, acquiring a second position of the first object in the second three-dimensional space, and determining the first position according to the second position and the moving speed of the first object, wherein the first object exists in the first three-dimensional space at the current moment, and the second three-dimensional space is a three-dimensional space of the plurality of three-dimensional spaces in which the first object exists at the moment before the current moment;
and when the first object does not exist in the second three-dimensional space at the moment before the current moment, acquiring the first position configured for the first object in the first three-dimensional space, wherein the first three-dimensional space comprises a second object controlled by a game account, and the plane where the second object is located is the target plane.
9. A game screen display device, comprising:
the game system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a first position of a first object in a first three-dimensional space, the first three-dimensional space is a three-dimensional space where the first object is located in a plurality of three-dimensional spaces included in a game scene, and the first object is an object in the game scene;
a processing unit comprising: a first adjustment module and a first mapping module, the first adjustment module comprising: the first obtaining submodule is used for obtaining a first distance between the first position and a target plane in the game scene; a second obtaining sub-module, configured to obtain a reduction scale corresponding to the first distance according to a target relationship, where the target relationship is used to indicate a relationship between a distance from a position of the first object in a three-dimensional space to the target plane and the reduction scale; a processing sub-module, configured to reduce an original three-dimensional image of the first object according to a reduction scale corresponding to the first distance to obtain a target three-dimensional image of the first object, where a size of the target three-dimensional image is related to a distance between the first position and the target plane, and the first mapping module is configured to vertically map the target three-dimensional image onto the target plane to obtain a first two-dimensional image, and use the first two-dimensional image as a target two-dimensional image, where a size of the target two-dimensional image is related to a distance between the first position and the target plane, and a size of the target two-dimensional image is different from a size of a second two-dimensional image obtained by vertically mapping the original three-dimensional image onto the target plane;
and the display unit is used for displaying the target two-dimensional image mapped on the target plane.
10. The apparatus of claim 9, wherein the processing unit comprises:
the second acquisition module is used for acquiring attribute information configured for the first object, wherein the attribute information is used for indicating the shape and the color of the target three-dimensional image vertically mapped on the target plane;
and the third mapping module is used for vertically mapping the target three-dimensional image onto the target plane according to the shape and the color indicated by the attribute information to obtain the first two-dimensional image.
11. The apparatus of claim 9, further comprising:
the configuration unit is used for configuring attribute information of each object appearing in the first three-dimensional space before acquiring a first position of the first object in the first three-dimensional space, wherein the first object is any one of a plurality of preset objects, the color indicated by the attribute information is any one of a plurality of preset colors, and the shape indicated by the attribute information is any one of a plurality of preset shapes.
12. The apparatus of claim 9, wherein the obtaining unit is configured to:
when the first object exists in a second three-dimensional space at the moment before the current moment, acquiring a second position of the first object in the second three-dimensional space, and determining the first position according to the second position and the moving speed of the first object, wherein the first object exists in the first three-dimensional space at the current moment, and the second three-dimensional space is a three-dimensional space of a plurality of three-dimensional spaces in which the first object exists at the moment before the current moment; and when the first object does not exist in the second three-dimensional space at the moment before the current moment, acquiring a first position configured for the first object in the first three-dimensional space, wherein the first three-dimensional space comprises a second object controlled by a game account, and a plane where the second object is located is the target plane.
13. A game screen display device, comprising:
the game system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a first position of a first object in a first three-dimensional space, the first three-dimensional space is a three-dimensional space where the first object is located in a plurality of three-dimensional spaces included in a game scene, and the first object is an object in the game scene;
a processing unit comprising: the first acquisition module is used for acquiring an original three-dimensional image of the first object as a target three-dimensional image; the second mapping module is used for vertically mapping the target three-dimensional image to a target plane of the game scene to obtain a first two-dimensional image; a second adjusting module, configured to adjust the size of the first two-dimensional image according to the distance between the first position and the target plane, to obtain a target two-dimensional image, where the second adjusting module is further configured to: acquiring a first distance between the first position and the target plane; acquiring a reduction scale corresponding to the first distance according to a target relation, wherein the target relation is used for indicating a relation between the distance between the position of the first object in the three-dimensional space and the target plane and the reduction scale; reducing the first two-dimensional image according to a reduction proportion corresponding to the first distance to obtain the target two-dimensional image;
and the display unit is used for displaying the target two-dimensional image mapped on the target plane.
14. A storage medium comprising a stored program, wherein the program when executed performs the method of any one of claims 1 to 4 or 5 to 8.
15. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the method of any of claims 1 to 4 or 5 to 8 by means of the computer program.
CN201711208105.6A 2017-11-27 2017-11-27 Storage medium, electronic device, game screen display method and device Active CN108043027B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711208105.6A CN108043027B (en) 2017-11-27 2017-11-27 Storage medium, electronic device, game screen display method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711208105.6A CN108043027B (en) 2017-11-27 2017-11-27 Storage medium, electronic device, game screen display method and device

Publications (2)

Publication Number Publication Date
CN108043027A CN108043027A (en) 2018-05-18
CN108043027B true CN108043027B (en) 2020-09-11

Family

ID=62120624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711208105.6A Active CN108043027B (en) 2017-11-27 2017-11-27 Storage medium, electronic device, game screen display method and device

Country Status (1)

Country Link
CN (1) CN108043027B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109107156A (en) * 2018-08-10 2019-01-01 腾讯科技(深圳)有限公司 Game object acquisition methods, device, electronic equipment and readable storage medium storing program for executing
CN110960854B (en) * 2019-11-21 2023-07-25 北京金山安全软件有限公司 Image mobile display method and device, electronic equipment and storage medium
CN111068333B (en) * 2019-12-20 2021-12-21 腾讯科技(深圳)有限公司 Video-based carrier abnormal state detection method, device, equipment and medium
CN111275813B (en) * 2020-01-20 2021-09-17 北京字节跳动网络技术有限公司 Data processing method and device and electronic equipment
CN112529769B (en) * 2020-12-04 2023-08-18 威创集团股份有限公司 Method and system for adapting two-dimensional image to screen, computer equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104781852A (en) * 2012-09-21 2015-07-15 欧克里德私人有限公司 A computer graphics method for rendering three dimensional scenes


Similar Documents

Publication Publication Date Title
CN108043027B (en) Storage medium, electronic device, game screen display method and device
CN112215934B (en) Game model rendering method and device, storage medium and electronic device
CN109658365A (en) Image processing method, device, system and storage medium
CN108648257B (en) Panoramic picture acquisition method and device, storage medium and electronic device
KR101947650B1 (en) Apparatus and method for generating learning image in game engine-based machine learning
CN110517355A (en) Environment for illuminating mixed reality object synthesizes
JP2010033296A (en) Program, information storage medium, and image generation system
CN110246209B (en) Image processing method and device
CN109255841A (en) AR image presentation method, device, terminal and storage medium
US9754398B1 (en) Animation curve reduction for mobile application user interface objects
CN107038745A (en) A kind of 3D tourist sights roaming interaction method and device
US11328437B2 (en) Method for emulating defocus of sharp rendered images
US20230206511A1 (en) Methods, systems, and media for generating an immersive light field video with a layered mesh representation
CA3199390A1 (en) Systems and methods for rendering virtual objects using editable light-source parameter estimation
CN113230659A (en) Game display control method and device
US11600041B2 (en) Computing illumination of an elongated shape having a noncircular cross section
CN115761105A (en) Illumination rendering method and device, electronic equipment and storage medium
US11158113B1 (en) Increasing the speed of computation of a volumetric scattering render technique
CN115082607A (en) Virtual character hair rendering method and device, electronic equipment and storage medium
US11308586B2 (en) Method for applying a vignette effect to rendered images
CN111243099B (en) Method and device for processing image and method and device for displaying image in AR (augmented reality) equipment
KR20230022153A (en) Single-image 3D photo with soft layering and depth-aware restoration
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium
CN112396683A (en) Shadow rendering method, device and equipment of virtual scene and storage medium
Yang et al. Visual effects in computer games

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant