CN114452645B - Method, apparatus and storage medium for generating scene image

Info

Publication number
CN114452645B
CN114452645B
Authority
CN
China
Prior art keywords
scene
frame
drawing command
module
image
Prior art date
Legal status
Active
Application number
CN202110778014.6A
Other languages
Chinese (zh)
Other versions
CN114452645A (en)
Inventor
王国凡
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202110778014.6A priority Critical patent/CN114452645B/en
Publication of CN114452645A publication Critical patent/CN114452645A/en
Application granted granted Critical
Publication of CN114452645B publication Critical patent/CN114452645B/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35 Details of game servers
    • A63F13/355 Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/53 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing
    • A63F2300/538 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing for performing operations on behalf of the game client, e.g. rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a method, a device, and a storage medium for generating a scene image. The method comprises: detecting the state of a game object; if the game object is in a motion state, generating a first-type scene image, wherein the first-type scene image presents a game scene in a first range; if the game object is in a non-motion state, generating a second-type scene image, wherein the second-type scene image presents a game scene in a second range; the first range is smaller than the second range. In this scheme, the range of the game scene presented by the scene image is narrowed when the game object is in the motion state, so that the amount of calculation required for the device to generate one frame of scene image is reduced, and the load on the device while the game object is in the motion state is reduced.

Description

Method, apparatus and storage medium for generating scene image
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a storage medium for generating a scene image.
Background
With the development of computer technology, new electronic games continue to emerge on terminal devices (including smart phones, computers, etc.). In an electronic game, a user can operate a game object to move in a game scene, and the terminal device needs to generate the game scene within a specific range in real time according to the user's operation. When the game scene changes rapidly, the terminal device needs to continuously acquire and generate new scene images at a high frequency, resulting in a high load on the terminal device.
Disclosure of Invention
The application provides a method, a device, and a storage medium for generating scene images, aiming to solve the problem of high device load when an electronic game runs.
In order to achieve the above object, the present application provides the following technical solutions:
a first aspect of the present application provides a method of generating an image of a scene, comprising:
judging whether a game object is in a motion state;
generating a second-type scene image when the game object is not in a motion state;
generating a first-type scene image when the game object is in a motion state; the range of the game scene presented by the first-type scene image is smaller than the range of the game scene presented by the second-type scene image.
The method has the advantages that the range of the presented game scene is reduced when the game object is in a motion state, so that the calculation amount required by the electronic equipment for generating the scene image is reduced, and the load of the electronic equipment is reduced.
In some alternative embodiments, before the first-type scene image is generated, several frames of scene images whose presented game-scene range gradually shrinks are generated, so that the range of the game scene presented by the electronic device decreases smoothly from the range corresponding to the second-type scene image to the range corresponding to the first-type scene image.
The above embodiment has the advantage of smoothly reducing the range of the game scene presented by the electronic device when the game object is in a motion state, avoiding an abrupt change in that range.
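For illustration only, a minimal sketch of one way such a smooth transition could be computed; the linear interpolation and the transition length are assumptions, not part of the claimed embodiments:

// Hedged sketch: linearly interpolate the presented scene range over a
// fixed number of transition frames. kTransitionFrames is an assumed value.
const int kTransitionFrames = 10;

float SceneRangeForFrame(int frameInTransition,
                         float secondRange,   // wider range (non-motion state)
                         float firstRange) {  // narrower range (motion state)
    float t = static_cast<float>(frameInTransition) / kTransitionFrames;
    if (t > 1.0f) t = 1.0f;
    // The range shrinks smoothly from secondRange down to firstRange.
    return secondRange + (firstRange - secondRange) * t;
}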
In some alternative embodiments, the first type of scene image is generated by:
obtaining object information used for drawing a scene image;
and cutting out the part of the object information used for drawing the scene image that lies outside the first range, and drawing the first-type scene image using the part of the object information that lies within the first range.
In some alternative embodiments, the manner in which the status of a game object is detected is:
obtaining a current scene drawing command; the scene drawing command refers to a drawing command for drawing a scene image;
matching the current scene drawing command with the motion command stream; the motion command stream is a scene drawing command obtained when the game object is in a motion state;
if the current scene drawing command is successfully matched with the motion command stream, detecting that the game object is in a motion state;
if the matching of the current scene drawing command and the motion command stream fails, detecting that the game object is in a non-motion state.
In some alternative embodiments, the manner in which the status of a game object is detected is:
obtaining a multi-frame scene drawing command within a certain time;
calculating the speed of the game object according to the distance between the game object and an object in the game scene in each frame of scene drawing command;
if the speed of the game object is greater than the speed threshold, detecting that the game object is in a motion state;
if the speed of the game object is not greater than the speed threshold, detecting that the game object is in a non-motion state.
In some alternative embodiments, the speed of the game object is determined prior to generating the first type of scene image;
if the speed of the game object is low, continuing to generate the first-type scene image;
if the speed of the game object is high, generating a third-type scene image; the range of the game scene presented by the third-type scene image is smaller than the range of the game scene presented by the first-type scene image.
The above embodiment has the advantage of further reducing the range of game scenes presented by the electronic device when the speed of the game object is high, thereby further reducing the load of the electronic device.
In some optional embodiments, the method of generating a scene image is applied to an electronic device, a software layer of which includes a matching module;
The detecting the state of the game object comprises:
the matching module receives a frame of scene drawing command;
the matching module reads the motion command stream from the memory;
the matching module matches the motion command stream with the one-frame scene drawing command;
if the motion command stream and the one-frame scene drawing command are successfully matched, the matching module determines that the game object is in a motion state in the one-frame scene drawing command;
and if the matching of the motion command stream and the one-frame scene drawing command fails, the matching module determines that the game object is in a non-motion state in the one-frame scene drawing command.
In some alternative embodiments, the software layer of the electronic device further comprises a zoom-in module;
and if the game object is in a motion state, generating a first-type scene image comprises the following steps:
the matching module determines that the game object is in a motion state in the one-frame scene drawing command, and then sends the one-frame scene drawing command to the zooming-in module;
the zoom-in module receives the one-frame scene drawing command;
and the zooming-in module generates the first type scene image according to the one-frame scene drawing command.
In some optional embodiments, the software layer of the electronic device further includes a send-display interface;
after the zooming-in module generates the first-type scene image according to the one-frame scene drawing command, the zooming-in module sends the first-type scene image to the send-display interface;
and the send-display interface displays the first-type scene image on a display screen.
In some optional embodiments, the zooming-in module generates the first type scene image according to the one-frame scene drawing command, including:
the zooming-in module creates a temporary frame buffer and a view; the size of the temporary frame buffer and the size of the view both match the first range;
the zoom-in module reads object information from the one-frame scene drawing command;
and the zooming-in module calls a graphics processor based on the object information, the temporary frame buffer and the view, so that the graphics processor draws the first type scene image.
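For illustration only, a minimal OpenGL ES sketch of creating a temporary frame buffer and a view sized to the first range; the texture format and the variables firstRangeWidth/firstRangeHeight are assumptions, not the patent's literal code:

#include <GLES2/gl2.h>

// Hedged sketch: create a temporary FBO whose color attachment and viewport
// ("view") both match the smaller first-range size.
void CreateTemporaryFramebuffer(GLsizei firstRangeWidth, GLsizei firstRangeHeight) {
    GLuint tmpFbo = 0, tmpTex = 0;
    glGenFramebuffers(1, &tmpFbo);
    glGenTextures(1, &tmpTex);
    glBindTexture(GL_TEXTURE_2D, tmpTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, firstRangeWidth, firstRangeHeight,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glBindFramebuffer(GL_FRAMEBUFFER, tmpFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tmpTex, 0);
    glViewport(0, 0, firstRangeWidth, firstRangeHeight);  // view matches the first range
}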
In some optional embodiments, the software layer of the electronic device further includes a callback module and a graphics library;
and if the game object is in a non-motion state, generating a second type of scene image, which comprises the following steps:
The matching module determines that the game object is in a non-motion state in the one-frame scene drawing command, and then sends the one-frame scene drawing command to the callback module;
the callback module receives the one-frame scene drawing command;
the callback module calls back the one-frame scene drawing command to the graphics library;
and in response to the callback module calling back the one-frame scene drawing command, the graphics library generates the second-type scene image according to the one-frame scene drawing command.
In some alternative embodiments, the software layer of the electronic device further comprises an identification module;
before the matching module receives a frame of scene drawing command, the method further comprises:
the identification module receives a frame of drawing command stream;
after the recognition module determines the scene frame buffer, the recognition module determines the drawing command stored in the scene frame buffer in the one-frame drawing command stream as one-frame scene drawing command;
the identification module judges whether the memory stores a motion command stream or not;
and after the recognition module judges that the motion command stream is stored in the memory, the recognition module sends the one-frame scene drawing command to the matching module.
In some alternative embodiments, the identification module determines a scene frame buffer comprising:
the identification module receives a drawing command stream of a previous frame; the previous frame drawing command stream is another frame drawing command stream received by the identification module before the one frame drawing command stream;
the identification module counts the number of drawing commands stored in each frame buffer except the interface frame buffer in a plurality of frame buffers for storing the drawing command stream of the previous frame; the interface frame buffer is used for storing user interface drawing commands in the frame buffers; the user interface drawing command is a drawing command used for drawing a user interface image in the drawing command stream;
the recognition module determines a frame buffer with the largest number of drawing commands stored except the interface frame buffer among the plurality of frame buffers as the scene frame buffer.
In some alternative embodiments, the software layer of the electronic device further comprises a detection module;
after the identification module judges whether the memory stores the motion command stream, the method further comprises:
after the recognition module judges that the motion command stream is not stored in the memory, the recognition module sends the one-frame scene drawing command to the detection module;
The detection module receives the one-frame scene drawing command;
the detection module calculates the speed of a game object in the one-frame scene drawing command;
the detection module judges whether the speed of the game object in the one-frame scene drawing command is greater than a preset speed threshold value;
after the detection module judges that the speed of the game object in the one-frame scene drawing command is greater than the speed threshold, the detection module determines the motion command stream according to the one-frame scene drawing command;
the detection module stores the motion command stream into the memory.
In some alternative embodiments, the detecting module calculates a speed of the game object in the one-frame scene drawing command, including:
the detection module acquires a time stamp and a distance corresponding to the one-frame scene drawing command; the distance corresponding to the one-frame scene drawing command is the distance between the game object and the central coordinate of the image in the one-frame scene drawing command;
the detection module reads the timestamp and the distance corresponding to the scene drawing command of the previous frame from the memory; the distance corresponding to the previous frame of scene drawing command is the distance between the game object and the central coordinate of the image in the previous frame of scene drawing command; the previous frame of scene drawing command is another frame of scene drawing command received by the identification module before the one frame of scene drawing command;
the detection module divides the distance difference by the time difference to obtain the speed of the game object in the one-frame scene drawing command; the distance difference is the difference between the distance corresponding to the one-frame scene drawing command and the distance corresponding to the previous frame of scene drawing command; the time difference is the difference between the timestamp corresponding to the one-frame scene drawing command and the timestamp corresponding to the previous frame of scene drawing command.
In some optional embodiments, the detecting module determining the motion command stream according to the one-frame scene drawing command includes:
the detection module reads out the first N frames of scene drawing commands from the memory; the speeds of the game objects in the first N frames of scene drawing commands are all larger than the speed threshold; the first N frame scene drawing commands are N frame scene drawing commands received by the identification module before the one frame scene drawing command; the N is a preset positive integer;
the detection module determines a drawing command sequence contained in the one-frame scene drawing command and the previous N-frame scene drawing command as the motion command stream; the drawing command sequence is a set of a plurality of continuous scene drawing commands.
In some alternative embodiments, the electronic device further comprises a game application and an interception module;
before the identification module receives a frame of drawing command stream, the method further comprises:
the game application outputting the one frame of drawing command stream;
the interception module intercepts the one frame of drawing command stream output by the game application;
the interception module sends the one-frame drawing command stream to the identification module.
A second aspect of the present application provides an electronic device, a hardware layer of the electronic device including: one or more processors, memory, and a display screen;
the memory is used for storing one or more programs;
the one or more processors are configured to execute the one or more programs to cause the electronic device to:
detecting a state of a game object;
if the game object is in a motion state, generating a first type scene image, wherein the first type scene image presents a game scene in a first range;
if the game object is in a non-motion state, generating a second type of scene image, wherein the second type of scene image presents a game scene in a second range; the first range is smaller than the second range.
A third aspect of the present application provides a computer storage medium storing a computer program which, when executed, implements the method of generating a scene image provided in any one of the first aspects of the present application.
The application provides a method, a device, and a storage medium for generating a scene image. The method comprises: detecting the state of a game object; if the game object is in a motion state, generating a first-type scene image, the first-type scene image presenting a game scene in a first range; if the game object is in a non-motion state, generating a second-type scene image, the second-type scene image presenting a game scene in a second range; the first range is smaller than the second range. In this scheme, the range of the game scene presented by the scene image is narrowed when the game object is in the motion state, reducing the amount of calculation required for the device to generate one frame of scene image and thereby reducing the load on the device while the game object is in the motion state.
Drawings
Fig. 1 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a method for generating an image of a scene according to an embodiment of the present application;
FIG. 3 is a schematic view of the hardware layer and software layer of a device for performing the method for generating a scene image according to the embodiment of the present application;
fig. 4a, 4b and 4c are schematic diagrams of images of a scene disclosed in embodiments of the present application;
FIG. 5 is an exemplary diagram of the specific operation of the modules in the application framework layer shown in FIG. 3;
FIG. 6 is a schematic diagram of a method for smoothly narrowing the range of a scene presented by a scene image according to an embodiment of the present disclosure;
fig. 7 is a flowchart of a method for generating a scene image according to an embodiment of the present application.
Detailed Description
The terms first, second, third and the like in the description, the claims, and the drawings are used to distinguish between different objects, not to limit a specific order.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The method for generating the scene image can be used for electronic equipment such as a smart phone, a tablet personal computer, a personal desktop computer or a notebook computer. The structure of the electronic device applying the method can be shown in fig. 1.
As shown in fig. 1, the electronic device may include a processor 110, an internal memory 120, a display screen 130, and the like.
It is to be understood that the structure illustrated in the present embodiment does not constitute a specific limitation on the electronic apparatus. In other embodiments, the electronic device may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
For example, in the present application, the processor 110 may execute one or more programs stored in the internal memory, so that the electronic device performs the method for generating a scene image provided in the embodiments of the present application.
A memory may also be provided in the processor 110 for storing commands and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold commands or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the command or data again, it can be called directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
The internal memory 120 may be used to store computer executable program code, including commands. The processor 110 executes various functional applications of the electronic device and data processing by executing commands stored in the internal memory 120. For example, in the present embodiment, the processor 110 may perform scene orchestration by executing commands stored in the internal memory 120. The internal memory 120 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 120 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device and data processing by executing commands stored in the internal memory 120 and/or commands stored in a memory provided in the processor.
For example, in the present application, the memory may store one or more programs, and when the stored programs are executed by the processor, the method for generating a scene image provided in the embodiment of the present application can be implemented.
The electronic device implements the display function through the GPU, the display screen 130, the application processor, and the like. The GPU is a microprocessor for image processing and is connected to the display screen 130 and the application processor. The display screen 130 is used to display images, videos, and the like. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program commands to generate or change display information.
A series of graphical user interfaces (graphical user interface, GUI) may be displayed on the display screen 130 of the electronic device, and a user may interact with them by directly manipulating the graphical user interfaces. For example, in embodiments of the present application, display screen 130 may display virtual keys.
In addition, an operating system runs on the above components, such as the iOS operating system developed by Apple Inc., the Android open-source operating system developed by Google Inc., the Windows operating system developed by Microsoft Corporation, or the HarmonyOS system. Applications may be installed and run on the operating system.
For ease of understanding, some technical terms in the schemes of the present application are explained and illustrated below:
regarding the graphic library:
The graphics library, also called a drawing library, defines a cross-programming-language, cross-platform application programming interface (Application Programming Interface, API) that includes many functions for processing graphics. Taking OpenGL (Open Graphics Library) as an example, the API defined by OpenGL includes interfaces for drawing two-dimensional or three-dimensional images (these interfaces include drawing functions, for example the drawing function glDrawElements()) and interfaces for presenting images drawn by the drawing functions on a display interface (these interfaces include presentation functions, for example the function eglSwapBuffers()), which the embodiments of the present application do not enumerate one by one here. The functions in OpenGL may be called by commands; for example, a drawing function may be called by a drawing command to draw a two-dimensional or three-dimensional image. A drawing command is a command written by a developer, according to the functions in the graphics library, when the game application is developed, and is used to call the graphics-library interface corresponding to the drawing command.
Regarding game images:
As indicated above, the two-dimensional or three-dimensional images drawn by drawing commands invoking drawing functions may include game images as well as other types of image frames. Specifically, a game application displays by continuously rendering and rapidly playing back frames of images while running. A frame of game image is one still image displayed by the game application. Each frame of still image may be composed of a scene image, a user interface (UI) image, and the like. By way of example, the scene image may include in-game scenery, game characters, background objects, special effects, skills, etc., and the UI image may include control buttons, a minimap, floating text, etc.; some in-game character health bars are also part of the UI image. It should be noted that whether it is a game character or the like in the scene image, or a control button in the UI image, each element in a game image frame may be regarded as an object; it can be understood that each frame of game image is composed of individual objects.
For a frame of game image, the scene image and the UI image that constitute it may reside in two frame buffers respectively; for example, the rendered UI image is stored in FB0 and the rendered scene image is stored in FB2, and finally the UI image in FB0 and the scene image in FB2 are combined into one complete frame of game image.
Regarding drawing commands:
Each object in a game image is obtained by specific software or hardware of the electronic device executing a drawing command. An object may be drawn by one or more drawing commands; typically, objects correspond to drawing commands one by one. It should be noted that each drawing command also carries specific parameters, which may include vertex information, texture information, and the like of the object corresponding to the drawing command: the vertex information describes the number and positions of the vertices forming the corresponding object, and the texture information describes the color or specific pattern to be filled on the surface of the corresponding object. When the electronic device executes a drawing command, it draws the object corresponding to the drawing command based on these specific parameters.
Regarding the drawing command stream:
A drawing command stream is a command stream composed of one or more drawing commands; one drawing command stream is generally used to draw one frame of image. Specifically, the GPU may draw one or more objects in a game image by executing the drawing commands in the drawing command stream and invoking one or more interfaces of the graphics library. It should be noted that each object drawn by a drawing command may be represented by data stored in the memory; for example, the set of drawn objects generated from a drawing command stream may constitute the display data of the corresponding frame of game image.
A drawing command stream generally includes three kinds of commands: scene drawing commands, user interface (UI) drawing commands, and send-display commands. Scene drawing commands are used to draw scene images, in particular images of scenery, characters, special effects, skills, and the like in the game. UI drawing commands are used to draw UI images such as control buttons, minimaps, and floating text; some in-game character health bars are also drawn by UI drawing commands, and UI drawing commands may also be called control drawing commands. Send-display commands are used to place the drawn image data at a position designated by the system so as to complete the actual display.
Taking the case where the processor 110 in fig. 1 includes a CPU and a GPU as an example, the process of drawing a scene image in the embodiment of the present application is described below. After the game is started, the CPU may obtain the drawing command stream and rendering command stream output by the game and send the drawing commands to the GPU, so that the GPU draws the corresponding objects according to the drawing commands. As an example, the rendering commands may include the glBindFramebuffer() function, and the drawing commands may include the glDrawElements() and glDrawElementsInstanced() functions. The glBindFramebuffer() function may be used to indicate the currently bound frame buffer. For example, glBindFramebuffer(1) may indicate that the currently bound frame buffer is FB1, i.e., instruct the GPU to execute the subsequent drawing commands glDrawElements/glDrawElementsInstanced on FB1. For ease of illustration, in the following examples, the glDrawElements and glDrawElementsInstanced commands in a drawing command stream are collectively referred to as drawing operations (draw calls).
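For orientation, a hedged sketch of what one frame's command stream might look like; the frame buffer numbers, index counts, and EGL handles are illustrative placeholders, not values from the patent:

// Hedged example of one frame's command stream: bind a frame buffer, issue
// draw calls into it, then present. n1, n2, n3, instanceCount, display and
// surface are assumed placeholder variables.
glBindFramebuffer(GL_FRAMEBUFFER, 2);   // bind FB2: scene image target
glDrawElements(GL_TRIANGLES, n1, GL_UNSIGNED_SHORT, nullptr);                          // draw call
glDrawElementsInstanced(GL_TRIANGLES, n2, GL_UNSIGNED_SHORT, nullptr, instanceCount);  // draw call
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // bind FB0: UI image target
glDrawElements(GL_TRIANGLES, n3, GL_UNSIGNED_SHORT, nullptr);                          // draw call
eglSwapBuffers(display, surface);       // send-display: present the finished frame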
Fig. 2 is a schematic diagram of a method for generating a scene image according to the present application.
After the processor obtains the drawing command stream of the Nth frame of game image, it identifies the scene drawing commands and the non-scene drawing commands (commands other than the scene drawing commands), and then judges whether the game object in the scene drawing command of the Nth frame is in a motion state. If the game object is judged to be in a non-motion state, a non-scene image (such as a UI image) is drawn with the non-scene drawing commands and sent for display, and a frame of the second-type scene image is drawn with the scene drawing command and sent for display; the processor then merges the second-type scene image and the non-scene image to obtain the Nth frame of game image and displays it on the screen. The game object is the object representing the character controlled by the user in the game.
The drawing command stream of each subsequent frame of game image obtained by the processor is processed in the same way. Assuming that the game object in the scene drawing command of the (N+1)th frame is also in a non-motion state, a second-type scene image is drawn with the scene drawing command of the (N+1)th frame, a non-scene image is drawn with the non-scene drawing commands, and the processor merges the second-type scene image and the non-scene image to obtain the (N+1)th frame of game image and displays it on the screen.
A second-type scene image is a scene image whose presented scene range is the second range. It can be seen that the electronic device can present the game scene within the second range to the user by displaying the Nth through (N+1)th frames of game images on the screen.
After the processor obtains the drawing command stream of the (N+2)th frame and identifies the scene drawing command, it judges that the game object in the scene drawing command of the (N+2)th frame is in a motion state; it then draws a non-scene image (such as a UI image) with the non-scene drawing commands of the (N+2)th frame and sends it for display, and draws a first-type scene image with the scene drawing command of the (N+2)th frame and sends it for display.
Assuming that the game objects in the scene drawing commands of the (N+2)th through (N+X)th frames are all in a motion state, the processor sequentially draws the corresponding first-type scene images with the scene drawing commands of the (N+2)th through (N+X)th frames, and displays on the screen the (N+2)th through (N+X)th frames of game images obtained by merging the first-type scene images with the corresponding non-scene images.
A first-type scene image is a scene image whose presented scene range is the first range. It can be seen that the electronic device can present the game scene within the first range to the user by displaying the (N+2)th through (N+X)th frames of game images on the screen. The first range is smaller than the aforementioned second range.
After the processor obtains the drawing command stream of the (N+X+1)th frame and identifies the scene drawing command, it judges that the game object in the scene drawing command of the (N+X+1)th frame is in a non-motion state, and then draws a second-type scene image with the scene drawing command of the (N+X+1)th frame.
In this way, the electronic device presents the in-game scene of the second range to the user when the game object is in the non-motion state, and presents the in-game scene of the first range to the user when the game object is in the motion state.
Referring to fig. 3, which is a framework diagram of the software layer and hardware layer of an electronic device executing the method for generating a scene image according to an embodiment of the present application. The implementation process of the method for generating a scene image provided by the embodiment of the present application specifically includes the following steps:
and in a certain time after the game application is started, the identification module determines a frame buffer for storing the scene drawing command as a scene frame buffer according to the drawing command stream of the previous frame or the previous frames output by the game application. Thereafter, each time a frame drawing command stream is obtained, the recognition module recognizes a drawing command stored in the scene frame buffer as a scene drawing command in the frame drawing command stream, and sends the scene drawing command to the detection module.
And each time the detection module receives a frame of scene drawing command, obtaining a distance S and a time stamp T corresponding to the frame of scene drawing command and storing the distance S and the time stamp T corresponding to the frame of scene drawing command in a memory, and then calculating the speed of a game object in the frame of scene drawing command by using the distance S and the time stamp T corresponding to the previous frame or frames of scene drawing command stored in the memory and storing the speed into the memory. The distance corresponding to the scene drawing command refers to the distance between the game object and the image center coordinate in the scene drawing command, and the distance can be represented by the depth of the image center coordinate.
After detecting that the speeds of game objects in the multi-frame scene drawing commands are all greater than a speed threshold, the detection module detects a motion command stream by comparing the multi-frame scene drawing commands, and writes the motion command stream into the memory.
After the motion command stream exists in the memory, each frame of scene drawing command is sent to the matching module instead of the detection module. And each time the matching module obtains a frame of scene drawing command, matching the frame of scene drawing command with the motion command stream in the memory, and if the matching is successful, sending the frame of scene drawing command to the zooming-in module so as to draw a frame of first-class scene image by using the frame of scene drawing command.
The method for generating a scene image provided in the present application is specifically described below with reference to fig. 3.
As shown in fig. 3, the electronic device may include a software layer including an application layer 301 and a system framework layer (frame) 302, and a hardware layer 303. Wherein the hardware layer includes a CPU, GPU and memory 331.
The application layer 301 includes one or more applications that may run on an electronic device, such as a gaming application 304. For ease of understanding, the method illustrated in this application will be explained and illustrated below by taking gaming application 304 as an example.
Game application 304 includes game engine 341, which game engine 341 may call drawing functions within graphics library 305 to draw images of the game application through a graphics library interface.
The system framework layer 302 may include various graphics libraries 305, such as an embedded open graphics library (open graphics library for embedded system, openGL ES), and the like.
In the related art, when a user opens the game application 304, the electronic device starts the game application 304 in response to the user's operation of opening the game. Game engine 341 invokes drawing functions within the graphics library through the graphics library interface to draw game images based on the drawing command stream of the game images issued by the game application. After the graphics library generates the image data of a game image, a send-display interface (such as eglSwapBuffers()) is called to send the image data to a memory queue. Based on the periodic send-display signal, the graphics library sends the image data in the memory queue to hardware (such as the CPU) for composition, and finally the composited image data is sent to the display screen of the electronic device for display.
As shown in fig. 3, before the drawing command stream of the nth frame output by the game engine is transferred to the graphics library 305, the drawing command stream is intercepted by the interception module 306 and then transferred by the interception module 306 to the recognition module 307, where the recognition module 307 is configured to recognize a scene drawing command and a non-scene drawing command in the drawing command stream, where the non-scene drawing command may include the aforementioned UI drawing command, and may also include drawing commands except for the scene drawing command and the send-display command in the drawing command stream. The drawing command stream of the nth frame means a drawing command stream for drawing a game image of the nth frame.
The interception module may intercept the drawing command stream output by the game engine based on the Hook technique. The specific interception method is as follows: when the game application is started, the interception module modifies the function pointer list in the thread local storage (Thread Local Storage, TLS) of the game thread, replacing the graphics function pointers recorded in the list with replacement pointers. Pointers to the implementation functions of the recognition module 307 are preconfigured in the interception module. A graphics function pointer is a function pointer that points to an implementation function of the graphics library 305; a replacement pointer is a function pointer that points to an implementation function of the recognition module 307.
After the modification is completed, the drawing command directed to the graphic function pointer in the drawing command stream is directed to the alternative function pointer configured in the interception module, so that the interception module can intercept the drawing command stream. After intercepting the drawing command stream, the interception module passes the drawing command stream to the recognition module 307.
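A hedged sketch of this pointer-swap idea follows; the saved-pointer scheme and the RecordDrawCommand helper are assumptions made for illustration, since real TLS layouts are engine- and platform-specific:

#include <GLES2/gl2.h>

// Hedged sketch of hook-based interception: the graphics function pointer in
// the thread-local function pointer list is replaced by a pointer into the
// interception layer, which records the command and then forwards it.
using GlClearFn = void (*)(GLbitfield mask);
static GlClearFn realGlClear = nullptr;    // saved graphics-library pointer

void RecordDrawCommand(const char* name) { /* hypothetical: append to an index array */ }

void MyGlClear(GLbitfield mask) {
    RecordDrawCommand("glClear");          // record the intercepted command
    realGlClear(mask);                     // forward to the real graphics library
}

void InstallHook(GlClearFn* tlsSlot) {     // tlsSlot: entry in the TLS pointer list
    realGlClear = *tlsSlot;                // remember the original pointer
    *tlsSlot = &MyGlClear;                 // replace it with the interception pointer
}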
Each implementation function configured by the recognition module 307 corresponds to an implementation function of the graphic library 305, and table 1 below is an example of a portion of the implementation functions configured by the graphic library, and the corresponding implementation functions in the recognition module.
TABLE 1

Graphics library implementation function    Recognition module implementation function
glClear                                     MyGlClear
glActiveTexture                             MyGlActiveTexture
glAlphaFunc                                 MyGlAlphaFunc
glDrawElements                              MyGlDrawElements
glDrawElementsInstanced                     MyGlDrawElementsInstanced
Among them, MyGlDrawElements and MyGlDrawElementsInstanced also belong to the drawing operations.
In a specific example, the interception module may replace the function pointer glClear in the function pointer list, which points to the implementation function glClear of the graphics library, with the function pointer MyGlClear, which points to the implementation function MyGlClear. A drawing command in the drawing command stream output by the game application that calls the function pointer glClear is thus converted into a call to the function pointer MyGlClear, so that the interception module can intercept the drawing command calling the function pointer MyGlClear, and then call the implementation function MyGlClear of the recognition module according to the command.
Alternatively, the drawing command stream intercepted by the interception module may be recorded in the form of an array and transferred to other modules. Specifically, the interception module configures an index interpolation function for each function pointer (such as the function pointer MyGlClear) in the function pointer list. A function pointer configured with the index interpolation function adds, each time the corresponding implementation function is called, the index of the called implementation function to an index array used to record the drawing command stream. The correspondence between implementation functions and indexes may be preset in the interception module or stored in the memory 331, so that the interception module only needs to transfer the corresponding index array when passing the drawing command stream backward.
For example, table 2 below is an example of the correspondence between implementation functions and indexes:
TABLE 2
Implementation function    Index
MyGlClear                  1
MyGlActiveTexture          2
MyGlAlphaFunc              3
Table 2 may be stored in the interception module or in the memory, from which the interception module may read it when needed. The interception module converts each drawing command in the drawing command stream into the corresponding index according to the correspondence between implementation functions and indexes recorded in Table 2, so as to obtain the index array corresponding to the drawing command stream, and then stores the index array in the memory 331.
TABLE 3 Table 3
Game     Commands                       Index array
Game1    MyGlClear/MyGlActiveTexture    1,2
Game2    MyGlClear/MyGlAlphaFunc        1,3
Table 3 is an example of index arrays converted according to the correspondence in Table 2. In Table 3, Game1 sequentially outputs a command calling MyGlClear and a command calling MyGlActiveTexture; after the two commands reach the interception module, the interception module converts them into the index array (1, 2) according to the correspondence in Table 2 and stores the index array (1, 2) in the memory. Similarly, Game2 sequentially outputs a command calling MyGlClear and a command calling MyGlAlphaFunc, and the interception module converts them into the index array (1, 3) according to the correspondence in Table 2 and stores the index array (1, 3) in the memory.
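A hedged sketch of converting intercepted commands into an index array using a Table-2-style mapping; the container choices and function names are assumptions:

#include <string>
#include <unordered_map>
#include <vector>

// Hedged sketch: convert a frame's intercepted commands into an index array
// using the Table 2 correspondence, then pass the array to later modules.
std::vector<int> ToIndexArray(const std::vector<std::string>& commands) {
    static const std::unordered_map<std::string, int> kIndexOf = {
        {"MyGlClear", 1}, {"MyGlActiveTexture", 2}, {"MyGlAlphaFunc", 3},
    };
    std::vector<int> indices;
    for (const std::string& cmd : commands) {
        auto it = kIndexOf.find(cmd);
        if (it != kIndexOf.end()) indices.push_back(it->second);  // e.g. "MyGlClear" -> 1
    }
    return indices;  // e.g. {MyGlClear, MyGlActiveTexture} -> {1, 2}
}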
The recognition module 307 may configure a corresponding implementation function for each implementation function of the graphics library, or may configure a corresponding implementation function for only a portion of the implementation functions of the graphics library.
The recognition module 307 may recognize the scene drawing command as follows:
Within a period of time (e.g., 15 milliseconds) after the game starts, the recognition module has not yet determined the frame buffer storing scene drawing commands. At this time, the recognition module may directly call back the entire drawing command stream to the graphics library for execution through the callback module, and then, while the commands of the drawing command stream are executed one by one, count the number of drawing operations stored in each frame buffer other than the frame buffer storing UI commands, and determine the frame buffer with the largest number of drawing operations as the frame buffer storing the scene image.
When the recognition module has determined a frame buffer storing the scene drawing commands, the recognition module may recognize all drawing operations stored within this frame buffer storing the scene drawing commands as scene drawing commands.
The drawing operation stored in one frame buffer refers to the drawing operation before the binding command myglbindframe buffer () corresponding to the frame buffer after the binding command myglbindframe buffer () corresponding to the frame buffer.
Taking frame buffer FB1 as an example, after the recognition module determines that scene drawing commands are stored in FB1, all drawing operations after MyGlBindFramebuffer(1) and before MyGlBindFramebuffer(x) may be recognized as scene drawing commands, where MyGlBindFramebuffer(1) is the binding command corresponding to FB1, MyGlBindFramebuffer(x) is the binding command corresponding to FBx, and FBx is another frame buffer different from FB1.
For example, most game applications store non-scene images in FB0 by default, so the recognition module may recognize MyGlBindFramebuffer(0) and the subsequent drawing operations as non-scene drawing commands, and then call the callback module 311 to call the non-scene drawing commands back to the graphics library.
For other commands binding frame buffers, i.e., for MyGlBindFramebuffer(y) (y not equal to 0), each time the recognition module recognizes such a command it establishes a counter with an initial value of 0 for the bound frame buffer; for example, after recognizing MyGlBindFramebuffer(2), it establishes a counter with an initial value of 0 corresponding to FB2. Then, every time a drawing operation (draw call) is recognized under this command, the counter is incremented by 1 until a new frame buffer is bound, thereby obtaining the number of drawing operations corresponding to the frame buffer. For example, after MyGlBindFramebuffer(2) is recognized, the counter of FB2 is incremented by 1 every time a draw call is recognized, until MyGlBindFramebuffer(3) is recognized (i.e., a new frame buffer FB3 is bound); at this time, the number recorded by the counter of FB2 is the number of drawing operations corresponding to frame buffer FB2.
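A hedged sketch of this counting logic; the Command representation below is an assumption made for illustration, not the patent's actual data structure:

#include <map>
#include <vector>

struct Command {
    bool isBind;  // true for a MyGlBindFramebuffer(fbId) command
    int  fbId;    // bound frame buffer id (valid when isBind is true)
    bool isDraw;  // true for a draw call (MyGlDrawElements / ...Instanced)
};

// Count draw calls per frame buffer (skipping the UI frame buffer FB0) and
// pick the busiest one as the scene frame buffer.
int FindSceneFrameBuffer(const std::vector<Command>& frame) {
    std::map<int, int> drawCalls;  // frame buffer id -> draw-call count
    int current = -1;
    for (const Command& c : frame) {
        if (c.isBind) current = c.fbId;                          // a new frame buffer is bound
        else if (c.isDraw && current > 0) ++drawCalls[current];  // count the draw call
    }
    int sceneFb = -1, best = -1;
    for (const auto& [fb, n] : drawCalls)
        if (n > best) { best = n; sceneFb = fb; }
    return sceneFb;  // the frame buffer holding the most draw calls
}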
After the recognition module recognizes the scene drawing command, it determines whether the memory 331 has a motion command stream, if the current memory 331 has no motion command stream, the scene drawing command may be transferred to the detection module 308, and if the current memory 331 has a motion command stream, the scene drawing command may be transferred to the matching module 309.
In a specific example, after the recognition module recognizes the scene drawing command, it reads the motion command stream from the memory; if the motion command stream is successfully read, it determines that a motion command stream exists in the memory, and if the motion command stream cannot be read, it determines that no motion command stream exists in the memory.
The motion command stream refers to the drawing command sequence that repeats the most times in the multi-frame scene drawing commands in which the game object is in a motion state, or the index array corresponding to that drawing command sequence. A drawing command sequence refers to a set of several consecutive drawing commands among the scene drawing commands, and the array formed by the indexes corresponding to each drawing command in the sequence is the index array corresponding to the drawing command sequence.
For example, one frame of scene drawing commands may include drawing command 1 to drawing command 100, wherein four drawing commands of drawing command 1 to drawing command 4 may be regarded as one drawing command sequence.
Each time the detection module 308 obtains a frame of scene drawing command, it calls the scene drawing command back to the graphics library through the callback module, obtains during the callback the timestamp T corresponding to the frame of scene drawing command and the distance S between the game object and the image center coordinates in that frame, and stores the timestamp T and distance S of the frame in the memory.
Therefore, after the detection module 308 obtains the scene drawing command of the Nth frame, it determines whether the distance S and timestamp T of the (N-1)th frame exist in the memory. If they do, the detection module reads the distance S and timestamp T of the (N-1)th frame from the memory and then calculates the speed of the game object in the scene drawing command of the Nth frame by combining them with the distance S and timestamp T of the Nth frame. The (N-1)th frame is the frame immediately before the Nth frame.
The speed is calculated by the following steps:
subtract the distance S of the (N-1)th frame from the distance S of the Nth frame to obtain the distance difference, subtract the timestamp T of the (N-1)th frame from the timestamp T of the Nth frame to obtain the time difference, and divide the distance difference by the time difference to obtain the speed of the Nth frame.
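In code form (with assumed variable names), the computation is simply:

// Hedged sketch of the speed computation described above.
double distanceDelta = distanceN - distanceNminus1;    // S(N) - S(N-1)
double timeDelta     = timestampN - timestampNminus1;  // T(N) - T(N-1)
double speed         = distanceDelta / timeDelta;      // speed for frame N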
If the calculated speed for the scene drawing command of the Nth frame is greater than the preset speed threshold, the detection module determines that the game object in the scene drawing command of the Nth frame is in a motion state; if it is not greater than the preset speed threshold, the detection module determines that the game object is in a non-motion state. The speed threshold may be set to 5 meters per second, for example. The speed and state corresponding to a frame of scene drawing command are stored in the memory 331 by the detection module.
The timestamp of the scene drawing command of any frame may be the system time of the electronic device when the scene image starts to be drawn according to the scene drawing command, or the system time of the electronic device when the scene image is drawn according to the scene drawing command and displayed.
The system time may be obtained by calling the following functions:
gettimeofday(mTimeMyStart, nullptr);
The gettimeofday function stores the absolute system time in mTimeMyStart when called; mTimeMyStart is a timeval structure of the form:

struct timeval {
    time_t      tv_sec;   // seconds
    suseconds_t tv_usec;  // microseconds
};
After gettimeofday(mTimeMyStart, nullptr) is called, mTimeMyStart.tv_sec is the system time (in seconds) at the moment gettimeofday was called.
Therefore, for a scene drawing command of a certain frame, gettimeofday(mTimeMyStart, nullptr) may be called when drawing of the scene image according to that scene drawing command starts, and the value of mTimeMyStart.tv_sec is then determined as the timestamp T of that frame's scene drawing command.
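A hedged helper wrapping this call; returning sub-second precision is an assumption here, since the text above uses whole seconds:

#include <sys/time.h>

// Hedged sketch: current system time in seconds, usable as the timestamp T.
double CurrentTimestampSeconds() {
    timeval now{};
    gettimeofday(&now, nullptr);
    return now.tv_sec + now.tv_usec / 1e6;  // seconds plus microseconds
}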
Alternatively, in a first-person shooter (FPS) game, the distance between the game object and the image center coordinates in the scene image may be considered equal to the depth value of the center coordinates, and the depth value at the center coordinates x, y of a frame of scene image may be obtained with the following command:
glReadPixels(x,y,width,height,GL_DEPTH_COMPONENT,GL_FLOAT,&depth);
glReadPixels is a graphics library function for reading the pixels, and their information, within a specified range. The specified range is a rectangle: width and height are the length and width of the rectangular range, and x, y are the coordinates of the starting point of the rectangular range, for example the coordinates of the top-left or bottom-left vertex of the rectangle. By setting both width and height to 1, glReadPixels can read the pixel, and its information, at coordinates x, y.
GL_DEPTH_COMPONENT is used to specify that glReadPixels reads the depth values of the pixels in the specified range; in the above example, where width and height are both set to 1, the depth value of the pixel at coordinates x, y is read.
In a 3D game, a scene image is drawn by projecting objects located in a three-dimensional coordinate system onto a two-dimensional plane from a preset camera position, so as to obtain a scene image formed by the projections of those objects. Pixels in the scene image therefore correspond one-to-one with specific points on the objects in the three-dimensional coordinate system, and the depth value of each pixel is the distance between the point on the object corresponding to that pixel in the three-dimensional coordinate system and the camera position.
GL_FLOAT specifies that glReadPixels convert the read depth value to a floating-point value, and &depth specifies that glReadPixels store the read depth value in the variable depth. Therefore, after the call is executed, the value of the variable depth is the depth value at the coordinates x, y.
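For illustration, the depth read described above may be sketched as follows. This follows desktop OpenGL behavior, where glReadPixels supports GL_DEPTH_COMPONENT directly; on OpenGL ES, depth readback may require an extension or an alternative path, so this is a sketch under that assumption:

#include <GL/gl.h>

// Reads the depth value of the single pixel at the image center (cx, cy);
// width and height are both 1, matching the usage described above.
float readCenterDepth(GLint cx, GLint cy) {
    GLfloat depth = 0.0f;
    glReadPixels(cx, cy, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
    return depth;  // taken as the distance S for the speed calculation
}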
As described above, after the detection module obtains, for a frame's scene drawing command, the distance S between the game object and the central coordinates of the image, together with the timestamp T, the distance S and the timestamp T corresponding to that frame's scene drawing command are stored in the memory.
The speed of the game object in a frame's scene drawing command may also be calculated from the distance S and the timestamp T of that frame together with those of one or more frames of scene drawing commands within a certain time before it.
For example, the detection module 308 may calculate the speed of the game object in the Nth frame's scene drawing command from the Nth frame's and the N-2th frame's scene drawing commands. The detection module takes the difference between the distance S of the Nth frame's scene drawing command and the distance S of the N-2th frame's scene drawing command to obtain a distance difference, and then divides the distance difference by the time difference between the N-2th frame scene image and the Nth frame scene image to obtain the speed of the game object in the Nth frame scene image.
The time difference between the N-2th frame scene image and the Nth frame scene image refers to the difference between the timestamp T of the Nth frame scene image and the timestamp T of the N-2th frame scene image.
When the detection module obtains a frame's scene drawing command, it copies the command to obtain two copies: one is sent to the callback module or the zoom-in module so as to draw the Nth frame scene image, and the other is stored in the memory 331. If the detection module detects that the game object in a certain frame's scene drawing command is in a non-motion state, the copy of that frame's scene drawing command stored in the memory 331 is deleted.
Optionally, when detecting that the game object in a frame's scene drawing command is in a motion state, the detection module copies that frame's scene drawing command and stores the copy in the memory; if it detects that the game object is in a non-motion state, the detection module neither copies nor stores the command, and directly sends it to the callback module.
When detecting that the game objects in several consecutive frames of scene drawing commands are all in a motion state, the detection module reads those frames' scene drawing commands from the memory and compares them, so as to identify a command sequence that appears in every one of them. It determines that command sequence as the motion command stream and then writes the motion command stream into the memory 331.
Alternatively, when an index array corresponding to each frame's scene drawing command exists in the memory, the detection module may compare the index arrays corresponding to those frames' scene drawing commands, determine the index array that appears in all of them as the motion command stream, and then write the motion command stream into the memory 331.
In a specific example, after the detection module detects that the game objects in the scene drawing commands of the N-4th to Nth frames are all in a motion state, it reads the scene drawing commands of the N-4th to Nth frames from the memory and aligns them. Suppose the alignment finds that each of these frames' scene drawing commands contains the command sequence (drawing command 1, drawing command 2, drawing command 3, drawing command 4); the detection module then determines (drawing command 1, drawing command 2, drawing command 3, drawing command 4) as the motion command stream and stores it in the memory.
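For illustration, the extraction of such a common command sequence may be sketched in C++ as follows, abstracting drawing commands as integer IDs; extractMotionStream and minLen are assumptions of this sketch, not names from the original implementation:

#include <algorithm>
#include <vector>

// Finds one contiguous command sequence (of at least minLen commands,
// minLen >= 1) that occurs in every frame's command list, to serve as the
// motion command stream; returns an empty vector if none exists.
std::vector<int> extractMotionStream(const std::vector<std::vector<int>>& frames,
                                     size_t minLen) {
    if (frames.empty()) return {};
    const std::vector<int>& base = frames.front();
    for (size_t len = base.size(); len >= minLen; --len) {       // longest first
        for (size_t start = 0; start + len <= base.size(); ++start) {
            auto first = base.begin() + start, last = first + len;
            bool inAll = true;
            for (size_t f = 1; f < frames.size() && inAll; ++f) {
                inAll = std::search(frames[f].begin(), frames[f].end(),
                                    first, last) != frames[f].end();
            }
            if (inAll) return std::vector<int>(first, last);
        }
    }
    return {};
}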
After obtaining the scene drawing command of the Nth frame, the matching module 309 may read the motion command stream from the memory 331, and then match the drawing commands included in the motion command stream, in sequence, against the scene drawing command of the Nth frame.
If the two match successfully, the matching module 309 determines that the game object in the Nth frame's scene drawing command is in a motion state; in this case, the matching module transmits the scene drawing command to the zoom-in module to have it generate a first-type scene image. If the match fails, the matching module 309 determines that the game object in the Nth frame's scene drawing command is in a non-motion state; in this case, the matching module calls the scene drawing command back to the graphics library through the callback module, and a second-type scene image is drawn by the graphics library.
The matching module matches the motion command stream and the Nth frame scene drawing command in the following manner:
the matching module judges whether the motion command stream is present within the Nth frame's scene drawing command. If it is present, the matching module determines that the motion command stream and the Nth frame's scene drawing command match successfully; if it is not present, the matching module determines that the match fails.
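For illustration, this containment test may be sketched with std::search, again abstracting commands as integer IDs (matchMotionStream is a name assumed for the sketch):

#include <algorithm>
#include <vector>

// Returns true if the motion command stream appears as a contiguous
// subsequence of the Nth frame's scene drawing command.
bool matchMotionStream(const std::vector<int>& frameCmds,
                       const std::vector<int>& motionStream) {
    return !motionStream.empty() &&
           std::search(frameCmds.begin(), frameCmds.end(),
                       motionStream.begin(), motionStream.end())
               != frameCmds.end();
}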
The callback module may call the intercepted drawing commands back to the graphics library as follows:
the callback module may back up the replaced graphics library pointers in the aforementioned list of function pointers (a graphics library pointer is a function pointer pointing to an implementation function in the graphics library). When the callback module obtains a drawing command, the callback is realized by continuing to call, through the backed-up graphics library pointer, the implementation function in the graphics library that the pointer points to. In this embodiment, after receiving a UI drawing command sent by the identification module, the callback module may call the UI drawing command back to the graphics library through the backed-up graphics library pointer, so as to draw the UI image with the implementation function in the graphics library. Similarly, after receiving an image-sending command sent by the identification module, the callback module may call that command back to the graphics library through the backed-up pointer, so as to send for display the image drawn by the graphics library; and after receiving a scene drawing command sent by the detection module or the matching module, the callback module may call that command back to the graphics library through the backed-up pointer, so as to draw a second-type scene image with the implementation function in the graphics library.
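For illustration, the pointer-backup callback described above may be sketched as follows; the function names and signature are hypothetical stand-ins for real graphics library entry points:

// A function pointer type standing in for a graphics library drawing entry.
typedef void (*DrawArraysFn)(unsigned mode, int first, int count);

// Backed-up graphics library pointer, saved before the hook replaced it.
static DrawArraysFn g_realDrawArrays = nullptr;

// Installed in the function pointer list in place of the library pointer;
// assumes g_realDrawArrays was backed up before installation.
void hookedDrawArrays(unsigned mode, int first, int count) {
    // ... interception logic (identification, detection, matching) ...
    g_realDrawArrays(mode, first, count);  // call back the implementation function
}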
After the zoom-in module generates a first-type scene image, the image may be written into the memory queue through the sending and displaying interface; finally, the graphics library synthesizes the first-type scene image and the corresponding UI image into one frame of game image, which is displayed on the display screen of the electronic device.
When generating the first-type scene image, the zoom-in module may call some of the implementation functions in the graphics library as needed.
The zoom-in module may specifically generate the first type of scene image as follows.
The zoom-in module may first read, from the Nth frame's scene drawing command, the object information for drawing the Nth frame scene image, and then create a new frame buffer. The new frame buffer may be denoted as temporary frame buffer FB4, and the frame buffer for storing the Nth frame's scene drawing command is denoted as FB2; both FB2 and FB4 are stored in the memory 331. The ratio of the size of FB4 to the size of FB2 is a preset ratio Z. For example, Z may be set to 90%, in which case the size of FB4 is 90% of the size of FB2. The object information, which may include the aforementioned vertex information and texture information, may be used to draw the corresponding objects by invoking specific implementation functions.
Optionally, the new frame buffer FB4 may be created in advance, in which case the zoom-in module reuses FB4 each time a first-type scene image is generated; or FB4 may be created by the zoom-in module each time a first-type scene image is generated and released after generation finishes, to be created again the next time a first-type scene image needs to be generated.
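For illustration, creating the temporary frame buffer FB4 at the preset ratio Z may be sketched with OpenGL ES 3.0 calls as follows; the function name and the omission of error handling are simplifying assumptions of this sketch:

#include <GLES3/gl3.h>

// Creates a frame buffer whose color attachment is z times the size of FB2.
GLuint createScaledFramebuffer(int fb2Width, int fb2Height, float z,
                               GLuint* outColorTex) {
    int w = static_cast<int>(fb2Width * z);   // e.g. 90% of FB2's width
    int h = static_cast<int>(fb2Height * z);  // e.g. 90% of FB2's height
    GLuint fb4 = 0, tex = 0;
    glGenFramebuffers(1, &fb4);
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glBindFramebuffer(GL_FRAMEBUFFER, fb4);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    *outColorTex = tex;
    return fb4;
}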
The zoom-in module then obtains a view of the first range, which may be created in advance or created in real time when it is used.
The size of the second range may be consistent with the size of the screen of the electronic device, and the size of the first range may be the product of the size of the second range and the preset ratio Z, for example, the ratio Z is 90%, and the size of the first range is 90% of the size of the second range.
After the view of the first range is obtained, the zoom-in module may call the GPU to draw the first-type scene image within the view of the first range using the original object information.
The drawing process of the first-type scene image is as follows:
the zoom-in module calls the GPU based on FB4 and the original object information, and the GPU, once called, writes the object information into FB4. During writing, since the size of FB4 is smaller than that of FB2, the GPU needs to crop the original object information according to the view of the first range and then write the cropped object information into FB4. The process of writing the object information is equivalent to the process of drawing the first-type scene image, and after writing is completed, the set of object information in FB4 is equivalent to one frame of first-type scene image.
The original object information is the object information read from the Nth frame's scene drawing command for drawing the Nth frame scene image.
The GPU crops the original object information with the view of the first range as follows: the GPU identifies the object information located within the view of the first range and retains it so as to write it into FB4, and it identifies the object information located outside the view of the first range and discards it, that is, the object information located outside the view of the first range is not written into FB4.
After the writing of FB4 is completed, the zoom-in module sends FB4 to the sending and displaying interface for display. The drawn first-type scene image is smaller than the display screen of the electronic device, and displaying it on the screen directly would leave blank margins around the screen. Therefore, before display, the drawn first-type scene image must be stretched to the same size as the display screen. Specifically, before display, the first-type scene image stored in the frame buffer FB4 is restored to the frame buffer FB2 originally used to store the second-type scene image. Since the size of FB2 is larger than that of FB4, when the scene image in FB4 is restored to FB2, the sending and displaying interface interpolates the first-type scene image stored in FB4 and enlarges it to the size corresponding to FB2, that is, to the size of the display screen of the electronic device. Finally, the enlarged first-type scene image, now at the size of FB2, is displayed on the display screen, thereby presenting a frame of first-type scene image consistent with the screen size.
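For illustration, the interpolated enlargement from FB4 back to FB2 may be sketched as a linear-filtered blit (OpenGL ES 3.0); whether the real implementation blits or instead samples the texture in a full-screen pass is not specified in the text, so this is one possible realization:

#include <GLES3/gl3.h>

// Enlarges the first-type scene image in FB4 to the full size of FB2.
void upscaleToDisplay(GLuint fb4, int w4, int h4,
                      GLuint fb2, int w2, int h2) {
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fb4);   // source: first-range image
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fb2);   // destination: screen-sized
    glBlitFramebuffer(0, 0, w4, h4, 0, 0, w2, h2,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);  // linear interpolation
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}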
Referring to fig. 4a, fig. 4a is a schematic diagram of the object information obtained by the zoom-in module, where the dashed box indicates the view of the first range. For each object corresponding to the object information, the positional relationship between the object and the view of the first range can be determined from the object information; the possible relationships are that the object lies entirely within the view, entirely outside the view, or partly within the view. In fig. 4a, object 401 lies outside the view, parts of objects 402 and 403 lie within the view, and object 404 lies entirely within the view.
Based on this positional relationship, the GPU, when called by the zoom-in module to draw the first-type scene image, crops the objects to be drawn. Referring to fig. 4b: if the object corresponding to some object information lies entirely outside the view (e.g., object 401), the object is not drawn; if part of the object lies within the view and part outside (e.g., objects 402 and 403), only the part within the view is drawn; and if the object lies entirely within the view (e.g., object 404), the object is drawn completely according to its object information. Finally, as shown in fig. 4b, the parts of objects 402 and 403 within the first range, together with object 404, form the game scene of the first range, and fig. 4b corresponds to one drawn frame of first-type scene image. The dashed box in fig. 4b represents the size of the display screen of the electronic device.
Finally, the first-type scene image shown in fig. 4b is stretched by the sending and displaying interface to obtain the stretched first-type scene image shown in fig. 4c, whose size is consistent with the display screen of the electronic device; the sending and displaying interface then displays the stretched first-type scene image on the display screen of the electronic device.
To avoid obvious jagged edges on the contours of objects in the enlarged image, the multisample anti-aliasing (MSAA) function of the electronic device may be enabled before the first-type scene image is drawn, so as to optimize the drawing process and make the contour edges of objects in the drawn first-type scene image smoother.
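For illustration, on desktop OpenGL multisampling can be toggled before drawing; on OpenGL ES the sample count is generally fixed when the surface or render buffer is created, so the following minimal sketch applies to the desktop case only:

#include <GL/gl.h>

// Enable MSAA before drawing the first-type scene image (desktop OpenGL).
static void enableMsaa() {
    glEnable(GL_MULTISAMPLE);  // smooth the contour edges of drawn objects
}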
The method for generating the scene image has the following beneficial effects:
when a game object is in a motion state, a large number of new objects may rapidly appear in the game scene around it. In this case, drawing each frame of game image, and in particular each frame of scene image, requires drawing the projections of these new objects into the scene image. The amount of computation required to draw one frame of scene image in the motion state is therefore relatively higher than that required in the non-motion state, and devices such as the CPU, GPU and memory of the electronic device operate at a higher load when the game object is in the motion state than when it is in the non-motion state.
In this embodiment, by narrowing the range of the scene presented in the scene image in the motion state (i.e., narrowing from the second range to the first range), the number of objects to be drawn for one frame of scene image in the motion state is reduced, thereby reducing the amount of computation required to draw a frame of scene image. For example, drawing fig. 4a, which presents the second range of the scene, requires drawing the complete objects 401 to 404; in contrast, drawing fig. 4b, which presents the first range, only requires drawing parts of objects 402 and 403 (the parts lying within the first range) and the complete object 404. The amount of computation required to draw fig. 4b is evidently less than that required to draw fig. 4a.
As the amount of computation required to draw a scene image decreases, the load on each hardware component of the electronic device in the motion state decreases correspondingly, thereby avoiding problems such as device stuttering and excessive power consumption caused by the devices of the electronic device continuously operating under high load.
Furthermore, when the game object is in a non-motion state, the method provided by the embodiment can restore the scene image to the second-type scene image with higher picture quality, thereby bringing better user experience to the user.
The method for generating a scene image according to the embodiment of the present application is further described below with reference to fig. 5, where fig. 5 is an exemplary diagram illustrating a specific operation of each module of the system frame layer in fig. 3.
After the game starts, the game application outputs a drawing command stream for drawing the 1st frame game picture, and the interception module intercepts the drawing command stream and transmits it to the identification module.
At this time, the identification module has not yet determined the frame buffer for storing scene drawing commands, so it transmits the 1st frame's drawing command stream to the callback module, which calls back the implementation functions corresponding to the drawing commands in the graphics library to draw the scene image and the non-scene image respectively (as described above, the UI image is a non-scene image). Meanwhile, during the process of calling the 1st frame's drawing command stream back to the graphics library, the identification module identifies the frame buffer to which the scene drawing commands belong.
The graphics library draws the scene image and the non-scene image using the 1st frame's drawing command stream and then sends them to the sending and displaying interface, which synthesizes them into the 1st frame game image and displays it on the screen. For convenience, the scene image constituting the 1st frame game image is referred to as the 1st frame scene image, and the non-scene image constituting it as the 1st frame non-scene image; the same convention is used below. After the 1st frame scene image is drawn, the detection module may store in the memory the distance S(1) between the game object in the 1st frame scene image and the central coordinates of the image, together with the timestamp T(1), where T(1) is the system time at which drawing of the 1st frame scene image started. The 1st frame scene image is a second-type scene image.
After outputting the drawing command stream of the 1st frame game picture, the game application continues to output the drawing command stream of the 2nd frame. Note that when the game application outputs the 2nd frame's drawing command stream, the 1st frame game picture may or may not yet have been generated and displayed.
When the drawing command stream of the 2 nd frame is intercepted by the interception module and transmitted to the identification module, the identification module identifies the scene drawing command in the drawing command stream of the 2 nd frame according to the frame buffer to which the scene drawing command belongs.
Then, the identification module judges that there is no motion command stream in the memory, transmits the 2nd frame's non-scene drawing commands to the callback module, and transmits the 2nd frame's scene drawing command to the detection module.
After the detection module obtains the 2nd frame's scene drawing command, it transmits the command to the callback module. The graphics library therefore draws the 2nd frame non-scene image using the 2nd frame's non-scene drawing commands and the 2nd frame scene image using the 2nd frame's scene drawing command, and the 2nd frame game image is synthesized through the sending and displaying interface and displayed on the screen. The 2nd frame scene image is a second-type scene image.
In the process of calling back the 2nd frame's scene drawing command, the detection module obtains the distance S(2) and the timestamp T(2) of the 2nd frame's scene drawing command, stores them in the memory, and calculates the speed of the game object using the previous frame's distance and timestamp in the memory, namely S(1) and T(1). The 2nd frame's scene drawing command, the distance S(2), the timestamp T(2) and the speed of the game object may all be stored in the memory.
After that, the interception module intercepts each frame's drawing command stream output by the game application in real time, and the scene drawing command in the stream is sent to the detection module through the identification module, so as to trigger the detection module to obtain and store the distance, the timestamp and the speed of the game object for each frame's scene drawing command.
After obtaining the Nth frame's scene drawing command, the detection module calculates that the speed of the game object in the Nth frame's scene drawing command is greater than the speed threshold, and that the speeds of the game objects in the 5 consecutive frames of scene drawing commands ending with the Nth frame are all greater than the speed threshold. The detection module therefore reads these 5 frames of scene drawing commands from the memory, and extracts and stores the motion command stream based on them. The motion command stream is stored in the memory.
When the drawing command stream of the (n+1) th frame intercepted by the interception module reaches the identification module, the identification module judges that the motion command stream exists in the memory, so that the scene drawing command of the (n+1) th frame is directly sent to the matching module, and the matching module matches the scene drawing command of the (n+1) th frame with the motion command stream after obtaining the scene drawing command of the (n+1) th frame.
The matching module judges that the game object in the scene drawing command of the (n+1) th frame is in a motion state, and then sends the scene drawing command of the (n+1) th frame to the zooming-in module.
After the zoom-in module obtains the (N+1)th frame's scene drawing command, it draws a first-type scene image using that command. The image drawn by the zoom-in module based on the (N+1)th frame's scene drawing command is the (N+1)th frame scene image.
The zoom-in module transmits the drawn n+1st frame scene image to the display-transmitting interface, then the display-transmitting interface merges the n+1st frame scene image and the n+1st frame non-scene image to obtain an n+1st frame game image, and then the n+1st frame game image is displayed on a screen.
Then, the scene drawing commands of the (N+2)th to (N+K-1)th frames are each matched against the motion command stream in the matching module and match successfully, so they are sent to the zoom-in module, and the zoom-in module draws the corresponding first-type scene images according to the first range.
After the (N+K)th frame's scene drawing command reaches the matching module, the matching module matches it against the motion command stream, and the match fails. The matching module therefore sends the (N+K)th frame's scene drawing command to the callback module, which calls it back to the graphics library, and the (N+K)th frame scene image is drawn by the graphics library; the drawn (N+K)th frame scene image belongs to the second type of scene image. The (N+K)th frame scene image is merged with the (N+K)th frame non-scene image to obtain the (N+K)th frame game image, which is displayed on the screen.
It can be seen that, in the interaction flow shown in fig. 5, after the game starts the electronic device displays the 1st to Nth frames of game images, which present the second range of the scene. The (N+1)th to (N+K-1)th frames of game images present the first range of the scene, because the game object is in a motion state. From the (N+K)th frame of game image onward, the game object has returned to a non-motion state, so the game images present the second range of the scene again.
For each frame's drawing command stream output by the game application after the (N+K)th frame, the scene drawing command is identified by the identification module and transmitted to the matching module, and the matching module sends that frame's scene drawing command to the zoom-in module or the callback module according to the matching result. In this way, a first-type scene picture is drawn and displayed when the game object is in a motion state, and a second-type scene picture is drawn and displayed when it is in a non-motion state, until the game ends.
In an optional implementation, when no motion command stream is stored in the memory, the detection module may also, each time it obtains a scene drawing command for drawing a frame of scene image, judge whether the speed of the game object in the previously drawn frame's scene image is greater than the speed threshold. If so, it sends the scene drawing command to the zoom-in module so that the zoom-in module draws that frame's scene image according to the first range; if not, it sends the scene drawing command to the callback module.
In another alternative implementation, the matching module may be omitted, and each frame's scene drawing command is sent to the detection module. In this implementation, after each frame of scene image is drawn by the graphics library or the zoom-in module, the detection module calculates the speed of the game object in that frame's scene image according to the method above; when the next frame's scene drawing command is transferred to the detection module, the detection module sends it to the callback module or the zoom-in module according to whether the speed of the game object in the most recently drawn frame's scene image is greater than the speed threshold.
Optionally, the detection module may be further configured to detect a stop-motion command sequence and a start-motion command sequence. Specifically, for a certain frame's scene drawing command, after the motion command stream has been identified, the sequence of commands (or command indexes) from the first command appearing after the end of the motion command stream up to the next draw call may be determined as the stop-motion command sequence.
In addition, for a certain frame's scene drawing command, after the motion command stream has been identified, the sequence of commands (or command indexes) from the command immediately preceding the motion command stream back to the previous draw call may be determined as the start-motion command sequence.
For example, for a certain frame's scene drawing command, if the detection module determines that commands k to k+n constitute the motion command stream, the detection module may read from command k+n+1 onward until the first draw call is read, and determine the commands so read as the stop-motion command sequence. In addition, the detection module may read backward from command k-1 until the first draw call is read, and determine the commands so read as the start-motion command sequence.
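For illustration, the extraction of the stop-motion and start-motion command sequences around a motion command stream occupying commands [k, k+n] may be sketched as follows; isDrawCall is a placeholder predicate assumed for the sketch:

#include <vector>

// Placeholder predicate; a real implementation would inspect the command type.
static bool isDrawCall(int cmd) { return cmd < 0; }

// Commands from k+n+1 up to and including the next draw call.
std::vector<int> stopSequence(const std::vector<int>& cmds, size_t kPlusN) {
    std::vector<int> seq;
    for (size_t i = kPlusN + 1; i < cmds.size(); ++i) {
        seq.push_back(cmds[i]);
        if (isDrawCall(cmds[i])) break;
    }
    return seq;
}

// Commands read backward from k-1 down to the previous draw call.
std::vector<int> startSequence(const std::vector<int>& cmds, size_t k) {
    std::vector<int> seq;
    for (size_t i = k; i-- > 0;) {
        seq.insert(seq.begin(), cmds[i]);
        if (isDrawCall(cmds[i])) break;
    }
    return seq;
}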
The motion command stream in the memory 331 may be obtained in any of the following ways.
In the first way, according to the method of the foregoing embodiments, the detection module obtains the motion command stream by comparing multiple frames of scene drawing commands in which the game object is in a motion state, and writes the motion command stream into the memory 331.
In the second way, after the game starts to run, the CPU reads the motion command stream corresponding to the currently running game from the storage into the memory. The storage may be a disk built into the electronic device, or an external storage device accessed through a USB interface (or other interface) of the electronic device, such as a USB flash drive or a secure digital memory card (SD card).
In the third way, after the game starts to run, the electronic device downloads the motion command stream from a designated server through the network and writes it into the memory 331.
The motion command stream stored in the storage in the second way may be obtained by the manufacturer of the electronic device by testing the game in advance in the first way, and then issued to the corresponding electronic devices by way of a system update or software update. In addition, if the game has been run on the electronic device before the present run, the CPU may transfer to the memory the motion command stream that was obtained by the first way during the first several runs of the game.
The motion command stream stored by the server in the third way may likewise be obtained by the manufacturer of the electronic device through advance testing and stored in the server. In addition, electronic devices that have already run the game may upload the motion command stream obtained by the first way to the server, for other electronic devices to download.
When the game object enters the motion state from the non-motion state, the range of the scene presented by the game image suddenly shrinks from the second range to the first range, which may be perceived by the user and give a poor visual experience. To address this problem, in some optional embodiments of the present application, the method for generating a scene image may gradually adjust, in small steps, the range presented by the generated scene images through the following smooth reduction method, until the first-type scene image is finally generated.
Specifically, when the zoom-in module obtains the Kth frame's scene drawing command, it judges whether the range of the scene presented by the already-drawn previous frame (i.e., the K-1th frame) scene image is larger than the first range; if not, it draws according to the first range using the Kth frame's scene drawing command to obtain a first-type scene image. K is any positive integer.
If the range of the scene presented by the K-1th frame scene image is larger than the first range, the range presented by the K-1th frame is adjusted downward by a small amplitude. If the adjusted range is smaller than (or equal to) the first range, the range to be presented by the Kth frame is set to the first range; if the adjusted range is still larger than the first range, the range to be presented by the Kth frame is set to the adjusted-down range. The Kth frame scene image is then drawn using the Kth frame's scene drawing command according to the set range, and in the latter case the range of the scene presented by the drawn Kth frame scene image is larger than the first range and smaller than the second range.
The smaller amplitude may be dynamically adjusted or may be a fixed step size (step), for example, step may be set to 1% of the second range. The step size may be a parameter preset in the electronic device or may be adjusted according to a user setting.
Referring to fig. 6, fig. 6 is a schematic diagram of the smooth reduction method provided in this embodiment. Taking step as 1% of the second range and the preset ratio Z as 90%, that is, the first range being 90% of the second range, as an example, the smooth reduction method may be executed as follows:
assume that the rendered K-th frame of scene image is a second type of scene image (shown as 601) in which the game object is in a non-moving state.
The matching module (or the detection module) obtains a scene drawing command of the (K+1) th frame, and judges that a game object in the scene image of the (K+1) th frame is in a motion state, and then the scene drawing command of the (K+1) th frame is transmitted to the zooming module.
The zoom-in module reads the range presented by the Kth frame scene image, namely the second range. Since the second range is larger than the first range, the zoom-in module adjusts downward by 1% on the basis of the second range to obtain 99% of the second range. The adjusted range (99% of the second range) is still larger than the first range, so the zoom-in module draws the (K+1)th frame scene image according to 99% of the second range (shown as 602).
Then, the zoom-in module obtains the (K+2)th frame's scene drawing command and again adjusts downward by 1% on the basis of the range presented by the (K+1)th frame scene image, obtaining 98% of the second range, which is still larger than the first range; the zoom-in module then draws the (K+2)th frame scene image according to 98% of the second range (shown as 603).
Similarly, each frame of scene image presents a range that is 1% of the second range smaller than that of the previous frame, and finally the zoom-in module draws the (K+10)th frame scene image according to 90% of the second range, that is, it draws a scene image presenting the first range of the scene (shown as 604).
Each frame of scene image after the (K+10)th frame scene image is drawn according to the first range, since the range presented by its previous frame's scene image is equal to (i.e., not greater than) the first range, until the game object enters the non-motion state.
In the above process, the drawing of the scene images presenting 99% of the second range, 98% of the second range, and so on may refer to the method of generating the first-type scene image; it is only necessary to adjust the size of the new frame buffer and the range of the view according to the corresponding proportion. For example, to draw a scene image presenting 97% of the second range, the size of the newly created frame buffer may be set to 97% of the frame buffer originally storing the second-type scene image, and the view set to correspond to 97% of the second range.
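For illustration, the per-frame range adjustment of the smooth reduction (and, reversed, the smooth enlargement described further below) may be sketched as follows, with ranges expressed as fractions of the second range; the function names and default values are assumptions of this sketch:

#include <algorithm>

// One smooth-reduction step: shrink by `step` but never below the first range.
double nextShrunkRange(double prevRange, double firstRange = 0.90,
                       double step = 0.01) {
    return std::max(prevRange - step, firstRange);
}

// The symmetric enlargement step: grow by `step` but never above the
// second range (1.0), used when the game object stops moving.
double nextEnlargedRange(double prevRange, double step = 0.01) {
    return std::min(prevRange + step, 1.0);
}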
This method is equivalent to, when the game object enters the motion state from the non-motion state, sequentially generating multiple frames of scene images whose presented range gradually shrinks from the second range to the first range, and then continuously generating first-type scene images once the range has shrunk to the first range.
In some variations of the above method, the scene images generated in sequence before the first-type scene image may also shrink their presented range not frame by frame but every few frames. For example, in the above embodiment, after the Kth frame scene image is drawn, several consecutive frames of scene images presenting 99% of the second range are drawn, then several consecutive frames presenting 98% of the second range, and so on, until finally the required first-type scene image is drawn.
Fig. 6 is merely an example of a manner of smoothly narrowing the range of a game scene in the present application, in a practical application scenario, the game scene displayed in each frame of scene image may be different, for example, in the process of gradually changing from 601 to 604 in the scene image in fig. 6, the position of the trolley in the game scene may be gradually changed, so as to represent a scene that the trolley runs along a road.
By the method, the range of the scene presented in the game image can be smoothly reduced on the premise that the user does not feel, and better use experience is brought to the user.
Alternatively, the above smooth reduction method may be applied in reverse to the case where the presented range changes from the first range back to the second range (i.e., the game object switches from the motion state to the non-motion state).
Specifically, the detection module (and the matching module) obtains a certain frame's scene drawing command (assume the Mth frame) and judges that the game object in the Mth frame scene image is in a non-motion state. It may then judge whether the range presented by the drawn previous frame (i.e., the M-1th frame) scene image is smaller than the second range; if not, it sends the Mth frame's scene drawing command to the callback module, so that the Mth frame scene image presenting the second range of the scene is drawn directly through the graphics library.
If the range of the scene presented by the M-1th frame scene image is smaller than the second range, the range presented by the M-1th frame is adjusted upward by a small amplitude. If the adjusted range is larger than (or equal to) the second range, the Mth frame's scene drawing command is sent to the callback module, so that the Mth frame scene image presenting the second range is drawn directly through the graphics library. If the adjusted range is smaller than the second range, the Mth frame's scene drawing command and a range setting command carrying the adjusted range are sent to the zoom-in module, triggering the zoom-in module to draw the Mth frame scene image according to the adjusted range using the Mth frame's scene drawing command; the range of the scene presented by the drawn Mth frame scene image is the adjusted range. The amplitude of each upward adjustment may be adjusted dynamically, i.e., differ each time, or be set to a fixed step size (step); for example, step may be set to 1% of the second range. The step size may be a parameter preset in the electronic device or may be adjusted according to user settings.
This method is equivalent to, when the game object enters the non-motion state from the motion state, sequentially generating multiple frames of scene images whose presented range gradually expands from the first range to the second range, and then continuously generating second-type scene images once the range has expanded to the second range.
Similar to the smooth reduction method, by the method, the embodiment can smoothly enlarge the range of the scene presented in the game image and bring better use experience to the user when the game object changes from the motion state to the non-motion state on the premise that the user does not feel.
In other optional embodiments, in the method for generating a scene image provided by the present application, the motion state of the game object may be further divided into multiple motion states according to different speeds, for example, a first speed interval and a second speed interval may be divided, where a lower limit of the second speed interval is higher than an upper limit of the first speed interval, the first speed interval corresponds to the first motion state, and the second speed interval corresponds to the second motion state.
In some specific application scenarios, the game object may run in the game and use (ride or drive) the vehicle in the game scenario, and accordingly, the speed range of the game object during running may be taken as a first speed interval, the first motion state is a running state, the speed range of the game object during using the vehicle is taken as a second speed interval, and the second motion state is a state of using the vehicle.
According to the above-mentioned division of motion states, after obtaining a scene drawing command of a frame and judging that the game object in the frame is in motion state, the detection module (and matching module) can further distinguish whether the motion state of the game object is the first motion state or the second motion state.
In implementation, the detection module may distinguish the motion state by the speed of the game object in the previous frame or frames of the drawn scene image: if the speed falls within the first speed interval, the game object is determined to be in the first motion state, and if the speed falls within the second speed interval, the game object is determined to be in the second motion state.
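For illustration, the interval-based classification may be sketched as follows; the 5 m/s motion threshold follows the earlier example, while the 8 m/s boundary between the first and second speed intervals is a hypothetical value introduced only for this sketch:

enum class MotionState { NonMotion, FirstMotion, SecondMotion };

// Classifies the game object's state by the speed interval it falls into.
MotionState classify(double speed, double motionThreshold = 5.0,
                     double firstIntervalUpper = 8.0) {
    if (speed <= motionThreshold) return MotionState::NonMotion;
    if (speed <= firstIntervalUpper) return MotionState::FirstMotion;  // e.g. running
    return MotionState::SecondMotion;                                  // e.g. vehicle
}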
The matching module may match the obtained scene drawing command against the different motion command streams corresponding to the different motion states: if the obtained scene drawing command matches the motion command stream of the first motion state more closely, the game object is judged to be in the first motion state; if it matches the motion command stream of the second motion state more closely, the game object is judged to be in the second motion state.
The detection module may write a scene drawing command for drawing a frame of the scene image into the memory as a motion command stream of the first motion state when it is determined that a speed of a game object in the frame of the drawn scene image is located in the first speed interval.
Similarly, when the detection module determines that the speed of the game object in a frame of the drawn scene image is located in the second speed interval, the detection module can write a scene drawing command for drawing the frame of the scene image into the memory as a motion command stream of the second motion state.
After the detection module (and the matching module) judges that the game object is in the first motion state or the second motion state, the scene drawing command and the judgment result can be sent to the zooming-in module together.
If the game object is in the first motion state, the zoom-in module draws the scene image corresponding to the scene drawing command according to the first range, obtaining a first-type scene image, that is, a scene image whose presented range is the first range. If the game object is in the second motion state, the zoom-in module draws the scene image corresponding to the scene drawing command according to a third range, obtaining a third-type scene image, that is, a scene image whose presented range is the third range. The third range is smaller than the first range; for example, the first range may be 90% of the second range and the third range 80% of the second range.
The method for drawing the third type of scene image is consistent with the method for drawing the first type of scene image, and only needs to be correspondingly adjusted according to the third range when a new frame buffer is created and a view is set, and the specific process is not repeated.
When this method is applied to the electronic device, after the game starts: if the game object is in a non-motion state, the electronic device generates second-type scene images in real time according to the scene changes in the game and displays the game images composed of them. If the user controls the game object to start running, the device judges that the game object is in the first motion state and narrows the scene presented to the user to the first range, that is, it generates first-type scene images in real time and displays the game images composed of them. If the user controls the game object to use a vehicle, the device judges that the game object is in the second motion state and further narrows the scene presented to the user to the third range, that is, it generates third-type scene images in real time and displays the game images composed of them.
By implementing the method provided by the above embodiments, the following beneficial effects can be obtained:
the range of the scene presented is dynamically adjusted according to the different speeds of the game object, which better reconciles the tension between maintaining a high-quality scene image and reducing the load.
Specifically, the method for generating the scene image provided by the embodiment can gradually reduce the range of the scene presented by the generated scene image along with the increase of the moving speed of the game object, thereby reducing the calculation amount required for generating one frame of scene image, controlling the load of the electronic equipment to be always in an acceptable load range and avoiding the overload of the electronic equipment. In addition, the method provided by the embodiment can present a larger range of scenes when the game object is in a non-motion state or a motion state with lower speed, and present game images with higher quality to the user as much as possible while avoiding the overload of the electronic equipment.
Referring to fig. 7, fig. 7 is a flowchart of a method for generating a scene image according to an embodiment of the present application, where the method may include the following steps:
S701, detecting a state of the game object.
If the game object is in a motion state, step S702 is executed, and if the game object is in a non-motion state, step S703 is executed.
S702, if the game object is in a motion state, generating a first-type scene image.
S703, if the game object is in a non-motion state, generating a second-type scene image.
According to the interaction diagram shown in fig. 5, the method for generating a scene image provided in the embodiment shown in fig. 7 may run in real time after the game starts. Specifically, before each frame of game image is drawn, it may be determined whether the game object in that frame is in a motion state; if not, the frame's scene image is drawn according to the second range to obtain a second-type scene image, and if so, it is drawn according to the first range to obtain a first-type scene image.
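For illustration, the per-frame dispatch of steps S701 to S703 may be sketched as follows; all function names are stand-ins for the detection/matching module, the zoom-in module and the graphics library respectively:

// Placeholders standing in for the modules named above.
static bool detectMotion() { return false; }   // S701: state detection
static void drawFirstRange() {}                // S702: zoom-in module path
static void drawSecondRange() {}               // S703: graphics library path

// Runs once per frame of game image after the game starts.
void generateSceneImage() {
    if (detectMotion()) {
        drawFirstRange();    // motion state: present the smaller first range
    } else {
        drawSecondRange();   // non-motion state: present the second range
    }
}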
Step S701 in the above embodiment may be implemented by the detection module or the matching module of the application framework layer shown in fig. 3, step S702 may be implemented by the zoom-in module shown in fig. 3, step S703 may be implemented by the graphic library shown in fig. 3, and the detailed description of each step is omitted herein.
The embodiment of the application also provides a computer storage medium for storing a computer program, and the computer program is specifically used for realizing the method for generating the scene image provided by any embodiment of the application when being executed.
Embodiments of the present application also provide a computer program product comprising a plurality of executable computer commands, which when executed, are particularly adapted to carry out the method for generating an image of a scene provided in any of the embodiments of the present application.

Claims (19)

1. A method of generating an image of a scene, comprising:
detecting a state of a game object, the game object being an object for representing a character controlled by a user in a game;
after detecting that the game object is switched from a non-motion state to a motion state, generating a first type of scene image, wherein the first type of scene image presents a game scene in a first range;
After detecting that the game object is switched from a motion state to a non-motion state, generating a second type of scene image, wherein the second type of scene image presents a game scene in a second range; the first range is smaller than the second range; the amount of computation required to generate the first type of scene image is less than the amount of computation required to generate the second type of scene image.
2. The method of claim 1, wherein prior to generating the first type of scene image, further comprising:
sequentially generating multi-frame scene images; the range of the game scene presented by the multi-frame scene image is sequentially narrowed in the order of being generated between the first range and the second range.
3. The method of claim 1 or 2, wherein the generating a first type of scene image comprises:
obtaining object information for drawing a second type scene image;
and drawing to obtain a first type scene image by using the object information used for drawing the second type scene image in the first range.
4. The method according to claim 1 or 2, wherein detecting the status of the game object comprises:
obtaining a current scene drawing command; the scene drawing command is a drawing command for drawing a scene image;
Matching the current scene drawing command with a motion command stream obtained in advance; the motion command stream is a scene drawing command obtained when the game object is in a motion state;
if the current scene drawing command is successfully matched with the motion command stream, detecting that the game object is in a motion state;
and if the matching of the current scene drawing command and the motion command stream fails, detecting that the game object is in a non-motion state.
5. The method according to claim 1 or 2, wherein detecting the status of the game object comprises:
obtaining a multi-frame scene drawing command within a preset time period;
determining the speed of the game object in the preset time period according to the obtained distance between the game object in the scene drawing command of each frame and the central coordinate of the scene image;
if the speed of the game object in the preset time period is greater than a speed threshold value, detecting that the game object is in a motion state;
and if the speed of the game object in the preset time period is smaller than or equal to the speed threshold value, detecting that the game object is in a non-motion state.
6. The method according to claim 1 or 2, wherein prior to generating the first type of scene image, further comprising:
determining a speed interval to which the speed of the game object belongs;
if the speed of the game object belongs to a first speed interval, executing the step of generating the first scene image;
if the speed of the game object belongs to the second speed interval, generating a third type scene image; the third type of scene image presents a third range of game scenes; the lower limit of the second speed interval is greater than the upper limit of the first speed interval; the third range is smaller than the first range.
7. A method of generating a scene image as claimed in claim 1 or 2, wherein the method of generating a scene image is applied to an electronic device, a software layer of the electronic device comprising a matching module;
the detecting the state of the game object comprises:
the matching module receives a frame of scene drawing command;
the matching module reads the motion command stream from the memory;
the matching module matches the motion command stream with the one-frame scene drawing command;
if the motion command stream and the one-frame scene drawing command are successfully matched, the matching module determines that the game object is in a motion state in the one-frame scene drawing command;
And if the matching of the motion command stream and the one-frame scene drawing command fails, the matching module determines that the game object is in a non-motion state in the one-frame scene drawing command.
8. The method of claim 7, wherein the software layer of the electronic device further comprises a zoom-in module;
after detecting that the game object is switched from a non-motion state to a motion state, generating a first scene image, including:
the matching module determines that the game object is in a motion state in the one-frame scene drawing command, and then sends the one-frame scene drawing command to the zooming-in module;
the zoom-in module receives the one-frame scene drawing command;
and the zooming-in module generates the first type scene image according to the one-frame scene drawing command.
9. The method of claim 8, wherein the software layer of the electronic device further comprises a presentation interface;
after the zooming-in module generates the first type scene image according to the one-frame scene drawing command, the zooming-in module sends the first type scene image to the sending and displaying interface;
and the sending and displaying interface displays the first scene image on a display screen.
10. The method of claim 8, wherein the zoom-in module generating the first type of scene image from the one-frame scene drawing command comprises:
the zoom-in module creates a temporary frame buffer and a view; the size of the temporary frame buffer and the size of the view both match the first range;
the zoom-in module reads object information from the one-frame scene drawing command;
and the zooming-in module calls a graphics processor based on the object information, the temporary frame buffer and the view, so that the graphics processor draws the first type scene image.
11. The method of claim 7, wherein the software layer of the electronic device further comprises a callback module and a graphics library;
after detecting that the game object is switched from a motion state to a non-motion state, generating a second type of scene image, including:
the matching module determines that the game object is in a non-motion state in the one-frame scene drawing command, and then sends the one-frame scene drawing command to the callback module;
the callback module receives the one-frame scene drawing command;
The callback module callback the one-frame scene drawing command to the graphic library;
and responding to the callback module to callback the one-frame scene drawing command, and generating the second-class scene image according to the one-frame scene drawing command by the graphic library.
12. The method of claim 7, wherein the software layer of the electronic device further comprises an identification module;
before the matching module receives a frame of scene drawing command, the matching module further comprises:
the identification module receives a frame of drawing command stream;
after the recognition module determines the scene frame buffer, the recognition module determines the drawing command stored in the scene frame buffer in the one-frame drawing command stream as one-frame scene drawing command;
the identification module judges whether the memory stores a motion command stream or not;
and after the recognition module judges that the motion command stream is stored in the memory, the recognition module sends the one-frame scene drawing command to the matching module.
13. The method of claim 12, wherein the identification module determining the scene frame buffer comprises:
the identification module receives a previous-frame drawing command stream, the previous-frame drawing command stream being another frame of drawing command stream received by the identification module before the one-frame drawing command stream;
the identification module counts the number of drawing commands stored in each frame buffer, other than an interface frame buffer, among a plurality of frame buffers storing the previous-frame drawing command stream, wherein the interface frame buffer is the frame buffer among the plurality of frame buffers that stores user interface drawing commands, a user interface drawing command being a drawing command in the drawing command stream used for drawing a user interface image; and
the identification module determines, as the scene frame buffer, the frame buffer that stores the largest number of drawing commands among the plurality of frame buffers other than the interface frame buffer.
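As an illustrative aside (not part of the claims), claim 13's selection rule can be sketched in a few lines of C++: count the drawing commands targeting each frame buffer in the previous frame, exclude the interface (UI) frame buffer, and take the busiest one as the scene frame buffer. The DrawCmd and FrameBufferId types are hypothetical stand-ins for the intercepted command records.

    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <vector>

    using FrameBufferId = std::uint32_t;
    struct DrawCmd { FrameBufferId target; };  // frame buffer the command writes to

    FrameBufferId findSceneFrameBuffer(const std::vector<DrawCmd>& previousFrame,
                                       FrameBufferId interfaceFrameBuffer) {
        // Count drawing commands per frame buffer, skipping the UI frame buffer.
        std::map<FrameBufferId, std::size_t> counts;
        for (const DrawCmd& cmd : previousFrame)
            if (cmd.target != interfaceFrameBuffer)
                ++counts[cmd.target];

        // The frame buffer holding the most drawing commands is taken to be
        // the one into which the game scene is drawn.
        FrameBufferId sceneFb = 0;
        std::size_t best = 0;
        for (const auto& [fb, n] : counts)
            if (n > best) { best = n; sceneFb = fb; }
        return sceneFb;
    }
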
14. The method of claim 12, wherein the software layer of the electronic device further comprises a detection module, and wherein the method further comprises, after the identification module determines whether the motion command stream is stored in the memory:
after determining that the motion command stream is not stored in the memory, the identification module sends the one-frame scene drawing command to the detection module;
the detection module receives the one-frame scene drawing command;
the detection module calculates the speed of a game object in the one-frame scene drawing command;
the detection module determines whether the speed of the game object in the one-frame scene drawing command is greater than a preset speed threshold;
after determining that the speed of the game object in the one-frame scene drawing command is greater than the speed threshold, the detection module determines the motion command stream according to the one-frame scene drawing command; and
the detection module stores the motion command stream in the memory.
15. The method of claim 14, wherein the detection module calculating the speed of the game object in the one-frame scene drawing command comprises:
the detection module acquires a timestamp and a distance corresponding to the one-frame scene drawing command, the distance being the distance between the game object and the center coordinates of the image in the one-frame scene drawing command;
the detection module reads, from the memory, the timestamp and the distance corresponding to a previous-frame scene drawing command, the distance corresponding to the previous-frame scene drawing command being the distance between the game object and the center coordinates of the image in the previous-frame scene drawing command, and the previous-frame scene drawing command being another frame of scene drawing command received by the identification module before the one-frame scene drawing command; and
the detection module divides a distance difference by a time difference to obtain the speed of the game object in the one-frame scene drawing command, wherein the distance difference is the difference between the distance corresponding to the one-frame scene drawing command and the distance corresponding to the previous-frame scene drawing command, and the time difference is the difference between the timestamp corresponding to the one-frame scene drawing command and the timestamp corresponding to the previous-frame scene drawing command.
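Claim 15 amounts to a finite-difference speed estimate. A minimal C++ sketch follows, assuming timestamps in milliseconds and distances measured from the image center; FrameSample is a hypothetical record of the per-frame data the detection module keeps in memory.

    struct FrameSample {
        double timestampMs;       // timestamp of the scene drawing command
        double distanceToCenter;  // distance of the game object from the
                                  // image center coordinates
    };

    // Speed = (change in distance) / (change in timestamp) between the
    // current and the previous frame of scene drawing commands.
    double objectSpeed(const FrameSample& current, const FrameSample& previous) {
        const double dd = current.distanceToCenter - previous.distanceToCenter;
        const double dt = current.timestampMs - previous.timestampMs;
        return dt != 0.0 ? dd / dt : 0.0;  // guard against a zero time difference
    }
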
16. The method of claim 14, wherein the detection module determining the motion command stream according to the one-frame scene drawing command comprises:
the detection module reads the previous N frames of scene drawing commands from the memory, wherein the speed of the game object in each of the previous N frames of scene drawing commands is greater than the speed threshold, the previous N frames of scene drawing commands are the N frames of scene drawing commands received by the identification module before the one-frame scene drawing command, and N is a preset positive integer; and
the detection module determines the drawing command sequence contained in the one-frame scene drawing command and the previous N frames of scene drawing commands as the motion command stream, the drawing command sequence being a set of a plurality of consecutive scene drawing commands.
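Illustratively (a sketch under assumed data structures, not the patented implementation), the motion command stream of claim 16 is simply the concatenation of the current frame of scene drawing commands with the previous N frames whose object speed already exceeded the threshold. SceneDrawCmd and SceneFrame are hypothetical types.

    #include <deque>
    #include <string>
    #include <vector>

    using SceneDrawCmd = std::string;                 // placeholder command record
    struct SceneFrame { std::vector<SceneDrawCmd> cmds; double speed; };

    std::vector<SceneDrawCmd> buildMotionCommandStream(
            const std::deque<SceneFrame>& previousN,  // N frames, each with
                                                      // speed > threshold
            const SceneFrame& current) {
        std::vector<SceneDrawCmd> stream;
        for (const SceneFrame& f : previousN)         // previous N frames first
            stream.insert(stream.end(), f.cmds.begin(), f.cmds.end());
        stream.insert(stream.end(), current.cmds.begin(), current.cmds.end());
        return stream;  // a set of consecutive scene drawing commands
    }
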
17. The method of claim 12, wherein the electronic device further comprises a game application and an interception module, and wherein the method further comprises, before the identification module receives the one-frame drawing command stream:
the game application outputs the one-frame drawing command stream;
the interception module intercepts the one-frame drawing command stream output by the game application; and
the interception module sends the one-frame drawing command stream to the identification module.
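For claim 17, interception of this kind is conventionally done by interposing wrappers between the game application and the graphics library. A minimal C++ sketch, assuming a glDrawElements-style entry point is what gets wrapped; the recording buffer and the hooking mechanism itself are elided and hypothetical.

    #include <GLES2/gl2.h>
    #include <vector>

    struct RecordedCmd {
        GLenum mode; GLsizei count; GLenum type; const void* indices;
    };
    static std::vector<RecordedCmd> g_frameCommandStream;  // one frame's commands

    // Wrapper the game application ends up calling instead of the real
    // glDrawElements; it records the command for the identification module.
    // Whether the command is forwarded at once or replayed later by the
    // callback/zoom-in modules (claims 8 and 11) is a design choice elided here.
    void interceptedDrawElements(GLenum mode, GLsizei count,
                                 GLenum type, const void* indices) {
        g_frameCommandStream.push_back({mode, count, type, indices});
        glDrawElements(mode, count, type, indices);  // forward to the graphics library
    }
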
18. An electronic device, wherein a hardware layer of the electronic device comprises one or more processors, a memory, and a display screen;
the memory is configured to store one or more programs; and
the one or more processors are configured to execute the one or more programs to cause the electronic device to:
detect a state of a game object;
if the game object is in a motion state, generate a first type of scene image, wherein the first type of scene image presents a game scene within a first range; and
if the game object is in a non-motion state, generate a second type of scene image, wherein the second type of scene image presents a game scene within a second range, the first range being smaller than the second range.
19. A computer storage medium storing a computer program, wherein the computer program, when executed, implements the method of generating a scene image according to any one of claims 1 to 17.
CN202110778014.6A 2021-07-09 2021-07-09 Method, apparatus and storage medium for generating scene image Active CN114452645B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110778014.6A CN114452645B (en) 2021-07-09 2021-07-09 Method, apparatus and storage medium for generating scene image

Publications (2)

Publication Number Publication Date
CN114452645A (en) 2022-05-10
CN114452645B (en) 2023-08-04

Family

ID=81406106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110778014.6A Active CN114452645B (en) 2021-07-09 2021-07-09 Method, apparatus and storage medium for generating scene image

Country Status (1)

Country Link
CN (1) CN114452645B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116095221B (en) * 2022-08-10 2023-11-21 Honor Device Co., Ltd. Frame rate adjusting method in game and related device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2995703B1 (en) * 1998-10-08 1999-12-27 Konami Corporation Image creation device, display scene switching method in image creation device, readable recording medium storing display scene switching program in image creation device, and video game device
CN101878056A (en) * 2007-11-30 2010-11-03 Konami Digital Entertainment Co., Ltd. Game program, game device and game control method
CN109499061A (en) * 2018-11-19 2019-03-22 NetEase (Hangzhou) Network Co., Ltd. Adjustment method, device, mobile terminal and storage medium for a game scene picture
CN109603152A (en) * 2018-12-14 2019-04-12 Beijing Zhiming Xingtong Technology Co., Ltd. Game scene image processing method, device and terminal
CN109675310A (en) * 2018-12-19 2019-04-26 NetEase (Hangzhou) Network Co., Ltd. Method and device for virtual lens control in a game
CN110930307A (en) * 2019-10-31 2020-03-27 Beijing Shiboyun Technology Co., Ltd. Image processing method and device
CN111228801A (en) * 2020-01-07 2020-06-05 NetEase (Hangzhou) Network Co., Ltd. Rendering method and device of game scene, storage medium and processor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Adaptive tracking scene rendering technology based on the Android system; Lu Xinghua et al.; Computer & Network; 2015-09-26 (No. 18); full text *

Also Published As

Publication number Publication date
CN114452645A (en) 2022-05-10

Similar Documents

Publication Publication Date Title
US11601630B2 (en) Video processing method, electronic device, and non-transitory computer-readable medium
CN110650368B (en) Video processing method and device and electronic equipment
WO2020108082A1 (en) Video processing method and device, electronic equipment and computer readable medium
US11367465B2 (en) Real time video special effects system and method
US11941748B2 (en) Lightweight view dependent rendering system for mobile devices
WO2019135979A1 (en) Fusing, texturing, and rendering views of dynamic three-dimensional models
WO2022048097A1 (en) Single-frame picture real-time rendering method based on multiple graphics cards
CN110636365B (en) Video character adding method and device, electronic equipment and storage medium
US11587280B2 (en) Augmented reality-based display method and device, and storage medium
JP7209851B2 (en) Image deformation control method, device and hardware device
KR101656167B1 (en) Method, apparatus, device, program and recording medium for displaying an animation
US20090262139A1 (en) Video image display device and video image display method
CN111491208B (en) Video processing method and device, electronic equipment and computer readable medium
CN112884908A (en) Augmented reality-based display method, device, storage medium, and program product
EP3871037B1 (en) Efficiency enhancements to construction of virtual reality environments
CN110572717A (en) Video editing method and device
WO2023231235A1 (en) Method and apparatus for editing dynamic image, and electronic device
CN114452645B (en) Method, apparatus and storage medium for generating scene image
WO2022218042A1 (en) Video processing method and apparatus, and video player, electronic device and readable medium
WO2021008322A1 (en) Image processing method, apparatus, and device
JP7427786B2 (en) Display methods, devices, storage media and program products based on augmented reality
CN118159341A (en) Image frame rendering method and related device
CN108184054B (en) Preprocessing method and preprocessing device for images shot by intelligent terminal
JP2023522370A (en) Image display method, device, equipment and storage medium
CN108898652B (en) Skin image setting method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant