CN114452645A - Method, apparatus and storage medium for generating scene image - Google Patents


Publication number
CN114452645A
CN114452645A (publication) · CN202110778014.6A (application)
Authority
CN
China
Prior art keywords
scene
frame
command
drawing command
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110778014.6A
Other languages
Chinese (zh)
Other versions
CN114452645B (en)
Inventor
王国凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202110778014.6A priority Critical patent/CN114452645B/en
Publication of CN114452645A publication Critical patent/CN114452645A/en
Application granted granted Critical
Publication of CN114452645B publication Critical patent/CN114452645B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35 Details of game servers
    • A63F13/355 Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/53 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing
    • A63F2300/538 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing for performing operations on behalf of the game client, e.g. rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a method, a device, and a storage medium for generating scene images. The method includes detecting the state of a game object; if the game object is in a motion state, generating a first type of scene image, which presents the game scene within a first range; and if the game object is in a non-motion state, generating a second type of scene image, which presents the game scene within a second range, the first range being smaller than the second range. By narrowing the range of the game scene presented while the game object is in motion, the scheme reduces the computation needed to generate each frame of the scene image, and thus lowers the load on the device while the game object is in motion.

Description

Method, apparatus and storage medium for generating scene image
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a storage medium for generating a scene image.
Background
With the development of computer technology, a growing variety of electronic games run on terminal devices (including smartphones, computers, and the like). In an electronic game, the user can steer a game object through a game scene, and the terminal device must generate the game scene within a specific range in real time according to the user's operations. When the game scene changes rapidly, the terminal device has to acquire and generate new scene images continuously at a high frequency, which places a heavy load on the terminal device.
Disclosure of Invention
The application provides a method, a device, and a storage medium for generating a scene image, aiming to reduce the high device load incurred while an electronic game is running.
In order to achieve the above object, the present application provides the following technical solutions:
a first aspect of the present application provides a method for generating a scene image, including:
determining whether a game object is in a motion state;
generating a second type of scene image when the game object is not in a motion state; and
generating a first type of scene image when the game object is in a motion state, where the range of the game scene presented by the first type of scene image is smaller than the range presented by the second type of scene image.
This method narrows the range of the game scene presented while the game object is in a motion state, reducing the computation the electronic device needs to generate a scene image and thereby lowering the load on the electronic device.
In some optional embodiments, before the first type of scene image is generated, multiple frames of scene images with a gradually decreasing scene range are generated, so that the range of the game scene presented by the electronic device shrinks smoothly from the range corresponding to the second type of scene image to the range corresponding to the first type.
This has the advantage that, when the game object enters a motion state, the range of the game scene presented by the electronic device is reduced smoothly, avoiding an abrupt change in the presented range.
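The gradual shrink can be pictured as interpolating the scene range over a few intermediate frames. Below is a minimal hypothetical sketch that assumes linear interpolation; the embodiment only requires that the decrease be gradual, so the interpolation scheme and function name are illustrative assumptions:

```python
def intermediate_ranges(second_range, first_range, steps):
    """Return one scene-range value per intermediate frame, shrinking
    smoothly from the second-type range down to the first-type range.
    Linear interpolation is assumed here purely for illustration."""
    if steps < 1:
        return [first_range]
    step = (second_range - first_range) / steps
    return [second_range - step * i for i in range(1, steps + 1)]

# Shrink from a scene range of 100 units to 60 units over 4 frames.
print(intermediate_ranges(100.0, 60.0, 4))  # [90.0, 80.0, 70.0, 60.0]
```

Each returned value would drive the generation of one transitional scene image before the device settles on the first-type range.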
In some optional embodiments, the manner of generating the first type of scene image is:
obtaining the object information used for drawing a scene image; and
clipping away the portion of that object information lying outside the first range, and drawing the first type of scene image from the portion lying within the first range.
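A toy illustration of the clipping idea: treat the first range as a half-width around the image centre and keep only the objects inside it. The object representation (dictionaries with `x`/`y` positions) is invented for this example; the patent does not specify how object information is encoded:

```python
def clip_to_first_range(objects, first_range):
    """Keep only the objects whose position falls inside the first range,
    modelled here as a half-width around the image centre (0, 0)."""
    return [obj for obj in objects
            if abs(obj["x"]) <= first_range and abs(obj["y"]) <= first_range]

scene_objects = [
    {"name": "character", "x": 0.0,  "y": 0.0},
    {"name": "tree",      "x": 30.0, "y": 10.0},
    {"name": "mountain",  "x": 90.0, "y": 40.0},  # outside the first range
]
kept = clip_to_first_range(scene_objects, 60.0)
print([o["name"] for o in kept])  # ['character', 'tree']
```

Only the kept objects would then be handed to the drawing step, which is what reduces the per-frame computation.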
In some alternative embodiments, the manner of detecting the state of the game object is:
obtaining a current scene drawing command; the scene drawing command refers to a drawing command for drawing a scene image;
matching the current scene drawing command against a motion command stream; the motion command stream is composed of scene drawing commands obtained when the game object was in a motion state;
if the current scene drawing command matches the motion command stream, the game object is detected to be in a motion state;
and if the current scene drawing command does not match the motion command stream, the game object is detected to be in a non-motion state.
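A minimal sketch of this matching step, assuming the motion command stream is represented as an ordered list of command names and that "matching" means the current frame contains that list as a contiguous subsequence; the patent does not pin down the comparison, so both assumptions are illustrative:

```python
def is_motion_frame(current_commands, motion_stream):
    """Report whether the current frame's scene drawing commands contain
    the stored motion command stream as a contiguous subsequence."""
    n = len(motion_stream)
    if n == 0 or n > len(current_commands):
        return False
    return any(current_commands[i:i + n] == motion_stream
               for i in range(len(current_commands) - n + 1))

motion = ["glBindFramebuffer", "glDrawElements", "glDrawElementsInstanced"]
frame = ["glBindFramebuffer", "glDrawElements",
         "glDrawElementsInstanced", "glDrawElements"]
print(is_motion_frame(frame, motion))  # True
```

A successful match classifies the frame as a motion frame and triggers generation of the first type of scene image.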
In some alternative embodiments, the manner of detecting the state of the game object is:
obtaining multiple frames of scene drawing commands within a given period;
calculating the speed of the game object from the distance between the game object and an object in the game scene in each frame of scene drawing commands;
if the speed of the game object is greater than the speed threshold value, detecting that the game object is in a motion state;
if the speed of the game object is not greater than the speed threshold, the game object is detected to be in a non-motion state.
In some optional embodiments, the speed of the game object is determined before the first type of scene image is generated;
if the speed of the game object is relatively low, the first type of scene image continues to be generated;
if the speed of the game object is relatively high, a third type of scene image is generated, where the range of the game scene presented by the third type of scene image is smaller than that presented by the first type.
This embodiment further narrows the range of the game scene presented when the game object moves faster, thereby further reducing the load on the electronic device.
In some optional embodiments, the method for generating a scene image is applied to an electronic device, and a software layer of the electronic device comprises a matching module;
the detecting the state of the game object comprises:
the matching module receives a one-frame scene drawing command;
the matching module reads a motion command stream from a memory;
the matching module matches the motion command stream against the one-frame scene drawing command;
if the motion command stream matches the one-frame scene drawing command, the matching module determines that the game object is in a motion state in that command;
if the motion command stream does not match the one-frame scene drawing command, the matching module determines that the game object is in a non-motion state in that command.
In some optional embodiments, the software layer of the electronic device further comprises a zoom-in module;
if the game object is in a motion state, generating a first type of scene image, including:
after the matching module determines that the game object is in a motion state in the one-frame scene drawing command, the matching module sends the one-frame scene drawing command to the zoom-in module;
the zoom-in module receives the one-frame scene drawing command; and
the zoom-in module generates the first type of scene image according to the one-frame scene drawing command.
In some optional embodiments, the software layer of the electronic device further comprises a display sending interface;
after the zoom-in module generates the first type of scene image according to the one-frame scene drawing command, the zoom-in module sends the first type of scene image to the display sending interface; and
the display sending interface displays the first type of scene image on a display screen.
In some optional embodiments, the zoom-in module generating the first type of scene image according to the one-frame scene drawing command includes:
the zoom-in module creates a temporary frame buffer and a view, the sizes of which both match the first range;
the zoom-in module reads object information from the one-frame scene drawing command; and
the zoom-in module calls a graphics processor with the object information, the temporary frame buffer, and the view, so that the graphics processor draws the first type of scene image.
In some optional embodiments, the software layer of the electronic device further comprises a callback module and a graphics library;
if the game object is in a non-motion state, generating a second type of scene image, including:
after the matching module determines that the game object is in a non-motion state in the one-frame scene drawing command, the matching module sends the one-frame scene drawing command to the callback module;
the callback module receives the one-frame scene drawing command;
the callback module calls the one-frame scene drawing command back to the graphics library; and
in response to the callback, the graphics library generates the second type of scene image according to the one-frame scene drawing command.
In some optional embodiments, the software layer of the electronic device further comprises an identification module;
before the matching module receives the one-frame scene drawing command, the method further comprises:
the identification module receives a one-frame drawing command stream;
after the identification module determines the scene frame buffer, the identification module takes the drawing commands stored in the scene frame buffer within the one-frame drawing command stream as the one-frame scene drawing command;
the identification module judges whether a motion command stream is stored in the memory; and
after the identification module judges that a motion command stream is stored in the memory, the identification module sends the one-frame scene drawing command to the matching module.
In some optional embodiments, the identification module determining the scene frame buffer includes:
the identification module receives a previous-frame drawing command stream, which is another frame of drawing command stream received by the identification module before the one-frame drawing command stream;
the identification module counts the number of drawing commands stored in each frame buffer, other than the interface frame buffer, among the plurality of frame buffers storing the previous-frame drawing command stream, where the interface frame buffer is the frame buffer storing user interface drawing commands, i.e. the drawing commands used to draw the user interface image; and
the identification module determines, among the plurality of frame buffers other than the interface frame buffer, the frame buffer storing the largest number of drawing commands as the scene frame buffer.
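The counting step above can be sketched as follows. This is a hypothetical illustration: the command-stream representation (a list of framebuffer-id/command pairs) and the frame-buffer numbering are invented for the example, though the heuristic of "most non-UI draw commands" follows the text:

```python
from collections import Counter

def find_scene_framebuffer(draw_commands, interface_fb):
    """Given (framebuffer_id, command) pairs from the previous frame's
    drawing command stream, return the non-interface frame buffer that
    stores the most drawing commands."""
    counts = Counter(fb for fb, _ in draw_commands if fb != interface_fb)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# FB0 holds the UI; FB2 holds the scene (many draws); FB1 a small pass.
stream = ([(0, "ui"), (0, "ui"), (2, "scene")]
          + [(2, "scene")] * 40 + [(1, "shadow")] * 5)
print(find_scene_framebuffer(stream, interface_fb=0))  # 2
```

The returned frame buffer is then treated as the scene frame buffer for subsequent frames.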
In some optional embodiments, the software layer of the electronic device further comprises a detection module;
after the identification module judges whether the memory stores a motion command stream, the method further includes:
after the identification module judges that the memory does not store a motion command stream, the identification module sends the one-frame scene drawing command to the detection module;
the detection module receives the one-frame scene drawing command;
the detection module calculates the speed of the game object in the one-frame scene drawing command;
the detection module judges whether that speed is greater than a preset speed threshold;
after the detection module judges that the speed of the game object in the one-frame scene drawing command is greater than the speed threshold, the detection module determines the motion command stream according to the one-frame scene drawing command; and
the detection module stores the motion command stream in the memory.
In some optional embodiments, the detection module calculating the speed of the game object in the one-frame scene drawing command includes:
the detection module obtains the timestamp and the distance corresponding to the one-frame scene drawing command, where that distance is the distance between the game object and the image center coordinates in the one-frame scene drawing command;
the detection module reads, from the memory, the timestamp and the distance corresponding to the previous-frame scene drawing command, where that distance is the distance between the game object and the image center coordinates in the previous-frame scene drawing command, and the previous-frame scene drawing command is another scene drawing command received by the identification module before the one-frame scene drawing command; and
the detection module divides the distance difference by the time difference to obtain the speed of the game object in the one-frame scene drawing command, where the distance difference is the difference between the distances corresponding to the two commands and the time difference is the difference between their timestamps.
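The speed computation described here amounts to a simple difference quotient. A minimal sketch, with hypothetical argument names and units:

```python
def object_speed(dist_cur, ts_cur, dist_prev, ts_prev):
    """Speed of the game object between two scene drawing commands:
    change in distance-to-image-centre divided by elapsed time."""
    dt = ts_cur - ts_prev
    if dt <= 0:
        raise ValueError("timestamps must be strictly increasing")
    return (dist_cur - dist_prev) / dt

# The object's distance to the image centre grew by 12 units in 0.2 s.
print(object_speed(dist_cur=30.0, ts_cur=1.2, dist_prev=18.0, ts_prev=1.0))
```

The result would then be compared against the preset speed threshold to decide whether the frame counts as a fast-motion frame.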
In some optional embodiments, the detection module determining the motion command stream according to the one-frame scene drawing command includes:
the detection module reads the preceding N frames of scene drawing commands from the memory, where the speed of the game object in each of those commands is greater than the speed threshold, the preceding N frames of scene drawing commands are the N scene drawing commands received by the identification module before the one-frame scene drawing command, and N is a preset positive integer; and
the detection module determines the drawing command sequence contained in both the one-frame scene drawing command and the preceding N frames of scene drawing commands as the motion command stream, a drawing command sequence being a set of consecutive scene drawing commands.
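One simple way to extract a command sequence shared by the current frame and the preceding N frames is sketched below. The patent does not specify the matching algorithm, so this set-intersection approach, the command names, and the list-of-lists input format are assumptions made only for illustration:

```python
def build_motion_stream(frames):
    """Given the command lists of N+1 consecutive fast-motion frames,
    return the commands appearing in every frame, preserving the order
    of the most recent frame."""
    common = set(frames[0]).intersection(*frames[1:])
    return [cmd for cmd in frames[-1] if cmd in common]

frames = [
    ["bind_fb", "draw_scene", "draw_fx", "swap"],
    ["bind_fb", "draw_scene", "swap"],
    ["bind_fb", "draw_scene", "draw_ui", "swap"],
]
print(build_motion_stream(frames))  # ['bind_fb', 'draw_scene', 'swap']
```

The extracted stream would be stored in memory and later used by the matching module to recognize motion frames cheaply.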
In some optional embodiments, the electronic device further comprises a game application and an interception module;
before the identification module receives the one-frame drawing command stream, the method further comprises:
the game application outputs the one-frame drawing command stream;
the interception module intercepts the one-frame drawing command stream output by the game application; and
the interception module sends the one-frame drawing command stream to the identification module.
A second aspect of the present application provides an electronic device, where a hardware layer of the electronic device includes: one or more processors, memory, and a display screen;
the memory is used for storing one or more programs;
the one or more processors are operable to execute the one or more programs to cause the electronic device to perform the following:
detecting a state of a game object;
if the game object is in a motion state, generating a first type of scene image, wherein the first type of scene image presents a game scene in a first range;
if the game object is in a non-motion state, generating a second type of scene image, wherein the second type of scene image presents a game scene in a second range; the first range is smaller than the second range.
A third aspect of the present application provides a computer storage medium storing a computer program which, when executed, implements the method for generating a scene image provided in any one of the first aspects of the present application.
Drawings
Fig. 1 is a schematic structural diagram of a mobile terminal disclosed in an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a method for generating an image of a scene according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating the hardware and software layers of a device for performing a method for generating an image of a scene according to an embodiment of the present disclosure;
fig. 4a, fig. 4b and fig. 4c are schematic diagrams of scene images disclosed in the embodiments of the present application;
FIG. 5 is an exemplary diagram of the detailed operation of the modules in the application framework layer shown in FIG. 3;
FIG. 6 is a schematic diagram of a method for smoothly narrowing the scene range presented by scene images according to an embodiment of the present disclosure;
fig. 7 is a flowchart of a method for generating a scene image according to an embodiment of the present disclosure.
Detailed Description
The terms "first", "second" and "third", etc. in the description and claims of this application and the description of the drawings are used for distinguishing between different objects and not for limiting a particular order.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
The method for generating the scene image can be used for electronic equipment such as a smart phone, a tablet personal computer, a personal desktop computer or a notebook computer. The structure of the electronic device to which the method is applied can be as shown in fig. 1.
As shown in fig. 1, the electronic device may include a processor 110, an internal memory 120, a display 130, and the like.
It is to be understood that the illustrated structure of the present embodiment does not constitute a specific limitation to the electronic device. In other embodiments, the electronic device may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
For example, in the present application, the processor 110 may execute one or more programs stored in the internal memory, so that the electronic device executes the method for generating a scene image provided in the embodiments of the present application.
A memory may also be provided in the processor 110 for storing commands and data. In some embodiments, the memory in the processor 110 is a cache. It may hold commands or data that the processor 110 has just used or uses repeatedly. If the processor 110 needs such a command or data again, it can be fetched directly from this memory, avoiding repeated accesses, reducing the waiting time of the processor 110, and thereby improving system efficiency.
Internal memory 120 may be used to store computer-executable program code, which includes commands. The processor 110 executes the various functional applications and data processing of the electronic device by running the commands stored in the internal memory 120; for example, in the present embodiment, the processor 110 may perform scene composition this way. The internal memory 120 may include a program storage area and a data storage area. The program storage area may store the operating system and the application programs required by at least one function (such as a sound playback function or an image playback function). The data storage area may store data created during use of the electronic device (such as audio data and a phone book). In addition, the internal memory 120 may include high-speed random access memory and may further include nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or universal flash storage (UFS). The processor 110 executes these functions by running commands stored in the internal memory 120 and/or in the memory provided in the processor.
For example, in the present application, the memory may store one or more programs, and the stored programs, when executed by the processor, can implement the method for generating a scene image provided by the embodiments of the present application.
The electronic device implements its display function through the GPU, the display screen 130, the application processor, and the like. The GPU is a microprocessor for image processing and connects the display screen 130 to the application processor. The display screen 130 is used to display images, videos, and the like. The GPU performs the mathematical and geometric calculations needed for graphics rendering. The processor 110 may include one or more GPUs that execute program commands to generate or change display information.
A series of graphical user interfaces (GUIs) can be displayed on the display 130 of the electronic device, and the user can interact with the electronic device by operating the GUI directly. For example, in the present embodiment, the display 130 may display virtual keys.
In addition, an operating system runs on the above components, for example the iOS operating system developed by Apple, the Android operating system developed by Google, the Windows operating system developed by Microsoft, or the HarmonyOS (HongMeng) system. Applications may be installed and run on the operating system.
For ease of understanding, certain terms of art in the present application are explained and illustrated below:
with respect to the graphic library:
the Graphics Library is also referred to as a drawing Library, and the Graphics Library is used to define a cross-Programming language, cross-platform Application Programming Interface (API), which includes a plurality of functions for processing Graphics, for example, OpenGL (Open Graphics Library), where the OpenGL defined API includes an Interface for drawing a two-dimensional image or a three-dimensional image (the Interface includes a drawing function, such as a drawing function gldrawelementss ()), and an Interface for presenting an image drawn by the drawing function onto a display Interface (the Interface includes a presentation function, such as a function eglsepampbuffersuffers ()), and the embodiments of the present Application are not exemplified herein. The function in OpenGL can be called by a command, for example, a drawing function can be called by a drawing command to draw a two-dimensional image or a three-dimensional image. The drawing command is a command written by a developer according to a function in the graphics library when the game application is developed, and is used for calling an interface of the graphics library corresponding to the drawing command.
With respect to the game image:
As indicated above, the two-dimensional or three-dimensional images drawn by drawing commands calling drawing functions may include game images as well as other types of image frames. Specifically, a game application is displayed by continuously rendering and rapidly playing back frames of images while it runs. One frame of a game image is one still image displayed by the game application. Each still frame may be composed of a scene image, a user interface (UI) image, and the like. Illustratively, the scene image may include in-game scenery, game characters, background objects, special effects, skill effects, and so on, while the UI image may include control buttons, a minimap, floating text, and the like; some in-game character health bars are also part of the UI image. Note that both the game characters in the scene image and the control buttons in the UI image can be regarded as objects in the game image frame; each game image frame is composed of individual objects.
For one frame of game image, the scene image and the UI image constituting the game image may be respectively stored in two frame buffers (framebuffers), for example, the rendered UI image is stored in FB0, the rendered scene image is stored in FB2, and finally the UI image of FB0 and the scene image of FB2 are merged into one frame of complete game image.
Regarding the drawing commands:
each object in the game image is drawn by electronic device specific software or hardware executing drawing commands. An object may be drawn by one or more drawing commands, and in general, an object corresponds to a drawing command one to one. It should be noted that each drawing command further includes specific parameters carried by the drawing command, where the specific parameters may include vertex information, texture information, and the like of an object corresponding to the drawing command, the vertex information is used to describe the number and positions of vertices constituting the corresponding object, and the texture information is used to describe a color or a specific pattern that needs to be filled on a surface of the corresponding object. Such as vertex information, etc. When the electronic equipment executes the drawing command, drawing the object corresponding to the drawing command based on the specific parameters of the drawing command.
Regarding the drawing command stream:
The GPU implements the drawing of one or more objects in the game image by executing one or more drawing commands in the drawing command stream and calling one or more interfaces of the graphics library. Each object drawn by a drawing command can be represented by data stored in memory; for example, the set of objects generated from a drawing command stream may constitute the display data of one corresponding frame of the game image.
A drawing command stream generally includes three types of commands: scene drawing commands, user interface (UI) drawing commands, and image display commands. Scene drawing commands draw the scene image, i.e. the in-game scenery, characters, special effects, skill effects, and so on. UI drawing commands draw the UI image, for example control buttons, the minimap, and floating text; some in-game character health bars are also drawn by UI drawing commands, which may also be called control drawing commands. The image display command places the drawn image data at a system-designated location to complete the actual display.
The process of drawing a scene image in the embodiment of the present application is described by taking the processor 110 in fig. 1 as an example. After the game is started, the CPU may obtain the drawing command stream and the rendering command stream output by the game, and send the drawing commands therein to the GPU, so that the GPU draws the corresponding objects according to the drawing commands. As an example, a rendering command may include the glBindFramebuffer() function, and a drawing command may include the glDrawElements() function and the glDrawElementsInstanced() function. The glBindFramebuffer() function may be used to indicate the currently bound frame buffer. For example, glBindFramebuffer(1) may indicate that the currently bound frame buffer is FB1, i.e., it instructs the GPU to execute the subsequent drawing commands glDrawElements/glDrawElementsInstanced on FB1. For convenience of explanation, in the following examples, the set of glDrawElements and glDrawElementsInstanced in the drawing commands is referred to as a drawing operation (draw call).
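The bind-then-draw semantics described above can be sketched as follows. This is an illustrative C++ simulation, not real GL code: the `Command` record and `targetsOfDrawCalls` function are hypothetical names, and the commands are plain strings rather than intercepted library calls.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical record for one command in a drawing command stream.
struct Command { std::string name; int arg; };

// Track which frame buffer each draw call targets, mirroring how a
// glBindFramebuffer(fb) command scopes the glDrawElements* calls after it.
std::vector<int> targetsOfDrawCalls(const std::vector<Command>& stream) {
    std::vector<int> targets;
    int boundFb = 0;  // FB0 is the default frame buffer
    for (const Command& c : stream) {
        if (c.name == "glBindFramebuffer") {
            boundFb = c.arg;  // subsequent draw calls execute on this buffer
        } else if (c.name == "glDrawElements" ||
                   c.name == "glDrawElementsInstanced") {
            targets.push_back(boundFb);
        }
    }
    return targets;
}
```

For a stream of glBindFramebuffer(1), two draw calls, glBindFramebuffer(0), one draw call, the function reports targets FB1, FB1, FB0.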
Fig. 2 is a schematic diagram illustrating a method for generating an image of a scene according to the present application.
After obtaining the drawing command stream of the Nth frame game image, the processor identifies the scene drawing commands and the non-scene drawing commands (the commands other than the scene drawing commands) therein, and then judges whether the game object in the scene drawing commands of the Nth frame is in a motion state. Assuming the game object is judged to be in a non-motion state, the processor draws a non-scene image (such as a UI image) using the non-scene drawing commands and sends it to display, draws one frame of a second-type scene image using the scene drawing commands and sends it to display, and then combines the second-type scene image and the non-scene image to obtain the Nth frame game image and displays it on the screen. The game object is the object representing the character controlled by the user in the game.
The processor then processes each subsequent frame's drawing commands in the same manner. Assuming the game object in the scene drawing commands of the (N+1)th frame is also in a non-motion state, the processor draws the second-type scene image using the scene drawing commands of the (N+1)th frame and the non-scene image using the non-scene drawing commands, and then combines the second-type scene image and the non-scene image to obtain the (N+1)th frame game image and displays it on the screen.
The second-type scene image is a scene image whose scene range is the second range. It can be seen that, by displaying the Nth to (N+1)th frame game images on the screen, the electronic device presents the in-game scenes in the second range to the user.
After the processor obtains the drawing command stream of the (N+2)th frame and identifies the scene drawing commands therein, it judges that the game object in the scene drawing commands of the (N+2)th frame is in a motion state. The processor then draws a non-scene image (such as a UI image) using the non-scene drawing commands of the (N+2)th frame and sends it to display, and draws one frame of a first-type scene image using the scene drawing commands of the (N+2)th frame and sends it to display.
Assuming the game objects in the scene drawing commands of the (N+2)th to (N+X)th frames are all in a motion state, the processor draws the corresponding first-type scene images using the scene drawing commands of the (N+2)th to (N+X)th frames in sequence, and displays on the screen the (N+2)th to (N+X)th frame game images obtained by combining the first-type scene images with the corresponding non-scene images.
The first-type scene image is a scene image whose scene range is the first range. As can be seen, by displaying the (N+2)th to (N+X)th frame game images on the screen, the electronic device presents the in-game scenes in the first range to the user. The first range is smaller than the second range.
After the processor obtains the drawing command stream of the (N+X+1)th frame and identifies the scene drawing commands therein, it judges that the game object in the scene drawing commands of the (N+X+1)th frame is in a non-motion state, and then draws the second-type scene image using the scene drawing commands of the (N+X+1)th frame.
By the method, the electronic equipment can present the in-game scenes in the second range to the user when the game object is in the non-motion state, and present the in-game scenes in the first range to the user when the game object is in the motion state.
Referring to fig. 3, a framework diagram of software layers and hardware layers of an electronic device executing a method for generating a scene image according to an embodiment of the present application is provided. The implementation process of the method for generating a scene image provided by the embodiment of the application specifically includes:
Within a certain time after the game application is started, the identification module determines, according to the drawing command stream of the first frame or first several frames output by the game application, the frame buffer storing the scene drawing commands as the scene frame buffer. After that, each time a frame's drawing command stream is obtained, the identification module identifies the drawing commands stored in the scene frame buffer within that stream as scene drawing commands, and sends them to the detection module.
Each time the detection module receives a frame of scene drawing commands, it obtains the distance S and the timestamp T corresponding to that frame and stores them in the memory, then calculates the speed of the game object in that frame's scene drawing commands using that frame's distance S and timestamp T together with the distance S and timestamp T of one or more earlier frames stored in the memory, and stores the speed in the memory. The distance corresponding to a frame of scene drawing commands refers to the distance between the game object and the image center coordinate, which can be represented by the depth of the image center coordinate.
After detecting that the speeds of the game objects in the multiple frames of scene drawing commands are all larger than the speed threshold value, the detection module detects a motion command stream by comparing the multiple frames of scene drawing commands, and writes the motion command stream into the memory.
After the motion command stream is stored in the memory, each frame of scene drawing commands is sent to the matching module instead of the detection module. Each time the matching module obtains a frame of scene drawing commands, it matches them against the motion command stream in the memory; if the matching succeeds, it sends the frame of scene drawing commands to the zooming-in module so as to draw one frame of the first-type scene image from them.
The method for generating a scene image provided by the present application is specifically described below with reference to fig. 3.
As shown in fig. 3, the electronic device may include a software layer, comprising an application layer 301 and a system framework layer (framework) 302, and a hardware layer 303. The hardware layer includes a CPU, a GPU, and a memory 331.
The application layer 301 includes one or more applications, such as a gaming application 304, that may be run on the electronic device. For ease of understanding, the method illustrated in the present application will be explained and illustrated below with respect to the gaming application 304 as an example.
The game application 304 includes a game engine 341, and the game engine 341 may draw an image of the game application by calling a drawing function in the graphics library 305 through the graphics library interface.
The system framework layer 302 may include various graphics libraries 305, such as OpenGL for Embedded Systems (OpenGL ES), and the like.
In the related art, when the user opens the game application 304, the electronic device starts the game application 304 in response to the user's operation. Based on the drawing command stream of the game image issued by the game application, the game engine 341 invokes drawing functions in the graphics library via the graphics library interface to draw the game image. After the graphics library generates the image data of the game image, it calls a display sending interface (such as eglSwapBuffers()) to send the image data to the memory queue. Based on the periodic display sending signal, the graphics library sends the image data in the memory queue to hardware (such as the CPU) for composition, and finally sends the composited image data to the display screen of the electronic device for display.
As shown in fig. 3, before the drawing command stream of the Nth frame output by the game engine reaches the graphics library 305, it is intercepted by the intercepting module 306 and then transmitted to the identifying module 307. The identifying module 307 is configured to identify the scene drawing commands and the non-scene drawing commands in the drawing command stream; the non-scene drawing commands may include the aforementioned UI drawing commands, and may also include the drawing commands in the stream other than the scene drawing commands and the rendering commands. The drawing command stream of the Nth frame is the drawing command stream for drawing the Nth frame game image.
The intercepting module may intercept the drawing command stream output by the game engine based on the Hook technique. The specific interception mode is as follows: when the game application is started, the intercepting module modifies the function pointer list in the Thread Local Storage (TLS) of the game thread, replacing the graphics function pointers recorded in the list with substitute pointers. The substitute pointers, which point to the implementation functions of the identifying module 307, are pre-configured in the intercepting module. A graphics function pointer is a function pointer pointing to an implementation function of the graphics library 305; a substitute pointer is a function pointer pointing to an implementation function of the identifying module 307.
After the modification is completed, the drawing commands that pointed to the graphics function pointers in the drawing command stream point to the substitute pointers configured in the intercepting module, so that the intercepting module can intercept the drawing command stream. After intercepting the drawing command stream, the intercepting module transfers it to the identifying module 307.
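The pointer-swap hook and the pointer backup used later for callbacks can be sketched as follows. This is a minimal C++ illustration under stated assumptions: the `FnTable` struct stands in for the TLS function pointer list, and `installHook`, `gBackup`, and `gLog` are hypothetical names not taken from the patent.

```cpp
#include <cassert>
#include <string>

using ClearFn = void (*)();

static std::string gLog;           // records which implementation ran
static ClearFn gBackup = nullptr;  // backed-up graphics library pointer

// Stand-in for the graphics library's real implementation function.
void realGlClear() { gLog += "glClear;"; }

// Substitute implementation function of the identifying module: it can
// inspect/record the command, then call back into the graphics library
// through the backed-up pointer.
void MyGlClear() {
    gLog += "MyGlClear;";
    gBackup();
}

// Stand-in for the function pointer list in thread local storage.
struct FnTable { ClearFn clear; };

void installHook(FnTable& table) {
    gBackup = table.clear;   // back up the replaced graphics library pointer
    table.clear = MyGlClear; // replace it with the substitute pointer
}
```

After `installHook`, a caller invoking `table.clear()` first enters `MyGlClear`, which then forwards to the original `glClear` implementation via the backup, i.e. interception plus callback.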
Each implementation function configured by the recognition module 307 corresponds to an implementation function of the graphics library 305, and table 1 below is an example of a partial implementation function configured by the graphics library and a corresponding implementation function in the recognition module.
TABLE 1
Graphics library implementation function    Corresponding implementation function in the identifying module
glClear                                     MyGlClear
glActiveTexture                             MyGlActiveTexture
glAlphaFunc                                 MyGlAlphaFunc
glDrawElements                              MyGlDrawElements
glDrawElementsInstanced                     MyGlDrawElementsInstanced
glBindFramebuffer                           MyglBindFrameBuffer
MyGlDrawElements and MyGlDrawElementsInstanced also belong to the drawing operation (draw call).
In a specific example, the intercepting module may replace the function pointer glClear, which points to the implementation function glClear of the graphics library, in the function pointer list with the function pointer MyGlClear, which points to the implementation function MyGlClear. A drawing command in the stream output by the game application that called the function pointer glClear now calls the function pointer MyGlClear, so the intercepting module can intercept that drawing command and then call the implementation function MyGlClear of the identifying module according to the command.
Alternatively, the drawing command stream intercepted by the intercepting module may be recorded in the form of an array and transmitted to the other modules. Specifically, for each substitute function pointer in the function pointer list (such as the function pointer MyGlClear), the intercepting module configures an index insertion function. Each time the corresponding implementation function is called, the index insertion function adds the index of that implementation function to an index array recording the drawing command stream. The correspondence between implementation functions and indices may be preset in the intercepting module or stored in the memory 331, so that when the detection module transfers the drawing command stream backwards, only the corresponding index array needs to be transferred.
For example, the following table 2 implements an example of the correspondence of functions and indices:
TABLE 2
Implementing function    Index
MyGlClear                1
MyGlActiveTexture        2
MyGlAlphaFunc            3
Table 2 may be stored in the intercepting module or in the memory; in the latter case, the intercepting module reads it from the memory when needed. The intercepting module converts each drawing command in the drawing command stream into the corresponding index according to the correspondence between implementation functions and indices recorded in table 2, thereby obtaining the index array corresponding to the drawing command stream, and then stores the index array in the memory 331.
TABLE 3
Game     Commands called                  Index array
Game1    MyGlClear/MyGlActiveTexture      1, 2
Game2    MyGlClear/MyGlAlphaFunc          1, 3
Table 3 is an example of index arrays converted according to the correspondence of table 2. In table 3, Game1 sequentially outputs a command calling MyGlClear and a command calling MyGlActiveTexture; after the two commands reach the intercepting module, it converts them into the index array (1, 2) according to the correspondence in table 2 and stores the array in the memory. Similarly, Game2 sequentially outputs a command calling MyGlClear and a command calling MyGlAlphaFunc, which the intercepting module converts into the index array (1, 3) and stores in the memory.
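The conversion from a frame's commands to its index array can be sketched as follows, an illustrative C++ fragment using the Table 2 mapping; `toIndexArray` is a hypothetical name.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Convert a frame's command names into the index array of Tables 2 and 3.
std::vector<int> toIndexArray(const std::vector<std::string>& commands) {
    static const std::map<std::string, int> kIndex = {
        {"MyGlClear", 1}, {"MyGlActiveTexture", 2}, {"MyGlAlphaFunc", 3}};
    std::vector<int> out;
    out.reserve(commands.size());
    for (const std::string& name : commands)
        out.push_back(kIndex.at(name));  // look up the preset index
    return out;
}
```

Applied to Game1's commands this yields (1, 2), and to Game2's commands (1, 3), matching Table 3.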
The identifying module 307 may configure corresponding implementation functions for each implementation function of the graphics library, or configure corresponding implementation functions for only a part of the implementation functions in the graphics library.
The recognition module 307 may recognize the scene drawing command as follows:
Within a period of time (e.g., 15 milliseconds) after the game starts, the recognition module has not yet determined the frame buffer in which the scene drawing commands are stored. At this time, the recognition module may directly call back all drawing command streams to the graphics library for execution through the callback module, and, while the commands of the streams are executed one by one, count the number of drawing operations stored in each frame buffer other than the frame buffer of the UI commands, and determine the frame buffer with the largest number of drawing operations as the frame buffer storing the scene image.
When the recognition module has determined the frame buffer in which the scene drawing command is stored, the recognition module may recognize all drawing operations stored in this frame buffer in which the scene drawing command is stored as the scene drawing command.
The drawing operations stored in one frame buffer refer to the drawing operations issued after the binding command MyglBindFrameBuffer() corresponding to that frame buffer and before the binding command MyglBindFrameBuffer() corresponding to another frame buffer.
Taking frame buffer FB1 as an example, after the recognition module determines that the scene drawing command is stored in FB1, all drawing operations after MyglBindFrameBuffer (1) and before MyglBindFrameBuffer (x) can be recognized as scene drawing commands, where MyglBindFrameBuffer (1) is a binding command corresponding to FB1, MyglBindFrameBuffer (x) is a binding command corresponding to FBx, and FBx is another frame buffer different from FB 1.
For example, most game applications store non-scene images in FB0 by default; the recognition module may therefore recognize MyglBindFrameBuffer(0) and the drawing operations after it as non-scene drawing commands, and then call the callback module 311 to call the non-scene drawing commands back to the graphics library.
For the other frame-buffer binding commands, i.e., MyglBindFrameBuffer(y) with y not equal to 0, after the recognition module identifies such a command, it establishes a counter with an initial value of 0 for the bound frame buffer. For example, after identifying MyglBindFrameBuffer(2), it establishes a counter with an initial value of 0 corresponding to FB2. Then, each time a drawing operation (draw call) is recognized under that command, the counter is increased by 1, until a new frame buffer is bound; the counter then holds the number of drawing operations corresponding to the frame buffer. For example, after MyglBindFrameBuffer(2) is identified, the counter of FB2 is increased by 1 for each draw call until MyglBindFrameBuffer(3) is identified (i.e., a new frame buffer FB3 is bound); at this point, the value recorded by the counter of FB2 is the number of drawing operations corresponding to frame buffer FB2.
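The counting and selection logic above can be sketched as follows. This is an illustrative C++ simulation, assuming simplified command records instead of real intercepted calls; `findSceneFrameBuffer` and the `Cmd` struct are hypothetical names.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

struct Cmd { std::string name; int fb; };  // fb is used only by bind commands

// Count draw calls per frame buffer and pick the buffer with the most draw
// operations as the scene frame buffer. FB0 is assumed to hold the UI
// (non-scene) image and is excluded from counting.
int findSceneFrameBuffer(const std::vector<Cmd>& stream) {
    std::map<int, int> drawCount;  // per-frame-buffer counters
    int bound = 0;
    for (const Cmd& c : stream) {
        if (c.name == "MyglBindFrameBuffer")
            bound = c.fb;
        else if (c.name == "drawcall" && bound != 0)
            ++drawCount[bound];  // increment counter of the bound buffer
    }
    int best = -1, bestCount = -1;
    for (const auto& [fb, n] : drawCount)
        if (n > bestCount) { best = fb; bestCount = n; }
    return best;  // -1 if no non-UI draw call was seen
}
```

For a stream with five draw calls on FB0, three on FB2, and one on FB1, the function returns 2, i.e. FB2 is identified as the scene frame buffer.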
After the recognition module recognizes the scene drawing command, it determines whether there is a motion command stream in the memory 331, and if there is no motion command stream in the memory 331, the recognition module may transmit the scene drawing command to the detection module 308, and if there is a motion command stream in the memory 331, the recognition module may transmit the scene drawing command to the matching module 309.
In a specific example, after the recognition module recognizes the scene drawing commands, it reads the motion command stream from the memory; if the motion command stream is successfully read, it determines that a motion command stream exists in the memory, and if it is not read, it determines that no motion command stream exists in the memory.
The motion command stream refers to the drawing command sequence that appears repeatedly the largest number of times among the scene drawing commands of the multiple frames in which the game object is in motion, or the index array corresponding to that drawing command sequence. A drawing command sequence is a set of several consecutive drawing commands among the scene drawing commands, and the array formed by the index of each drawing command in the sequence is the index array corresponding to the sequence.
For example, a frame of scene drawing commands may include drawing command 1 through drawing command 100, wherein the four drawing commands, drawing command 1 through drawing command 4, may be considered as a sequence of drawing commands.
After obtaining a frame of scene drawing commands, the detection module 308 calls the commands back to the graphics library through the callback module, obtains during the callback the timestamp T corresponding to that frame and the distance S between the game object and the image center coordinate in that frame, and stores the frame's timestamp T and distance S in the memory.
Therefore, after obtaining the scene drawing commands of the Nth frame, the detection module 308 determines whether the distance S and timestamp T of the (N-1)th frame are stored in the memory. If so, it reads the (N-1)th frame's distance S and timestamp T from the memory and calculates the speed of the game object in the Nth frame's scene drawing commands by combining them with the Nth frame's distance S and timestamp T. The (N-1)th frame is the frame before the Nth frame.
The speed is calculated as follows:
Subtract the distance S of the (N-1)th frame from the distance S of the Nth frame to obtain the distance difference, subtract the timestamp T of the (N-1)th frame from the timestamp T of the Nth frame to obtain the time difference, and finally divide the distance difference by the time difference to obtain the speed of the Nth frame.
If the calculated speed for the Nth frame's scene drawing commands is greater than a preset speed threshold, the detection module determines that the game object in the Nth frame's scene drawing commands is in a motion state; if the speed is not greater than the threshold, it determines that the game object is in a non-motion state. The speed threshold may be set to 5 meters per second. The speed and state corresponding to a frame of scene drawing commands are stored in the memory 331 by the detection module.
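The speed calculation and threshold test can be sketched as follows, an illustrative C++ fragment; `objectSpeed` and `inMotion` are hypothetical names, with S in meters and T in seconds.

```cpp
#include <cassert>
#include <cmath>

// Speed between two frames: distance difference divided by time difference.
double objectSpeed(double sPrev, double tPrev, double sCur, double tCur) {
    return (sCur - sPrev) / (tCur - tPrev);
}

// Motion test against the preset threshold (5 m/s in the example above).
// The magnitude is used so that motion toward the camera also counts.
bool inMotion(double speed, double threshold = 5.0) {
    return std::fabs(speed) > threshold;
}
```

For example, a distance change from 10 m to 16 m over one second gives 6 m/s, which exceeds the 5 m/s threshold, so the game object is judged to be in a motion state; a change to 12 m gives 2 m/s, a non-motion state.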
The timestamp of the scene drawing command of any frame may be the system time of the electronic device when the scene image starts to be drawn according to the scene drawing command, or may be the system time of the electronic device when the scene image is drawn according to the scene drawing command and sent to the display.
The system time may be obtained by calling the following function:
gettimeofday(&mTimeMyStart, nullptr).
The gettimeofday function, when called, stores the absolute system time in mTimeMyStart, which is a timeval struct of the form:
struct timeval {
    time_t tv_sec;        /* seconds */
    suseconds_t tv_usec;  /* microseconds */
};
After gettimeofday(&mTimeMyStart, nullptr) is called, mTimeMyStart.tv_sec and mTimeMyStart.tv_usec hold the absolute system time in seconds and microseconds, respectively.
Therefore, for the scene drawing commands of a certain frame, gettimeofday(&mTimeMyStart, nullptr) may be called when drawing of the scene image according to those commands begins, and the value of mTimeMyStart may then be used as the timestamp T of that frame's scene drawing commands.
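Obtaining the timestamp T as a single fractional-seconds value can be sketched as follows, using the same POSIX gettimeofday call as the text; the helper name `timestampSeconds` is illustrative.

```cpp
#include <cassert>
#include <sys/time.h>

// Timestamp T in seconds, combining the tv_sec and tv_usec fields filled
// in by gettimeofday (POSIX).
double timestampSeconds() {
    timeval mTimeMyStart{};
    gettimeofday(&mTimeMyStart, nullptr);
    return mTimeMyStart.tv_sec + mTimeMyStart.tv_usec / 1e6;
}
```

Two such timestamps taken at the start of drawing of two frames give the time difference used in the speed calculation above.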
Alternatively, in a First-person shooter (FPS) game, the distance between the game object in the scene image and the center coordinate of the image may be considered to be equal to the depth value of the center coordinate, and the depth value of the center coordinate x, y of the scene image in one frame may be obtained by the following commands:
glReadPixels(x,y,width,height,GL_DEPTH_COMPONENT,GL_FLOAT,&depth);
glReadPixels is a function pre-configured in the electronic device for reading the pixels, and their information, within a specified range. The specified range is a rectangle: width and height are the width and height of the rectangle, and x and y are the coordinates of its starting point, for example the coordinates of the top-left or bottom-left vertex of the rectangle. By setting both width and height to 1, the pixel at coordinates x, y and its information can be read through glReadPixels.
GL_DEPTH_COMPONENT is used to specify that glReadPixels reads the depth values of the pixels in the specified range; in the above example, with width and height both set to 1, the depth value of the pixel at coordinates x, y is read.
A game scene image in a 3D game is drawn by projecting objects located in a three-dimensional coordinate system onto a two-dimensional plane from a preset camera position, obtaining a scene image composed of the projections of those objects. Pixels in the scene image may therefore correspond one to one to specific points of the objects in the three-dimensional coordinate system, and the depth value of a pixel is the distance between the camera position and the point on the object corresponding to that pixel.
GL_FLOAT is used to specify that glReadPixels converts the read depth value into a floating-point variable, and &depth specifies that glReadPixels stores the read depth value in the variable depth. Therefore, after the command is executed, the value of the variable depth is the depth value at coordinates x, y.
As described above, after obtaining, for a frame of scene drawing commands, the distance S between the game object and the image center coordinate and the timestamp T, the detection module stores the distance S and the timestamp T corresponding to that frame in the memory.
The speed of the game object in a frame's scene drawing commands may also be calculated using the distance S and timestamp T of that frame together with those of one or more frames of scene drawing commands within a certain time period before it.
For example, the detection module 308 may calculate the speed of the game object in the Nth frame's scene drawing commands from the Nth frame's and the (N-2)th frame's scene drawing commands. The detection module subtracts the distance S of the (N-2)th frame's scene drawing commands from the distance S of the Nth frame's scene drawing commands to obtain the distance difference, and then divides the distance difference by the time difference between the (N-2)th frame scene image and the Nth frame scene image to obtain the speed of the game object in the Nth frame scene image.
The time difference between the (N-2)th frame scene image and the Nth frame scene image is the difference between the timestamp T of the (N-2)th frame scene image and the timestamp T of the Nth frame scene image.
The detection module copies each frame of scene drawing commands to obtain two copies of that frame's commands: one is sent to the callback module or the zooming-in module so as to draw the scene image of that frame, and the other is stored in the memory 331. If the detection module detects that the game object in a certain frame's scene drawing commands is in a non-motion state, the copy stored in the memory 331 is deleted.
Optionally, the detection module copies a frame of scene drawing commands and stores the copy in the memory only when it detects that the game object in that frame is in a motion state; if the game object is in a non-motion state, no copy needs to be stored in the memory, and the frame of scene drawing commands is sent directly to the callback module.
When detecting that the game objects in several consecutive frames of scene drawing commands are all in a motion state, the detection module reads those frames' scene drawing commands from the memory and compares them, thereby recognizing a command sequence that appears repeatedly in those frames. It determines that command sequence as the motion command stream and then writes the motion command stream into the memory 331.
Alternatively, when the memory holds the index array corresponding to each frame's scene drawing commands, the detection module may compare those index arrays, determine the index sub-array appearing repeatedly in them as the motion command stream, and then write the motion command stream into the memory 331.
In a specific example, after the detection module detects that the game objects in the scene drawing commands of the (N-4)th to Nth frames are all in a motion state, it reads those frames' scene drawing commands from the memory and compares them, finding that each of them includes the command sequence (drawing command 1, drawing command 2, drawing command 3, drawing command 4). The detection module then determines (drawing command 1, drawing command 2, drawing command 3, drawing command 4) as the motion command stream and stores it in the memory.
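The comparison step can be sketched as follows: an illustrative C++ simplification that finds the longest contiguous command sequence of the first frame occurring in every frame. The patent does not specify the comparison algorithm, so `detectMotionStream` and `containsSub` are hypothetical names and this is only one plausible realization; commands are represented as strings.

```cpp
#include <cassert>
#include <string>
#include <vector>

using Seq = std::vector<std::string>;

// True when `sub` occurs as a contiguous run inside `frame`.
static bool containsSub(const Seq& frame, const Seq& sub) {
    if (sub.empty()) return true;
    if (sub.size() > frame.size()) return false;
    for (size_t i = 0; i + sub.size() <= frame.size(); ++i) {
        bool match = true;
        for (size_t j = 0; j < sub.size(); ++j)
            if (frame[i + j] != sub[j]) { match = false; break; }
        if (match) return true;
    }
    return false;
}

// Longest contiguous command sequence of the first frame that occurs in
// every frame; a simplified stand-in for motion-command-stream detection.
// Assumes `frames` is non-empty.
Seq detectMotionStream(const std::vector<Seq>& frames) {
    const Seq& first = frames.front();
    Seq best;
    for (size_t i = 0; i < first.size(); ++i) {
        // Only try candidates longer than the current best; a longer run
        // starting here fails as soon as a shorter prefix of it fails.
        for (size_t len = best.size() + 1; i + len <= first.size(); ++len) {
            Seq cand(first.begin() + i, first.begin() + i + len);
            bool inAll = true;
            for (size_t f = 1; f < frames.size(); ++f)
                if (!containsSub(frames[f], cand)) { inAll = false; break; }
            if (inAll) best = cand; else break;
        }
    }
    return best;
}
```

For three frames that all contain the run (cmd1, cmd2, cmd3) amid differing surrounding commands, that run is returned as the motion command stream.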
After obtaining the scene drawing commands of the Nth frame, the matching module 309 may read the motion command stream from the memory 331, and then sequentially match each drawing command included in the motion command stream against the Nth frame's scene drawing commands.
If the matching succeeds, the matching module 309 may determine that the game object in the Nth frame's scene drawing commands is in a motion state; in this case the matching module transmits the scene drawing commands to the zooming-in module to generate the first-type scene image. If the matching fails, the matching module 309 may determine that the game object in the Nth frame's scene drawing commands is in a non-motion state; in this case the matching module may call the scene drawing commands back to the graphics library through the callback module, and the second-type scene image is drawn by the graphics library.
The matching module matches the motion command stream against the Nth frame's scene drawing commands as follows:
The matching module judges whether the motion command stream appears within the Nth frame's scene drawing commands. If it does, the matching module determines that the motion command stream and the Nth frame's scene drawing commands match successfully; if it does not, it determines that the matching fails.
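The containment test above can be sketched as a single call to `std::search`, an illustrative C++ fragment; `matchesMotionStream` is a hypothetical name and commands are again represented as strings.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Matching succeeds when the stored motion command stream occurs as a
// contiguous run inside the frame's scene drawing commands.
bool matchesMotionStream(const std::vector<std::string>& frameCmds,
                         const std::vector<std::string>& motionStream) {
    return std::search(frameCmds.begin(), frameCmds.end(),
                       motionStream.begin(), motionStream.end())
           != frameCmds.end();
}
```

On success the frame would be routed to the zooming-in module; on failure it would be called back to the graphics library.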
The callback module can call back the intercepted drawing command to the graphics library by the following method:
The callback module may back up the replaced graphics library pointers in the aforementioned function pointer list (a graphics library pointer is a function pointer pointing to an implementation function of the graphics library). When the callback module obtains a drawing command, it can continue to call the implementation function pointed to in the graphics library through the backed-up graphics library pointer, realizing the callback of the drawing command. In this embodiment, after receiving a UI drawing command sent by the recognition module, the callback module may call it back to the graphics library through the backed-up pointer, so that the implementation function in the graphics library draws the UI image. After receiving an image display command sent by the recognition module, it may likewise call the command back, so that the image drawn by the graphics library is sent to display. After receiving scene drawing commands sent by the detection module or the matching module, it may call them back, so that the implementation functions in the graphics library draw the second-type scene image.
After the zoom-in module generates a first type scene image, the first type scene image can be written into the memory queue through the display sending interface; finally, the graphics library synthesizes the first type scene image and the corresponding UI image into one frame of game image, which is displayed on the display screen of the electronic device.
When generating the first type scene image, the zoom-in module can call some of the implementation functions in the graphics library as required.
The zooming-in module may specifically generate the first type of scene image in the following manner.
The zoom-in module may first read, from the Nth frame scene drawing command, the object information used to draw the Nth frame scene image, and then create a new frame buffer, which may be denoted as a temporary frame buffer FB4. The frame buffer that stores the Nth frame scene drawing command is denoted as FB2, and both FB2 and FB4 are stored in the memory 331. The ratio of the size of FB4 to the size of FB2 is a preset ratio Z. For example, if Z is set to 90%, the size of FB4 is 90% of the size of FB2. The object information may include the vertex information and the texture information, and the corresponding object can be drawn by calling a specific implementation function.
Optionally, the new frame buffer FB4 may be created in advance and multiplexed by the zoom-in module each time a first type scene image is generated; alternatively, the zoom-in module may create the frame buffer FB4 each time a first type scene image is generated, release it after the generation is finished, and create it again the next time a first type scene image needs to be generated.
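The sizing relationship between FB2 and FB4 can be sketched as follows; this assumes the ratio Z scales each dimension independently, which the text does not state explicitly.

```python
def scaled_frame_buffer(width, height, z=0.90):
    # FB4's size is the preset ratio Z times FB2's size; Z is assumed here to
    # apply to each dimension independently (an interpretation, not stated).
    return int(width * z), int(height * z)
```

For example, for a hypothetical 2400x1080 FB2 and Z = 90%, FB4 would be 2160x972.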
The zoom-in module then obtains a view of the first range, which may be created in advance or created in real time when used.
The size of the second range may be consistent with the size of the screen of the electronic device, and the size of the first range may be the product of the size of the second range and the preset ratio Z. For example, if the ratio Z is 90%, the size of the first range is 90% of the size of the second range.
After the view of the first range is obtained, the zoom-in module can call the GPU to draw the first type scene image from the original object information within the view of the first range.
The first type of scene image is drawn as follows:
The zoom-in module calls the GPU based on FB4 and the original object information, and the called GPU writes the object information into FB4. During the writing process, since the size of FB4 is smaller than that of FB2, the GPU needs to crop the original object information according to the view of the first range and then write the cropped object information into FB4. The process of writing the object information is equivalent to the process of drawing the first type scene image; after the writing is completed, the set of object information in FB4 is equivalent to one frame of the first type scene image.
The original object information is object information read from the nth scene drawing command and used for drawing the nth scene image.
The process by which the GPU crops the original object information according to the view of the first range is as follows: the GPU identifies the object information located within the view of the first range and retains it so as to write it into FB4, and the GPU identifies the object information located outside the view of the first range and discards it, i.e., does not write it into FB4.
After the writing of FB4 is completed, the zoom-in module sends FB4 to the display sending interface for display. The size of the drawn first type scene image is smaller than that of the display screen of the electronic device, and blank areas would appear around the screen if this frame of first type scene image were displayed directly. Therefore, before the frame is sent to the display sending interface for display, the drawn first type scene image needs to be stretched to a size consistent with that of the display screen. Specifically, before display, the first type scene image stored in the frame buffer FB4 is transferred to the frame buffer FB2 originally used for storing the second type scene image. Because the size of FB2 is larger than that of FB4, when the scene image in FB4 is transferred to FB2, the display sending interface interpolates the first type scene image stored in FB4 and enlarges it to the size corresponding to FB2, i.e., to the size of the display screen of the electronic device. Finally, the enlarged first type scene image in FB2 is sent for display, so that one frame of first type scene image with the same size as the screen is displayed on the display screen.
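The interpolation step can be illustrated with nearest-neighbour upscaling of a tiny pixel grid; the actual display sending interface may use a different interpolation method, so this is only a stand-in.

```python
def stretch(image, out_w, out_h):
    # Enlarge a small FB4-sized image (list of rows) to the FB2/screen size.
    # Nearest-neighbour sampling stands in for the interface's interpolation.
    in_h, in_w = len(image), len(image[0])
    return [[image[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
            for y in range(out_h)]
```

Stretching a 2x2 image to 4x4 simply repeats each source pixel in a 2x2 block of the output.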
Referring to fig. 4a, fig. 4a is a schematic diagram of the object information obtained by the zoom-in module, where the dashed box in the figure indicates the view of the first range. For each object corresponding to the object information, the positional relationship between the object and the view of the first range can be determined according to the object information; the positional relationship includes the object being entirely inside the view, the object being entirely outside the view, and part of the object being inside the view. In fig. 4a, object 401 is located outside the view, parts of objects 402 and 403 are located within the view, and object 404 is located entirely within the view.
According to this positional relationship, when the zoom-in module calls the GPU to draw the first type scene image, the objects to be drawn are cropped. Referring to fig. 4b, when the first type scene image is rendered: if an object corresponding to the object information is entirely outside the view (e.g., object 401), the object is not rendered; if an object is partly outside and partly inside the view (e.g., objects 402 and 403), the part inside the view is rendered and the part outside the view is not; and if an object is entirely inside the view (e.g., object 404), the object is rendered completely according to the object information. Finally, as shown in fig. 4b, the parts of objects 402 and 403 inside the first range, together with object 404, constitute the game scene of the first range, and fig. 4b is equivalent to one frame of the rendered first type scene image. The dashed box in fig. 4b represents the size of the display screen of the electronic device.
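The three positional relationships can be sketched with a rectangle test; treating each object as an axis-aligned bounding rectangle is a simplification of the real object information, introduced here only for illustration.

```python
def classify(obj, view):
    # obj and view are axis-aligned rectangles (x0, y0, x1, y1); bounding
    # rectangles are a hypothetical simplification of the object information.
    ox0, oy0, ox1, oy1 = obj
    vx0, vy0, vx1, vy1 = view
    if ox1 <= vx0 or ox0 >= vx1 or oy1 <= vy0 or oy0 >= vy1:
        return "outside"   # like object 401: not rendered
    if vx0 <= ox0 and ox1 <= vx1 and vy0 <= oy0 and oy1 <= vy1:
        return "inside"    # like object 404: rendered completely
    return "partial"       # like objects 402/403: clipped to the view
```

Only objects classified as "inside" or "partial" contribute anything to FB4.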
Finally, the display interface stretches the first-class scene image shown in fig. 4b to obtain the stretched first-class scene image shown in fig. 4c, the stretched first-class scene image is consistent with the size of the display screen of the electronic device, and then the display interface displays the stretched first-class scene image on the display screen of the electronic device.
In order to avoid obvious jaggies appearing along the contour edges of objects in the enlarged image, the MultiSampling Anti-Aliasing (MSAA) function of the electronic device may be turned on before the first type scene image is drawn, so as to optimize the drawing process and make the contour edges of objects in the drawn first type scene image smoother.
The method for generating the scene image has the following beneficial effects:
When a game object is in a motion state, a large number of new objects rapidly appear in the game scene around the game object. In this case, when one frame of game image is drawn, and in particular when one frame of scene image is drawn, the projections of these new objects need to be drawn in the scene image. Consequently, the amount of calculation required to draw one frame of scene image in the motion state is higher than in the non-motion state, and devices such as the CPU, GPU, and memory of the electronic device operate under a higher load when the game object is in the motion state than when it is in the non-motion state.
In this embodiment, by reducing the range of the scene presented in the scene image in the motion state (i.e., reducing the range from the second range to the first range), the number of objects that need to be drawn per frame of scene image in the motion state can be reduced, thereby reducing the amount of calculation required to draw one frame of scene image. For example, when drawing fig. 4a, which presents the scene of the second range, the complete objects 401 to 404 need to be drawn; by contrast, when drawing fig. 4b, which presents the scene of the first range, only the parts of objects 402 and 403 located in the first range and the complete object 404 need to be drawn. Obviously, the amount of calculation required to draw fig. 4b is less than that required to draw fig. 4a.
As the amount of calculation required to draw one frame of scene image decreases, the load on each hardware component of the electronic device in the motion state decreases correspondingly, thereby avoiding problems such as device stuttering and excessive power consumption caused by the devices of the electronic device continuously operating under high load.
Further, when the game object is in a non-motion state, the method provided by the embodiment can restore the scene image to the second type of scene image with higher picture quality, so as to bring better user experience to the user.
The method for generating a scene image according to the embodiment of the present application is further described below with reference to fig. 5, where fig. 5 is an exemplary diagram of a specific working process of each module of the system framework layer in fig. 3.
After the game starts, the game application outputs a drawing command stream for drawing the 1st frame game picture, and the interception module intercepts the drawing command stream and transmits it to the identification module.
At this time, the identification module has not yet determined the frame buffer used for storing scene drawing commands, so it transmits the drawing command stream of the 1st frame to the callback module, which calls back the implementation functions corresponding to the drawing commands in the graphics library so that the scene image and the non-scene image (for example, the UI image is a non-scene image) are drawn respectively. At the same time, the identification module identifies the frame buffer to which the scene drawing commands belong while the drawing command stream of the 1st frame is called back to the graphics library.
The graphics library draws a scene image and a non-scene image using the drawing command stream of the 1st frame and sends them to the display sending interface; the display sending interface synthesizes them into the 1st frame game image, which is then displayed on the screen. For convenience, the scene image constituting the 1st frame game image is referred to as the 1st frame scene image, and the non-scene image constituting the 1st frame game image is referred to as the 1st frame non-scene image; the same convention is used below. After the 1st frame scene image is drawn, the detection module may store, in the memory, the distance S(1) between the game object in the 1st frame scene image and the image center coordinates, and a timestamp T(1), where T(1) is the system time at which drawing of the 1st frame scene image started. The 1st frame scene image is a second type scene image.
After outputting the drawing command stream of the game picture of the 1 st frame, the game application continues to output the drawing command stream of the 2 nd frame. Note that when the game application outputs the drawing command stream of frame 2, the game screen of frame 1 may or may not have been generated and displayed.
When the drawing command stream of the frame 2 is intercepted by the intercepting module and transmitted to the identification module, the identification module identifies the scene drawing command in the drawing command stream of the frame 2 according to the frame buffer to which the scene drawing command belongs.
Then, the identification module judges that no motion command stream exists in the memory, transmits the non-scene drawing command of the 2nd frame to the callback module, and transmits the scene drawing command of the 2nd frame to the detection module.
After obtaining the scene drawing command of the 2nd frame, the detection module transmits it to the callback module. Thus, the graphics library draws the 2nd frame non-scene image using the 2nd frame non-scene drawing command and draws the 2nd frame scene image using the 2nd frame scene drawing command, and the 2nd frame game image is synthesized through the display sending interface and displayed on the screen. The 2nd frame scene image is a second type scene image.
The detection module obtains and stores the distance S (2) and the time stamp T (2) of the scene drawing command of the frame 2 in the memory in the process of calling back the scene drawing command of the frame 2, and calculates the speed of the game object by using the distance S (2) and the time stamp T (2) and the distance and the time stamp of the previous frame in the memory, namely S (1) and T (1). The scene drawing command of frame 2, the distance S (2), the time stamp T (2), and the speed of the game object may all be stored in memory.
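The speed computation can be sketched as follows. The patent does not give the exact formula, so one plausible reading is assumed: the speed is the change in the object's distance from the image centre, divided by the time between the two frames' timestamps.

```python
def game_object_speed(s_prev, t_prev, s_cur, t_cur):
    # Assumed formula: |S(n) - S(n-1)| / (T(n) - T(n-1)), where S is the
    # distance between the game object and the image centre, and T is the
    # system time at which drawing of that frame's scene image started.
    return abs(s_cur - s_prev) / (t_cur - t_prev)
```

With hypothetical values S(1) = 100, T(1) = 0 and S(2) = 160, T(2) = 2, the computed speed would be 30 distance units per time unit.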
Thereafter, the interception module intercepts the drawing command stream of each frame output by the game application in real time and sends the scene drawing command to the detection module through the identification module, triggering the detection module to obtain and store the distance, timestamp, and game-object speed for each frame's scene drawing command.
After obtaining the scene drawing command of the Nth frame, the detection module calculates that the speed of the game object in the Nth frame scene drawing command is greater than the speed threshold, and that the speeds in 5 consecutive frames of scene drawing commands including the Nth frame are all greater than the speed threshold. The detection module therefore reads the 5 frames of scene drawing commands from the memory, and extracts and stores the motion command stream based on them. The motion command stream is stored in the memory.
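The trigger condition can be sketched as a sliding-window check over the per-frame speeds stored by the detection module:

```python
def sustained_motion(speeds, threshold, window=5):
    # True when the speeds of the last `window` consecutive frames
    # (including the current frame) all exceed the speed threshold,
    # the condition used before extracting the motion command stream.
    recent = speeds[-window:]
    return len(recent) == window and all(v > threshold for v in recent)
```

Fewer than five stored speeds, or any one frame dropping to or below the threshold, prevents extraction.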
When the drawing command stream of the (N+1)th frame intercepted by the interception module reaches the identification module, the identification module judges that a motion command stream exists in the memory, and therefore sends the scene drawing command of the (N+1)th frame directly to the matching module; after obtaining it, the matching module matches the scene drawing command of the (N+1)th frame with the motion command stream.
The scene drawing command of the (N+1)th frame is successfully matched with the motion command stream, so the matching module judges that the game object in the (N+1)th frame scene drawing command is in a motion state and sends the scene drawing command of the (N+1)th frame to the zoom-in module.
After obtaining the scene drawing command of the (N+1)th frame, the zoom-in module draws a first type scene image using that command; the image drawn by the zoom-in module based on the scene drawing command of the (N+1)th frame is the (N+1)th frame scene image.
The zoom-in module transmits the drawn (N+1)th frame scene image to the display sending interface; the display sending interface then combines the (N+1)th frame scene image and the (N+1)th frame non-scene image to obtain the (N+1)th frame game image, which is displayed on the screen.
The scene drawing commands of the (N+2)th frame to the (N+K-1)th frame are all successfully matched with the motion command stream in the matching module, so they are all sent to the zoom-in module, and the zoom-in module draws the corresponding first type scene images according to the first range.
After the scene drawing command of the (N+K)th frame reaches the matching module, the matching module matches it with the motion command stream, and the matching fails. The matching module therefore sends the scene drawing command of the (N+K)th frame to the callback module, which calls it back to the graphics library so that the (N+K)th frame scene image is drawn by the graphics library; the drawn (N+K)th frame scene image belongs to the second type scene image. The (N+K)th frame scene image is combined with the (N+K)th frame non-scene image to obtain the (N+K)th frame game image, which is displayed on the screen.
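The routing decisions made across this flow can be summarized as a small sketch; the module names are taken from the text, and the function itself is only an illustration of the dispatch logic.

```python
def route_scene_drawing_command(motion_stream_in_memory, matched):
    # Where the identification/matching modules send a frame's scene drawing
    # command: to the detection module while no motion command stream has been
    # extracted yet, then to the zoom-in module on a successful match (first
    # type scene image) or to the callback module on a failed match (second
    # type scene image via the graphics library).
    if not motion_stream_in_memory:
        return "detection_module"
    return "zoom_in_module" if matched else "callback_module"
```

In the fig. 5 example, frames 2 to N take the first branch, frames N+1 to N+K-1 take the second, and frame N+K takes the third.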
It can be seen that, in the interaction flow shown in fig. 5, the 1st to Nth frame game images displayed by the electronic device after the game starts present the scene of the second range; the (N+1)th to (N+K-1)th frame game images present the scene of the first range because the game object is in the motion state; and when the (N+K)th frame game image is displayed, the game object has entered the non-motion state again, so the game image presents the scene of the second range.
After the drawing command stream of the (N+K)th frame, the game application continues to output drawing command streams frame by frame. The scene drawing command in each drawing command stream is identified by the identification module and transmitted to the matching module, and the matching module sends the scene drawing command of the frame to the zoom-in module or the callback module according to the matching result, so that a first type scene picture is drawn and displayed when the game object is in the motion state, and a second type scene picture is drawn and displayed when the game object is in the non-motion state, until the game ends.
In an optional implementation, when no motion command stream is stored in the memory, the detection module may, each time it obtains a scene drawing command for drawing a frame of scene image, determine whether the speed of the game object in the previously drawn frame of scene image is greater than the speed threshold; if so, it sends the scene drawing command to the zoom-in module so that the zoom-in module draws the frame of scene image according to the first range, and if not, it sends the scene drawing command to the callback module.
In another alternative implementation, the matching module may be omitted, and the scene drawing command of each frame is sent to the detection module. In this implementation, each time a frame of scene image is drawn by the graphics library or the zoom-in module, the detection module calculates the speed of the game object in that frame according to the foregoing method; then, when the scene drawing command of the next frame is transmitted to the detection module, the detection module sends it to the callback module or the zoom-in module according to whether the speed of the game object in the most recently drawn frame of scene image is greater than the speed threshold.
Optionally, the detection module may be further configured to detect a stop motion command sequence and a start motion command sequence. Specifically, for the scene drawing command of a certain frame of scene image, after the motion command stream is recognized, the sequence of commands (or command indexes) from the first command after the end of the motion command stream up to the next draw call may be determined as the stop motion command sequence.
In addition, for the scene drawing command of a certain frame of scene image, after the motion command stream is recognized, the sequence of commands (or command indexes) from the command preceding the motion command stream back to the previous draw call may be determined as the start motion command sequence.
For example, for the scene drawing command of a certain frame of scene image, the detection module first determines that commands k to k+n are the motion command stream. The detection module may then read onward from command k+n+1 until the first draw call is read, and determine the commands read as the stop motion command sequence. In addition, the detection module may read back from command k-1 toward earlier commands until the first draw call is read, and determine these commands as the start motion command sequence.
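The two scans can be sketched as below. Whether the bounding draw call itself is included in each sequence is an assumption here (it is included), as is the predicate used to recognize a draw call.

```python
def stop_and_start_sequences(commands, k, n, is_draw_call):
    # commands[k .. k+n] is the recognised motion command stream.
    # Stop sequence: scan onward from k+n+1 up to the next draw call.
    stop = []
    for i in range(k + n + 1, len(commands)):
        stop.append(commands[i])
        if is_draw_call(commands[i]):
            break
    # Start sequence: scan back from k-1 to the previous draw call.
    start = []
    for i in range(k - 1, -1, -1):
        start.append(commands[i])
        if is_draw_call(commands[i]):
            break
    start.reverse()          # restore original command order
    return start, stop
```

With a hypothetical command list where "dc" marks draw calls, the stream's neighbouring commands are split into the two sequences.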
The motion command stream in the memory 331 can be obtained in any of the following ways:
First, according to the method of the foregoing embodiment, the detection module obtains the motion command stream by comparing the scene drawing commands of multiple frames in which the game object is in the motion state, and writes the motion command stream into the memory 331.
Second, after the game starts to run, the CPU reads the motion command stream corresponding to the currently running game from the storage into the memory. The storage may be a magnetic disk inherent to the electronic device, or an external storage device accessed through a USB interface (or another interface) of the electronic device, such as a USB flash drive or a Secure Digital Memory Card (SD card).
Thirdly, after the game starts to run, the electronic device downloads the motion command stream from a designated server through the network and writes the motion command stream into the memory 331.
The motion command stream stored in the storage in the second mode may be obtained in advance by the manufacturer of the electronic device testing the game in the first mode, and then issued to the corresponding electronic devices through a system update or software update. In addition, if the game has been run on the electronic device before the current run, the CPU may have written the motion command stream obtained in the first mode from the memory to the storage during the previous runs of the game.
The motion command stream stored in the server in the third mode may be obtained by the manufacturer of the electronic device through testing in advance and stored in the server. In addition, electronic devices that have run the game may upload the motion command streams they obtained in the first mode to the server for download by other electronic devices.
When the game object enters the motion state from the non-motion state, the scene presented by the game image is suddenly reduced from the second range to the first range, which may be perceived by the user and thus degrade the visual experience. To solve this problem, in some optional embodiments of the present application, the method for generating a scene image may gradually adjust the range of the generated scene image in small steps through the following smooth reduction method, until the first type scene image is finally generated.
Specifically, when the zoom-in module obtains the scene drawing command of the Kth frame, it determines whether the range of the scene presented by the already-drawn previous frame (i.e., the (K-1)th frame) scene image is greater than the first range; if not, the Kth frame scene image is drawn according to the first range using the scene drawing command of the Kth frame, obtaining a first type scene image. K is any positive integer.
If the range of the scene presented by the (K-1)th frame scene image is larger than the first range, the range presented by the (K-1)th frame is adjusted downward by a small amplitude. If the adjusted range is smaller than (or equal to) the first range, the range presented by the Kth frame is set to the first range; if the adjusted range is larger than the first range, the range presented by the Kth frame is set to the adjusted range. Then, the Kth frame scene image is drawn according to the set range using the scene drawing command of the Kth frame; the range of the scene presented by a Kth frame scene image drawn in this way is larger than the first range and smaller than the second range.
The smaller amplitude may be dynamically adjustable or may be a fixed step size (step), for example, step may be set to 1% of the second range. The step length may be a preset parameter in the electronic device, or may be adjusted according to a setting of a user.
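For the fixed-step case, one downward step of the smooth reduction can be sketched as follows; ranges are expressed as integer percentages of the second range to keep the arithmetic exact.

```python
def next_range_pct(current_pct, first_pct=90, step_pct=1):
    # One step of the smooth reduction: shrink the presented range by the
    # step size, but never below the first range. Values are percentages
    # of the second range (Z = 90%, step = 1% as in the example).
    return max(current_pct - step_pct, first_pct)

schedule = []
pct = 100                     # the Kth frame presents the full second range
for _ in range(12):
    pct = next_range_pct(pct)
    schedule.append(pct)
```

Starting from the second range, ten steps reach the first range, after which the range stays pinned at 90%.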
Referring to fig. 6, fig. 6 is a schematic diagram of the smooth reduction method provided in this embodiment. Taking step as 1% of the second range and the set ratio Z as 90% (that is, the first range is 90% of the second range) as an example, the execution process of the smooth reduction method may be as follows:
Suppose that the rendered Kth frame scene image is a second type scene image (as shown in 601), and the game object in this scene image is in the non-motion state.
The matching module (or the detection module) obtains the scene drawing command of the (K + 1) th frame, judges that the game object in the scene image of the (K + 1) th frame is in a motion state, and then transmits the scene drawing command of the (K + 1) th frame to the zooming-in module.
The zoom-in module reads the range presented by the Kth frame scene image, i.e., the second range. Since the second range is greater than the first range, the zoom-in module adjusts downward by 1% on the basis of the second range to obtain 99% of the second range; the adjusted range (i.e., 99% of the second range) is still greater than the first range, so the zoom-in module draws the (K+1)th frame scene image according to 99% of the second range (as shown in 602).
Then, the zoom-in module obtains a scene drawing command of the K +2 th frame, and continues to adjust 1% downward on the basis of the range presented by the scene image of the K +1 th frame, to obtain 98% of the second range, which is still larger than the first range, and then the zoom-in module draws the scene image of the K +2 th frame according to 98% of the second range (as shown in 603).
By analogy, the range of each frame of scene image is smaller than that of the previous frame by 1% of the second range, and finally the zoom-in module draws the (K+10)th frame scene image according to 90% of the second range, i.e., draws the scene image presenting the scene of the first range (as shown in 604).
Each scene image after the (K+10)th frame is drawn according to the first range, since the range presented by its previous frame is not greater than the first range, until the game object enters the non-motion state again.
In the above process, the method for drawing the scene images of 99% of the second range, 98% of the second range, and so on can refer to the method for generating the first type scene image, with the size of the new frame buffer and the range of the view adjusted according to the corresponding proportion. For example, to draw a scene image of 97% of the second range, the size of the newly created frame buffer may be set to 97% of the frame buffer originally storing the second type scene image, and the view is correspondingly set to 97% of the second range.
The above method is equivalent to, when the game object enters the motion state from the non-motion state, sequentially generating multiple frames of scene images whose range gradually decreases from the second range toward the first range, and then continuously generating first type scene images once the range has been reduced to the first range.
In some variations of the above method, the range of the scene presented by the scene images generated before the first type scene image may shrink every few frames instead of every frame. For example, in the above embodiment, after the Kth frame scene image is drawn, scene images whose range is 99% of the second range are drawn for several consecutive frames, then scene images whose range is 98% of the second range are drawn for several consecutive frames, and so on, until the required first type scene image is finally drawn.
Fig. 6 is merely an example for illustrating the manner in which the range of the game scene is smoothly narrowed down in the present application, and in an actual application scene, the game scene displayed in each frame of scene image may be different, for example, during the process of gradually changing the scene image from 601 to 604 in fig. 6, the position of the cart in the game scene may gradually change, thereby showing the scene that the cart travels along the road.
By the method, the range of the scene presented in the game image can be smoothly narrowed down on the premise that the user does not perceive, and better use experience is brought to the user.
Optionally, a method similar to the above smooth reduction method may be applied to the case where the presented range changes from the first range to the second range (i.e., the game object switches from the motion state to the non-motion state).
Specifically, the detection module (or the matching module) obtains the scene drawing command of a certain frame (assumed to be the Mth frame). After judging that the game object in the Mth frame scene image is in the non-motion state, it can determine whether the range of the scene presented by the already-drawn previous frame (i.e., the (M-1)th frame) scene image is smaller than the second range; if not, it sends the scene drawing command of the Mth frame to the callback module, so that the Mth frame scene image presenting the scene of the second range is drawn directly through the graphics library.
If the range of the scene presented by the (M-1)th frame scene image is smaller than the second range, the range presented by the (M-1)th frame is adjusted upward by a small amplitude. If the adjusted range is larger than (or equal to) the second range, the scene drawing command of the Mth frame is sent to the callback module, so that the Mth frame scene image presenting the scene of the second range is drawn directly through the graphics library. If the adjusted range is smaller than the second range, the scene drawing command of the Mth frame may be sent to the zoom-in module together with a range setting command carrying the adjusted range, so as to trigger the zoom-in module to draw the Mth frame scene image according to the adjusted range using the scene drawing command of the Mth frame; the range of the scene presented by the Mth frame scene image drawn in this way is the adjusted range. The amplitude of each upward adjustment may be dynamically adjusted, i.e., differ from one adjustment to the next, or be set to a fixed step size (step); for example, step may be set to 1% of the second range. The step size may be a preset parameter in the electronic device, or may be adjusted according to a setting of the user.
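For the fixed-step case, one upward step of this smooth expansion is the mirror image of the reduction step; as before, ranges are expressed as integer percentages of the second range for exact arithmetic.

```python
def next_range_up_pct(current_pct, second_pct=100, step_pct=1):
    # One step of the smooth expansion: grow the presented range by the
    # step size, but never above the second range (percent of second range).
    return min(current_pct + step_pct, second_pct)
```

Starting from the first range (90%), ten steps of 1% restore the full second range, after which the second type scene image is drawn directly by the graphics library.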
This is equivalent to sequentially generating, when the game object changes from the motion state to the non-motion state, multiple frames of scene images whose presented range gradually expands from the first range to the second range, and then continuously generating the second type of scene images once the range has expanded to the second range.
Similar to the smooth zooming-out method, this approach smoothly expands the range of the scene presented in the game image when the game object changes from the motion state to the non-motion state, without the user perceiving an abrupt change, bringing a better use experience to the user.
In other optional embodiments, the method for generating a scene image provided by the present application may further divide the motion state of the game object into multiple motion states according to different speeds, for example, a first speed interval and a second speed interval may be divided, where a lower limit of the second speed interval is higher than an upper limit of the first speed interval, the first speed interval corresponds to the first motion state, and the second speed interval corresponds to the second motion state.
In some specific application scenarios, the game object can run, and can also use (ride or drive) a vehicle in the game scene during the game. Accordingly, the speed range of the game object while running can be taken as the first speed interval, with the first motion state being the running state, and the speed range of the game object while using a vehicle can be taken as the second speed interval, with the second motion state being the vehicle-using state.
According to the division of the motion states, the detection module (and the matching module) can further distinguish whether the motion state of the game object is a first motion state or a second motion state after obtaining a scene drawing command of one frame and judging that the game object in the frame is in the motion state.
In a specific implementation, the detection module may distinguish the motion state by the speed of the game object in the previous one or several frames of drawn scene images: if the speed is within the first speed interval, the game object is determined to be in the first motion state, and if the speed is within the second speed interval, the game object is determined to be in the second motion state.
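The speed-based distinction can be sketched as follows: the speed is obtained from the distance and timestamp of two consecutive frames (distance difference divided by time difference, as described elsewhere in this application), and then mapped to a speed interval. This is a minimal Python sketch; the function names, interval bounds, and state labels are illustrative assumptions — the patent only requires that the lower limit of the second interval exceed the upper limit of the first.

```python
def speed_between(prev_dist: float, prev_ts: float,
                  cur_dist: float, cur_ts: float) -> float:
    """Speed of the game object between two frames:
    distance difference divided by timestamp difference."""
    return (cur_dist - prev_dist) / (cur_ts - prev_ts)


def classify_state(speed: float,
                   first_interval: tuple,
                   second_interval: tuple) -> str:
    """Map a speed to a motion state by speed interval."""
    lo1, hi1 = first_interval    # e.g. running speeds
    lo2, hi2 = second_interval   # e.g. vehicle speeds; lo2 > hi1
    if lo2 <= speed <= hi2:
        return "second_motion"   # vehicle-using state
    if lo1 <= speed <= hi1:
        return "first_motion"    # running state
    return "non_motion"
```

For example, with a first interval of (1, 5) and a second interval of (8, 20), a speed of 3 classifies as the first motion state and a speed of 10 as the second.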
The matching module may match the obtained scene drawing command against the motion command streams corresponding to the different motion states. If the obtained scene drawing command matches the motion command stream of the first motion state more closely, the game object is determined to be in the first motion state; if it matches the motion command stream of the second motion state more closely, the game object is determined to be in the second motion state.
The detection module may write a scene drawing command for drawing a frame of scene image into the memory as a motion command stream of the first motion state when it is determined that the speed of the game object in the frame of drawn scene image is within the first speed interval.
Similarly, the detection module may write the scene drawing command for drawing a frame of the scene image into the memory as the motion command stream in the second motion state when determining that the speed of the game object in the frame of the drawn scene image is in the second speed interval.
After the detection module (and the matching module) judges that the game object is in the first motion state or the second motion state, the detection module (and the matching module) can send the scene drawing command and the judgment result to the zoom-in module together.
If the game object is in the first motion state, the zooming-in module draws the scene image corresponding to the scene drawing command according to the first range, obtaining a first type of scene image, i.e., a scene image in which the range of the presented scene is the first range. If the game object is in the second motion state, the zooming-in module draws the scene image corresponding to the scene drawing command according to the third range, obtaining a third type of scene image, i.e., a scene image in which the range of the presented scene is the third range. The third range is smaller than the first range; for example, the first range may be 90% of the second range, and the third range may be 80% of the second range.
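The mapping from motion state to drawing range can be sketched directly from the example ratios above. This is a minimal Python sketch under the assumption that ranges are expressed as a fraction of the second (full) range; the 90%/80% ratios come from the example in the text, and the state labels are illustrative.

```python
def range_for_state(state: str, second_range: float) -> float:
    """Return the range of scene to present for a given motion state.

    Ratios follow the example above: first range = 90% and
    third range = 80% of the second (full) range.
    """
    ratios = {
        "non_motion": 1.00,     # second type: full (second) range
        "first_motion": 0.90,   # first type: first range
        "second_motion": 0.80,  # third type: third range
    }
    return ratios[state] * second_range
```

So with a second range of 100 units, a running game object is drawn at a 90-unit range and a vehicle-using one at an 80-unit range, reducing per-frame drawing work as speed increases.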
The method for drawing the third type of scene image is consistent with the method for drawing the first type of scene image, except that the frame buffer and view are created and set according to the third range; the specific process is not repeated here.
When this method is applied on the electronic device, after the game starts, if the game object is in a non-motion state, the electronic device generates the second type of scene images in real time according to the change of the scene in the game and displays the game images formed by them. If the user controls the game object to start running, the device determines that the game object is in the first motion state and reduces the scene presented to the user to the first range, i.e., generates the first type of scene images in real time and displays the game images formed by them. If the user controls the game object to use a vehicle, the device determines that the game object is in the second motion state and reduces the scene presented to the user to the third range, i.e., generates the third type of scene images in real time and displays the game images formed by them.
By implementing the method provided by the above embodiment, the following beneficial effects can be obtained:
the range of the presented scene is dynamically adjusted according to the different speeds of the game object, which better relieves the contradiction between maintaining higher-quality scene images and reducing load.
Specifically, the method for generating a scene image according to this embodiment may gradually reduce the range of the scene presented by the generated scene image as the moving speed of the game object increases, so as to reduce the amount of computation required for generating one frame of scene image, keep the load of the electronic device within an acceptable range at all times, and prevent the electronic device from becoming overloaded. In addition, the method provided by this embodiment can present a larger range of the scene when the game object is in a non-motion state or a lower-speed motion state, presenting game images of as high quality as possible to the user while avoiding excessive load on the electronic device.
Referring to fig. 7, fig. 7 is a flowchart of a method for generating a scene image according to an embodiment of the present disclosure, where the method includes the following steps:
s701, detecting the state of the game object.
If the game object is in a motion state, step S702 is executed, and if the game object is in a non-motion state, step S703 is executed.
S702, if the game object is in a motion state, generating a first type scene image.
And S703, if the game object is in a non-motion state, generating a second type scene image.
According to the interaction diagram shown in fig. 5, it can be understood that the method for generating a scene image provided in the embodiment shown in fig. 7 can run in real time after a game starts. Specifically, before each frame of game image is drawn, it can be determined whether the game object in that frame is in a motion state; if not, the scene image of the frame is drawn according to the second range to obtain a second type of scene image, and if so, the scene image of the frame is drawn according to the first range to obtain a first type of scene image.
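The per-frame flow of fig. 7 (S701 detect, then S702 or S703) can be sketched as a small dispatch function. This is a minimal Python sketch; `detect_state` and `draw` are stand-ins for the detection/matching module and for the two drawing paths (zooming-in module vs. graphics-library callback), and their signatures are assumptions for illustration.

```python
def generate_frame(scene_cmd, detect_state, draw,
                   first_range: float, second_range: float):
    """One pass of the Fig. 7 flow.

    detect_state(scene_cmd) -> bool   # S701: is the game object in motion?
    draw(scene_cmd, range)  -> image  # drawing path for the chosen range
    """
    if detect_state(scene_cmd):
        # S702: motion state -> first type of scene image (reduced range)
        return draw(scene_cmd, first_range)
    # S703: non-motion state -> second type of scene image (full range)
    return draw(scene_cmd, second_range)
```

Calling this once per intercepted frame reproduces the real-time behavior described above: the range used for drawing follows the detected state of the game object.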
Step S701 in the above embodiment may be implemented by the detection module or the matching module of the application framework layer shown in fig. 3, step S702 may be implemented by the zooming-in module shown in fig. 3, and step S703 may be implemented by the graphics library shown in fig. 3, and the specific implementation of each step above may be referred to in the foregoing, and is not described herein again.
The embodiments of the present application further provide a computer storage medium, which is used to store a computer program, and when the computer program is executed, the computer program is specifically used to implement the method for generating a scene image provided in any embodiment of the present application.
The embodiments of the present application further provide a computer program product, which includes a plurality of executable computer commands, and when the computer commands of the product are executed, the computer program product is specifically used for implementing the method for generating a scene image provided in any embodiment of the present application.

Claims (19)

1. A method of generating an image of a scene, comprising:
detecting a state of a game object;
if the game object is in a motion state, generating a first type of scene image, wherein the first type of scene image presents a game scene in a first range;
if the game object is in a non-motion state, generating a second type of scene image, wherein the second type of scene image presents a game scene in a second range; the first range is less than the second range.
2. The method of claim 1, wherein before generating the first type of scene image, the method further comprises:
sequentially generating a plurality of frames of scene images; the range of the game scenes presented by the plurality of frames of scene images is sequentially reduced from the second range to the first range in the order generated.
3. The method according to claim 1 or 2, wherein the generating of the first type of scene image comprises:
obtaining object information used for drawing a second type of scene image;
and drawing the first type of scene image by using the part of the object information for drawing the second type of scene image that is located within the first range.
4. The method of claim 1 or 2, wherein the detecting a state of a game object comprises:
obtaining a current scene drawing command; the scene drawing command is a drawing command for drawing a scene image;
matching the current scene drawing command with a motion command stream obtained in advance; the motion command stream is a scene drawing command obtained when the game object is in a motion state;
if the current scene drawing command is successfully matched with the motion command stream, detecting that the game object is in a motion state;
and if the current scene drawing command is unsuccessfully matched with the motion command stream, detecting that the game object is in a non-motion state.
5. The method of claim 1 or 2, wherein the detecting a state of a game object comprises:
obtaining a multi-frame scene drawing command within a preset time period;
determining the speed of the game object in the preset time period according to the distance between the game object and the central coordinate of the scene image in each frame of the scene drawing command;
if the speed of the game object in the preset time period is larger than a speed threshold value, detecting that the game object is in a motion state;
and if the speed of the game object in the preset time period is less than or equal to the speed threshold, detecting that the game object is in a non-motion state.
6. The method according to claim 1 or 2, wherein before generating the first type of scene image, the method further comprises:
determining a speed interval to which the speed of the game object belongs;
if the speed of the game object belongs to a first speed interval, executing the step of generating the first type of scene images;
if the speed of the game object belongs to a second speed interval, generating a third type of scene image; the third type of scene image presents a game scene in a third range; a lower limit of the second speed interval is greater than an upper limit of the first speed interval; the third range is less than the first range.
7. The method for generating the scene image according to any one of claims 1 to 6, wherein the method for generating the scene image is applied to an electronic device, and a software layer of the electronic device comprises a matching module;
the detecting the state of the game object comprises:
the matching module receives a one-frame scene drawing command;
the matching module reads a motion command stream from a memory;
the matching module matches the motion command stream with the one-frame scene drawing command;
if the motion command stream is successfully matched with the one-frame scene drawing command, the matching module determines that the game object in the one-frame scene drawing command is in a motion state;
and if the motion command stream is unsuccessfully matched with the one-frame scene drawing command, the matching module determines that the game object in the one-frame scene drawing command is in a non-motion state.
8. The method of claim 7, wherein the software layer of the electronic device further comprises a zoom-in module;
if the game object is in a motion state, generating a first type of scene image, including:
the matching module determines that the game object is in a motion state in the scene drawing command, and then the matching module sends the scene drawing command to the zooming-in module;
the zooming-in module receives the scene drawing command;
and the zooming-in module generates the first type scene image according to the one-frame scene drawing command.
9. The method of claim 8, wherein the software layer of the electronic device further comprises a display interface;
after the zooming-in module generates the first type of scene image according to the one-frame scene drawing command, the zooming-in module sends the first type of scene image to the display interface;
and the display interface displays the first type of scene image on a display screen.
10. The method of claim 8, wherein the zooming-in module generates the first type of scene image according to the one-frame scene rendering command, and comprises:
the zoom-in module creates a temporary frame buffer and a view; the size of the temporary frame buffer and the size of the view both match the first range;
the zooming-in module reads object information from the one-frame scene drawing command;
and the zooming-in module calls a graphics processor based on the object information, the temporary frame buffer and the view, so that the graphics processor draws the first type of scene image.
11. The method of claim 7, wherein the software layer of the electronic device further comprises a callback module and a graphics library;
if the game object is in a non-motion state, generating a second type of scene image, including:
the matching module determines that the game object in the one-frame scene drawing command is in a non-motion state, and then the matching module sends the one-frame scene drawing command to the callback module;
the callback module receives the one-frame scene drawing command;
the callback module calls back the one-frame scene drawing command to the graphics library;
and in response to the callback module calling back the one-frame scene drawing command, the graphics library generates the second type of scene image according to the one-frame scene drawing command.
12. The method of claim 7, wherein the software layer of the electronic device further comprises an identification module;
before the matching module receives the one-frame scene drawing command, the method further comprises:
the identification module receives a frame of drawing command stream;
after the identification module determines the scene frame buffer, the identification module determines the drawing commands, in the one frame of drawing command stream, that are stored in the scene frame buffer as the one-frame scene drawing command;
the identification module judges whether the memory stores a motion command stream;
and after the identification module judges that the memory stores the motion command stream, the identification module sends the one-frame scene drawing command to the matching module.
13. The method of claim 12, wherein the identifying module determines a scene frame buffer, comprising:
the identification module receives a previous frame drawing command stream; the previous frame of drawing command stream is another frame of drawing command stream received by the recognition module before the one frame of drawing command stream;
the identification module counts the number of drawing commands stored in each frame buffer, other than an interface frame buffer, among a plurality of frame buffers storing the previous frame of drawing command stream; the interface frame buffer is the frame buffer, among the plurality of frame buffers, used for storing user interface drawing commands; a user interface drawing command is a drawing command, in the drawing command stream, used for drawing a user interface image;
and the identification module determines, as the scene frame buffer, the frame buffer storing the largest number of drawing commands among the plurality of frame buffers other than the interface frame buffer.
14. The method of claim 12, wherein the software layer of the electronic device further comprises a detection module;
after the identification module determines whether the memory stores the motion command stream, the method further includes:
after the identification module judges that the memory does not store the motion command stream, the identification module sends the one-frame scene drawing command to the detection module;
the detection module receives the one-frame scene drawing command;
the detection module calculates the speed of the game object in the one-frame scene drawing command;
the detection module judges whether the speed of the game object in the one-frame scene drawing command is larger than a preset speed threshold value or not;
after the detection module judges that the speed of the game object in the one-frame scene drawing command is greater than the speed threshold value, the detection module determines the motion command stream according to the one-frame scene drawing command;
the detection module stores the motion command stream into the memory.
15. The method of claim 14, wherein the detection module calculates the speed of the game object in the one-frame scene drawing command, comprising:
the detection module acquires a timestamp and a distance corresponding to the one-frame scene drawing command; the distance corresponding to the one-frame scene drawing command is the distance between the game object and the image center coordinate in the one-frame scene drawing command;
the detection module reads a timestamp and a distance corresponding to a previous-frame scene drawing command from the memory; the distance corresponding to the previous-frame scene drawing command is the distance between the game object and the image center coordinate in the previous-frame scene drawing command; the previous-frame scene drawing command is another scene drawing command received by the identification module before the one-frame scene drawing command;
and the detection module divides a distance difference by a time difference to obtain the speed of the game object in the one-frame scene drawing command; the distance difference is the difference between the distance corresponding to the one-frame scene drawing command and the distance corresponding to the previous-frame scene drawing command; the time difference is the difference between the timestamp corresponding to the one-frame scene drawing command and the timestamp corresponding to the previous-frame scene drawing command.
16. The method of claim 14, wherein the detection module determines the motion command stream according to the one-frame scene drawing command, comprising:
the detection module reads the previous N frames of scene drawing commands from the memory; the speeds of the game object in the previous N frames of scene drawing commands are all greater than the speed threshold; the previous N frames of scene drawing commands are N scene drawing commands received by the identification module before the one-frame scene drawing command; N is a preset positive integer;
and the detection module determines a drawing command sequence contained in the one-frame scene drawing command and the previous N frames of scene drawing commands as the motion command stream; the drawing command sequence is a set of consecutive scene drawing commands.
17. The method of claim 12, wherein the electronic device further comprises a game application and an interception module;
before the identification module receives a frame of drawing command stream, the method further comprises:
the game application outputting the stream of one frame of drawing commands;
the intercepting module intercepts the frame of drawing command stream output by the game application;
and the intercepting module sends the frame of drawing command stream to the identification module.
18. An electronic device, wherein a hardware layer of the electronic device comprises: one or more processors, memory, and a display screen;
the memory is used for storing one or more programs;
the one or more processors are operable to execute the one or more programs to cause the electronic device to perform the following:
detecting a state of a game object;
if the game object is in a motion state, generating a first type of scene image, wherein the first type of scene image presents a game scene in a first range;
if the game object is in a non-motion state, generating a second type of scene image, wherein the second type of scene image presents a game scene in a second range; the first range is less than the second range.
19. A computer storage medium storing a computer program, which when executed is particularly adapted to implement a method of generating an image of a scene as claimed in any one of claims 1 to 17.
CN202110778014.6A 2021-07-09 2021-07-09 Method, apparatus and storage medium for generating scene image Active CN114452645B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110778014.6A CN114452645B (en) 2021-07-09 2021-07-09 Method, apparatus and storage medium for generating scene image

Publications (2)

Publication Number Publication Date
CN114452645A true CN114452645A (en) 2022-05-10
CN114452645B CN114452645B (en) 2023-08-04

Family

ID=81406106


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116095221A (en) * 2022-08-10 2023-05-09 荣耀终端有限公司 Frame rate adjusting method in game and related device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2995703B1 (en) * 1998-10-08 1999-12-27 コナミ株式会社 Image creation device, display scene switching method in image creation device, readable recording medium storing display scene switching program in image creation device, and video game device
CN101878056A (en) * 2007-11-30 2010-11-03 科乐美数码娱乐株式会社 Game program, game device and game control method
CN109499061A (en) * 2018-11-19 2019-03-22 网易(杭州)网络有限公司 Method of adjustment, device, mobile terminal and the storage medium of scene of game picture
CN109603152A (en) * 2018-12-14 2019-04-12 北京智明星通科技股份有限公司 A kind of scene of game image processing method, device and terminal
CN109675310A (en) * 2018-12-19 2019-04-26 网易(杭州)网络有限公司 The method and device of virtual lens control in a kind of game
CN110930307A (en) * 2019-10-31 2020-03-27 北京视博云科技有限公司 Image processing method and device
CN111228801A (en) * 2020-01-07 2020-06-05 网易(杭州)网络有限公司 Rendering method and device of game scene, storage medium and processor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陆兴华 et al.: "Adaptive tracking scene rendering technology based on the Android system", Computer & Network *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant