CN116700941A - Image rendering method and device, electronic equipment and storage medium - Google Patents

Image rendering method and device, electronic equipment and storage medium

Info

Publication number
CN116700941A
Authority
CN
China
Prior art keywords
rendering
resource
image
target
resources
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210171584.3A
Other languages
Chinese (zh)
Inventor
魏知晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210171584.3A priority Critical patent/CN116700941A/en
Publication of CN116700941A publication Critical patent/CN116700941A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources to service a request
    • G06F 9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5016 Allocation of resources to service a request, the resource being the memory
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F 9/526 Mutual exclusion algorithms
    • G06F 9/54 Interprogram communication
    • G06F 9/544 Buffers; Shared memory; Pipes
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Generation (AREA)

Abstract

The application provides an image rendering method, an image rendering device, electronic equipment and a storage medium, which are applied to various scenes such as cloud technology, artificial intelligence, intelligent transportation, internet of vehicles and the like. The method comprises the following steps: responding to a first rendering request of a current frame image in an image sequence, and acquiring a target cache resource corresponding to the current frame image based on the position information of the current frame image; extracting current rendering resources required for rendering the current frame image from the target cache resources based on preset length information and preset displacement information; storing the current rendering resources into a local cache, and taking the rendering resources existing in the local cache and the current rendering resources as candidate rendering resources; obtaining target candidate rendering resources from the candidate rendering resources; and rendering the image data of the target image to be rendered based on the target candidate rendering resources to obtain a rendering result of the target image to be rendered. The embodiment of the application can reduce memory occupation in the image rendering process.

Description

Image rendering method and device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of computers, and particularly relates to an image rendering method, an image rendering device, electronic equipment and a storage medium.
Background
In Opengles-based rendering programs, it is often necessary to use a buffer resource (Uniform buffer) as a rendering parameter that is passed to the Graphics Processing Unit (GPU) by the Central Processing Unit (CPU). Opengles is a 3D graphics rendering application programming interface (Application Programming Interface, API) standard widely used on mobile computing devices, which defines and specifies how 3D graphics are drawn on a device.
However, in a large 3D mobile game scene, the complexity of the scene requires many rendering parameters and entities, which leads to the use of a large number of scattered Uniform buffers. Creating so many scattered Uniform buffers causes the Opengles driver to maintain a correspondingly large number of structures, producing considerable memory occupation.
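For illustration only, the scattered-buffer pattern described above might look like the following C++ sketch, in which every parameter block gets its own Uniform buffer object, so the number of driver-side structures grows with scene complexity. The struct layout, names and usage hint are assumptions, not code from the application.

```cpp
// Hypothetical sketch of the scattered Uniform-buffer pattern: one small buffer per
// parameter block, so the Opengles driver must track thousands of buffer objects.
#include <GLES3/gl3.h>
#include <vector>

struct MaterialParams { float color[4]; float roughness; float pad[3]; };  // assumed layout

std::vector<GLuint> CreateScatteredUniformBuffers(const std::vector<MaterialParams>& blocks) {
    std::vector<GLuint> ubos(blocks.size());
    glGenBuffers(static_cast<GLsizei>(ubos.size()), ubos.data());
    for (size_t i = 0; i < blocks.size(); ++i) {
        glBindBuffer(GL_UNIFORM_BUFFER, ubos[i]);
        // Each glBufferData call allocates a separate driver-side structure for this buffer.
        glBufferData(GL_UNIFORM_BUFFER, sizeof(MaterialParams), &blocks[i], GL_DYNAMIC_DRAW);
    }
    return ubos;
}
```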
Disclosure of Invention
In order to solve the technical problems, the application provides an image rendering method, an image rendering device, electronic equipment and a storage medium.
In one aspect, the present application provides an image rendering method, including:
responding to a first rendering request of a current frame image in an image sequence, and acquiring a target cache resource corresponding to the current frame image based on the position information of the current frame image; the position information characterizes a frame number of the current frame image in the image sequence;
Extracting current rendering resources required for rendering the current frame image from the target cache resources based on preset length information and preset displacement information;
storing the current rendering resources into a local cache, and taking the rendering resources existing in the local cache and the current rendering resources as candidate rendering resources;
obtaining target candidate rendering resources from the candidate rendering resources;
rendering the image data of the target image to be rendered based on the target candidate rendering resource to obtain a rendering result of the target image to be rendered; the target image to be rendered is an image corresponding to the target candidate rendering resource in the image sequence, and the target image to be rendered comprises the current frame image.
In another aspect, an embodiment of the present application provides an image rendering apparatus, including:
the response module is used for responding to a first rendering request of a current frame image in an image sequence, and acquiring a target cache resource corresponding to the current frame image based on the position information of the current frame image; the position information characterizes a frame number of the current frame image in the image sequence;
The current rendering resource acquisition module is used for extracting current rendering resources required by rendering the current frame image from the target cache resources based on preset length information and preset displacement information;
the candidate rendering resource determining module is used for storing the current rendering resource into a local cache, and taking the rendering resource existing in the local cache and the current rendering resource as candidate rendering resources;
a target candidate rendering resource acquisition module, configured to acquire a target candidate rendering resource from the candidate rendering resources;
the rendering module is used for rendering the image data of the target image to be rendered based on the target candidate rendering resource to obtain a rendering result of the target image to be rendered; the target image to be rendered is an image corresponding to the target candidate rendering resource in the image sequence, and the target image to be rendered comprises the current frame image.
In another aspect, the present application provides an electronic device for image rendering, where the electronic device includes a processor and a memory, and at least one instruction or at least one program is stored in the memory, where the at least one instruction or at least one program is loaded and executed by the processor to implement an image rendering method as described above.
In another aspect, the present application provides a computer readable storage medium having stored therein at least one instruction or at least one program loaded and executed by a processor to implement an image rendering method as described above.
In another aspect, the application proposes a computer program product comprising a computer program which, when executed by a processor, implements an image rendering method as described above.
According to the image rendering method, device, electronic equipment and storage medium, a first rendering request of a current frame image in an image sequence is responded to, a target cache resource corresponding to the current frame image is obtained based on the position information of the current frame image, the current rendering resources required for rendering the current frame image are extracted from the target cache resource based on preset length information and preset displacement information, the current rendering resources are stored in a local cache, the rendering resources existing in the local cache and the current rendering resources are used as candidate rendering resources, the target candidate rendering resources are obtained from the candidate rendering resources, and the image data of the target image to be rendered is rendered based on the target candidate rendering resources to obtain a rendering result of the target image to be rendered. In this way, the current rendering resources corresponding to the current frame image are obtained from a pre-allocated, combined target cache resource by maintaining the preset length information and the preset displacement information of each small Uniform buffer on the combined Uniform buffer, so that the number of structures maintained by the Opengles driver for rendering the current frame image is reduced, and memory occupation is further reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the application or in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the application, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic view of an implementation environment of an image rendering method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating an image rendering method according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a method of acquiring a target cache resource corresponding to a current frame image according to an exemplary embodiment.
FIG. 4 is a flowchart illustrating one method of generating a first target cache resource and a second target cache resource according to an example embodiment.
FIG. 5 is a flowchart illustrating one method for extracting current rendering resources needed to render a current frame image from a target cache resource, according to one example embodiment.
FIG. 6 is a flowchart illustrating a method for obtaining a first sub-rendering resource from a first target cache resource corresponding to each of a plurality of first rendering requests, according to an example embodiment.
FIG. 7 is a schematic diagram illustrating a method for obtaining first sub-rendering resources corresponding to each of a plurality of first rendering requests from the first target cache resource according to an exemplary embodiment.
FIG. 8 is a schematic diagram illustrating the acquisition of sub-rendering resources and the rendering of image data according to the sub-rendering resources, according to an example embodiment.
FIG. 9 is a flow diagram illustrating a method for obtaining a target candidate rendering asset from the candidate rendering assets described above, according to an example embodiment.
Fig. 10 is a diagram illustrating an image rendering apparatus according to an exemplary embodiment.
Fig. 11 is a hardware configuration block diagram of a server for image rendering according to an embodiment of the present application.
Detailed Description
Cloud technology (Cloud technology) refers to a hosting technology for integrating hardware, software, network and other series resources in a wide area network or a local area network to realize calculation, storage, processing and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology and the like based on the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Background services of technical network systems, such as video websites, picture websites and other portals, require large amounts of computing and storage resources. With the rapid development and application of the internet industry, each item may have its own identification mark in the future, which needs to be transmitted to a background system for logical processing; data of different levels will be processed separately, and all kinds of industry data require strong back-end system support, which can only be achieved through cloud computing. Specifically, cloud technology includes technical fields such as security, big data, databases, industry applications, networks, storage, management tools and computing.
In particular, embodiments of the present application relate to storage technology in cloud technology.
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without making any inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed or inherent thereto.
Fig. 1 is a schematic view of an implementation environment of an image rendering method according to an exemplary embodiment. As shown in fig. 1, the implementation environment may include at least a terminal 01 and a server 02, where the terminal 01 and the server 02 may be directly or indirectly connected through a wired or wireless communication manner, and the present application is not limited herein.
Specifically, the terminal may be configured to obtain rendering resources and render the image data of an image according to the rendering resources. Alternatively, the terminal 01 may be a computer device for performing 3D graphics rendering based on Opengles, which may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart TV, a smart watch, or the like, but is not limited thereto.
Specifically, the server 02 may provide background services for the terminal. Optionally, the server 02 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms.
It should be noted that fig. 1 is only an example. In other scenarios, other implementation environments may also be included. For example, the implementation environment includes a server that may be a computer device that performs 3D graphics rendering based on Opengles.
An application scene of the embodiment of the application is to render a 3D scene in various computer devices which perform 3D graphics rendering based on Opengles. Optionally, the 3D scene may include, but is not limited to, the following: scene rendering in large 3D games, 3D animations, VR presentations.
In a specific embodiment, the application program uses Opengles as the graphics API to render complex scene models: the application program generates rendering instructions on the CPU and sends them to the GPU, where the rendering instructions are processed to draw graphics. The Uniform buffer, used as a rendering resource of a rendering instruction, is generated by the CPU and processed by the GPU. Illustratively, the Uniform buffer stores the data needed to render the image data, which may include, but is not limited to: the positions of the rendered triangles, the material of the rendered picture, rendering parameters, and so on.
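As a minimal sketch (not taken from the application), the kind of per-draw data such a Uniform buffer might carry can be pictured as the following C++ struct; the field names and the std140-style padding are illustrative assumptions.

```cpp
// Illustrative layout of data a Uniform buffer might hold for one draw:
// transform, material and miscellaneous rendering parameters (assumed fields).
struct PerDrawUniforms {
    float modelViewProjection[16];  // transform used to position the rendered triangles
    float baseColor[4];             // material tint sampled by the fragment shader
    float metallic;                 // scalar rendering parameters
    float roughness;
    float pad[2];                   // padding to keep 16-byte alignment (std140 layout)
};
```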
Technical terms used in the embodiments of the present application are described below:
GPU: a graphics processor on the computer device for rendering graphics.
CPU: a central processor on the computer device for processing the program logic.
Graphics rendering: drawing a 3D scene on a screen.
Opengles driver: a software implementation of the Opengles standard; it provides a programming interface for developers and enables the GPU device to draw 3D graphics on a screen.
Uniform buffer: a data structure used in Opengles, typically representing a piece of data used to transfer parameters from the CPU to the GPU.
Mutual exclusion lock: a mechanism that prevents access conflicts caused by multiple objects accessing the same resource simultaneously. Before accessing the resource, each object checks whether the resource is locked; if it is, the object keeps waiting; if not, the object locks the resource, accesses it, and finally releases the lock.
Asynchronous: refers to executing a program in the form of multiple threads at the same time.
Handle: refers to a unique identifier of a computer system resource.
Frame: refers to the smallest unit of time for a computer to draw a 3D animation.
Fig. 2 is a flowchart illustrating an image rendering method according to an exemplary embodiment. The method may be used in the implementation environment of fig. 1. The present specification provides the method operation steps as described in the embodiments or flowcharts, but more or fewer operation steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution. When implemented in a real system or server product, the methods illustrated in the embodiments or figures may be performed sequentially or in parallel (e.g., in a parallel processor or multithreaded environment). As shown in fig. 2, the method may include:
S101, responding to a first rendering request of a current frame image in an image sequence, and acquiring a target cache resource corresponding to the current frame image based on the position information of the current frame image; the position information characterizes a frame number of the current frame image in the image sequence.
Alternatively, the embodiment of the present application may acquire the target cache resource corresponding to the current frame image in a plurality of manners, which is not limited herein.
In one manner, FIG. 3 is a flow chart illustrating one way of acquiring a target cache resource corresponding to a current frame image, according to one exemplary embodiment. As shown in fig. 3, in the step S101, the obtaining, in response to the first rendering request of the current frame image in the image sequence, the target buffer resource corresponding to the current frame image based on the location information of the current frame image may include:
s1011, responding to the first rendering request, and identifying the position information.
S1013, under the condition that the position information represents that the current frame image is an odd frame image, acquiring a first target buffer resource corresponding to the odd frame image based on image resource mapping information, and taking the first target buffer resource as a target buffer resource of the odd frame image; the image resource mapping information characterizes the mapping relation between the image and the target cache resource.
S1015, under the condition that the position information represents that the current frame image is an even frame image, acquiring a second target buffer resource corresponding to the even frame image based on the image resource mapping information, and taking the second target buffer resource as the target buffer resource of the even frame image.
Alternatively, the above steps S101 and S1011 to S1015 may be executed by the CPU. The CPU-side code may generate a plurality of first rendering requests for creating Uniform buffers, and the CPU may obtain the target cache resource corresponding to the position information of the current frame image in response to the first rendering requests. Illustratively, the target cache resources are not scattered, but are pre-allocated, consolidated Uniform buffers.
For example, image resource mapping information characterizing a mapping relationship between an image and a target cache resource may be established in advance, for example, an odd frame image corresponds to a first target cache resource and an even frame image corresponds to a second target cache resource. In the above-described step S1011, the frame number of the current frame image in the image sequence may be acquired in response to the first rendering request, thereby identifying the position information of the current frame image in the image sequence. In the above steps S1013 to S1015, in the case where the position information indicates that the current frame image is an odd frame image (for example, the first frame image, the third frame image, the fifth frame image, etc.), the CPU may acquire a first target cache resource (hereinafter abbreviated as UBa) corresponding to the odd frame image based on the image resource mapping information established in advance, and in the case where the position information indicates that the current frame image is an even frame image (for example, the second frame image, the fourth frame image, the sixth frame image, etc.), the CPU may acquire a second target cache resource (hereinafter abbreviated as UBb) corresponding to the even frame image based on the pre-established mapping relation. According to the embodiment of the application, the Uniform buffer resources required by the application program always come from two Uniform buffers of fixed size (namely the first target cache resource and the second target cache resource), which reduces the memory overhead of Opengles in the driver layer; by using the double-buffer mechanism of the first target cache resource and the second target cache resource, the CPU and the GPU access different Uniform buffers in the same frame, so that mutual exclusion waiting on resource use is eliminated, and the operation efficiency is improved.
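For illustration, the frame-parity selection of UBa or UBb described above could be sketched in C++ as follows; the MergedUniformBuffer struct and the function name are assumptions introduced for the example, not names from the application.

```cpp
// Sketch of selecting the target cache resource by frame parity: odd frames map to UBa
// and even frames to UBb, so CPU and GPU never touch the same merged buffer in a frame.
#include <GLES3/gl3.h>

struct MergedUniformBuffer {
    GLuint handle = 0;    // Opengles buffer object holding the merged Uniform buffer
    GLintptr tail = 0;    // current tail displacement information, initialized to 0
    GLsizeiptr size = 0;  // total capacity: sum of all Uniform buffer sizes in one frame
};

MergedUniformBuffer gUBa, gUBb;  // pre-allocated and merged at application start-up

MergedUniformBuffer& SelectTargetCacheResource(long frameNumber) {
    // Position information: an odd frame number selects UBa, an even one selects UBb.
    return (frameNumber % 2 != 0) ? gUBa : gUBb;
}
```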
In another manner, in the step S101, the CPU may directly obtain a total cache resource when responding to the first rendering request, where the total cache resource includes at least the first target cache resource and the second target cache resource, that is, the first target cache resource and the second target cache resource are not independent, but are part of one overall cache resource. The CPU may acquire a target cache resource corresponding to the current frame image based on the position information of the current frame image from the total cache resources. For example, when the current frame image is determined to be an odd frame image based on the position information, a first target buffer resource corresponding to the odd frame image is determined from the total buffer resources, and when the current frame image is determined to be an even frame image based on the position information, a second target buffer resource corresponding to the even frame image is determined from the total buffer resources.
Optionally, in the embodiment of the present application, the computer device may generate the first target cache resource and the second target cache resource in a plurality of manners, which is not limited herein specifically.
In one manner, FIG. 4 is a flow chart illustrating one method of generating a first target cache resource and a second target cache resource according to an exemplary embodiment. As shown in fig. 4, the step of generating the first target cache resource and the second target cache resource may further include:
S201, acquiring a plurality of first preset rendering resources and a plurality of second preset rendering resources corresponding to a target application program; the first preset rendering resources represent rendering resources required for rendering the odd frame image, and the second preset rendering resources represent rendering resources required for rendering the even frame image.
S203, merging the plurality of first preset rendering resources to obtain the first target cache resources, and merging the plurality of second preset rendering resources to obtain the second target cache resources.
Optionally, in the step S201, in the target application initialization stage, a plurality of first preset rendering resources and a plurality of second preset rendering resources that the target application needs to use in the rendering process may be obtained in advance, where the plurality of first preset rendering resources represent rendering resources required to render the odd frame images, and the plurality of second preset rendering resources represent rendering resources required to render the even frame images. The first preset rendering resources and the second preset rendering resources may be completely the same rendering resources, partially the same rendering resources, or completely different rendering resources.
Optionally, in the step S203, the plurality of first preset rendering resources may be combined into one sufficiently large Uniform buffer to obtain the first target cache resource, and the plurality of second preset rendering resources may be combined into one sufficiently large Uniform buffer to obtain the second target cache resource.
As an example, each first preset rendering resource may be organized in a section of buffer in the memory according to different offsets, so as to form the first target buffer resource. Accordingly, each second preset rendering resource may be organized in a section of buffer in the memory according to different offsets, thereby forming the second target buffer resource.
As another example, the first target cache resource and the second target cache resource may be created using standard Opengles functions (e.g., glGenBuffers, glBufferData). The first target cache resource and the second target cache resource may each store current tail displacement information (tail) at their beginning, initialized to 0.
Here, "sufficiently large" means that the sum of the sizes of all the Uniform buffers required in one frame of image can be accommodated.
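A possible initialization of the two merged buffers, following the standard Opengles calls mentioned above and reusing the MergedUniformBuffer struct from the earlier sketch, is shown below; the capacity argument and the GL_DYNAMIC_DRAW usage hint are assumptions.

```cpp
// Hedged sketch: create UBa and UBb as two merged Uniform buffers, each large enough
// to hold the sum of all Uniform buffer sizes needed within one frame of image.
#include <GLES3/gl3.h>

void CreateMergedUniformBuffers(MergedUniformBuffer& uba, MergedUniformBuffer& ubb,
                                GLsizeiptr perFrameCapacity) {
    GLuint handles[2] = {0, 0};
    glGenBuffers(2, handles);
    MergedUniformBuffer* targets[2] = {&uba, &ubb};
    for (int i = 0; i < 2; ++i) {
        targets[i]->handle = handles[i];
        targets[i]->size = perFrameCapacity;  // "sufficiently large" per-frame capacity
        targets[i]->tail = 0;                 // tail displacement information starts at 0
        glBindBuffer(GL_UNIFORM_BUFFER, handles[i]);
        glBufferData(GL_UNIFORM_BUFFER, perFrameCapacity, nullptr, GL_DYNAMIC_DRAW);
    }
}
```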
In another manner, the computer device may obtain, in advance, in the target application initialization stage, all rendering resources that the target application needs to use in the rendering process, merge all of the rendering resources into one sufficiently large Uniform buffer, and configure two such sufficiently large Uniform buffers, where one serves as the first target cache resource and the other serves as the second target cache resource.
In a third manner, the computer device may also obtain, in advance, in the target application initialization stage, all rendering resources that the target application needs to use in the rendering process, merge all of the rendering resources into one sufficiently large Uniform buffer, and divide the sufficiently large Uniform buffer into two portions, where one portion serves as the first target cache resource and the other portion serves as the second target cache resource.
Since the Opengles driver consumes a certain amount of memory to maintain its structure every time a Uniform buffer is created, the number of Uniform buffers is proportional to the memory consumption; in a typical complex 3D scene, thousands of Uniform buffers may be used, which correspondingly generates tens of MB or more of memory consumption. Because the embodiment of the application merges the creation and use of all the Uniform buffers in one frame of image into one large Uniform buffer, the target application program globally has only two merged Uniform buffers, and the memory increase caused by the number of Uniform buffers can be greatly reduced.
S103, extracting current rendering resources required for rendering the current frame image from the target cache resources based on preset length information and preset displacement information.
Illustratively, the preset length information characterizes the length occupied by the current rendering resource in the target cache resource.
For example, since both the first target cache resource and the second target cache resource store current tail displacement information (tail) at their beginning, initialized to 0, the preset displacement information may be used to determine the interception start position of the current rendering resource; it characterizes the displacement of the current rendering resource relative to the tail of the target cache resource.
Alternatively, the embodiment of the present application may extract the current rendering resources required for rendering the current frame image from the above-mentioned target cache resources in a plurality of manners, which is not limited herein.
In one mode, the CPU may randomly extract the current rendering resource corresponding to the current frame image from the target cache resource based on the preset length information and the preset displacement information.
In another manner, FIG. 5 is a flow chart illustrating one method for extracting current rendering resources needed to render a current frame image from a target cache resource according to one exemplary embodiment. As shown in fig. 5, the number of the first rendering requests is plural, and in the step S103, extracting the current rendering resources required for rendering the current frame image from the target cache resources based on the preset length information and the preset displacement information may include:
S1031, under the condition that the current frame image is the odd frame image, acquiring first sub-rendering resources corresponding to a plurality of first rendering requests from the first target cache resources based on the preset length information and the preset displacement information.
S1033, generating the current rendering resource based on the first sub-rendering resource.
In one mode, in the step S1031, since the mapping relationship between the odd frame image and UBa is established in advance, when the current frame image is an odd frame image, the CPU may obtain the first sub-rendering resources corresponding to the first rendering requests from UBa by maintaining the displacement and section length of each small Uniform buffer on the large Uniform buffer (i.e., UBa), and, in the step S1033, use the first sub-rendering resources corresponding to the first rendering requests as the current rendering resources corresponding to the odd frame image. As an example, the first sub-rendering resource may carry the resource identification information of UBa (e.g., a resource handle), the starting position of the segment corresponding to the first sub-rendering resource in UBa, and so on.
In another manner, in the case that the current frame image is an even frame image, since the mapping relationship between the even frame image and UBb is pre-established, the CPU may obtain, from UBb, the second sub-rendering resources corresponding to the second rendering requests of the even frame by maintaining the displacement and section length of each small Uniform buffer on the large Uniform buffer (i.e., UBb), and use the second sub-rendering resources corresponding to the second rendering requests of the even frame as the current rendering resources corresponding to the even frame image. As an example, the second sub-rendering resource may carry the resource identification information of UBb (e.g., a resource handle), and the starting position of the segment corresponding to the second sub-rendering resource in UBb.
In the embodiment of the application, the current rendering resources are obtained from the pre-configured, merged UBa by maintaining the displacement and section length of each small Uniform buffer on the large Uniform buffer, so that the number of structures maintained by the Opengles driver is reduced and memory occupation is reduced; with memory occupation reduced, the processing speed of the computer device can be effectively improved, thereby improving the acquisition efficiency of the current rendering resources.
In the embodiment of the present application, for the step S1031, the first sub-rendering resources corresponding to each of the plurality of first rendering requests may be obtained from the first target cache resources in a plurality of manners, which is not specifically limited herein.
In one manner, for each first rendering request, a starting position may be determined based on preset displacement information, a cache resource of preset length information is randomly intercepted from the first target cache resource based on the starting position, and the intercepted cache resource is used as a first sub-rendering resource corresponding to each first rendering request.
In another manner, fig. 6 is a flow chart illustrating a method for obtaining a first sub-rendering resource corresponding to each of a plurality of first rendering requests from a first target cache resource according to an exemplary embodiment. As shown in fig. 6, the obtaining, from the first target cache resource, a first sub-rendering resource corresponding to each of the plurality of first rendering requests based on the preset length information and the preset displacement information may include:
S10311, taking tail displacement information of the first target cache resource as a current starting position, extracting the cache resource corresponding to the preset length information from the first target cache resource, and obtaining a first sub-rendering resource corresponding to any one of the plurality of first rendering requests.
S10313, the tail displacement information of the first sub-rendering resource obtained at present is used as the current starting position again.
S10315, taking any one of the residual rendering requests as a current rendering request; the remaining rendering requests are rendering requests for which corresponding first sub-rendering resources are not obtained from the plurality of first rendering requests.
S10317, based on the current starting position, extracting a cache resource corresponding to the preset length information from the first target cache resource to obtain a first sub-rendering resource corresponding to the current rendering request.
S10319, repeating the step of obtaining the first sub-rendering resources corresponding to the current rendering request by using the tail displacement information of the first sub-rendering resources obtained at present as the current starting position again until the first sub-rendering resources corresponding to each first rendering request are obtained.
In the embodiment of the present application, the CPU may generate rendering requests for creating Uniform buffers; for any such request, the length of the rendering resource to be created is set to l. Since UBa stores current tail displacement information (tail) at its beginning, the tail displacement information (i.e., tail) of UBa may be used as the current starting position, l is the preset length information, and a section of area is marked on UBa as the result of the rendering request; that is, a first sub-rendering resource (ubproxy_i) corresponding to any rendering request is obtained on UBa. The first sub-rendering resource corresponding to any rendering request has length ubproxy_i_l = l, displacement ubproxy_i_p = tail, and the resource identification information of the Uniform buffer in which it is located is ubproxy_i_u = UBa; since tail is initialized to 0, the value of tail is then increased by l.
Because the number of the first rendering requests is multiple, after the first sub-rendering resource corresponding to any one rendering request is obtained, the tail displacement information of the first sub-rendering resource obtained currently can be re-used as the current starting position, any one of the remaining rendering requests is used as the current rendering request, and the current rendering request is removed from the remaining rendering requests. And based on the current starting position, extracting a cache resource corresponding to the preset length information from the first target cache resource to obtain first sub-rendering resources corresponding to the current rendering request, and repeating the steps S10313-S10317 until the first sub-rendering resources corresponding to each first rendering request are obtained. The plurality of first rendering requests respectively correspond to first sub-rendering resources to jointly form a proxy object set (marked as Sn) created by the current frame image.
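The tail-based sub-allocation of steps S10311 to S10319 could be sketched as follows, reusing the MergedUniformBuffer struct from the earlier sketch; the UniformBufferProxy struct and function name are assumptions corresponding to ubproxy_i and its fields.

```cpp
// Sketch of allocating one sub-rendering resource of preset length l from the tail of
// the merged Uniform buffer; the tail displacement then advances by l for the next request.
#include <GLES3/gl3.h>

struct UniformBufferProxy {
    GLuint buffer;      // identification of the merged Uniform buffer (e.g., UBa), ubproxy_i_u
    GLintptr offset;    // displacement of this sub-buffer inside the merged buffer, ubproxy_i_p
    GLsizeiptr length;  // preset length information l, ubproxy_i_l
};

UniformBufferProxy AllocateSubBuffer(MergedUniformBuffer& ub, GLsizeiptr l) {
    UniformBufferProxy proxy{ub.handle, ub.tail, l};
    ub.tail += l;  // the next rendering request starts where this sub-buffer ends
    // A real implementation would also check ub.tail <= ub.size and align the offset to
    // GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT; omitted here for brevity.
    return proxy;
}
```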
Assume that the plurality of first rendering requests are rendering request 1, rendering request 2, rendering request 3.
And taking the tail displacement information of the first target cache resource as a current starting position, extracting the cache resource corresponding to the preset length information from the first target cache resource, and obtaining a first sub-rendering resource 1 corresponding to any rendering request (for example, rendering request 1).
And re-using the currently obtained tail displacement information of the first sub-rendering resource 1 as the current starting position. At this time, the rendering requests (i.e., the remaining rendering requests) that do not obtain the corresponding first sub-rendering resource are the rendering request 2 and the rendering request 3, and any remaining rendering request (e.g., the rendering request 2) is taken as the current rendering request. And extracting the cache resource corresponding to the preset length information from the first target cache resource based on the tail displacement information of the first sub-rendering resource 1 to obtain a first sub-rendering resource 2 corresponding to the rendering request 2.
And re-using the tail displacement information of the first sub-rendering resource 2 as the current starting position. At this time, the rendering request (i.e., the remaining rendering request) that does not obtain the corresponding first sub-rendering resource is the rendering request 3, and the rendering request 3 is taken as the current rendering request. And extracting the cache resource corresponding to the preset length information from the first target cache resource based on the tail displacement information of the first sub-rendering resource 2 to obtain a first sub-rendering resource 3 corresponding to the rendering request 3.
FIG. 7 is a schematic diagram illustrating a method for obtaining first sub-rendering resources corresponding to each of a plurality of first rendering requests from the first target cache resource according to an exemplary embodiment. As shown in fig. 7, it is assumed that the number of first rendering requests is 3, the current frame image is an nth frame image, and N is an odd number.
For the first rendering request, the tail displacement information (i.e., Tail) of UBa is taken as the current starting position, and the cache resource corresponding to the preset length information is extracted from UBa to obtain the first sub-rendering resource 1 corresponding to the first rendering request. At this point the value of Tail is incremented by l (i.e., Tail+l).
For the second rendering request, the second rendering request may be taken as the current rendering request, Tail+l may be taken as the current starting position, and the cache resource corresponding to the preset length information may be extracted from UBa to obtain the first sub-rendering resource 2 corresponding to the second rendering request. At this point the value Tail+l is again incremented by l (i.e., Tail+l+l).
For the third rendering request, the third rendering request may be taken as the current rendering request, Tail+l+l may be taken as the current starting position, and the cache resource corresponding to the preset length information may be extracted from UBa to obtain the first sub-rendering resource 3 corresponding to the third rendering request. At this point the value Tail+l+l is again incremented by l (i.e., Tail+l+l+l).
FIG. 8 is a schematic diagram illustrating the acquisition of sub-rendering resources and the rendering of image data according to the sub-rendering resources, according to an example embodiment. In an exemplary embodiment, assume that the current frame image is the Nth frame image (N is an odd number); the above description of the procedure applies to a plurality of rendering requests of the same frame image. For the N+2th frame image, which is still an odd frame, the creation procedure of the corresponding rendering resources is the same as that of the Nth frame image; however, since the rendering resources corresponding to the Nth frame image have already been generated in UBa, as shown in fig. 8, resources of the preset length information may be extracted from UBa as the rendering resources of the N+2th frame image, with the tail displacement information of the rendering resources of the Nth frame image as the starting position. The generation process of rendering resources for subsequent odd frames is similar to that of the N+2th frame image, and will not be described here again.
Continuing with the above example, assume that the number of rendering requests for the N+2th frame image is also 3. For the first rendering request of the N+2th frame image, it may be taken as the current rendering request, the current tail displacement information (i.e., Tail+l+l+l after the three requests of the Nth frame) may be taken as the current starting position, and the cache resource corresponding to the preset length information may be extracted from UBa to obtain the sub-rendering resource corresponding to the first rendering request of the N+2th frame image, and so on, until the rendering resources corresponding to each of the multiple rendering requests of the N+2th frame image are obtained.
In another exemplary embodiment, when the rendering resources of a subsequent odd frame are created, the rendering resources created in UBa for the previous odd frame image may also be deleted first, and the rendering resources of the subsequent odd frame may then be created according to the method in steps S10311 to S10319 described above.
It should be noted that the generation process of the rendering resources of an even frame image is similar to that of an odd frame image; the difference is that the rendering resources created for the even frame image come from another pre-allocated, merged Uniform buffer, namely from UBb.
In the embodiment of the application, the tail displacement information of the first target cache resource is first used as the current starting position; after the first sub-rendering resource corresponding to any rendering request is obtained, the tail displacement information of the first sub-rendering resource obtained at present is used as the current starting position again, and the first sub-rendering resources corresponding to the first rendering requests are obtained in turn based on the preset length information. In this way, the current rendering resources are obtained from the pre-allocated, merged first target cache resource by maintaining the displacement and section length of each small Uniform buffer on the merged buffer, so that the number of structures maintained by the Opengles driver is reduced and memory occupation is reduced; with memory occupation reduced, the processing speed of the computer device can be effectively improved, thereby improving the acquisition efficiency of the current rendering resources.
In a possible embodiment, in order to improve the utilization rate of the Uniform buffer, in the process of creating the first sub-rendering resource corresponding to each rendering request, sub-rendering resources that have already been created may be continuously recycled. The specific process may be as follows:
Taking the gray area A and the gray area B in fig. 7 as an example, if the gray area A and the gray area B do not need to be used in the subsequent rendering resource creation process, they may be marked as available. In the subsequent rendering resource creation process, instead of directly taking the tail as the starting position to create a new rendering resource, a region of suitable size is selected from the gray area A and the gray area B marked as available to create the rendering resource, thereby effectively improving the utilization rate of the Uniform buffer.
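One possible way (not specified in the application) to implement this recycling is a simple free-region list consulted before falling back to the tail, as in the following sketch; the names and the first-fit strategy are assumptions.

```cpp
// Hedged sketch of recycling sub-buffer regions marked as available (e.g., gray areas A and B)
// before allocating new sub-rendering resources from the tail of the merged Uniform buffer.
#include <GLES3/gl3.h>
#include <list>

struct FreeRegion { GLintptr offset; GLsizeiptr length; };
std::list<FreeRegion> gFreeRegions;  // regions previously marked as available

bool TryReuseRegion(GLsizeiptr l, GLintptr* outOffset) {
    for (auto it = gFreeRegions.begin(); it != gFreeRegions.end(); ++it) {
        if (it->length >= l) {        // first region large enough for the requested length
            *outOffset = it->offset;
            it->offset += l;          // shrink the region by the amount just allocated
            it->length -= l;
            if (it->length == 0) gFreeRegions.erase(it);
            return true;              // caller builds the sub-rendering resource at *outOffset
        }
    }
    return false;                     // no suitable region: fall back to tail allocation
}
```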
S105, storing the current rendering resources into a local cache, and taking the rendering resources existing in the local cache and the current rendering resources as candidate rendering resources.
In a specific embodiment, in step S105, after the CPU obtains the current rendering resource corresponding to the current frame image, the current rendering resource corresponding to the current frame image may be sent to the GPU, and the GPU performs local buffering.
Optionally, the existing rendering resources may be historical rendering resources corresponding to historical frame images, where the historical frame images are adjacent even frame images in the image sequence, and the acquisition time point is located before the current frame image.
In an alternative embodiment, in the step S101, the CPU may further generate, in response to the first rendering request of the current frame image in the image sequence, a rendering instruction of the current frame image, where the rendering instruction may carry image data of the current frame image. The image data may be sent to the GPU along with the current rendering resources, which are locally cached by the GPU.
S107, acquiring target candidate rendering resources from the candidate rendering resources.
S109, rendering image data of a target image to be rendered based on the target candidate rendering resource to obtain a rendering result of the target image to be rendered; the target image to be rendered is an image corresponding to the target candidate rendering resource in the image sequence, and the target image to be rendered comprises the current frame image.
In one approach, the operation of the CPU and GPU may be asynchronous, i.e., while the CPU generates the rendering instructions of the Nth frame, the GPU is processing the rendering instructions of the N-1th frame. In this case, the target candidate rendering resources may be the existing rendering resources, i.e., the historical rendering resources.
In the embodiment of the disclosure, the GPU side uses the Uniform buffer proxy set Sn created by the CPU to draw the graphics. Optionally, the embodiment of the present application may implement the process of rendering the image data of the target image to be rendered based on the target candidate rendering resources to obtain the rendering result of the target image to be rendered in a plurality of manners, which is not specifically limited herein.
In one approach, where the CPU and GPU operate asynchronously, the GPU processes the rendering instructions of the N-1th frame while the CPU generates the rendering instructions of the Nth frame. Specifically, fig. 9 is a flowchart illustrating a method for acquiring a target candidate rendering resource from the candidate rendering resources according to an exemplary embodiment. As shown in fig. 9, in step S107, acquiring the target candidate rendering resource from the candidate rendering resources may include:
S1071, acquiring a history rendering resource corresponding to a history frame image from the existing rendering resources; the history frame image is an adjacent even frame image in the image sequence whose acquisition time point is before the current frame image; the history rendering resources comprise second sub-rendering resources corresponding to a plurality of second rendering requests respectively; the second sub-rendering resources are obtained from the second target cache resource based on the preset length information and the preset displacement information, and the second target cache resource is obtained based on the position information of the history frame image in response to the plurality of second rendering requests.
S1073, taking the history frame image as the target image to be rendered, and taking the history rendering resource as the target candidate rendering resource.
Accordingly, in the step S109, the rendering the image data of the target to-be-rendered image based on the target candidate rendering resource to obtain a rendering result of the target to-be-rendered image may include:
and rendering the image data of the history frame image based on the history rendering resource to obtain a rendering result of the history frame image.
In an alternative embodiment, as shown in fig. 8, in the case where the operations of the CPU and the GPU are asynchronous, in the above steps S107 to S109, the GPU may obtain, from the local cache, the history rendering instruction and the history rendering resource of the adjacent history frame image before the current frame image, use the history rendering resource as the target candidate rendering resource, use the history frame image as the target image to be rendered, and render the image data of the history frame image according to the history rendering resource, so as to obtain the rendering result of the history frame image. Correspondingly, for the current rendering resource of the current frame image, when the CPU generates the rendering instruction and the rendering resource of the next frame image after the current frame image, the GPU processes the current rendering instruction and the current rendering resource of the current frame image, so as to obtain the rendering result of the current frame image.
It should be noted that the creation process of the second sub-rendering resources corresponding to each of the plurality of second rendering requests of the history frame image is similar to that of the first sub-rendering resources corresponding to each of the plurality of first rendering requests of the current frame image; the difference is that the rendering resources created for the history frame image come from another pre-allocated, merged Uniform buffer, that is, from UBb. For the creation process of the second sub-rendering resources corresponding to each of the plurality of second rendering requests of the history frame image, please refer to steps S10311 to S10319 above, which will not be described here again.
In the embodiment of the present application, the Uniform buffer resources generated by the CPU in the Nth frame are provided to the GPU for use in the N+1th frame, and the process loops until the game exits. In other words, in the Nth frame (N is an odd number), all the Uniform buffer creation requests generated by the CPU occur on a common block resource UBa, while at the same time the GPU accesses the common block resource UBb as it processes the Uniform buffers generated by the CPU in the N-1th frame. When N is an even frame, the CPU accesses the resource UBb and the GPU accesses UBa. That is, the embodiment of the present application uses a double-buffer mechanism of two Uniform buffers: one is accessed by the CPU in odd frames and by the GPU in even frames, and the other the reverse. Therefore, the CPU and the GPU never access the same resource, i.e., no mutual exclusion waiting for resource access occurs, so the overhead caused by locking can be effectively avoided, the time consumed in accessing the Uniform buffer is reduced, and the rendering efficiency of the program is improved.
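An illustrative per-frame loop for this ping-pong scheme, reusing the structs and the selection helper from the earlier sketches, might look as follows; RecordFrameProxies and SubmitToGpu are hypothetical placeholders for the CPU-side proxy creation and the GPU-side drawing.

```cpp
// Sketch of the double-buffer frame loop: in frame N the CPU fills one merged Uniform
// buffer while the GPU draws with the proxy set recorded in the other buffer in frame N-1.
#include <vector>

// Hypothetical helpers assumed to exist elsewhere in the application:
std::vector<UniformBufferProxy> RecordFrameProxies(MergedUniformBuffer& cpuBuffer);
void SubmitToGpu(const std::vector<UniformBufferProxy>& previousFrameProxies);

void RenderLoopIteration(long frameNumber,
                         std::vector<UniformBufferProxy>& previousFrameProxies) {
    // CPU side: sub-allocate this frame's proxies from the buffer chosen by frame parity.
    MergedUniformBuffer& cpuBuffer = SelectTargetCacheResource(frameNumber);
    std::vector<UniformBufferProxy> currentFrameProxies = RecordFrameProxies(cpuBuffer);

    // GPU side (asynchronous): consume the proxies created in frame N-1, which all live
    // in the other merged buffer, so no mutual exclusion lock is required.
    SubmitToGpu(previousFrameProxies);

    previousFrameProxies = std::move(currentFrameProxies);
}
```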
In another approach, the operation of the CPU and GPU may be synchronized or there may be a sequencing of the operation of the CPU and GPU. Under the synchronous condition, the target candidate rendering resource is the current rendering resource, the target image to be rendered is the current frame image, namely when the CPU generates the current rendering instruction and the current rendering resource of the current frame image, the CPU sends the current rendering instruction and the current rendering resource to the GPU, and the GPU processes the current rendering instruction and the current rendering resource.
In an exemplary embodiment, the rendering the image data of the history frame image based on the history rendering resource to obtain a rendering result of the history frame image may include:
and acquiring length information and displacement information of the second sub-rendering resource in the second target cache resource, and identification information of the second target cache resource.
And rendering the image data of the history frame image based on the identification information, the length information and the displacement information to obtain a rendering result of the history frame image.
In the embodiment of the present application, any element ubproxy_i (i.e., any second sub-rendering resource) in Sn stores the identification information (e.g., a resource handle) of the UBb to which the second sub-rendering resource belongs, together with the start position information, the length information and the displacement information of the second sub-rendering resource in UBb. As an example, the displacement information may refer to the tail value in UBb at which the creation of the second sub-rendering resource was completed.
In an exemplary embodiment, the performance buffer resource handle ubproxy_i_u, the displacement information ubproxy_i_p and the length information ubproxy_i_l of the performance buffer in which the second sub-rendering resource is located may be obtained. ubproxy_i_u is bound to the GPU, the displacement information ubproxy_i_p and the buffer length information ubproxy_i_l are specified to the GPU, and rendering is performed through the standard OpenGL ES function glBindBufferRange to obtain the rendering result.
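As a non-limiting illustration, the ubproxy_i element and the range binding it drives may be sketched as follows; the field and function names are assumptions, while glBindBufferRange, GL_UNIFORM_BUFFER and the related types are standard OpenGL ES 3.x API elements:

```cpp
// Illustrative shape of one ubproxy_i element and of the range binding it drives.
#include <GLES3/gl3.h>

struct UBProxy {
    GLuint ub = 0;          // identification information: handle of the performance buffer (UBb)
    GLintptr offset = 0;    // displacement information of the sub-rendering resource in the buffer
    GLsizeiptr length = 0;  // length information of the sub-rendering resource
};

// Bind the sub-range so the shader's uniform block reads exactly this slice.
void BindSubRenderingResource(const UBProxy& proxy, GLuint bindingPoint) {
    glBindBufferRange(GL_UNIFORM_BUFFER, bindingPoint, proxy.ub,
                      proxy.offset, proxy.length);
}
```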
In the embodiment of the present application, the GPU renders the image data of the history frame image based on the identification information, the length information and the displacement information; the CPU and the GPU do not access the same resource, i.e., no mutual-exclusion waiting for resource access occurs, so the overhead caused by locking can be effectively avoided, the time consumed in accessing the performance buffer is reduced, and the rendering efficiency of the program is improved.
In one possible embodiment of the image rendering method disclosed in the present application, the target cache resource, the current rendering resource, the first target cache resource, the second target cache resource, etc. may be saved on a blockchain.
Fig. 10 is a diagram illustrating an image rendering apparatus according to an exemplary embodiment. As shown in fig. 10, the apparatus may include at least:
The response module 301 may be configured to obtain, in response to a first rendering request of a current frame image in an image sequence, a target cache resource corresponding to the current frame image based on position information of the current frame image; the position information characterizes a frame number of the current frame image in the image sequence.
The current rendering resource obtaining module 303 may be configured to extract, from the target cache resources, current rendering resources required for rendering the current frame image based on preset length information and preset displacement information.
The candidate rendering resource determining module 305 may be configured to store the current rendering resource in a local cache, and use the rendering resource existing in the local cache and the current rendering resource as candidate rendering resources.
The target candidate rendering resource obtaining module 307 may be configured to obtain a target candidate rendering resource from the candidate rendering resources.
The rendering module 309 may be configured to render image data of a target to-be-rendered image based on the target candidate rendering resource, to obtain a rendering result of the target to-be-rendered image; the target image to be rendered is an image corresponding to the target candidate rendering resource in the image sequence, and the target image to be rendered comprises the current frame image.
In an exemplary embodiment, the response module may include:
and an identification unit operable to identify the position information in response to the first rendering request.
The first target cache resource obtaining unit may be configured to obtain, based on image resource mapping information, a first target cache resource corresponding to the odd frame image when the position information characterizes that the current frame image is an odd frame image, and use the first target cache resource as the target cache resource of the odd frame image; the image resource mapping information characterizes the mapping relation between the image and the target cache resource.
The second target cache resource obtaining unit may be configured to obtain, based on the image resource mapping information, a second target cache resource corresponding to the even frame image when the position information characterizes that the current frame image is an even frame image, and use the second target cache resource as the target cache resource of the even frame image.
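As a non-limiting illustration, the odd/even selection performed by the two acquisition units may be sketched as follows; the mapping layout and names are assumptions, the description only requiring that the image resource mapping information relate the frame parity to a target cache resource:

```cpp
// Minimal sketch of mapping frame parity to a pre-allocated target cache resource.
#include <cstdint>

struct CacheResource {
    uint32_t handle = 0;
};

struct ImageResourceMapping {
    CacheResource firstTarget;   // first target cache resource, for odd frame images (UBa)
    CacheResource secondTarget;  // second target cache resource, for even frame images (UBb)

    // The position information characterizes the frame number in the image sequence.
    CacheResource& TargetFor(uint64_t frameNumber) {
        return (frameNumber % 2 == 1) ? firstTarget : secondTarget;
    }
};
```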
In an exemplary embodiment, the number of the first rendering requests is a plurality, and the current rendering resource obtaining module may include:
the first sub-rendering resource obtaining unit may be configured to obtain, when the current frame image is the odd frame image, first sub-rendering resources corresponding to each of the plurality of first rendering requests from the first target cache resource based on the preset length information and the preset displacement information.
The current rendering resource generating unit may be configured to generate the current rendering resource based on the first sub-rendering resource.
In an exemplary embodiment, the first sub-rendering resource obtaining unit may include:
the first buffer resource extraction subunit may be configured to extract, from the first target buffer resource, a buffer resource corresponding to the preset length information by using the tail displacement information of the first target buffer resource as a current starting position, so as to obtain a first sub-rendering resource corresponding to any one of the plurality of first rendering requests.
The first redetermining subunit may be configured to redefine tail displacement information of the currently obtained first sub-rendering resource as the current starting position.
A removing subunit, configured to take any one of the remaining rendering requests as a current rendering request; the remaining rendering requests are rendering requests for which corresponding first sub-rendering resources are not obtained from the plurality of first rendering requests.
The second cache resource extraction subunit may be configured to extract, based on the current starting position, a cache resource corresponding to the preset length information from the first target cache resource, so as to obtain a first sub-rendering resource corresponding to the current rendering request.
And the second redetermining subunit may be configured to repeat the steps from re-using the tail displacement information of the currently obtained first sub-rendering resource as the current starting position through obtaining the first sub-rendering resource corresponding to the current rendering request, until first sub-rendering resources corresponding to all the first rendering requests are obtained.
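As a non-limiting illustration, the sub-allocation performed by the above subunits may be sketched as follows; all names are assumptions:

```cpp
// Hedged sketch: each first rendering request receives a slice of the preset
// length from the first target cache resource, starting at the current tail
// displacement, and the tail of that slice becomes the next starting position.
#include <cstddef>
#include <vector>

struct SubRenderingResource {
    size_t start = 0;   // current starting position inside the first target cache resource
    size_t length = 0;  // preset length information
};

std::vector<SubRenderingResource> AllocateFirstSubResources(size_t& tailDisplacement,
                                                            size_t presetLength,
                                                            size_t requestCount) {
    std::vector<SubRenderingResource> resources;
    resources.reserve(requestCount);
    for (size_t i = 0; i < requestCount; ++i) {
        resources.push_back({tailDisplacement, presetLength});  // extract one slice
        tailDisplacement += presetLength;  // tail of this slice is the next starting position
    }
    return resources;
}
```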
In an exemplary embodiment, the target candidate rendering resource obtaining module may include:
The history rendering resource obtaining unit may be configured to obtain a history rendering resource corresponding to a history frame image from the existing rendering resources; the history frame image is an adjacent even frame image whose acquisition time point precedes the current frame image in the image sequence; the history rendering resource includes second sub-rendering resources corresponding to a plurality of second rendering requests respectively; the second sub-rendering resources are obtained from the second target cache resource based on the preset length information and the preset displacement information, and the second target cache resource is obtained, in response to the plurality of second rendering requests, based on the position information of the history frame image.
And the candidate resource image determining unit may be configured to use the history frame image as the target image to be rendered, and use the history rendering resource as the target candidate rendering resource.
In an exemplary embodiment, the rendering module may include a rendering result determining unit, which may be configured to render the image data of the history frame image based on the history rendering resource, to obtain a rendering result of the history frame image.
In an exemplary embodiment, the rendering result determining unit may include:
The information obtaining subunit may be configured to obtain length information and displacement information of the second sub-rendering resource in the second target cache resource, and identification information of the second target cache resource.
And a rendering result determining subunit, configured to render the image data of the history frame image based on the identification information, the length information, and the displacement information, to obtain a rendering result of the history frame image.
In an exemplary embodiment, the apparatus may further include:
the preset rendering resource acquisition module can be used for acquiring a plurality of first preset rendering resources and a plurality of second preset rendering resources corresponding to the target application program; the first preset rendering resources represent rendering resources required for rendering the odd frame image, and the second preset rendering resources represent rendering resources required for rendering the even frame image.
The merging module may be configured to merge the plurality of first preset rendering resources to obtain the first target cache resource, and merge the plurality of second preset rendering resources to obtain the second target cache resource.
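As a non-limiting illustration, the merging of the preset rendering resources into one combined buffer may be sketched as follows, using standard OpenGL ES 3.x buffer calls; the surrounding structure and names are assumptions:

```cpp
// A possible reading of the pre-allocation step: the per-request preset sizes
// are summed and one combined buffer is created, once for the odd-frame
// resources (UBa) and once for the even-frame resources (UBb).
#include <GLES3/gl3.h>
#include <numeric>
#include <vector>

GLuint CreateMergedTargetCacheResource(const std::vector<GLsizeiptr>& presetSizes) {
    const GLsizeiptr total =
        std::accumulate(presetSizes.begin(), presetSizes.end(), GLsizeiptr{0});
    GLuint buffer = 0;
    glGenBuffers(1, &buffer);
    glBindBuffer(GL_UNIFORM_BUFFER, buffer);
    // One allocation large enough to hold all merged preset rendering resources.
    glBufferData(GL_UNIFORM_BUFFER, total, nullptr, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_UNIFORM_BUFFER, 0);
    return buffer;
}

// Usage sketch:
//   GLuint uba = CreateMergedTargetCacheResource(firstPresetSizes);   // odd frames
//   GLuint ubb = CreateMergedTargetCacheResource(secondPresetSizes);  // even frames
```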
It will be appreciated that the specific embodiments of the present application involve related data such as user information; when the above embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use and processing of the related data need to comply with the relevant laws, regulations and standards of the relevant countries and regions.
It should be noted that, the device embodiment provided by the embodiment of the present application and the method embodiment described above are based on the same inventive concept.
The embodiment of the application also provides an electronic device, which comprises a processor and a memory, wherein at least one instruction or at least one section of program is stored in the memory, and the at least one instruction or the at least one section of program is loaded and executed by the processor to realize the image rendering method provided by any embodiment.
Embodiments of the present application also provide a computer-readable storage medium, which may be provided in a terminal to store at least one instruction or at least one program for implementing the image rendering method in the method embodiments, where the at least one instruction or the at least one program is loaded and executed by a processor to implement the image rendering method provided by the above method embodiments.
Optionally, in the embodiment of the present specification, the storage medium may be located in at least one of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The memory of the embodiments of the present specification may be used to store software programs and modules, and the processor executes various functional applications and performs data processing by running the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required for functions, and the like, and the data storage area may store data created according to the use of the device, etc. In addition, the memory may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory may further include a memory controller to provide the processor with access to the memory.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the image rendering method provided by the above-mentioned method embodiment.
The image rendering method provided by the embodiment of the present application can be executed in a terminal, a computer terminal, a server or a similar computing device. Taking operation on a server as an example, fig. 11 is a hardware structure block diagram of a server for image rendering according to an embodiment of the present application. As shown in fig. 11, the server 400 may vary considerably in configuration or performance, and may include one or more first processors (Central Processing Unit, CPU) 410 (the first processor 410 may include, but is not limited to, a microprocessor MCU, a programmable logic device FPGA, etc.), a memory 430 for storing data, and one or more storage media 420 (e.g., one or more mass storage devices) for storing applications 423 or data 422. The memory 430 and the storage medium 420 may be transitory or persistent. The program stored in the storage medium 420 may include one or more modules, and each module may include a series of instruction operations on the server. Still further, the first processor 410 may be configured to communicate with the storage medium 420 and execute, on the server 400, the series of instruction operations in the storage medium 420. The server 400 may also include one or more power supplies 460, one or more wired or wireless network interfaces 450, one or more input/output interfaces 440, and/or one or more operating systems 421, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The input-output interface 440 may be used to receive or transmit data via a network. The specific example of the network described above may include a wireless network provided by a communication provider of the server 400. In one example, the input-output interface 440 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the input/output interface 440 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 11 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, the server 400 may also include more or fewer components than shown in FIG. 11, or have a different configuration than shown in FIG. 11.
It should be noted that: the sequence of the embodiments of the present application is only for description, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the device and server embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and references to the parts of the description of the method embodiments are only required.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program to instruct related hardware, and the program may be stored in a computer readable storage medium, where the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing is only illustrative of the present application and is not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included in the scope of protection of the present application.

Claims (11)

1. An image rendering method, the method comprising:
responding to a first rendering request of a current frame image in an image sequence, and acquiring a target cache resource corresponding to the current frame image based on the position information of the current frame image; the position information characterizes a frame number of the current frame image in the image sequence;
Extracting current rendering resources required for rendering the current frame image from the target cache resources based on preset length information and preset displacement information;
storing the current rendering resources into a local cache, and taking the rendering resources existing in the local cache and the current rendering resources as candidate rendering resources;
obtaining target candidate rendering resources from the candidate rendering resources;
rendering the image data of the target image to be rendered based on the target candidate rendering resource to obtain a rendering result of the target image to be rendered; the target image to be rendered is an image corresponding to the target candidate rendering resource in the image sequence, and the target image to be rendered comprises the current frame image.
2. The method according to claim 1, wherein the obtaining, in response to the first rendering request of the current frame image in the image sequence, the target buffer resource corresponding to the current frame image based on the location information of the current frame image includes:
identifying the location information in response to the first rendering request;
acquiring a first target cache resource corresponding to an odd frame image based on image resource mapping information under the condition that the position information characterizes the current frame image as the odd frame image, and taking the first target cache resource as the target cache resource of the odd frame image; the image resource mapping information characterizes the mapping relation between the image and the target cache resource;
And under the condition that the position information characterizes that the current frame image is an even frame image, acquiring a second target buffer resource corresponding to the even frame image based on the image resource mapping information, and taking the second target buffer resource as the target buffer resource of the even frame image.
3. The method according to claim 2, wherein the number of the first rendering requests is plural, and the extracting the current rendering resources required for rendering the current frame image from the target cache resources based on the preset length information and the preset displacement information includes:
acquiring first sub-rendering resources corresponding to a plurality of first rendering requests from the first target cache resources based on the preset length information and the preset displacement information under the condition that the current frame image is the odd frame image;
and generating the current rendering resource based on the first sub-rendering resource.
4. The method of claim 3, wherein the obtaining, from the first target cache resource, a first sub-rendering resource corresponding to each of the plurality of first rendering requests based on the preset length information and the preset displacement information, includes:
Taking tail displacement information of the first target cache resource as a current starting position, extracting the cache resource corresponding to the preset length information from the first target cache resource, and obtaining a first sub-rendering resource corresponding to any one of the plurality of first rendering requests;
the tail displacement information of the first sub-rendering resource obtained at present is taken as the current starting position again;
taking any one of the residual rendering requests as a current rendering request; the residual rendering request is a rendering request of which the corresponding first sub-rendering resource is not obtained in the plurality of first rendering requests;
based on the current starting position, extracting a cache resource corresponding to the preset length information from the first target cache resource to obtain a first sub-rendering resource corresponding to the current rendering request;
repeating the steps from re-using the tail displacement information of the currently obtained first sub-rendering resource as the current starting position through obtaining the first sub-rendering resource corresponding to the current rendering request, until first sub-rendering resources corresponding to all the first rendering requests are obtained.
5. A method according to claim 3, wherein said obtaining a target candidate rendering resource from said candidate rendering resources comprises:
Acquiring a history rendering resource corresponding to the history frame image from the existing rendering resource; the historical frame images are adjacent even frame images with acquisition time points positioned before the current frame image in the image sequence; the historical rendering resources comprise a plurality of second sub rendering resources corresponding to the second rendering requests respectively; the second sub-rendering resources are obtained from the second target cache resources based on the preset length information and the preset displacement information, and the second target cache resources are obtained based on the position information of the history frame images in response to the plurality of second rendering requests;
taking the historical frame image as the target image to be rendered, and taking the historical rendering resource as the target candidate rendering resource;
the rendering the image data of the target image to be rendered based on the target candidate rendering resource to obtain a rendering result of the target image to be rendered, including:
and rendering the image data of the historical frame image based on the historical rendering resource to obtain a rendering result of the historical frame image.
6. The method according to claim 5, wherein the rendering the image data of the history frame image based on the history rendering resource to obtain a rendering result of the history frame image includes:
Acquiring length information and displacement information of the second sub-rendering resource in the second target cache resource and identification information of the second target cache resource;
and rendering the image data of the history frame image based on the identification information, the length information and the displacement information to obtain a rendering result of the history frame image.
7. The method according to any one of claims 2 to 6, further comprising:
acquiring a plurality of first preset rendering resources and a plurality of second preset rendering resources corresponding to a target application program; the plurality of first preset rendering resources represent rendering resources required for rendering the odd frame images, and the plurality of second preset rendering resources represent rendering resources required for rendering the even frame images;
and merging the plurality of first preset rendering resources to obtain the first target cache resources, and merging the plurality of second preset rendering resources to obtain the second target cache resources.
8. An image rendering apparatus, the apparatus comprising:
the response module is used for responding to a first rendering request of a current frame image in an image sequence, and acquiring a target cache resource corresponding to the current frame image based on the position information of the current frame image; the position information characterizes a frame number of the current frame image in the image sequence;
The current rendering resource acquisition module is used for extracting current rendering resources required by rendering the current frame image from the target cache resources based on preset length information and preset displacement information;
the candidate rendering resource determining module is used for storing the current rendering resource into a local cache, and taking the rendering resource existing in the local cache and the current rendering resource as candidate rendering resources;
a target candidate rendering resource acquisition module, configured to acquire a target candidate rendering resource from the candidate rendering resources;
the rendering module is used for rendering the image data of the target image to be rendered based on the target candidate rendering resource to obtain a rendering result of the target image to be rendered; the target image to be rendered is an image corresponding to the target candidate rendering resource in the image sequence, and the target image to be rendered comprises the current frame image.
9. An electronic device for image rendering, characterized in that it comprises a processor and a memory in which at least one instruction or at least one program is stored, the at least one instruction or the at least one program being loaded and executed by the processor to implement the image rendering method according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored therein at least one instruction or at least one program that is loaded and executed by a processor to implement the image rendering method of any one of claims 1 to 7.
11. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the image rendering method of any one of claims 1 to 7.
CN202210171584.3A 2022-02-24 2022-02-24 Image rendering method and device, electronic equipment and storage medium Pending CN116700941A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210171584.3A CN116700941A (en) 2022-02-24 2022-02-24 Image rendering method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210171584.3A CN116700941A (en) 2022-02-24 2022-02-24 Image rendering method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116700941A true CN116700941A (en) 2023-09-05

Family

ID=87828042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210171584.3A Pending CN116700941A (en) 2022-02-24 2022-02-24 Image rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116700941A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination