CN116704075A - Image processing method, device and storage medium - Google Patents

Image processing method, device and storage medium

Info

Publication number
CN116704075A
Authority
CN
China
Prior art keywords
rendering
image
texture data
image content
coloring
Prior art date
Legal status
Pending
Application number
CN202211261351.9A
Other languages
Chinese (zh)
Inventor
陈浩
刘金晓
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202211261351.9A
Publication of CN116704075A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides an image processing method, an image processing device, and a storage medium, applied to the technical field of terminals. The method comprises the following steps: an electronic device acquires a first rendering instruction from an application program, where the first rendering instruction is used to draw first image content of an Nth frame image, and N is a positive integer; when the shading rate texture data stored in a preset storage space is determined to be available, the first image content is drawn based on the first rendering instruction and the shading rate texture data; a second rendering instruction is acquired from the application program, where the second rendering instruction is used to draw second image content of the Nth frame image, and the number of draw calls of the second image content is smaller than that of the first image content; and the second image content is drawn in units of single pixels based on the second rendering instruction and a preset shading rate. In this method, different rendering modes are adopted for different image content of the Nth frame image, which reduces the rendering overhead of the device.

Description

Image processing method, device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, and storage medium.
Background
With the development of screen technology, the screen resolution of current mobile terminals is increasingly high, the scenes rendered by games on mobile terminals are increasingly complex, and memory occupation and rendering power consumption rise accordingly. For high-resolution games, mobile terminals are prone to heating or stuttering due to limits on power consumption or processor (CPU, Central Processing Unit) capability, which affects the user experience.
Disclosure of Invention
The embodiments of the application provide an image processing method, a device, and a storage medium, which reduce the rendering overhead of the device and improve the user experience by adopting different rendering modes for different image content in an image.
In a first aspect, an embodiment of the present application provides an image processing method applied to an electronic device on which an application program runs. The method includes: acquiring a first rendering instruction from the application program, where the first rendering instruction is used to draw first image content of an Nth frame image, and N is a positive integer; when the shading rate texture data stored in a preset storage space is determined to be available, drawing the first image content based on the first rendering instruction and the shading rate texture data; acquiring a second rendering instruction from the application program, where the second rendering instruction is used to draw second image content of the Nth frame image, and the number of draw calls of the second image content is smaller than that of the first image content; and drawing the second image content in units of single pixels based on the second rendering instruction and a preset shading rate.
The shading rate texture data indicates the shading rates at which different regions of the first image content are drawn.
By way of example, the application may be a gaming application, and the first image content may be the image content of the main scene of the gaming application, i.e., the main part of the game image, including, for example, characters, buildings, ground, and mountains. The second image content may be image content other than the main scene, i.e., the auxiliary part of the game image, including, for example, lighting and shadow effects. The rendering power consumption of the first image content is greater than that of the second image content.
Alternatively, the application program may be any other application that requires image rendering, such as a home design application or a three-dimensional modeling application, which is not limited in the present application.
In this scheme, the first image content of the Nth frame image can be drawn directly using the shading rate texture data in the preset storage space. Because the shading rate texture data indicates higher or lower shading rates for different regions of the image, the rendering overhead of the device can be reduced to a certain extent. For the second image content of the Nth frame image, because its number of draw calls is much smaller than that of the first image content, rendering it in units of single pixels based on the preset shading rate achieves a more ideal rendering effect. By contrast, if the second image content were also rendered using the shading rate texture, the device would need to compute, in advance, shading rate texture data for drawing the second image content; this would add to the computational load of the device and, given that the rendering quality would not improve much, would increase rather than reduce the rendering load.
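For illustration only, the following C++ sketch shows the per-content dispatch just described. All type and function names (ShadingRateTexture, handleRenderInstruction, and so on) are hypothetical and do not come from the patent.

```cpp
#include <cstdint>
#include <optional>

// Hypothetical types standing in for the patent's concepts.
struct ShadingRateTexture {};  // shading rate texture data in the preset storage space

enum class ContentKind { MainScene, Auxiliary };  // first vs. second image content

struct RenderInstruction {
    ContentKind kind;
    uint32_t    drawCallCount;  // second image content has far fewer draw calls
};

// Stands in for the "preset storage space"; empty means no usable rate texture.
std::optional<ShadingRateTexture> g_storedRateTexture;

void drawWithRateTexture(const RenderInstruction&, const ShadingRateTexture&) {}
void drawPerPixel(const RenderInstruction&) {}  // 1x1 (single-pixel) shading path

// Decision flow for one rendering instruction of the Nth frame.
void handleRenderInstruction(const RenderInstruction& instr) {
    if (instr.kind == ContentKind::MainScene && g_storedRateTexture.has_value()) {
        // First image content: reuse the stored shading rate texture data.
        drawWithRateTexture(instr, *g_storedRateTexture);
    } else {
        // Second image content, or no usable rate texture: shade per pixel.
        drawPerPixel(instr);
    }
}
```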
With reference to the first aspect, in a possible implementation manner, the shading rate texture data is determined based on a rendering map of third image content of an Mth frame image. The number of draw calls of the third image content is greater than that of any other image content in the Mth frame image, the difference between N and M is greater than 0 and smaller than or equal to a preset threshold, and M is a positive integer.
For example, following the above example, the third image content may be the image content of the main scene of the gaming application; since the Mth frame image differs from the Nth frame image, there is a certain difference between the third image content and the first image content.
In this scheme, the Mth frame image is a frame before the Nth frame image. When rendering the first image content of the Nth frame image, the shading rate texture data determined from the rendering map of the third image content of the Mth frame image can be directly reused. Compared with rendering the first image content pixel by pixel at the preset shading rate, this reduces the computational load of the device to a certain extent and effectively reduces its rendering load.
With reference to the first aspect, in one possible implementation manner, the method further includes: acquiring a rendering map of third image content of an Mth frame image; determining the image intensity change of the rendering map of the third image content; determining, based on the image intensity change, the shading rates of different regions in the rendering map of the third image content to obtain the shading rate texture data; and storing the shading rate texture data in the preset storage space.
The image intensity change of the rendering map may include changes in the gray scale and/or brightness of the rendering map.
In this scheme, by analyzing the image intensity of the rendering map of the third image content of the Mth frame image, different shading rates are set for regions of the third image content with different image intensity changes, which guides the rendering of one or more frames after the Mth frame and effectively reduces the rendering load of the device.
With reference to the first aspect, in one possible implementation manner, the rendering map of the third image content includes a first region and a second region, and the shading rate texture data includes the shading rate of the first region and the shading rate of the second region. If the image intensity change of the first region is greater than that of the second region, the shading rate of the first region is greater than that of the second region; if it is smaller, the shading rate of the first region is smaller than that of the second region; and if the two are equal, the shading rates of the two regions are equal.
In this scheme, a higher shading rate is set for regions of the third image content with larger image intensity changes, and a lower shading rate is set for regions with smaller image intensity changes, thereby realizing a variable shading rate and reducing the rendering load of the device.
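As a sketch of this rule, the following C++ fragment maps a per-region intensity variation to a coarse shading rate tier; the 1x1/2x2/4x4 tiers mirror common VRS rates, and the threshold values are assumptions for illustration.

```cpp
#include <vector>

// Illustrative shading rate tiers: 1x1 shades every pixel (highest rate),
// 4x4 shades one sample per 4x4 block (lowest rate).
enum class ShadingRate { Rate1x1, Rate2x2, Rate4x4 };

// More image intensity variation -> higher shading rate, per the rule above.
// The thresholds 0.5 and 0.1 are illustrative assumptions.
ShadingRate rateForVariation(float variation) {
    if (variation > 0.5f) return ShadingRate::Rate1x1;
    if (variation > 0.1f) return ShadingRate::Rate2x2;
    return ShadingRate::Rate4x4;
}

// Build shading rate texture data: one entry per region of the rendering map.
std::vector<ShadingRate> buildRateTexture(const std::vector<float>& regionVariations) {
    std::vector<ShadingRate> rates;
    rates.reserve(regionVariations.size());
    for (float v : regionVariations) rates.push_back(rateForVariation(v));
    return rates;
}
```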
With reference to the first aspect, in a possible implementation manner, the electronic device is configured with an interception module and a graphics processor (GPU). Acquiring a first rendering instruction from the application program includes: the interception module intercepting the first rendering instruction. When it is determined that the shading rate texture data stored in the preset storage space is available, drawing the first image content based on the first rendering instruction and the shading rate texture data includes: when the interception module determines that the shading rate texture data is available, the interception module issuing, through a graphics library of the electronic device, a first shading instruction that uses the shading rate texture data to the GPU; and the GPU drawing the first image content based on the first shading instruction and the shading rate texture data.
In this scheme, the interception module intercepts the first rendering instruction, and on learning that the first image content needs to be rendered, determines whether available shading rate texture data exists in the preset storage space. If so, the interception module informs the GPU, through the graphics library, to draw the first image content directly using the shading rate texture data, which saves rendering overhead in the shading process and reduces the power consumption of the device.
With reference to the first aspect, in one possible implementation manner, acquiring a second rendering instruction from the application program includes: the interception module intercepting the second rendering instruction from the application program. Drawing the second image content in units of single pixels based on the second rendering instruction and a preset shading rate includes: the interception module issuing, through the graphics library, a second shading instruction that uses single-pixel shading to the GPU; and the GPU drawing the second image content in units of single pixels based on the second shading instruction and the preset shading rate.
In this scheme, the interception module intercepts the second rendering instruction, and on learning that the second image content needs to be rendered, informs the GPU through the graphics library to draw the second image content in a single-pixel shading manner. Compared with the first image content, the second image content consumes less power when rendered, so drawing it directly with single-pixel shading achieves a more ideal rendering effect; rendering the first image content in the same manner would instead increase the rendering overhead.
With reference to the first aspect, in a possible implementation manner, the electronic device is configured with a computation control module, and after the GPU finishes drawing the first image content, the method further includes: the computation control module determining whether to update the shading rate texture data; if the computation control module determines to update the shading rate texture data, issuing a computation instruction to the GPU through the graphics library; and the GPU determining, based on the computation instruction and the rendering map of the first image content, the shading rates of different regions in the first image content so as to update the shading rate texture data.
The GPU is preconfigured with a shading rate calculation model. The shading rate calculation model can analyze the frequency domain information of the rendering map of the first image content and set different shading rates for higher-frequency and lower-frequency regions of the first image content, that is, compute the shading rate texture data based on the rendering map of the first image content. The frequency domain information indicates how the image intensity of the rendering map changes: higher-frequency regions have larger image intensity changes, and lower-frequency regions have smaller ones.
In this scheme, the computation control module can update the shading rate texture data in the preset storage space in time according to a preset update policy, and the GPU can compute the latest shading rate texture data as instructed by the computation control module, so as to serve the rendering of subsequent image frames and reduce the rendering overhead of the device over consecutive image frames.
With reference to the first aspect, in one possible implementation manner, the computation control module determining whether to update the shading rate texture data includes: if the difference between N and M equals a preset threshold, the computation control module determines to update the shading rate texture data; and if the difference between N and M is smaller than the preset threshold, the computation control module does not update the shading rate texture data.
For example, the preset threshold may be 2; if the difference between N and M equals 2, the shading rate texture data needs to be updated, that is, updated every two frames. After completing the rendering of the first image content of the Nth frame image, the GPU computes new shading rate texture data based on the rendering map of the first image content of the Nth frame image. The new shading rate texture data can be used for rendering fourth image content of the (N+1)th frame and fifth image content of the (N+2)th frame, which may likewise be image content of the main scene of, for example, a gaming application.
In this scheme, the shading rate texture data is updated only when the difference between N and M equals the preset threshold; that is, the frames between the Mth frame image and the Nth frame image can directly reuse the shading rate texture data determined from the rendering map of the third image content of the Mth frame image, which further reduces the rendering overhead of the device.
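A minimal sketch of this update policy, assuming the example threshold of 2; the names are hypothetical:

```cpp
#include <cstdint>

// Preset threshold from the example above; configurable in practice.
constexpr uint32_t kUpdateThreshold = 2;

// M: frame whose rendering map produced the stored texture; N: current frame.
// Frames with 0 < N - M < threshold keep reusing the stored texture; the
// texture is recomputed exactly when the difference reaches the threshold.
bool shouldUpdateRateTexture(uint32_t n, uint32_t m) {
    return (n - m) == kUpdateThreshold;
}
```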
With reference to the first aspect, in one possible implementation manner, the method further includes: after the GPU updates the shading rate texture data, the GPU sends a notification signal to the interception module, where the notification signal indicates whether the shading rate texture data is available.
In this scheme, the interception module learns, from the notification signal sent by the GPU, whether the shading rate texture data in the preset storage space is available, so that after intercepting a rendering instruction it can send a corresponding shading instruction to the GPU, and that shading instruction can indicate whether to reuse the shading rate texture, thereby reducing the rendering overhead of the device.
In a second aspect, an embodiment of the present application provides an image processing method applied to an electronic device on which an application program runs, where the electronic device is configured with an interception module and a graphics processor (GPU). The method includes: the interception module intercepting a first rendering instruction from the application program; when the interception module determines that the shading rate texture data stored in a preset storage space is available, issuing, through a graphics library of the electronic device, a first shading instruction that uses the shading rate texture data to the GPU; and the GPU drawing the first image content based on the first shading instruction and the shading rate texture data.
With reference to the second aspect, in one possible implementation manner, if the interception module determines that no shading rate texture data is available in the preset storage space, it instructs the GPU, through the graphics library, to draw the first image content in a single-pixel shading manner. After the GPU finishes drawing the first image content, the GPU can determine the shading rate texture data corresponding to the rendering map of the first image content and store it in the preset storage space for reuse by subsequent image frames, thereby reducing the rendering overhead of the device.
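The bootstrap path of this implementation might look like the following C++ sketch; the structures and functions are placeholders, not API from the patent:

```cpp
// Placeholder types; a real implementation would hold GPU resources.
struct RenderMap {};
struct RateTexture {};

bool        g_textureAvailable = false;  // state of the preset storage space
RateTexture g_presetStorage;

RenderMap   drawFirstContentPerPixel() { return {}; }             // 1x1 fallback path
RateTexture computeFromRenderMap(const RenderMap&) { return {}; } // GPU compute pass (stub)

// No usable rate texture yet: draw per pixel, then derive and store one.
void renderWithoutRateTexture() {
    RenderMap map = drawFirstContentPerPixel();   // draw first image content per pixel
    g_presetStorage = computeFromRenderMap(map);  // shading rate texture from render map
    g_textureAvailable = true;                    // subsequent frames can now reuse it
}
```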
In a third aspect, an embodiment of the application provides an electronic device comprising a processor for invoking a computer program in memory to perform a method as described in the first or second aspect, or any implementation of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing computer instructions that, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect or the second aspect, or any implementation of the first aspect.
In a fifth aspect, embodiments of the present application provide a chip system for use in an electronic device comprising a processor and a memory, the chip system comprising one or more interface circuits and one or more processors, the interface circuits and the processors being interconnected by wires, the interface circuits being arranged to receive signals from the memory of the electronic device and to send the signals to the processor, the signals comprising computer instructions stored in the memory, which when executed by the processor cause the electronic device to perform a method as described in the first aspect or the second aspect, or any of the embodiments of the first aspect.
It should be understood that the second to fifth aspects of the present application correspond to the technical solutions of the first aspect of the present application, and the advantages obtained by each aspect and the corresponding possible embodiments are similar, and are not repeated.
Drawings
Fig. 1 is a schematic diagram of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 3A is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 3B is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 4 is a schematic diagram of software components of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 6A is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 6B is a flowchart of an image processing method according to an embodiment of the present application;
fig. 7 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
When a user uses application programs such as games, home design, and modeling on an electronic device, the electronic device can complete the image rendering according to the rendering commands issued by the application program and then display the image.
In the related art, when an application program (such as a game application) needs to display an image, it may issue rendering commands for the image to the central processing unit (Central Processing Unit, CPU) of the electronic device. The CPU may control a graphics processor (Graphics Processing Unit, GPU) of the electronic device to render the image according to the rendering commands, and the rendered data may be stored in a default frame buffer (provided framebuffer) of the electronic device. The default frame buffer may be a storage space that the electronic device configures in memory for the image currently to be displayed; it may be created prior to image rendering, and all of its content can be presented on screen. That is, when the electronic device needs to display a certain image, it can control the display device to display the image according to the display data stored for that image in the default frame buffer.
In some rendering scenes, a frame of image contains many or complex graphic elements or many or complex image effects (such as shadows, translucency, and special effects), and multiple steps are often required to complete the rendering. For example, in the rendering scene of a high-resolution game, the rendering of one frame of game image generally includes the rendering of the game main scene, whose rendering power consumption is the highest, and the rendering of other scenes. The game main scene, also called the high-load scene, is the main body of the game image and generally contains content such as characters, buildings, ground, and mountains; the other scenes are auxiliary parts of the game image and include effects such as lighting and shadows added to that content. When rendering a game image, the rendering step of the game main scene is typically executed first, then the rendering steps of the other scenes; the rendering result is drawn into the default frame buffer and finally presented on the screen. Of course, in some scenes the other scenes of the game are rendered before the main scene.
Such rendering scenes place high demands on device memory and CPU processing capability; a common mobile terminal, limited in power consumption and CPU capability, is very likely to heat up or stutter, affecting the user experience.
To address this problem, the application provides an image processing method that can be applied to electronic devices with image rendering capability, such as mobile phones and tablets. Specifically, based on variable rate shading (VRS) technology, the game main scene to be rendered by an application program is identified by analyzing the rendering instructions issued by the application program; the shading rates of different content in the scene are adjusted according to the frequency domain information of the content (or regions) in the game main scene; and the adjustment result is recorded in shading rate texture data, which can be used for rendering subsequent game main scene images. In the embodiments of the application, the shading rate texture data can be understood as a pattern in which the shading rate varies spatially in a certain form, and it can be used to determine the shading rate at which different regions of an image are rendered.
Further, the electronic device may dynamically adjust the shading rate texture data according to a preset update policy, for example updating once every two frames, or deciding whether to update according to the running state of the game application, so that subsequent game main scene images (e.g., the next frame or frames) can be rendered based on the adjusted shading rate texture data.
With the method provided by the embodiments of the application, the electronic device can determine the shading rates of different content in the scene from the image content of the game main scene, so that high-frequency content is rendered at a higher shading rate and low-frequency content at a lower shading rate, reducing the rendering load of the game main scene. By dynamically adjusting the shading rate texture data to indicate at what shading rate subsequent images should be rendered, the computational load of the device can be reduced to a certain extent, the power consumption of the device is effectively reduced, and the user experience is improved.
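The patent does not name a concrete graphics API, but as a point of reference, attachment-based variable rate shading exists in Vulkan via the VK_KHR_fragment_shading_rate extension. The sketch below shows how a per-region rate image (the analogue of the shading rate texture data) is attached there, assuming the extension is enabled:

```cpp
#include <vulkan/vulkan.h>

// Builds the info struct that binds a shading-rate image to a subpass,
// assuming VK_KHR_fragment_shading_rate is enabled on the device.
VkFragmentShadingRateAttachmentInfoKHR makeRateAttachmentInfo(
        const VkAttachmentReference2* rateAttachment) {
    VkFragmentShadingRateAttachmentInfoKHR info{};
    info.sType = VK_STRUCTURE_TYPE_FRAGMENT_SHADING_RATE_ATTACHMENT_INFO_KHR;
    info.pFragmentShadingRateAttachment = rateAttachment;
    // Each texel of the rate image controls a 16x16-pixel tile of the render target.
    info.shadingRateAttachmentTexelSize = {16u, 16u};
    return info;  // chained into a VkSubpassDescription2 via its pNext pointer
}
```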
Fig. 1 is a schematic diagram of an image processing method according to an embodiment of the present application. Taking the rendering scene of a high-resolution game as an example, the left side of Fig. 1 shows a rendering map of the main scene of the current frame, which includes sky, mountains, grass, vehicles, roads, and the like. The GPU can identify high-frequency and low-frequency content in the scene from the rendering map of the current main scene, and set a higher shading rate for the high-frequency content and a lower shading rate for the low-frequency content, thereby realizing differentiated rendering of different content in the scene, reducing the rendering load of the GPU while preserving image quality, and improving rendering efficiency.
Specifically, as shown in Fig. 1, the rendering map of the main scene of the current frame can be fed into the shading rate calculation model in the GPU to obtain the shading rate texture map (i.e., the shading rate texture data) on the right side of Fig. 1. The shading rate texture map records the shading rates of different content (or regions) in the rendering map and can be used for rendering the main scene of the next frame or next several frames. High-frequency content in the rendering map is content whose image intensity (brightness or gray scale) changes drastically, for example the vehicle in Fig. 1. Low-frequency content is content whose image intensity changes smoothly, such as the sky, mountains, grass, and road in Fig. 1. In this embodiment, the shading rate of region 4, corresponding to the vehicle, is greater than that of the other regions (e.g., regions 1, 2, and 3).
It should be understood that the high-frequency content in the rendering map is more important than the low-frequency content; by setting different shading rates for different content, the image quality of the high-frequency content is kept better than that of the low-frequency content, so that the rendering load of the GPU is reduced while the overall image quality requirement is met.
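A CPU-side approximation of such a high/low-frequency analysis is sketched below: per-tile mean gradient magnitude over a grayscale rendering map. The tile size and the gradient measure are assumptions; the patent only states that regions with drastic and smooth intensity changes are distinguished.

```cpp
#include <cstddef>
#include <cstdint>
#include <cmath>
#include <vector>

// Estimate per-tile image intensity variation of a grayscale render map from
// horizontal and vertical pixel gradients (edge tiles are approximated).
std::vector<float> tileIntensityVariation(const std::vector<uint8_t>& gray,
                                          std::size_t width, std::size_t height,
                                          std::size_t tile = 16) {
    std::vector<float> result;
    for (std::size_t ty = 0; ty < height; ty += tile) {
        for (std::size_t tx = 0; tx < width; tx += tile) {
            float sum = 0.0f;
            for (std::size_t y = ty; y < ty + tile && y + 1 < height; ++y) {
                for (std::size_t x = tx; x < tx + tile && x + 1 < width; ++x) {
                    float gx = float(gray[y * width + x + 1]) - float(gray[y * width + x]);
                    float gy = float(gray[(y + 1) * width + x]) - float(gray[y * width + x]);
                    sum += std::sqrt(gx * gx + gy * gy);  // local intensity change
                }
            }
            result.push_back(sum / float(tile * tile));  // mean variation per tile
        }
    }
    return result;  // feed into a rate mapping like the one sketched earlier
}
```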
The image processing method provided by the embodiment of the application can be applied to electronic equipment, wherein the electronic equipment can be mobile phones, tablet computers, desktop computers, laptop computers, handheld computers, notebook computers, ultra-mobile personal computers (UMPC), netbooks, cellular phones, personal digital assistants (personal digital assistant, PDA), artificial intelligence (artificial intelligence, AI) equipment, wearable equipment, vehicle-mounted equipment, intelligent household equipment, smart city equipment and/or other electronic equipment. The embodiment of the application does not limit the specific form of the device.
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 2, the electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, a universal serial bus (universal serial bus, USB) interface 230, a charge management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, keys 290, a motor 291, an indicator 292, a camera 293, a display 294, a subscriber identity module (subscriber identification module, SIM) card interface 295, and the like. The sensor module 280 may include, among other things, a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
It is to be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic apparatus 200. In other embodiments, the electronic device 200 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units. For example, the processor 210 may include a central processing unit (Central Processing Unit, CPU), an application processor (application processor, AP), a modem processor, a GPU, an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (neural-network processing unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors 210. As an example, in the present application the ISP may process an image, including, for example, auto exposure (Automatic Exposure), auto focus (Auto Focus), auto white balance (Automatic White Balance), denoising, backlight compensation, color enhancement, and the like. The processes of auto exposure, auto focus, and auto white balance may also be referred to as 3A processing. After processing, the ISP can output the corresponding image; this process may also be referred to as the image output operation of the ISP.
In some embodiments, processor 210 may include one or more interfaces. The interfaces may include an inter-integrated circuit (inter-integrated circuit, I2C) interface, an inter-integrated circuit sound (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The electronic device 200 may implement a photographing function through an ISP, a camera 293, a video codec, a GPU, a display 294, an application processor, and the like.
The ISP is used to process the data fed back by the camera 293. For example, when photographing, the shutter is opened, light is transmitted to the photosensitive element of the camera 293 through the lens, the optical signal is converted into an electrical signal, and the photosensitive element of the camera 293 transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to the naked eye. ISP can also perform algorithm optimization on noise, brightness and the like of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 293.
The camera 293 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the electronic device 200 may include 1 or N cameras 293, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 200 is selecting a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 200 may support one or more video codecs. In this way, the electronic device 200 may play or record video in a variety of encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the electronic device 200 may be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The charge management module 240 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 240 may receive a charging input of a wired charger through the USB interface 230. In some wireless charging embodiments, the charge management module 240 may receive wireless charging input through a wireless charging coil of the electronic device 200. The charging management module 240 may also power the electronic device 200 through the power management module 241 while charging the battery 242.
The power management module 241 is used for connecting the battery 242, and the charge management module 240 and the processor 210. The power management module 241 receives input from the battery 242 and/or the charge management module 240 and provides power to the processor 210, the internal memory 221, the external memory, the display 294, the camera 293, the wireless communication module 260, and the like. The power management module 241 may also be configured to monitor battery 242 capacity, number of battery 242 cycles, battery 242 health (leakage, impedance), and other parameters. In other embodiments, the power management module 241 may also be disposed in the processor 210. In other embodiments, the power management module 241 and the charge management module 240 may be disposed in the same device.
The wireless communication function of the electronic device 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor 210, the baseband processor 210, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 200 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 250 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied on the electronic device 200. The mobile communication module 250 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 250 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 250 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be provided in the same device as at least some of the modules of the processor 210.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to speaker 270A, receiver 270B, etc.), or displays images or video through display screen 294. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 250 or other functional module, independent of the processor 210.
The wireless communication module 260 may provide solutions for wireless communication applied on the electronic device 200, including wireless local area networks (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (IR), etc. The wireless communication module 260 may be one or more devices integrating at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, frequency modulate and amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 of electronic device 200 is coupled with the mobile communication module 250 and antenna 2 with the wireless communication module 260, so that the electronic device 200 may communicate with networks and other devices via wireless communication techniques. The wireless communication techniques may include the global system for mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include the global positioning system (global positioning system, GPS), the global navigation satellite system (global navigation satellite system, GLONASS), the BeiDou navigation satellite system (beidou navigation satellite system, BDS), the quasi-zenith satellite system (quasi-zenith satellite system, QZSS), and/or satellite based augmentation systems (satellite based augmentation systems, SBAS).
The electronic device 200 implements display functions through a GPU, a display screen 294, and an application processor 210, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 294 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The display 294 is used to display images, videos, and the like. The display 294 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light-emitting diodes, QLED), or the like. In some embodiments, the electronic device 200 may include 1 or N display screens 294, N being a positive integer greater than 1.
The external memory interface 220 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 200. The external memory card communicates with the processor 210 through an external memory interface 220 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
Internal memory 221 may be used to store computer executable program code that includes instructions. The processor 210 executes various functional applications of the electronic device 200 and data processing by executing instructions stored in the internal memory 221. The internal memory 221 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 200 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 200 may implement audio functions through the audio module 270, speaker 270A, receiver 270B, microphone 270C, earphone interface 270D, and application processor 210, among others, such as music playing and recording.
The audio module 270 is used to convert digital audio information into an analog audio signal output and to convert an analog audio input into a digital audio signal. The audio module 270 may also be used to encode and decode audio signals. In some embodiments, the audio module 270 may be disposed in the processor 210, or some functional modules of the audio module 270 may be disposed in the processor 210. Speaker 270A, also referred to as a "horn", is used to convert audio electrical signals into sound signals. The electronic device 200 may play music or conduct hands-free calls through the speaker 270A. Receiver 270B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 200 answers a call or a voice message, the voice can be heard by placing the receiver 270B close to the ear. Microphone 270C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call, sending a voice message, or triggering the electronic device 200 to perform certain functions through a voice assistant, the user may speak near the microphone 270C to input a sound signal. The electronic device 200 may be provided with at least one microphone 270C. In other embodiments, the electronic device 200 may be provided with two microphones 270C to implement noise reduction in addition to collecting sound signals. In other embodiments, the electronic device 200 may be provided with three, four, or more microphones 270C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on. The earphone interface 270D is used to connect wired earphones. The earphone interface 270D may be the USB interface 230, a 3.5mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a cellular telecommunications industry association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
Touch sensors, also known as "touch panels". The touch sensor may be disposed on the display screen 294, and the touch sensor and the display screen 294 form a touch screen, which is also called a "touch screen". The touch sensor is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. In some embodiments, visual output related to touch operations may be provided through the display screen 294. In other embodiments, the touch sensor may also be disposed on a surface of the electronic device 200 at a different location than the display 294.
The pressure sensor is used to sense pressure signals and can convert them into electrical signals. In some embodiments, a pressure sensor may be provided at the display 294. There are many kinds of pressure sensors, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates carrying conductive material. When a force is applied to the pressure sensor, the capacitance between the electrodes changes, and the electronic device 200 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 294, the electronic device 200 detects the intensity of the touch operation through the pressure sensor. The electronic device 200 may also calculate the location of the touch based on the detection signal of the pressure sensor. In some embodiments, touch operations acting on the same touch location but with different intensities may correspond to different operation instructions. For example, when a touch operation with an intensity smaller than a first pressure threshold acts on the messaging application icon, an instruction to view a message is executed; when a touch operation with an intensity greater than or equal to the first pressure threshold acts on the messaging application icon, an instruction to create a new message is executed. The gyroscope sensor may be used to determine the motion posture of the electronic device 200. The acceleration sensor may detect the magnitude of acceleration of the electronic device 200 in various directions (typically three axes). The distance sensor is used to measure distance; the electronic device 200 may measure distance by infrared or laser. Using the proximity light sensor, the electronic device 200 can detect that the user is holding it close to the ear during a call, and automatically turn off the screen to save power. The ambient light sensor is used to sense ambient light brightness. The fingerprint sensor is used to collect fingerprints. The temperature sensor is used to detect temperature; in some embodiments, the electronic device 200 executes a temperature processing strategy using the temperature detected by the temperature sensor. The audio module 270 may parse a voice signal from the vibration signal of the vocal-part vibrating bone obtained by the bone conduction sensor, so as to implement a voice function. The application processor may parse heart rate information from the blood pressure pulse signal acquired by the bone conduction sensor, so as to implement a heart rate detection function.
Keys 290 include a power key, volume keys, and the like. The motor 291 may generate a vibration alert. The indicator 292 may be an indicator light and may be used to indicate the charging state, a change in battery level, a message, a missed call, a notification, and the like. The SIM card interface 295 is used to connect a SIM card. The electronic device 200 may support 1 or N SIM card interfaces 295, N being a positive integer greater than 1. The SIM card interface 295 may support Nano SIM cards, Micro SIM cards, and the like. The same SIM card interface 295 may be used to insert multiple cards simultaneously. The SIM card interface 295 may also be compatible with different types of SIM cards, and with external memory cards. The electronic device 200 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 200 employs an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the electronic device 200 and cannot be separated from it.
The image processing method provided by the embodiment of the application can be applied to the electronic equipment shown in the figure 2.
It should be noted that fig. 2 and the description thereof are only examples of an application carrier of the solution provided by the embodiments of the present application. The composition of fig. 2 is not to be construed as limiting the protocol described in embodiments of the application. In other embodiments, the electronic device may have more or fewer components than those shown in FIG. 2.
For convenience of explanation, the following embodiments describe the scheme of the present application by taking a game application in the electronic device as an example. It should be appreciated that the principle is similar when the application is another application that needs to display a large number of complex images, such as a modeling application; these are not listed here one by one.
Fig. 3A is a schematic flow chart of an image processing method according to an embodiment of the present application. The method provided in this embodiment is applied to an electronic device, which may include a game application, a CPU, a GPU, and an internal memory (hereinafter referred to as memory). In Fig. 3A, the scheme is described by taking the rendering of a game main scene as an example.
Step 301, the game application sends a rendering instruction stream 1 of a first frame to the CPU.
After the game application is started, it issues the rendering instruction stream 1 of the first frame of the game scene. The rendering instruction stream 1 comprises a plurality of rendering instructions, including rendering instructions for the game main scene of the first frame and rendering instructions for other scenes of the first frame.
In a mobile terminal, the rendering instruction stream issued by the game application is typically an OpenGLES or Vulkan instruction stream.
Step 302, the CPU obtains the rendering information of the first frame and identifies the game main scene of the first frame.
The CPU receives the rendering instruction stream 1 of the first frame and acquires the rendering information of the first frame from it. The rendering information includes, for example, the number of draw calls (Drawcall), the number of vertices, and the resolution. The rendering information of the first frame includes the rendering information of the game main scene of the first frame and that of the other scenes of the first frame.
The CPU may identify the main scene of the first frame based on the rendering information of the first frame, that is, determine a rendering instruction for rendering the main scene of the first frame in the rendering instruction stream 1, and may also determine rendering instructions for rendering other scenes of the first frame.
In one possible implementation, the CPU may identify the main scene of the first frame based on the number of Drawcalls in the rendering information. Specifically, the rendering scene with the largest number of Drawcalls is determined to be the game main scene.
In a possible embodiment, the CPU may also identify the main scene of the first frame in combination with the resolution in the rendering information, that is, based on both the number of Drawcalls and the resolution information. For example, whether a rendering scene is the game main scene may be determined in combination with whether its resolution satisfies a preset condition, namely whether the resolution of the current scene is approximately equal to the screen resolution of the electronic device. Since the resolution of the main scene is approximately equal to the screen resolution of the electronic device, the current scene can be identified as the main scene with the help of this preset condition.
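A minimal sketch of this identification heuristic follows; the 10% resolution tolerance is an assumption standing in for "approximately equal":

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

struct SceneInfo {
    uint32_t drawCalls;      // number of Drawcalls observed for this rendering scene
    uint32_t width, height;  // resolution of the scene's render target
};

int identifyMainScene(const std::vector<SceneInfo>& scenes,
                      uint32_t screenW, uint32_t screenH) {
    int best = -1;
    uint32_t bestCalls = 0;
    for (std::size_t i = 0; i < scenes.size(); ++i) {
        const SceneInfo& s = scenes[i];
        // "Approximately equal" resolution, modeled here as within 10% of the screen.
        bool resolutionMatches =
            std::abs(int(s.width) - int(screenW)) <= int(screenW / 10) &&
            std::abs(int(s.height) - int(screenH)) <= int(screenH / 10);
        if (resolutionMatches && s.drawCalls > bestCalls) {
            bestCalls = s.drawCalls;
            best = int(i);
        }
    }
    return best;  // index of the identified game main scene, or -1 if none matches
}
```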
Step 303, the CPU stores the game master scene information of the first frame in the memory.
The game master scene information of the first frame, that is, the rendering information of the game main scene of the first frame, includes vertex data, color attachment information, depth/stencil attachment information, resolution information, and the like used for rendering the game main scene of the first frame.
It should be noted that, in general, the game master scene information of the first frame is stored in the memory until the user exits or closes the game, or the game master scene information changes greatly. A major change in the game master scene may be understood as a major change in key information in the game master scene information, such as a major change in the default frame buffer ID or the resolution. When the game master scene information changes greatly, the CPU no longer executes the scheme of the present application, and the game scene is rendered with the traditional scheme (such as single-pixel shading). In the embodiment of the present application, single-pixel shading is a shading operation with a granularity of one pixel, that is, shading performed in units of a single pixel; for example, when the game main scene of a certain frame is rendered, image rendering is performed on that main scene in units of a single pixel based on a preset shading rate.
Step 304, the CPU instructs the GPU to create variable rate shading (VRS) resources.
After acquiring the rendering information of the game main scene of the first frame, the CPU instructs the GPU to initialize the VRS resources according to that rendering information, that is, instructs the GPU to create the VRS resources. Creating the VRS resources includes: creating a shading rate texture, creating a compute pipeline, initializing a shading rate calculation model, and creating a signal synchronization mechanism.
Here, creating the shading rate texture means creating a block of storage space in the memory for recording shading rates. The shading rate calculation model is a data processing model running in the compute pipeline and is used for calculating the shading rates of different areas of a rendered image based on the high- and low-frequency information of that image. The signal synchronization mechanism refers to signal synchronization between the GPU and the CPU, including synchronization of whether the shading rate texture data in the memory is available.
It should be noted that the compute pipeline created in this embodiment is a newly added pipeline in the GPU; the above-mentioned shading rate calculation model is deployed in this compute pipeline, and its calculation process is relatively independent, so the original game rendering process is not affected. One possible realization of the signal synchronization mechanism mentioned above is sketched below.
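The application names the signal synchronization mechanism without specifying its form. One plausible realization, sketched below under that assumption, is a Vulkan fence signaled by the compute submission that the CPU polls without blocking; the VrsSync structure and the function names are illustrative.

```c
#include <vulkan/vulkan.h>

/* Hypothetical bookkeeping for the availability state of the shading
 * rate texture data. */
typedef struct {
    VkFence compute_done;       /* signaled when the compute pass ends */
    int     texture_available;
} VrsSync;

/* Submit the shading-rate compute work with the fence attached. */
static VkResult submit_vrs_compute(VkDevice device, VkQueue queue,
                                   VkCommandBuffer cmd, VrsSync *sync) {
    vkResetFences(device, 1, &sync->compute_done);
    VkSubmitInfo si = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = 1,
        .pCommandBuffers = &cmd,
    };
    return vkQueueSubmit(queue, 1, &si, sync->compute_done);
}

/* CPU side, before rendering the next frame's main scene: poll the
 * fence without blocking, so the render thread is never stalled. */
static void poll_vrs_available(VkDevice device, VrsSync *sync) {
    if (vkGetFenceStatus(device, sync->compute_done) == VK_SUCCESS)
        sync->texture_available = 1;
}
```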
Steps 301 to 304 show the initialization (start-up) phase of the game rendering. In the initialization stage, the CPU identifies the rendering information of the main scene of the game by acquiring the rendering instruction stream issued by the game application, creates VRS resources based on the rendering information of the main scene of the game, and provides support for the execution of the subsequent rendering scheme.
Step 305, the CPU determines whether shading rate texture data is available.
Alternatively, the CPU may determine whether the shading rate texture data is available through the state of the shading rate texture data, which includes both available and unavailable. Typically, the state of the shading rate texture data is unavailable until the first frame of the game master scene is rendered.
In step 306, if it is determined that the shading rate texture data is not available, the CPU calls back rendering instruction stream 1 to the GPU.
Callback rendering instruction stream 1 may be understood as the CPU sending the original rendering instruction stream 1 to the GPU.
Step 307, the GPU renders the game main scene of the first frame in a single-pixel shading manner.
It should be appreciated that at the initial stage of game rendering, there is no available shading rate texture data, so for rendering of the primary scene of the first frame, image rendering may be performed in a pixel-by-pixel shading manner based on a preset shading rate, to obtain a rendering map of the primary scene of the first frame.
Step 308, the GPU stores the rendering map of the game main scene of the first frame to the memory.
After the rendering operation of the game main scene of the first frame is completed, other scenes of the first frame are rendered on the basis of the rendering graph of the game main scene of the first frame.
Step 309, the CPU determines whether to update the shading rate texture data.
Step 310, if the CPU determines to update the shading rate texture data, a calculation instruction is sent to the GPU.
After the rendering of the game main scene of the first frame is completed, the initial shading rate texture data may be calculated; see step 311. After the main scene of each frame is rendered, whether the shading rate texture data needs to be updated can be determined according to a preset update policy; see the descriptions of step 321 and step 322 below.
Step 311, the GPU responds to the calculation instruction to calculate the shading rate texture data based on the rendering map of the game main scene of the first frame.
Specifically, the GPU takes the rendering map of the game main scene of the first frame as the input of the shading rate calculation model in the GPU. The shading rate calculation model first extracts the high-frequency information and the low-frequency information of the rendering map, and then calculates the shading rates of different contents (or areas) in the rendering map according to that high- and low-frequency information, thereby obtaining the shading rate texture data.
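The internal structure of the shading rate calculation model is not disclosed. Purely as an illustration of the idea, the C sketch below classifies each tile of the rendering map by its luminance variance, a crude stand-in for high/low-frequency analysis, and encodes the result in the texel layout defined by Vulkan's VK_KHR_fragment_shading_rate attachment; the tile size and the variance threshold are assumptions.

```c
#include <stdint.h>

/* Encode a WxH coarse rate in the texel layout used by Vulkan's
 * VK_KHR_fragment_shading_rate attachment: (log2(W) << 2) | log2(H). */
static uint8_t encode_rate(int log2_w, int log2_h) {
    return (uint8_t)((log2_w << 2) | log2_h);
}

/* Illustrative stand-in for the (undisclosed) model: classify one
 * TILE x TILE region of the main-scene rendering map by its luminance
 * variance. High variance suggests high-frequency detail, so per-pixel
 * shading is kept; low variance tolerates 2x2 coarse shading. */
#define TILE 16
static uint8_t tile_rate(const float *luma, int stride, int x0, int y0) {
    float mean = 0.0f, var = 0.0f;
    for (int y = 0; y < TILE; y++)
        for (int x = 0; x < TILE; x++)
            mean += luma[(y0 + y) * stride + (x0 + x)];
    mean /= (float)(TILE * TILE);
    for (int y = 0; y < TILE; y++)
        for (int x = 0; x < TILE; x++) {
            float d = luma[(y0 + y) * stride + (x0 + x)] - mean;
            var += d * d;
        }
    var /= (float)(TILE * TILE);
    return (var > 0.01f) ? encode_rate(0, 0)   /* 1x1: keep detail  */
                         : encode_rate(1, 1);  /* 2x2: coarse shade */
}
```

Under this encoding, a texel value of 0 requests 1×1 (per-pixel) shading and 5 requests 2×2 coarse shading; the actual model in the application may use an entirely different analysis.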
Step 312, the GPU stores the calculated shading rate texture data to the memory. The shading rate texture data may be used for rendering of the game master scene for a subsequent frame (next frame or frames).
Step 313, the GPU sends the available signal of the shading rate texture data to the CPU.
After the calculation of the shading rate texture data is completed, the GPU sends an available signal of the shading rate texture data to the CPU, so that the CPU knows, when the rendering instruction stream of the next frame arrives, that the shading rate texture data in the memory can be enabled for image rendering.
Steps 305 to 313 illustrate the rendering of the first frame's game main scene after the initialization phase; it is rendered with single-pixel shading because no shading rate texture data is available yet. After the rendering map of the first frame's game main scene is obtained, the GPU calculates the shading rate texture data of the first frame's game main scene through the preset shading rate calculation model, for use in rendering the game main scenes of subsequent frames.
On the basis of the above-described embodiment, a description will be given below of a rendering process of a game main scene of a second frame of a game application with reference to fig. 3B. Fig. 3B is a schematic flow chart of an image processing method according to an embodiment of the present application. As shown in fig. 3B, the image processing method further includes:
step 314, the game application sends a rendering instruction stream 2 of the second frame to the CPU.
Similar to the first frame of rendering instruction stream 1, the second frame of rendering instruction stream 2 likewise includes a plurality of rendering instructions including rendering instructions for the second frame of the game master scene and rendering instructions for other scenes of the second frame.
Step 315, the CPU acquires rendering information of the second frame, and identifies a game main scene of the second frame.
The CPU receives the rendering instruction stream 2 of the second frame, and acquires the rendering information of the second frame from the rendering instruction stream 2. The rendering information of the second frame includes rendering information of a game main scene of the second frame and rendering information of other scenes of the second frame.
The CPU may identify the game master scene of the second frame based on the rendering information of the second frame, i.e., determine rendering instructions for rendering the game master scene of the second frame and rendering instructions for rendering other scenes of the second frame in the rendering instruction stream 2. The manner of identifying the main scene of the second frame may refer to the first frame, and will not be described herein.
Step 316, the CPU determines to update the game master scene information.
The CPU identifies the main scene of the second frame and acquires the main scene information of the second frame, namely the rendering information of the game main scene of the second frame, which includes vertex data, color attachment information, depth/stencil attachment information, resolution information, and the like. The CPU then compares the game master scene information of the two adjacent frames. In one case, if the game master scene information does not change greatly, for example if the key information (default frame buffer ID and/or resolution) is unchanged, the stored game master scene information can be updated based on the game master scene information of the later frame. In another case, if the game master scene information changes greatly, the memory space storing the game master scene information can be released, and the traditional scheme is adopted for rendering the game scene.
Step 317, the CPU determines whether shading rate texture data is available.
If it is determined that the shading rate texture data is available, the CPU sends an instruction to the GPU to use the shading rate texture data, step 318.
Based on step 313, the CPU has recorded the shading rate texture data as available, so the shading rate texture data in the memory can be used directly to render the game main scene of the second frame.
Step 319, the GPU renders the game master scene of the second frame with the shading rate texture data in the memory.
Step 320, the GPU stores the rendering map of the game main scene of the second frame to the memory.
And after the rendering operation of the game main scene of the second frame is completed, rendering other scenes of the second frame on the basis of the rendering graph of the game main scene of the second frame.
Step 321, the CPU determines whether to update the shading rate texture data.
In step 322, if it is determined to update the shading rate texture data, the CPU sends a calculation instruction to the GPU.
It should be noted that, in this embodiment, it is assumed that the CPU updates the shading rate texture data once for each frame, so after completing the rendering operation of the game main scene of the second frame, the CPU needs to send a calculation instruction to the GPU, where the calculation instruction is used to instruct the GPU to calculate and update the shading rate texture data, and the updated shading rate texture data is used for rendering the subsequent frame.
In one possible implementation, the CPU may decide according to a preset update policy; for example, the shading rate texture data is updated every frame or every two frames.
In one possible implementation, the CPU may determine whether to update the shading rate texture data based on the running condition of the game application.
Optionally, the running condition includes the moving speed of an object in the scene. For example, whether to update the shading rate texture data is determined by the moving speed of a target object (e.g., a character, a vehicle, etc.) in the game scene. If the moving speed of the target object in the scene is greater than or equal to a preset value, the shading rate texture data is updated once per frame; if the moving speed is smaller than the preset value, the shading rate texture data may be updated every two frames (a suitable value can be set according to game test results), as in the sketch below.
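A minimal sketch of such a policy follows, assuming the threshold and the intervals come from game testing; the function names are illustrative.

```c
/* Fast-moving content refreshes the shading rate texture data every
 * frame; slow content every two frames. The threshold and intervals
 * are values to be tuned from game testing, as the text suggests. */
static int shading_rate_update_interval(float target_speed,
                                        float speed_threshold) {
    return (target_speed >= speed_threshold) ? 1 : 2;
}

/* Update when enough frames have passed since the last refresh. */
static int should_update(int frames_since_update, int interval) {
    return frames_since_update >= interval;
}
```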
Optionally, the run-time condition includes a motion vector of a screen pixel in the scene.
By setting a preset updating strategy, the dynamic updating of the shading rate texture data is realized, so that the calculated amount of the GPU is effectively reduced.
Step 323, the GPU responds to the calculation instruction to calculate the shading rate texture data based on the rendering map of the game main scene of the second frame.
Specifically, the GPU takes the rendering map of the game main scene of the second frame as the input of the shading rate calculation model in the GPU. The shading rate calculation model first extracts the high-frequency information and the low-frequency information of the rendering map, and then calculates the shading rates of different contents (or areas) in the rendering map according to that high- and low-frequency information, thereby obtaining the shading rate texture data.
Step 324, the GPU updates the shading rate texture data in the memory.
It should be noted that the GPU updating the shading rate texture data in the memory means replacing the shading rate texture data calculated from the rendering map of the game main scene of the first frame with the shading rate texture data calculated from the rendering map of the game main scene of the second frame. The updated shading rate texture data may be used for rendering the game main scene of the next frame (or next several frames).
Step 325, the GPU sends the CPU an available signal for the shading rate texture data.
Steps 321 to 325 show the update process of the shading rate texture data. After the rendering of each frame's game main scene is completed, the CPU determines, based on a preset update policy, whether the shading rate texture data needs to be updated, and the game main scenes of subsequent frames are rendered based on the shading rate texture data recorded in the memory, so as to improve the rendering efficiency of the GPU.
In the above embodiment, in the initialization stage of game rendering, the CPU, in response to the rendering instruction stream from the game application, identifies the rendering information used for rendering the game main scene in that stream, and instructs the GPU, based on the rendering information of the game main scene, to create a shading rate texture, create a compute pipeline, initialize a shading rate calculation model, and create a signal synchronization mechanism. In the execution stage of game rendering, the GPU renders the first frame's game main scene in a single-pixel shading manner to obtain its rendering map, and the shading rate calculation model can then adaptively adjust the shading rate according to the content of that rendering map to obtain the shading rate texture data corresponding to the first frame. When rendering subsequent frames, the GPU can directly use the shading rate texture data corresponding to the first frame to adjust the shading rates of different areas in the game main scene. In this scheme, the shading rates of different areas are determined from the image content, so the device rendering power consumption can be reduced and the rendering efficiency improved while certain image quality requirements are met.
It should be noted that, in the above embodiment, the first frame and the second frame of the game scene are merely taken as examples to describe the scheme, and the image processing scheme of the other frames in the game scene may refer to the second frame.
In the example shown in fig. 2, a hardware composition of an electronic device is provided. In some embodiments, the electronic device may also run an operating system through its various hardware components (e.g., the hardware components shown in FIG. 2). In the operating system, different software hierarchies may be provided, thereby implementing the operation of different programs.
Fig. 4 is a schematic diagram of software components of an electronic device according to an embodiment of the present application. As shown in fig. 4, the electronic device may include an application layer 401, a framework layer 402, a system library 403, a hardware layer 404, and the like.
The application layer 401 may also be referred to as an application layer, or an application (APP) layer. In some implementations, the application layer can include a series of application packages. The application packages may include applications such as camera, gallery, calendar, phone, map, navigation, WLAN, Bluetooth, music, video, and short message. The application packages may also include applications that need to present pictures or video to a user by rendering images. For example, the applications included in the application layer 401 may be game-type applications, home design-type applications, and the like.
Framework layer 402 may also be referred to as an application framework layer. The framework layer 402 may provide an application programming interface (application programming interface, API) and programming framework for the application programs of the application layer 401. The framework layer 402 includes some predefined functions.
By way of example, the framework layer 402 may include a window manager, a content provider, a view system, a resource manager, a notification manager, an activity manager, an input manager, and the like. The window manager provides a Window Manager Service (WMS), which may be used for window management, window animation management, surface management, and as a relay station for the input system. The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, and the like. The view system includes visual controls, such as controls for displaying text and controls for displaying pictures, and may be used to build applications. A display interface may be composed of one or more views; for example, a display interface including a short-message notification icon may include a view displaying text and a view displaying a picture. The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files. The notification manager allows an application to display notification information in the status bar; it can be used to convey notification-type messages that disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, to give message alerts, and so on. The notification manager may also present notifications in the form of a chart or scroll-bar text in the system top status bar, such as notifications of background-running applications, or notifications in the form of a dialog window on the screen; for example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, or an indicator light blinks. The activity manager may provide an Activity Manager Service (AMS), which may be used for the start-up, switching, and scheduling of system components (e.g., activities, services, content providers, broadcast receivers) and for the management and scheduling of application processes. The input manager may provide an Input Manager Service (IMS), which may be used to manage inputs to the system, such as touch-screen inputs, key inputs, and sensor inputs. The IMS retrieves events from input device nodes and distributes the events to appropriate windows through interaction with the WMS.
The system library 403 may include a plurality of functional modules, for example: a surface manager, a Media Framework, a standard C library (libc), the open graphics library for embedded systems (OpenGL for Embedded Systems, OpenGL ES), Vulkan, SQLite, Webkit, and the like.
The surface manager is used for managing the display subsystem and provides fusion of 2D and 3D layers for multiple applications. The media framework supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library may support a variety of audio and video encoding formats, such as Moving Pictures Experts Group 4 (MPEG4), H.264, Moving Picture Experts Group Audio Layer 3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR), Joint Photographic Experts Group (JPEG, also known as JPG), and Portable Network Graphics (PNG). OpenGL ES and/or Vulkan provide drawing and manipulation of 2D and 3D graphics in applications. SQLite provides a lightweight relational database for applications of the electronic device. In some implementations, OpenGL ES in the system library 403 can provide a variable shading rate function. When a variable shading rate needs to be applied to a certain drawing command (draw call), the electronic device may call the variable shading rate API in OpenGL ES and, together with other instructions, implement the variable shading rate for the current draw call. For example, the electronic device may use a lower shading rate (e.g., 2×1, 2×2, 4×2, 4×4, etc.) to shade the current draw call, thereby reducing the shading overhead of the current draw call. One possible form of such a call is sketched below.
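Core OpenGL ES does not define a variable shading rate API; on devices that ship the GL_QCOM_shading_rate vendor extension, a per-draw rate can be set roughly as in the hedged sketch below. It assumes a gl2ext.h recent enough to carry the extension tokens, and the extension must be confirmed at runtime before use.

```c
#include <EGL/egl.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>   /* assumed new enough to define the
                               GL_QCOM_shading_rate tokens */

typedef void (GL_APIENTRY *ShadingRateFn)(GLenum rate);

/* Shade one inexpensive draw call at a quarter rate, then restore
 * per-pixel shading. Whether glShadingRateQCOM exists must be checked
 * at runtime; nothing here is part of core OpenGL ES. */
static void draw_with_coarse_shading(void) {
    ShadingRateFn shading_rate =
        (ShadingRateFn)eglGetProcAddress("glShadingRateQCOM");
    if (!shading_rate)
        return;  /* extension absent: keep default per-pixel shading */

    shading_rate(GL_SHADING_RATE_2X2_PIXELS_QCOM);  /* 1 shade per 4 px */
    /* ... issue the draw call for the low-detail content here ... */
    shading_rate(GL_SHADING_RATE_1X1_PIXELS_QCOM);  /* back to per-pixel */
}
```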
In the example of fig. 4, a hardware layer 404 may also be included in the electronic device. The hardware layer 404 may include a processor (e.g., CPU, GPU, etc.), and a component having a storage function (e.g., the internal memory 221 shown in fig. 2, etc.). In some implementations, the CPU may be configured to control each module in the framework layer 402 to implement its respective function, and the GPU may be configured to perform a corresponding rendering process according to an API in a graphics library (e.g., openGL ES) called by an instruction processed by each module in the framework layer 402.
Fig. 5 is a schematic flow chart of an image processing method according to an embodiment of the present application. As shown in fig. 5, one or more functional modules may be disposed in the framework layer of the electronic device to implement the image processing scheme provided by the embodiment of the present application. Illustratively, an interception module, a calculation control module, and the like may be disposed in the framework layer. The CPU in the hardware layer can control each module in the framework layer to implement its respective function, and the GPU or NPU can execute corresponding rendering processing according to the APIs in the graphics library (such as OpenGL ES) called by the instructions processed by each module in the framework layer.
The image processing method provided by the embodiment of the application will be described in further detail below with reference to each module. The following examples are described by taking the Vulkan API framework as an example.
In the initialization stage of game rendering, an interception module in the framework layer can intercept a rendering instruction stream issued by a game application, and game rendering information is read from the rendering instruction stream, wherein the game rendering information comprises rendering information of a game main scene and rendering information of other scenes.
The interception module can determine the main scene of the game through the number of Drawcall instruction calls between the two interfaces vkCmdBeginRenderPass and vkCmdEndRenderPass, that is, identify the game main scene. As one example, the scene with the largest number of Drawcalls may be taken as the game master scene. Illustratively, taking the Race Master game as an example, the draw call instructions involve the three interfaces vkCmdDrawIndexed, vkCmdDraw, and vkCmdBindIndexBuffer, and the number of draw calls is determined by counting the calls of these three interfaces, as in the interception sketch below. As another example, a rendered scene whose resolution satisfies a preset condition may be determined as the game master scene, where the preset condition is that the scene resolution is approximately equal to the screen resolution of the electronic device.
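The application does not show interception-layer code. Assuming a standard Vulkan layer that wraps the driver entry points, the counting might look like the following sketch, where the hooked_*/real_* names and the global counter are illustrative.

```c
#include <vulkan/vulkan.h>

/* Per-pass Drawcall counter kept by the hypothetical interception
 * layer; reset when a render pass begins. */
static uint32_t g_draw_calls_in_pass;

/* Pointer to the real driver entry point, obtained when the layer
 * is loaded (initialization not shown). */
static PFN_vkCmdDrawIndexed real_vkCmdDrawIndexed;

static void hooked_vkCmdBeginRenderPass(VkCommandBuffer cb,
        const VkRenderPassBeginInfo *info, VkSubpassContents contents) {
    g_draw_calls_in_pass = 0;   /* a new pass starts: reset the count */
    /* ... record per-pass rendering information from *info here,
     *     then forward to the real vkCmdBeginRenderPass ... */
    (void)cb; (void)contents;
}

static void hooked_vkCmdDrawIndexed(VkCommandBuffer cb,
        uint32_t indexCount, uint32_t instanceCount, uint32_t firstIndex,
        int32_t vertexOffset, uint32_t firstInstance) {
    g_draw_calls_in_pass++;     /* one more Drawcall in this pass */
    real_vkCmdDrawIndexed(cb, indexCount, instanceCount, firstIndex,
                          vertexOffset, firstInstance);
}
```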
After the interception module identifies the game main scene of any frame, it can acquire the VkRenderPassCreateInfo, VkFramebufferCreateInfo, and VkGraphicsPipelineCreateInfo structure information by intercepting interfaces such as vkCreateRenderPass, vkCreateFramebuffer, and vkCreateGraphicsPipelines, and acquire the rendering information of the game main scene from the structure information, where the rendering information of the game main scene includes ImageView image information, frame buffer information, render pass information, pipeline information, and the like of the game main scene. The image view information includes resolution information and the like; the frame buffer information includes the identifier of the default frame buffer; the render pass information includes the identifier of the render pass, color attachment information, and depth/stencil attachment information; and the pipeline information includes the identifier of the compute pipeline.
The interception module can send the acquired rendering information of the game main scene to the calculation control module of the framework layer, so that the calculation control module records the rendering information of the game main scene. Optionally, the calculation control module may send a cache instruction to the memory to store the rendering information of the game main scene in the memory. The backup storage of the rendering information of the game main scene guarantees the realization of the subsequent rendering scheme.
After the calculation control module obtains the rendering information of the game main scene, it can send a creation instruction to the GPU of the hardware layer through the graphics library of the system library, where the creation instruction is used to instruct the GPU to initialize the VRS resources according to the rendering information of the game main scene, that is, to instruct the GPU to create a shading rate texture, create a compute pipeline, initialize a shading rate calculation model, create a signal synchronization mechanism, and the like. In this embodiment, the creation instruction sent by the calculation control module may correspond to step 304 of the embodiment shown in fig. 3A.
Illustratively, taking the Race Master game as an example, the GPU may create a shading rate texture through vkCreateImageView, create a render pass containing a shading rate texture attachment (Attachment) through vkCreateRenderPass2KHR, create a frame buffer containing the shading rate texture through vkCreateFramebuffer, and create a graphics pipeline supporting the variable shading rate feature through vkCreateGraphicsPipelines. A hedged sketch of such a render pass is given below.
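The sketch below uses the VK_KHR_fragment_shading_rate structures; the 16×16 texel size, the attachment order, and the helper signature are assumptions of this sketch, and the two attachment descriptions are assumed to be prepared by the caller.

```c
#include <stddef.h>
#include <vulkan/vulkan.h>

/* Sketch: create a render pass whose subpass consumes a shading rate
 * attachment (requires the VK_KHR_fragment_shading_rate device
 * extension). attachments[0] is the color image, attachments[1] the
 * R8_UINT shading rate image; the texel size must match what the
 * device actually reports. */
static VkResult create_vrs_render_pass(VkDevice device,
        const VkAttachmentDescription2 attachments[2],
        PFN_vkCreateRenderPass2KHR pfnCreateRenderPass2KHR,
        VkRenderPass *outPass) {
    VkAttachmentReference2 colorRef = {
        .sType = VK_STRUCTURE_TYPE_ATTACHMENT_REFERENCE_2,
        .attachment = 0,
        .layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
    };
    VkAttachmentReference2 rateRef = {
        .sType = VK_STRUCTURE_TYPE_ATTACHMENT_REFERENCE_2,
        .attachment = 1,
        .layout = VK_IMAGE_LAYOUT_FRAGMENT_SHADING_RATE_ATTACHMENT_OPTIMAL_KHR,
    };
    VkFragmentShadingRateAttachmentInfoKHR rateInfo = {
        .sType = VK_STRUCTURE_TYPE_FRAGMENT_SHADING_RATE_ATTACHMENT_INFO_KHR,
        .pFragmentShadingRateAttachment = &rateRef,
        .shadingRateAttachmentTexelSize = { 16, 16 },  /* illustrative */
    };
    VkSubpassDescription2 subpass = {
        .sType = VK_STRUCTURE_TYPE_SUBPASS_DESCRIPTION_2,
        .pNext = &rateInfo,  /* chains the rate attachment to the subpass */
        .pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS,
        .colorAttachmentCount = 1,
        .pColorAttachments = &colorRef,
    };
    VkRenderPassCreateInfo2 info = {
        .sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO_2,
        .attachmentCount = 2,
        .pAttachments = attachments,
        .subpassCount = 1,
        .pSubpasses = &subpass,
    };
    return pfnCreateRenderPass2KHR(device, &info, NULL, outPass);
}
```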
In the execution stage of game rendering, for the rendering of the game main scene of the first frame, since no shading rate texture data is available initially, when the shading rate texture data is determined to be unavailable, the interception module can send the rendering instructions of the first frame, that is, call back the original rendering instructions of the first frame, to the GPU through the graphics library, so that the GPU renders the game main scene of the first frame in a single-pixel shading manner.
After the GPU renders the first frame of the game master scene in a single pixel shading manner, the GPU may store the rendered map of the first frame of the game master scene to memory for subsequent GPUs to calculate shading rate texture data.
After the GPU finishes rendering each frame's game main scene, the calculation control module may determine whether to update the shading rate texture data according to a preset update policy of the shading rate texture data, or according to the current running condition of the game. If the calculation control module determines that the shading rate texture data needs to be updated, it sends a calculation instruction to the GPU through the graphics library, so that the GPU obtains the rendering map of the current frame's game main scene from the memory, calculates the shading rate texture data based on that rendering map, and stores the calculated shading rate texture data in the memory for the image rendering of subsequent frames. Taking the Race Master game as an example, the calculation control module can configure the compute pipeline state through vkCmdBindPipeline, configure computing resources through vkCmdBindDescriptorSets, and issue the calculation instruction through vkCmdDispatch, as sketched below.
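Assuming those three calls are recorded into a command buffer as described, the compute pass might be recorded as follows; the descriptor set layout and the 8×8 workgroup size are assumptions about the (undisclosed) compute shader.

```c
#include <vulkan/vulkan.h>

/* Record the shading-rate compute pass as the text describes: bind the
 * compute pipeline and its resources, then dispatch. The descriptor
 * set is assumed to expose the main-scene rendering map as input and
 * the shading rate texture as output. */
static void record_vrs_compute(VkCommandBuffer cmd,
                               VkPipeline computePipeline,
                               VkPipelineLayout layout,
                               VkDescriptorSet set,
                               uint32_t rateTexW, uint32_t rateTexH) {
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, computePipeline);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, layout,
                            0, 1, &set, 0, NULL);
    /* One workgroup per 8x8 block of shading-rate texels. */
    vkCmdDispatch(cmd, (rateTexW + 7) / 8, (rateTexH + 7) / 8, 1);
}
```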
After the rendering of the game main scene of the first frame is completed, the GPU receives a calculation instruction from the calculation control module, obtains the rendering map of the first frame's game main scene from the memory, and inputs it into the shading rate calculation model in the GPU to obtain the shading rate texture data corresponding to the first frame, which is used for rendering the game main scenes of subsequent frames. If the update policy is to update the shading rate texture data once every two frames, the shading rate texture data corresponding to the first frame can be used for rendering the game main scenes of the second and third frames. After the rendering of the game main scene of the third frame is completed, the GPU receives a calculation instruction from the calculation control module, obtains the rendering map of the third frame's game main scene from the memory, and inputs it into the shading rate calculation model in the GPU to obtain the shading rate texture data corresponding to the third frame. The shading rate texture data corresponding to the first frame in the memory can then be replaced by the shading rate texture data corresponding to the third frame for rendering the game main scenes of subsequent frames.
After the GPU calculates the shading rate texture data corresponding to the rendering map of a certain frame's game main scene based on the calculation instruction, it can send a signal indicating that the shading rate texture data is available to the interception module, and the interception module records the state of the shading rate texture data based on the signal, so as to determine, after intercepting a rendering instruction, whether to use the shading rate texture data in the memory.
In the above embodiment, the interception module intercepts the rendering instruction stream issued by the game application, identifies the main game scene therefrom, and transmits the rendering information of the main game scene to the calculation control module. The interception module may also determine whether the current frame rendering initiates the shading rate texture data based on the state of the shading rate texture data. When the state of the shading rate texture data is available, the interception module sends an instruction for using the shading rate texture data to the GPU through the graphic library. The calculation control module is mainly responsible for completing resource configuration of the shading rate texture data and determining whether the current frame needs to update the shading rate texture data. If the shading rate texture data needs to be updated, the calculation control module issues a calculation instruction to the GPU through the graphic library, and the GPU completes calculation of the shading rate texture data. After the GPU completes the calculation of the shading rate texture data, a signal that the shading rate texture data is available can be sent to the interception module, so that the interception module can determine whether to send an instruction using the shading rate texture data to the GPU after intercepting the rendering instruction stream.
According to the above scheme, the shading rates of different areas of the image are determined by identifying the content of the rendering map of the current frame's game main scene, so as to guide the rendering of the game main scenes of subsequent frames; this reduces the rendering power consumption of the device without losing too much image detail. Moreover, the calculation of the shading rate texture data of the rendering map of the game main scene is executed through the additionally added compute pipeline, so the original game rendering flow is not affected. Whether to update the shading rate texture data is determined dynamically through a preset update policy, which can further reduce the calculation amount of the GPU and improve rendering efficiency.
Based on the embodiment shown in fig. 5, the following describes in detail the interaction procedure of the internal module of the electronic device when executing the image processing scheme, in combination with a specific embodiment.
Fig. 6A is a schematic flow chart of an image processing method according to an embodiment of the present application. In fig. 6A, a description will be given of a scheme by taking rendering of a main game scene as an example.
Step 601, the game application sends a rendering instruction stream 1 of a first frame.
Step 602, an interception module intercepts a rendering instruction stream 1 and obtains game main scene information of a first frame.
Step 603, the interception module sends the game main scene information of the first frame to the calculation control module.
Step 604, the computing control module sends a cache instruction to the memory, where the cache instruction is used to instruct the memory to store the game main scene information of the first frame.
Step 605, the computation control module sends a creation instruction to the GPU.
Step 606, the GPU creates variable rate shading (VRS) resources in response to the creation instruction.
Steps 601 to 606 are initialization phases of game rendering, in the initialization phases, game master scene information is identified by intercepting a rendering instruction stream issued by a game application, VRS resources are initialized based on the game master scene information, and support is provided for execution of a subsequent rendering scheme.
Step 607, the rendering module determines whether shading rate texture data is available.
In step 608, if it is determined that the shading rate texture data is not available, the rendering module calls back the rendering instruction stream 1 to the GPU.
Step 609, the GPU renders the game master scene of the first frame in a single pixel rendering manner.
Step 610, the GPU stores the rendered map of the game main scene of the first frame to the memory.
In step 611, the computation control module determines whether to update the shading rate texture data.
Step 612, if the computation control module determines to update the shading rate texture data, a computation instruction is sent to the GPU.
Step 613, the GPU calculates the shading rate texture data based on the rendering map of the game main scene of the first frame in response to the calculation instruction.
Step 614, the GPU stores the calculated shading rate texture data to the memory.
Step 615, the GPU sends an available signal of the shading rate texture data to the interception module.
Steps 607 to 615 are the rendering execution stage of the first frame's game main scene after the game is initialized. The rendering of the first frame's game main scene still adopts the traditional single-pixel shading manner; after it is completed, however, the GPU analyzes the image frequency-domain information based on the rendering map of the first frame's game main scene and calculates the shading rates of different areas in the image, for use in rendering the game main scenes of subsequent frames.
Fig. 6B is a schematic flow chart of an image processing method according to an embodiment of the present application. On the basis of the embodiment of fig. 6A, as shown in fig. 6B, the image processing method further includes:
step 616, the game application sends a rendering instruction stream 2 of the second frame.
Step 617, the interception module intercepts the rendering instruction stream 2 to obtain game master scene information of the second frame.
Step 618, the interception module determines to update the game master scene information.
If the first frame game master scene information and the second frame game master scene information do not change greatly, the game master scene information can be updated based on the second frame game master scene information, namely the second frame game master scene information is used for replacing the first frame game master scene information.
Step 619, the intercept module determines whether shading rate texture data is available.
Step 620, if the interception module determines that the shading rate texture data is available, an instruction to use the shading rate texture data is sent to the GPU.
Step 621, the GPU renders the game master scene of the second frame with the shading rate texture data in the memory.
Step 622, the GPU stores the rendered map of the game main scene of the second frame to the memory.
Steps 616 to 622 are the rendering execution phase of the second frame's game main scene. The rendering of the second frame's game main scene uses the shading rate texture data determined from the first frame's rendering map: content in the low-frequency regions of the second frame's main scene is rendered at a lower shading rate, and content in the high-frequency regions at a higher shading rate. This realizes differentiated rendering of different regions in the main scene, thereby reducing the device rendering load and improving rendering efficiency while ensuring image quality.
Step 623, the calculation control module determines whether to update the shading rate texture data.
The calculation control module determines whether the shading rate texture data needs to be updated currently based on a preset updating strategy, and the updating strategy can refer to the above.
Step 624, if the computation control module determines to update the shading rate texture data, then a computation instruction is sent to the GPU.
Step 625, the GPU calculates the shading rate texture data based on the rendered map of the game main scene of the second frame in response to the calculation instruction.
Step 626, the GPU updates the shading rate texture data in the memory.
In step 627, the GPU sends an available signal of the shading rate texture data to the interception module.
Step 623 to step 627 are update stages of the shading rate texture data, a reasonable update policy can be set according to actual requirements, and subsequent frames can be rendered based on the shading rate texture data updated in the memory, so as to improve the rendering efficiency of the GPU.
The embodiment of the application provides an image processing method which is applied to electronic equipment, wherein an application program is operated on the electronic equipment, and the method comprises the following steps: acquiring a first rendering instruction from an application program, wherein the first rendering instruction is used for drawing first image content of an N-th frame image, and N is a positive integer; drawing first image content based on the first rendering instruction and the shading rate texture data when the shading rate texture data stored in the preset storage space is determined to be available; acquiring a second rendering instruction from the application program, wherein the second rendering instruction is used for drawing second image content of an N-th frame image, and the drawing call frequency of the second image content is smaller than that of the first image content; and drawing the second image content in units of single pixels based on the second rendering instruction and the preset coloring rate.
Wherein the shading rate texture data is used for indicating the shading rate of different areas of the drawn first image content.
By way of example, the application may be a gaming application, and the first image content may be the image content of the main scene of the gaming application, the main part of the game image, including, for example, characters, buildings, floors, mountains, etc. The second image content may be image content in the game application other than the main scene, an auxiliary part of the game image, including, for example, effects such as lighting and shadows. The rendering power consumption of the first image content is greater than that of the second image content.
Alternatively, the application program may be another application that needs image rendering, such as a home design application or a three-dimensional modeling application, which is not limited by the present application.
In this embodiment, the first image content of the Nth frame image may be drawn directly using the shading rate texture data in the preset storage space; since the shading rate texture data indicates the shading rates of different areas of the image, with some areas shaded at higher rates and others at lower rates, the device rendering overhead can be reduced to a certain extent. For the second image content of the Nth frame image, because its number of draw calls is much smaller than that of the first image content, rendering it in units of a single pixel based on the preset shading rate achieves a more ideal rendering effect. By contrast, if the second image content were also rendered using the shading rate texture, the device would, by the same principle, need to calculate in advance the shading rate texture data for drawing the second image content, which would additionally increase the calculation amount of the device; with little improvement in rendering image quality, the rendering load would instead increase.
In one possible embodiment, the shading rate texture data is determined based on a rendering map of the third image content of the mth frame image; the number of draw calls of the third image content is greater than that of the image content except the third image content in the Mth frame image, the difference between N and M is greater than 0 and less than or equal to a preset threshold, and M is a positive integer.
For example, based on the above example, the third image content may be image content of a main scene of the game application, and since the mth frame image is different from the nth frame image, there is a certain difference between the third image content and the first image content.
In this embodiment, the Mth frame image precedes the Nth frame image. When the first image content of the Nth frame image is rendered, the shading rate texture data determined based on the rendering map of the third image content of the Mth frame image can be directly reused; compared with rendering the first image content pixel by pixel based on the preset shading rate, this reduces the calculation amount of the device to a certain extent and effectively reduces the device rendering load.
In a possible embodiment, the method further comprises: acquiring a rendering diagram of third image content of an Mth frame image; determining an image intensity change of a rendered map of the third image content; determining the coloring rate of different areas in the rendering diagram of the third image content based on the image intensity change of the rendering diagram of the third image content to obtain coloring rate texture data; and storing the coloring rate texture data in a preset storage space.
Wherein the image intensity variation of the rendered map may comprise a variation in gray scale and/or brightness of the rendered map.
In this embodiment, through the image intensity analysis of the rendering map of the third image content of the mth frame image, different coloring rates are set for the areas with different image intensity changes in the third image content, so as to instruct the rendering of one or more frames of images after the mth frame, and effectively reduce the rendering load of the device.
In a possible implementation manner, the rendering map of the third image content includes a first area and a second area, and the coloring rate texture data includes a coloring rate of the first area and a coloring rate of the second area; if the image intensity variation of the first area is larger than the image intensity variation of the second area, the coloring rate of the first area is larger than that of the second area; if the image intensity variation of the first area is smaller than the image intensity variation of the second area, the coloring rate of the first area is smaller than the coloring rate of the second area; if the image intensity variation of the first area is equal to the image intensity variation of the second area, the coloring rate of the first area is equal to the coloring rate of the second area.
In this embodiment, by setting a higher coloring rate for a region in the third image content where the image intensity variation is large, setting a lower coloring rate for a region in the third image content where the image intensity variation is small, a variable coloring rate is realized, thereby reducing the load of the device rendering.
In a possible implementation manner, an interception module and a graphics processing unit (GPU) are configured in the electronic device. Acquiring the first rendering instruction from the application program includes: the interception module intercepting the first rendering instruction. When it is determined that the shading rate texture data stored in the preset storage space is available, drawing the first image content based on the first rendering instruction and the shading rate texture data includes: when the interception module determines that the shading rate texture data is available, the interception module issuing, to the GPU through the graphics library of the electronic device, a first shading instruction that uses the shading rate texture data; and the GPU drawing the first image content based on the first shading instruction and the shading rate texture data.
In this embodiment, the interception module intercepts the first rendering instruction, and when it learns that the first image content needs to be rendered, it determines whether available shading rate texture data exists in the preset storage space. If available shading rate texture data exists, the interception module informs the GPU through the graphics library to draw the first image content directly using the shading rate texture data, which saves rendering overhead in the shading process and reduces the power consumption of the device.
In one possible implementation, acquiring the second rendering instruction from the application program includes: the interception module intercepting the second rendering instruction from the application program. Drawing the second image content in units of a single pixel based on the second rendering instruction and the preset shading rate includes: the interception module issuing, to the GPU through the graphics library, a second shading instruction that uses single-pixel shading; and the GPU drawing the second image content in units of a single pixel based on the second shading instruction and the preset shading rate.
In this embodiment, the interception module intercepts the second rendering instruction, and when the second image content needs to be rendered, it informs the GPU through the graphics library to draw the second image content in a single-pixel shading manner. Compared with the first image content, the second image content consumes less power to render, and rendering it directly in a single-pixel shading manner achieves a more ideal rendering effect; if the first image content were rendered in a similar manner, the rendering overhead would increase.
In a possible implementation manner, the electronic device is configured with a computation control module, and after the GPU finishes drawing the first image content, the method further includes: the calculation control module determines whether to update the shading rate texture data; if the calculation control module determines to update the coloring rate texture data, the calculation control module issues a calculation instruction to the GPU through the graphic library; the GPU determines shading rates of different areas in the first image content based on the computing instructions and a rendering graph of the first image content to update shading rate texture data.
The GPU is preset with a shading rate calculation model, which can be used to analyze the frequency-domain information of the rendering map of the first image content and set different shading rates for the higher-frequency and lower-frequency areas in the first image content, that is, to calculate the shading rate texture data based on the rendering map of the first image content. The frequency-domain information indicates the image intensity variation of the rendering map: areas of higher frequency have larger image intensity variation, and areas of lower frequency have smaller variation.
In this embodiment, the computing control module may update the shading rate texture data of the preset storage space in time according to a preset update policy, and the GPU may calculate the latest shading rate texture data according to the instruction of the computing control module, so as to satisfy image rendering of a subsequent image frame, and reduce the rendering overhead of the device on a continuous image frame.
In one possible implementation, the computing control module determines whether to update shading rate texture data, comprising: if the difference value between N and M is equal to a preset threshold value, the calculation control module determines to update the coloring rate texture data; if the difference between N and M is smaller than the preset threshold, the calculation control module does not update the coloring rate texture data.
For example, the preset threshold may be 2, and if the difference between N and M is equal to 2, the shading rate texture data needs to be updated, that is, the shading rate texture data needs to be updated every two frames. After completing the rendering of the first image content of the nth frame image, the GPU calculates new shading rate texture data based on the rendered map of the first image content of the nth frame image, the new shading rate texture data being usable for rendering of a fourth image content of the n+1st frame and a fifth image content of the n+2nd frame, which may be image content of a main scene such as a gaming application.
In this embodiment, the shading rate texture data is updated only when the difference between N and M reaches the preset threshold; that is, the frames between the Mth frame image and the Nth frame image can directly multiplex the shading rate texture data determined based on the rendering map of the third image content of the Mth frame image, which further reduces the rendering overhead of the device.
In a possible embodiment, the method further comprises: after the GPU updates the shading rate texture data, the GPU sends a notification signal to the interception module, wherein the notification signal is used for indicating whether the shading rate texture data is available.
In this embodiment, the interception module obtains whether the shading rate texture data of the preset storage space is available through the notification signal sent by the GPU, so that after intercepting the rendering instruction, the interception module sends a corresponding shading instruction to the GPU, and whether the shading rate texture is multiplexed or not can be indicated in the shading instruction, so as to reduce the equipment rendering overhead.
Based on the above embodiments, the image processing method provided by the present application is summarized below by taking a game application as an example.
Fig. 7 is a schematic flow chart of an image processing method according to an embodiment of the present application. As shown in fig. 7, the electronic device opens a game application, and in the initialization phase, mainly performs the following actions:
Acquiring the game master scene information, where the game master scene information includes rendering parameters for rendering the game main scene; creating a shading rate texture, where the shading rate texture data is stored in a preset storage space of the electronic device; and creating a compute pipeline and initializing a shading rate calculation model in the compute pipeline, where the shading rate calculation model is used for calculating shading rate texture data based on the rendering result of the game main scene.
After the initialization is completed, the main scene image of the next frame (e.g., the first frame or any frame) is drawn. In the graphics pipeline (Graphics Pipeline), the frame buffer of the current frame's game main scene is identified, and it is determined whether available shading rate texture data exists in the preset storage space. If so, the current frame's game main scene image is rendered using the shading rate texture data; if not, the current frame's game main scene is rendered in a single-pixel shading manner. The rendering result of the current frame's game main scene then goes through a series of post-processing (such as translucent object drawing and special-effect drawing) and UI rendering to obtain the final rendering result, which is drawn to the frame buffer; the content of the frame buffer is displayed on the screen through the game's own logic. This per-frame flow is condensed in the sketch below.
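The following C sketch condenses this per-frame flow; every type and function name here is a placeholder for the modules described above, not an API disclosed by the application.

```c
/* Placeholder types and hooks for the modules described above. */
typedef struct Frame Frame;     /* opaque per-frame state */
typedef struct {
    int   texture_available;    /* set once the compute pass has run */
    int   frames_since_update;
    int   update_interval;      /* e.g. 1 or 2, per the update policy */
    void *rate_texture;         /* the shading rate texture data */
} VrsState;

void identify_main_scene_framebuffer(Frame *f);
void render_main_scene_with_vrs(Frame *f, void *rate_texture);
void render_main_scene_per_pixel(Frame *f);
void post_process_and_draw_ui(Frame *f);
void present_frame(Frame *f);
void dispatch_shading_rate_compute(Frame *f, VrsState *vrs);

static void render_frame(Frame *frame, VrsState *vrs) {
    identify_main_scene_framebuffer(frame);

    if (vrs->texture_available)              /* reuse the stored rates */
        render_main_scene_with_vrs(frame, vrs->rate_texture);
    else                                     /* e.g. the first frame */
        render_main_scene_per_pixel(frame);  /* single-pixel shading */

    post_process_and_draw_ui(frame);         /* translucency, effects, UI */
    present_frame(frame);

    if (++vrs->frames_since_update >= vrs->update_interval) {
        dispatch_shading_rate_compute(frame, vrs);  /* refresh the rates */
        vrs->frames_since_update = 0;
    }
}
```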
After the rendering of the current frame is completed, it is determined whether the shading rate texture data needs to be updated for the current frame, in a manner that may follow the update policy described above. If it is determined that the shading rate texture data needs to be updated, the rendering result of the current frame's game main scene is taken as the input of the shading rate calculation model in the compute queue, and the shading rate calculation model calculates the shading rate texture data from that rendering result, so as to update the shading rate texture data in the preset storage space and provide data support for the game main scenes of subsequent frames. If it is determined that the shading rate texture data does not need to be updated, the rendering of the next frame's game main scene can be entered directly.
The rendering process of the game main scene described above involves the multiplexing of shading rate texture data: the rendering of the current frame's game main scene may use, for example, the shading rate texture data determined from the rendering result of the game main scene of a previous frame. Compared with rendering every frame's game main scene with single-pixel shading, this can greatly reduce the rendering overhead of the device.
An embodiment of the present application further provides an electronic device, which may include a memory and one or more processors (e.g., a CPU, a GPU, an NPU). The memory is coupled to the processor and is used for storing computer program code, the computer program code comprising computer instructions. When the processor executes the computer instructions, the electronic device may perform the functions or steps performed by the device in the method embodiments described above.
An embodiment of the present application further provides a chip system; fig. 8 is a schematic structural diagram of the chip system provided by an embodiment of the present application. As shown in fig. 8, the chip system 800 includes at least one processor 801 and at least one interface circuit 802. The processor 801 and the interface circuit 802 may be interconnected by wires. For example, the interface circuit 802 may be used to receive signals from other devices (e.g., a memory of an electronic device), or to send signals to other devices (e.g., the processor 801). The interface circuit 802 may, for example, read instructions stored in a memory and send the instructions to the processor 801. When executed by the processor 801, the instructions may cause the electronic device to perform the steps of the embodiments described above. Of course, the chip system may also include other discrete devices, which are not specifically limited in the embodiments of the present application.
Embodiments of the present application also provide a computer readable storage medium having stored therein computer instructions which, when executed on an electronic device, cause the electronic device to perform the above-described related method steps to implement the image processing method in the above-described embodiments.
The embodiment of the application also provides a computer program product which, when run on a computer, causes the computer to perform the above-mentioned related steps to implement the image processing method in the above-mentioned embodiment.
In addition, an embodiment of the present application further provides an apparatus, which may be a chip, a component, or a module, and may comprise a processor and a memory connected to each other; the memory is used for storing computer-executable instructions, and when the apparatus runs, the processor may execute the computer-executable instructions stored in the memory, so that the apparatus performs the image processing method in each of the method embodiments described above.
The electronic device, computer-readable storage medium, computer program product, and chip provided by the embodiments of the present application are each used to execute the corresponding method provided above; for their beneficial effects, reference may be made to the beneficial effects of the corresponding method, which are not repeated here.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the division into the above functional modules is illustrated by example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division into modules or units is merely a division by logical function, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Furthermore, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such an understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solution of the present application, not to limit it. Although the present application has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications and equivalent substitutions may be made to the technical solution of the present application without departing from the spirit and scope of the technical solution of the present application.

Claims (12)

1. An image processing method, applied to an electronic device on which an application program runs, the method comprising:
acquiring a first rendering instruction from the application program, wherein the first rendering instruction is used for drawing first image content of an N-th frame image, and N is a positive integer;
when it is determined that shading rate texture data stored in a preset storage space is available, drawing the first image content based on the first rendering instruction and the shading rate texture data;
acquiring a second rendering instruction from the application program, wherein the second rendering instruction is used for drawing second image content of the N-th frame image, and the number of draw calls of the second image content is smaller than that of the first image content;
and drawing the second image content in units of a single pixel based on the second rendering instruction and a preset shading rate.
2. The method of claim 1, wherein the shading rate texture data is determined based on a rendering map of third image content of an M-th frame image; the number of draw calls of the third image content is larger than that of any other image content in the M-th frame image, the difference between N and M is larger than 0 and smaller than or equal to a preset threshold, and M is a positive integer.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
acquiring a rendering map of third image content of an M-th frame image;
determining an image intensity change of the rendering map of the third image content;
determining shading rates of different areas in the rendering map of the third image content based on the image intensity change of the rendering map, to obtain the shading rate texture data;
and storing the shading rate texture data in the preset storage space.
4. The method according to claim 3, wherein the rendering map of the third image content comprises a first area and a second area, and the shading rate texture data comprises a shading rate of the first area and a shading rate of the second area;
if the image intensity change of the first area is larger than the image intensity change of the second area, the shading rate of the first area is larger than the shading rate of the second area;
if the image intensity change of the first area is smaller than the image intensity change of the second area, the shading rate of the first area is smaller than the shading rate of the second area;
and if the image intensity change of the first area is equal to the image intensity change of the second area, the shading rate of the first area is equal to the shading rate of the second area.
5. The method according to any one of claims 1 to 4, wherein an interception module and a graphics processing unit (GPU) are configured in the electronic device;
acquiring the first rendering instruction from the application program comprises:
the interception module intercepts the first rendering instruction;
when it is determined that the shading rate texture data stored in the preset storage space is available, drawing the first image content based on the first rendering instruction and the shading rate texture data comprises:
the interception module, upon determining that the shading rate texture data is available, issues a first shading instruction using the shading rate texture data to the GPU through a graphics library of the electronic device;
the GPU draws the first image content based on the first shading instruction and the shading rate texture data.
6. The method of any one of claims 1 to 5, wherein acquiring the second rendering instruction from the application program comprises: the interception module intercepts the second rendering instruction from the application program;
drawing the second image content in units of a single pixel based on the second rendering instruction and the preset shading rate comprises:
the interception module issues a second shading instruction using single-pixel shading to the GPU through the graphics library;
the GPU draws the second image content in units of a single pixel based on the second shading instruction and the preset shading rate.
7. The method of claim 5, wherein a computing control module is configured in the electronic device, and after the GPU completes drawing the first image content, the method further comprises:
the computing control module determines whether to update the shading rate texture data;
if the computing control module determines to update the shading rate texture data, the computing control module issues a computing instruction to the GPU through the graphics library;
the GPU determines shading rates of different areas in the first image content based on the computing instruction and a rendering map of the first image content, so as to update the shading rate texture data.
8. The method of claim 7, wherein the computing control module determining whether to update the shading rate texture data comprises:
if the difference between N and M is equal to a preset threshold, the computing control module determines to update the shading rate texture data;
and if the difference between N and M is smaller than the preset threshold, the computing control module determines not to update the shading rate texture data.
9. The method of claim 7, wherein the method further comprises:
after the GPU updates the shading rate texture data, the GPU sends a notification signal to the interception module, wherein the notification signal is used for indicating whether the shading rate texture data is available.
10. An electronic device, comprising a processor, wherein the processor is configured to invoke a computer program in a memory to perform the method of any one of claims 1 to 9.
11. A computer readable storage medium storing computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1 to 9.
12. A chip system, applied to an electronic device comprising a processor and a memory, wherein the chip system comprises one or more interface circuits and one or more processors; the interface circuits and the processors are interconnected by wires; the interface circuits are configured to receive signals from the memory of the electronic device and send the signals to the processors, the signals comprising computer instructions stored in the memory; and when the processors execute the computer instructions, the electronic device performs the method of any one of claims 1 to 9.
CN202211261351.9A 2022-10-14 2022-10-14 Image processing method, device and storage medium Pending CN116704075A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211261351.9A CN116704075A (en) 2022-10-14 2022-10-14 Image processing method, device and storage medium

Publications (1)

Publication Number Publication Date
CN116704075A true CN116704075A (en) 2023-09-05

Family

ID=87838085

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106774782A (en) * 2015-11-24 2017-05-31 中兴通讯股份有限公司 Interface display method, device and terminal
CN113726950A (en) * 2021-06-10 2021-11-30 荣耀终端有限公司 Image processing method and electronic equipment
CN113781289A (en) * 2020-06-09 2021-12-10 Arm有限公司 Graphics processing
CN114092310A (en) * 2021-11-09 2022-02-25 杭州逗酷软件科技有限公司 Image rendering method, electronic device and computer-readable storage medium
CN114210055A (en) * 2022-02-22 2022-03-22 荣耀终端有限公司 Image rendering method and electronic equipment
CN114494328A (en) * 2022-02-11 2022-05-13 北京字跳网络技术有限公司 Image display method, image display device, electronic device, and storage medium
CN114972001A (en) * 2022-06-08 2022-08-30 Oppo广东移动通信有限公司 Image sequence rendering method and device, computer readable medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination