WO2022258024A1 - Image processing method and electronic device (一种图像处理方法和电子设备) - Google Patents


Info

Publication number: WO2022258024A1
Authority: WIPO (PCT)
Prior art keywords: rendering, image, frame buffer, electronic device, processor
Application number: PCT/CN2022/097931
Other languages: English (en), French (fr)
Inventors: 陈聪儿, 刘金晓
Original assignee: 荣耀终端有限公司 (Honor Device Co., Ltd.)
Application filed by 荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority to US 18/252,920 (published as US20230419570A1), EP 22819621.8 (published as EP4224831A1), and CN 202280024721.9 (published as CN117063461A)

Classifications

    • G06T11/203: Drawing of straight lines or curves (2D image generation; drawing from basic elements)
    • G06T1/20: Processor architectures; processor configuration, e.g. pipelining (general purpose image data processing)
    • H04M1/72439: Mobile-telephone user interfaces with support for image or video messaging
    • G06T3/40: Scaling the whole image or part thereof (geometric image transformation in the plane of the image)
    • Y02D30/70: Reducing energy consumption in wireless communication networks

Definitions

  • The embodiments of the present application relate to the field of image processing, and in particular, to an image processing method and an electronic device.
  • A mobile phone is taken as an example of the electronic device.
  • The mobile phone may display images to the user at a higher resolution.
  • However, rendering at a higher resolution consumes more power, which leads to problems such as excessive computing-power overhead and serious heat generation; in severe cases, the mobile phone freezes, seriously affecting the user experience.
  • Embodiments of the present application provide an image processing method and an electronic device, which can perform a large number of rendering operations in a main scene with a smaller resolution, thereby achieving the effect of reducing rendering power consumption.
  • In a first aspect, an image processing method is provided, applied to rendering processing of a first image by an electronic device, where the electronic device runs an application program, and one or more frame buffers are invoked when the electronic device performs rendering processing on the first image.
  • The rendering operations that the electronic device executes when rendering the first image are issued by the application program.
  • the method includes: determining a first main scene in a process of rendering the first image.
  • the first main scene is a frame buffer that executes the largest number of rendering operations during the rendering process of the first image by the electronic device.
  • a temporary frame buffer is configured, and the resolution of the temporary frame buffer is smaller than the resolution of the first main scene.
  • the first rendering operation is a rendering operation instructed by the application program to be executed on the first main scene.
  • Based on this solution, the electronic device may configure a temporary frame buffer with a smaller resolution for the main scene (that is, the frame buffer that performs the most rendering operations). The electronic device can then perform, on the temporary frame buffer, the rendering operations that would otherwise need to be performed on the main scene. In this way, rendering operations are performed at a smaller resolution, alleviating the excessive rendering power consumption caused by executing a large number of rendering operations at a high resolution.
  • the configuration of the temporary frame buffer and the object performing the rendering operation on the temporary frame buffer are both the main scene, that is, the frame buffer with the most rendering operations.
  • The electronic device may also configure corresponding smaller-resolution temporary frame buffers for other frame buffers with many rendering operations, and perform lower-resolution rendering operations on those temporary frame buffers, thereby increasing the number of rendering operations performed at smaller resolutions and further reducing the power consumption of rendering.
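The patent does not give an implementation for configuring the temporary frame buffer; the following Python sketch only illustrates the idea of deriving a temporary buffer whose resolution is a fraction of the main scene's. The `FrameBuffer` class, the 0.5 scale factor, and the buffer IDs are assumptions for illustration, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class FrameBuffer:
    fb_id: int
    width: int
    height: int

def make_temp_framebuffer(main_scene: FrameBuffer, next_id: int,
                          scale: float = 0.5) -> FrameBuffer:
    """Create a temporary frame buffer smaller than the main scene."""
    return FrameBuffer(fb_id=next_id,
                       width=int(main_scene.width * scale),
                       height=int(main_scene.height * scale))

# Main scene FB1 at the display's full resolution; temporary buffer FB3.
main = FrameBuffer(fb_id=1, width=2400, height=1080)
temp = make_temp_framebuffer(main, next_id=3)
print(temp.width, temp.height)  # 1200 540
```

Rendering operations redirected to `temp` then touch a quarter as many pixels per drawcall as they would on `main`.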
  • The determining of the first main scene in the rendering process of the first image includes: determining the first main scene based on the frame buffer that performs the largest number of rendering operations during the rendering process of the second image.
  • the rendering process of the second image is prior to the rendering process of the first image.
  • the first main scene is called when rendering processing is performed on the second image.
  • a solution for determining the first main scene is provided.
  • The electronic device may determine the main scene of the current frame image (such as the first image) according to the number of rendering operations on each frame buffer during the rendering processing of an earlier frame image (such as the second image).
  • the frame buffer that performs the most rendering operations during the rendering process of the second image may be used as the main scene of the current frame image (that is, the first image).
  • Before the first main scene is determined in the process of rendering the first image, the method further includes: during the process of rendering the second image, determining the number of draw calls (drawcalls) executed on each frame buffer, and determining the frame buffer with the largest number of executed draw calls as the first main scene.
  • the number of rendering operations can be represented by the number of drawcalls. For example, the greater the number of drawcalls, the greater the number of corresponding rendering operations, and vice versa. Therefore, the electronic device can determine the frame buffer with the largest number of drawcalls as the first main scene by determining the number of drawcalls on each frame buffer during the rendering process of the second image.
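The selection step above can be sketched as counting drawcalls per frame buffer while scanning the previous frame's command stream. The tuple-based command format below is an assumption for illustration; real rendering commands would be GPU API calls.

```python
from collections import Counter

def find_main_scene(commands):
    """commands: sequence of ('bind', fb_id) or ('drawcall',) tuples.

    Returns the id of the frame buffer on which the most drawcalls
    were executed, i.e. the candidate main scene."""
    counts = Counter()
    current_fb = 0  # default frame buffer
    for cmd in commands:
        if cmd[0] == 'bind':
            current_fb = cmd[1]
        elif cmd[0] == 'drawcall':
            counts[current_fb] += 1
    return counts.most_common(1)[0][0]

# Replay of the second image's commands: FB1 gets 40 drawcalls,
# FB2 gets 5, FB0 gets 2, so FB1 is chosen as the main scene.
stream = ([('bind', 1)] + [('drawcall',)] * 40 +
          [('bind', 2)] + [('drawcall',)] * 5 +
          [('bind', 0)] + [('drawcall',)] * 2)
print(find_main_scene(stream))  # 1
```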
  • The second image is the frame immediately preceding the first image. Based on this solution, a specific selection of the second image is provided.
  • That is, the second image may be the previous frame image of the first image. Since the content of the second image is very close to that of the first image, the first main scene determined according to the number of drawcalls in the rendering process of the second image is more accurate.
  • the electronic device is configured with a processor and a rendering processing module.
  • The execution of the first rendering operation on the temporary frame buffer includes: when the processor receives a rendering command for the first image from the application program, the processor sends a first rendering instruction to the rendering processing module, where the first rendering instruction includes the first rendering operation and is used to instruct the rendering processing module to execute the first rendering operation on the temporary frame buffer.
  • the rendering processing module executes the first rendering operation on the temporary frame buffer according to the first rendering instruction. Based on this solution, an example of a specific solution for performing the first rendering operation is provided. In this example, the electronic device can implement the rendering operation of the main scene at a smaller resolution through the cooperation of the processor and the rendering processing module.
  • The processor may issue a rendering instruction to the rendering processing module, instructing it to perform, on the smaller-resolution temporary frame buffer, the rendering operation that would otherwise be performed on the larger-resolution main scene.
  • the rendering processing module can perform a corresponding rendering operation on the temporary frame buffer according to the rendering instruction, so as to achieve the effect of rendering with a smaller resolution.
  • Before the processor sends the first rendering instruction to the rendering processing module, the method further includes: when the processor judges that the currently executed rendering command is a rendering command for the main scene, replacing the frame buffer information bound to the currently executed rendering command from the first frame buffer information to the second frame buffer information to obtain the first rendering instruction, where the first frame buffer information is used to indicate that the first rendering operation is executed on the main scene, and the second frame buffer information is used to indicate that the first rendering operation is executed on the temporary frame buffer.
  • a logic for determining the first rendering instruction sent by the processor to the rendering processing module is provided.
  • the processor may perform subsequent operations after determining that the current rendering command from the application program is a rendering command for the main scene.
  • The processor may compare the frame buffer object bound to the current rendering command with the frame buffer object of the main scene; when they are consistent, the processor determines that the current rendering command is for the main scene. Afterwards, the processor may issue a rendering instruction to the rendering processing module to instruct it to perform a rendering operation. In this example, the processor may replace the frame buffer information pointing to the main scene in the rendering instruction with the frame buffer information pointing to the temporary frame buffer, and send the instruction to the rendering processing module. In this way, the rendering processing module executes the corresponding rendering operation on the frame buffer indicated by the rendering instruction (that is, the temporary frame buffer), thereby performing on the temporary frame buffer the rendering operations originally to be executed on the main scene.
  • The first frame buffer information includes a first frame buffer object, which is the frame buffer object corresponding to the main scene; the second frame buffer information includes a second frame buffer object, which is the frame buffer object corresponding to the temporary frame buffer.
  • Frame buffer information may include frame buffer objects. Take replacing the frame buffer information in the bindFrameBuffer() function as an example: when the main scene is FB1 and the temporary frame buffer is FB3, the rendering command may include bindFrameBuffer(1).
  • the processor may issue a rendering instruction including bindFrameBuffer(1) to the rendering processing module to instruct the rendering processing module to execute the drawcall in the rendering instruction on FB1.
  • After the frame buffer information of FB1 is replaced with that of FB3, when the rendering command includes bindFrameBuffer(1), the processor replaces bindFrameBuffer(1) with bindFrameBuffer(3) and carries bindFrameBuffer(3) in the rendering instruction, so as to instruct the rendering processing module to execute the corresponding rendering operation on FB3.
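The bindFrameBuffer(1) to bindFrameBuffer(3) substitution described above can be sketched as a simple rewrite pass over outgoing commands. Treating commands as strings is an assumption for illustration; an actual interception layer would rewrite the frame buffer object argument of the real API call.

```python
MAIN_SCENE_FB = 1  # frame buffer object of the main scene (FB1)
TEMP_FB = 3        # frame buffer object of the temporary buffer (FB3)

def redirect_command(cmd: str) -> str:
    """Rewrite a binding to the main scene so it targets the
    temporary frame buffer; leave all other commands unchanged."""
    if cmd == f"bindFrameBuffer({MAIN_SCENE_FB})":
        return f"bindFrameBuffer({TEMP_FB})"
    return cmd

print(redirect_command("bindFrameBuffer(1)"))  # bindFrameBuffer(3)
print(redirect_command("bindFrameBuffer(2)"))  # bindFrameBuffer(2)
```

Subsequent drawcalls in the instruction then execute against FB3, which is exactly how the rendering module ends up drawing the main scene at the smaller resolution.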
  • the rendering command is issued by the application program to the processor, and the rendering command includes the first rendering operation and the first frame buffer information.
  • the rendering command may be from an application program instructing the processor to perform a first rendering operation on the indicated framebuffer.
  • the processor is a central processing unit (CPU). Based on the solution, a specific implementation of a processor is provided.
  • the processor may be a CPU in an electronic device.
  • the functions of the processor may also be implemented by other components or circuits with processing functions.
  • the rendering processing module is a graphics processing unit (GPU). Based on the solution, a concrete realization of a rendering processing module is provided.
  • the rendering processing module may be a GPU in an electronic device.
  • the function of the rendering processing module may also be realized by other components or circuits that have a graphics rendering function.
  • the first rendering instruction further includes: performing multi-sampling on the image acquired by the first rendering operation.
  • the electronic device may use a multi-sampling technique on an image obtained by performing a rendering operation with a lower resolution to reduce jagged edges of the image and improve the overall display quality of the image.
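The patent leaves the multi-sampling itself to the rendering module. As a rough stand-in for why taking more than one sample per output pixel softens jagged edges, the sketch below averages 2x2 blocks of subsamples into each pixel (a supersampling-style resolve, not true hardware MSAA; grid sizes and values are illustrative only).

```python
def resolve_2x2(samples):
    """samples: a 2H x 2W grid of coverage values in [0, 1].
    Returns an H x W grid where each pixel is the average of its
    2x2 subsample block, smoothing hard edges into gradients."""
    h, w = len(samples) // 2, len(samples[0]) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            block = (samples[2 * y][2 * x] + samples[2 * y][2 * x + 1] +
                     samples[2 * y + 1][2 * x] + samples[2 * y + 1][2 * x + 1])
            out[y][x] = block / 4.0
    return out

# A diagonal edge: partially covered pixels resolve to intermediate
# values instead of a hard 0/1 staircase.
edge = [[0, 0, 1, 1],
        [0, 1, 1, 1],
        [1, 1, 1, 0],
        [1, 1, 0, 0]]
print(resolve_2x2(edge))  # [[0.25, 1.0], [1.0, 0.25]]
```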
  • In a second aspect, an image processing apparatus is provided, applied to rendering processing of a first image by an electronic device, where the electronic device runs an application program, and one or more frame buffers are invoked when the electronic device performs rendering processing on the first image.
  • The rendering operations that the electronic device executes when rendering the first image are issued by the application program.
  • the device includes: a determining unit, configured to determine a first main scene in a process of rendering the first image.
  • the first main scene is a frame buffer that executes the largest number of rendering operations during the rendering process of the first image by the electronic device.
  • a configuration unit configured to configure a temporary frame buffer, where the resolution of the temporary frame buffer is smaller than the resolution of the first main scene.
  • the execution unit is configured to execute a first rendering operation on the temporary frame buffer when performing rendering processing on the first image.
  • the first rendering operation is a rendering operation instructed by the application program to be executed on the first main scene.
  • the determining unit is specifically configured to determine the first main scene based on the frame buffer that performs the largest number of rendering operations during the rendering process of the second image.
  • the rendering process of the second image is prior to the rendering process of the first image.
  • the first main scene is called when rendering processing is performed on the second image.
  • The determining unit is also configured to determine the number of draw calls (drawcalls) executed on each frame buffer during the rendering process of the second image, and to determine the frame buffer with the largest number of executed draw calls as the first main scene.
  • The second image is the frame immediately preceding the first image.
  • the function of the execution unit may be implemented by a processor and a rendering processing module.
  • The processor is configured to, when receiving a rendering command for the first image from the application program, issue a first rendering instruction to the rendering processing module, where the first rendering instruction includes the first rendering operation and is used to instruct the rendering processing module to execute the first rendering operation on the temporary frame buffer.
  • the rendering processing module is used to execute the first rendering operation on the temporary frame buffer according to the first rendering instruction.
  • The processor is further configured to, before sending the first rendering instruction to the rendering processing module, if it determines that the currently executed rendering command is a rendering command for the main scene, replace the frame buffer information bound to the currently executed rendering command from the first frame buffer information to the second frame buffer information to obtain the first rendering instruction, where the first frame buffer information is used to indicate that the first rendering operation is executed on the main scene, and the second frame buffer information is used to indicate that the first rendering operation is executed on the temporary frame buffer.
  • The first frame buffer information includes a first frame buffer object, which is the frame buffer object corresponding to the main scene; the second frame buffer information includes a second frame buffer object, which is the frame buffer object corresponding to the temporary frame buffer.
  • the rendering command is issued by the application program to the processor, and the rendering command includes the first rendering operation and the first frame buffer information.
  • the processor is a central processing unit (CPU).
  • the rendering processing module is a graphics processing unit (GPU).
  • the first rendering instruction further includes: performing multi-sampling on the image acquired by the first rendering operation.
  • In a third aspect, an electronic device is provided, including one or more processors and one or more memories. The one or more memories are coupled to the one or more processors and store computer instructions; when the one or more processors execute the computer instructions, the electronic device is caused to execute the image processing method according to the first aspect and any one of its possible designs.
  • In a fourth aspect, a chip system is provided, including an interface circuit and a processor interconnected through lines. The interface circuit is configured to receive signals from a memory and send them to the processor, the signals including computer instructions stored in the memory; when the processor executes the computer instructions, the chip system executes the image processing method in any one of the first aspect and its possible designs.
  • A computer-readable storage medium is provided, including computer instructions; when the computer instructions are executed, the image processing method in any one of the first aspect and its possible designs is executed.
  • A computer program product is provided, including instructions; when the computer program product runs on a computer, the computer can execute, according to the instructions, the image processing method in any one of the first aspect and its possible designs.
  • FIG. 1 is a schematic diagram of the composition of a video stream
  • FIG. 2 is a schematic diagram of an image display
  • FIG. 3 is a schematic diagram of yet another image display
  • FIG. 4 is a schematic diagram of the composition of an electronic device provided in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the software composition of an electronic device provided in the embodiment of the present application.
  • FIG. 6A is a schematic diagram of issuing a rendering command provided by an embodiment of the present application.
  • FIG. 6B is a schematic diagram of issuing a rendering command provided by the embodiment of the present application.
  • FIG. 6C is a schematic diagram of an image rendering provided by the embodiment of the present application.
  • FIG. 7A is a schematic flow chart of an image processing method provided by an embodiment of the present application.
  • FIG. 7B is a schematic diagram of determining the number of drawcalls on different frame buffers provided by the embodiment of the present application.
  • FIG. 7C is a schematic diagram of determining the number of drawcalls on different frame buffers provided by the embodiment of the present application.
  • FIG. 8 is a schematic diagram of issuing a rendering command provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of the composition of an image processing device provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of the composition of an electronic device provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a chip system provided by an embodiment of the present application.
  • the electronic device can display multimedia stream files to the user through a display screen, so as to provide the user with a rich visual experience.
  • the video stream may include multiple frame images.
  • The video stream may include N frames of images, such as the first frame, the second frame, ..., the Nth frame.
  • the electronic device may respectively display the first frame, the second frame, ... the Nth frame on the display screen.
  • When the frequency at which the electronic device switches and displays frame images is higher than the frequency human eyes can distinguish, the user does not perceive the switching of different frame images, thereby obtaining a continuous viewing effect.
  • An application that wants to play a video stream can issue rendering commands to the electronic device for different frames of images.
  • the electronic device can render each frame of image according to these rendering commands, and display based on the rendering result.
  • When displaying the current frame image, the electronic device may render the subsequent frame image according to the rendering command issued by the application program, so that when the subsequent frame image needs to be displayed, the currently displayed frame image can be replaced by the rendered frame image.
  • When the electronic device needs to display the (N-1)th frame image, it can control the display screen to read the data of the (N-1)th frame image from the current frame buffer (the current frame buffer shown in FIG. 2) for display.
  • The current frame buffer may be a storage space configured by the electronic device in the internal memory for the frame image currently to be displayed, and may be used to store the data of that frame image (such as the color data and depth data of each pixel, etc.). That is to say, when the (N-1)th frame image needs to be displayed, the electronic device can control the display screen to display it according to the data of the (N-1)th frame image stored in the current frame buffer.
  • While the electronic device displays the (N-1)th frame image, it can render the subsequent image to be displayed (such as the Nth frame) according to the rendering command issued by the application program, so as to obtain the data used to display the Nth frame image.
  • That is, while displaying the (N-1)th frame image, the electronic device may execute the rendering of the Nth frame. It can be understood that, in the process of rendering a frame image, the electronic device needs to store the rendered result for subsequent use. Therefore, in this example, the electronic device can configure the frame buffer in the memory before rendering the Nth frame image.
  • the frame buffer can correspond to the storage space in the memory.
  • the rendering result may be stored in a corresponding frame buffer.
  • the electronic device may call multiple frame buffers when rendering one frame of image.
  • The electronic device may invoke three frame buffers (such as FB0, FB1, and FB2) as shown in FIG. 3 when rendering the Nth frame image.
  • FB1 can be used to store the rendering result of some elements of the Nth frame image (called rendering result 1, for example).
  • FB2 can be used to store the rendering result of another part of the elements of the Nth frame image (called rendering result 2, for example).
  • The electronic device may fuse (or render) rendering result 1 and rendering result 2 into FB0.
  • In this way, the complete rendering result of the Nth frame image is obtained in FB0.
  • When the electronic device needs to display the Nth frame image, it can swap the data in FB0 into the current frame buffer, and then control the display screen to display the Nth frame image according to the data in the current frame buffer after the swap (that is, the data of the Nth frame).
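The flow described around FIG. 3 can be sketched as: render partial results into FB1 and FB2, fuse them into FB0, then swap FB0 into the current frame buffer for display. The dict-of-strings representation of buffer contents is purely an assumption for illustration.

```python
# Partial rendering results for frame N, as in FIG. 3.
framebuffers = {
    "FB1": "rendering result 1",   # some elements of the Nth frame
    "FB2": "rendering result 2",   # the remaining elements
}

# Fuse (compose) the partial results into FB0, yielding the
# complete rendering result of the Nth frame image.
framebuffers["FB0"] = framebuffers["FB1"] + " + " + framebuffers["FB2"]

# Swap FB0's data into the current frame buffer; the display
# screen then reads the Nth frame from there.
current_frame_buffer = framebuffers["FB0"]
print(current_frame_buffer)  # rendering result 1 + rendering result 2
```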
  • the electronic device may configure a corresponding resolution for the frame buffer when creating the frame buffer.
  • The higher the resolution, the larger the storage space corresponding to the frame buffer, which enables the rendering of higher-resolution images.
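The resolution-to-storage relationship can be made concrete with a rough estimate of a color buffer's size as width x height x bytes per pixel (4 bytes assumed for an RGBA8 attachment; depth/stencil attachments and driver overhead are ignored). The specific resolutions are illustrative only.

```python
def fb_bytes(width: int, height: int, bpp: int = 4) -> int:
    """Approximate color-buffer storage: one bpp-byte texel per pixel."""
    return width * height * bpp

full = fb_bytes(2400, 1080)   # main scene at full resolution
half = fb_bytes(1200, 540)    # temporary buffer at half resolution per axis
print(full, half, full // half)
```

Halving the resolution on both axes cuts the buffer to a quarter of the memory, which is why redirecting the drawcall-heavy main scene to a smaller temporary buffer relieves both memory and computing-power pressure.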
  • The embodiment of the present application provides an image processing method, which enables the electronic device to flexibly adjust the rendering mechanism of subsequent frame images according to the rendering of already-rendered frame images, for example by adjusting the resolution of the frame buffers used in rendering subsequent frame images, thereby reducing the heavy pressure on memory and computing power during frame image rendering and avoiding the resulting increase in heat generation and power consumption of the electronic device.
  • the image processing method provided in the embodiment of the present application may be applied in a user's electronic device.
  • the electronic device may be a device capable of providing network access.
  • The electronic device can be a portable mobile device such as a mobile phone, a tablet computer, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, or a media player; the electronic device may also be a wearable electronic device such as a smart watch.
  • the embodiment of the present application does not specifically limit the specific form of the device.
  • the electronic device may have a display function.
  • the electronic device may render an image according to a rendering command issued by an application program, and display a corresponding image to a user according to a rendering result obtained through rendering.
  • FIG. 4 is a schematic composition diagram of an electronic device 400 provided in an embodiment of the present application.
  • The electronic device 400 may include a processor 410, an external memory interface 420, an internal memory 421, a universal serial bus (USB) interface 430, a charging management module 440, a power management module 441, a battery 442, antenna 1, antenna 2, a mobile communication module 450, a wireless communication module 460, an audio module 470, a sensor module 480, a button 490, a motor 491, an indicator 492, a camera 493, a display screen 494, a subscriber identification module (SIM) card interface 495, and the like.
  • the sensor module 480 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
  • the electronic device 400 may also include devices such as a speaker, a receiver, a microphone, and an earphone jack for implementing audio-related functions of the electronic device.
  • the structure shown in this embodiment does not constitute a specific limitation on the electronic device 400 .
  • the electronic device 400 may include more or fewer components than shown, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • The processor 410 may include one or more processing units. For example, the processor 410 may include a central processing unit (CPU), an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 400 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 410 for storing instructions and data.
  • the memory in processor 410 is a cache memory.
  • The memory may hold instructions or data that the processor 410 has just used or uses cyclically. If the processor 410 needs the instructions or data again, it can call them directly from this memory, which avoids repeated access, reduces the waiting time of the processor 410, and improves system efficiency.
  • processor 410 may include one or more interfaces.
  • the interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous transmitter (universal asynchronous receiver/transmitter, UART) interface, mobile industry processor interface (mobile industry processor interface, MIPI), general-purpose input and output (general-purpose input/output, GPIO) interface, subscriber identity module (subscriber identity module, SIM) interface, And/or a universal serial bus (universal serial bus, USB) interface, etc.
  • the electronic device 400 can realize the shooting function through the ISP, the camera 493 , the video codec, the GPU, the display screen 494 and the application processor 410 .
  • the ISP is used to process data fed back by the camera 493 .
  • light is transmitted through the lens to the photosensitive element of the camera 493, where the light signal is converted into an electrical signal; the photosensitive element of the camera 493 transmits the electrical signal to the ISP for processing, where it is converted into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 493 .
  • Camera 493 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
  • the electronic device 400 may include 1 or N cameras 493, where N is a positive integer greater than 1.
  • the digital signal processor is used to process digital signals, and can process other digital signals in addition to digital image signals. For example, when the electronic device 400 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 400 may support one or more video codecs.
  • the electronic device 400 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • the NPU can quickly process input information, and can also continuously self-learn.
  • Applications such as intelligent cognition of the electronic device 400 can be implemented through the NPU, such as image recognition, face recognition, voice recognition, text understanding, and the like.
  • the charging management module 440 is configured to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 440 can receive the charging input of the wired charger through the USB interface 430 .
  • the charging management module 440 may receive wireless charging input through a wireless charging coil of the electronic device 400 .
  • while the charging management module 440 is charging the battery 442, it can also supply power to the electronic device 400 through the power management module 441.
  • the power management module 441 is used for connecting the battery 442 , the charging management module 440 and the processor 410 .
  • the power management module 441 receives the input from the battery 442 and/or the charging management module 440 to provide power for the processor 410 , internal memory 421 , external memory, display screen 494 , camera 493 , and wireless communication module 460 .
  • the power management module 441 can also be used to monitor parameters such as the capacity of the battery 442, the number of cycles of the battery 442, and the state of health of the battery 442 (leakage, impedance).
  • the power management module 441 may also be set in the processor 410 .
  • the power management module 441 and the charging management module 440 can also be set in the same device.
  • the wireless communication function of the electronic device 400 can be realized by the antenna 1, the antenna 2, the mobile communication module 450, the wireless communication module 460, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 400 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 450 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 400 .
  • the mobile communication module 450 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 450 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 450 can also amplify the signal modulated by the modem processor, convert it into electromagnetic wave and radiate it through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 450 may be set in the processor 410 .
  • at least part of the functional modules of the mobile communication module 450 and at least part of the modules of the processor 410 may be set in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speakers, receivers, etc.), or displays images or videos through the display screen 494 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 410, and be set in the same device as the mobile communication module 450 or other functional modules.
  • the wireless communication module 460 can provide solutions for wireless communication applied on the electronic device 400, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC) technology, infrared (IR) technology, etc.
  • the wireless communication module 460 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 460 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 410 .
  • the wireless communication module 460 can also receive the signal to be transmitted from the processor 410 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 400 is coupled to the mobile communication module 450, and the antenna 2 is coupled to the wireless communication module 460, so that the electronic device 400 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS) and/or a satellite based augmentation system (SBAS).
  • the electronic device 400 implements a display function through a GPU, a display screen 494, and an application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 494 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 410 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 494 is used to display images, videos and the like.
  • Display 494 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light emitting diodes (QLED), etc.
  • the electronic device 400 may include 1 or N display screens 494, where N is a positive integer greater than 1.
  • the external memory interface 420 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 400.
  • the external memory card communicates with the processor 410 through the external memory interface 420 to implement a data storage function, such as saving music, video and other files in the external memory card.
  • the internal memory 421 may be used to store computer-executable program code, which includes instructions.
  • the processor 410 executes various functional applications and data processing of the electronic device 400 by executing instructions stored in the internal memory 421 .
  • the internal memory 421 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 400 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 421 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the internal storage 421 may also be referred to as a memory.
  • a processor such as a CPU may create corresponding frame buffers in memory for rendering processing of different frame images.
  • the CPU can create FB0, FB1 and FB2 in the memory under the control of commands from the application program, so as to render the image of the Nth frame.
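The per-frame buffer setup described above can be modeled as follows. This is a hypothetical Python sketch; the function name `create_framebuffers` and the data layout are illustrative assumptions, not part of the patent or of any graphics API.

```python
def create_framebuffers(frame_index, buffer_ids=(0, 1, 2)):
    """Model of a CPU creating frame buffers (FB0, FB1, FB2) in memory
    for the rendering of one frame, as directed by the application."""
    return {f"FB{i}": {"frame": frame_index, "pixels": {}} for i in buffer_ids}

# Buffers created for rendering the Nth frame (here N = 7 for illustration).
frame_n_buffers = create_framebuffers(frame_index=7)
sorted(frame_n_buffers)  # ['FB0', 'FB1', 'FB2']
```

In a real OpenGL-based implementation the analogous step would allocate frame buffer objects on the GPU (e.g. via `glGenFramebuffers`) rather than Python dictionaries; the sketch only captures the "one set of buffers per frame" bookkeeping.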
  • the electronic device 400 may implement audio functions, such as music playback and recording, through the audio module 470, a speaker, a receiver, a microphone, an earphone interface, and the application processor.
  • the audio module 470 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 470 may also be used to encode and decode audio signals.
  • the audio module 470 may be set in the processor 410 , or some functional modules of the audio module 470 may be set in the processor 410 .
  • the loudspeaker, also called a "horn", is used to convert audio electrical signals into sound signals.
  • the electronic device 400 can listen to music through a speaker, or listen to a hands-free call.
  • the receiver, also known as the "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 400 receives a call or a voice message, the voice can be heard by putting the receiver close to the human ear.
  • the microphone, also known as a "mike" or "sound transmitter", is used to convert sound signals into electrical signals.
  • when making a call or sending a voice message, the user can input a sound signal into the microphone by speaking close to it.
  • the electronic device 400 may be provided with at least one microphone.
  • the electronic device 400 may be provided with two microphones, which may also implement a noise reduction function in addition to collecting sound signals.
  • the electronic device 400 can also be provided with three, four or more microphones to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
  • the headphone jack is used to connect wired headphones.
  • the earphone interface can be a USB interface 430, or a 3.5mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the touch sensor is also known as a "touch panel".
  • the touch sensor can be arranged on the display screen 494, and the touch sensor and the display screen 494 form a touch screen, also called “touch screen”.
  • the touch sensor is used to detect a touch operation on or near it.
  • the touch sensor can transmit the detected touch operation to the application processor 410 to determine the type of the touch event.
  • the visual output related to the touch operation can be provided through the display screen 494 .
  • the touch sensor may also be disposed on the surface of the electronic device 400 , which is different from the position of the display screen 494 .
  • the pressure sensor is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • a pressure sensor may be located on the display screen 494 .
  • there are many types of pressure sensors, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors.
  • a capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force is applied to the pressure sensor, the capacitance between the electrodes changes.
  • the electronic device 400 determines the intensity of pressure according to the change in capacitance.
  • the electronic device 400 detects the intensity of the touch operation according to the pressure sensor.
  • the electronic device 400 may also calculate the touched position according to the detection signal of the pressure sensor.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view short messages is executed. When a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the icon of the short message application, the instruction of creating a new short message is executed.
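The two-threshold behavior in the example above — the same icon triggering different instructions depending on touch intensity — can be sketched as a simple dispatch rule. This is an illustrative Python model; the threshold value, the function name `dispatch_touch`, and the instruction names are assumptions, since the patent does not fix concrete values.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # illustrative value; not specified in the text

def dispatch_touch(target, intensity):
    """Map a touch on the short-message application icon to an operation
    instruction, following the two-threshold example in the text:
    below the first pressure threshold -> view messages,
    at or above it -> create a new message."""
    if target != "sms_icon":
        return None  # other targets are outside this example
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_messages"
    return "new_message"
```

For example, a light tap (`intensity=0.2`) opens the message list, while a firm press (`intensity=0.9`) starts composing a new message.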
  • the gyro sensor can be used to determine the motion posture of the electronic device 400 .
  • the angular velocity of the electronic device 400 around three axes may be determined by a gyro sensor.
  • the gyro sensor can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor detects the shaking angle of the electronic device 400, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the electronic device 400 through reverse movement to achieve anti-shake.
  • Gyroscope sensors can also be used for navigation and somatosensory game scenes.
  • the barometric pressure sensor is used to measure air pressure.
  • the electronic device 400 calculates the altitude based on the air pressure value measured by the air pressure sensor to assist positioning and navigation.
  • Magnetic sensors include Hall sensors.
  • the electronic device 400 may use the magnetic sensor to detect the opening and closing of a flip leather case.
  • the electronic device 400 may detect the opening and closing of a flip cover according to the magnetic sensor, and accordingly set features such as automatic unlocking when the flip cover is opened.
  • the acceleration sensor can detect the acceleration of the electronic device 400 in various directions (generally three axes). When the electronic device 400 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device 400, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • the electronic device 400 can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 400 may use a distance sensor to measure a distance to achieve fast focusing.
  • Proximity light sensors may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 400 emits infrared light through the light emitting diode.
  • Electronic device 400 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 400 . When insufficient reflected light is detected, the electronic device 400 may determine that there is no object near the electronic device 400 .
  • the electronic device 400 can use the proximity light sensor to detect that the user holds the electronic device 400 close to the ear to make a call, so as to automatically turn off the screen to save power.
  • Proximity light sensor can also be used for leather case mode, pocket mode auto unlock and lock screen.
  • the ambient light sensor is used to sense the ambient light brightness.
  • the electronic device 400 can adaptively adjust the brightness of the display screen 494 according to the perceived ambient light brightness.
  • the ambient light sensor can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor can also cooperate with the proximity light sensor to detect whether the electronic device 400 is in the pocket, so as to prevent accidental touch.
  • the fingerprint sensor is used to collect fingerprints.
  • the electronic device 400 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
  • a temperature sensor is used to detect temperature.
  • the electronic device 400 uses the temperature detected by the temperature sensor to implement a temperature processing strategy. For example, when the temperature reported by the temperature sensor exceeds the threshold, the electronic device 400 may reduce the performance of the processor 410 located near the temperature sensor, so as to reduce power consumption and implement thermal protection.
  • when the temperature is lower than another threshold, the electronic device 400 heats the battery 442 to prevent the electronic device 400 from shutting down abnormally due to the low temperature.
  • when the temperature is still lower, the electronic device 400 boosts the output voltage of the battery 442 to avoid abnormal shutdown caused by the low temperature.
  • Bone conduction sensors can pick up vibration signals.
  • the bone conduction sensor can acquire the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor can also be placed in contact with the human pulse to receive the blood pressure pulse signal.
  • the bone conduction sensor can also be disposed in the earphone, combined into a bone conduction earphone.
  • the audio module 470 can analyze the voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor, so as to realize the voice function.
  • the application processor 410 can analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor, so as to realize the heart rate detection function.
  • the keys 490 include a power key, a volume key and the like.
  • the keys 490 may be mechanical keys or touch keys.
  • the electronic device 400 may receive an input of the key 490 and generate a key signal input related to user setting and function control of the electronic device 400 .
  • Motor 491 can generate a vibrating prompt.
  • the motor 491 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • the motor 491 can also correspond to different vibration feedback effects for touch operations acting on different areas of the display screen 494 .
  • touch operations in different application scenarios (for example: time reminders, receiving information, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 492 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 495 is used for connecting a SIM card.
  • the SIM card can be connected and separated from the electronic device 400 by inserting it into the SIM card interface 495 or pulling it out from the SIM card interface 495 .
  • the electronic device 400 may support 1 or N SIM card interfaces 495, where N is a positive integer greater than 1.
  • SIM card interface 495 can support Nano SIM card, Micro SIM card, SIM card etc. Multiple cards can be inserted into the same SIM card interface 495 at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 495 is also compatible with different types of SIM cards.
  • the SIM card interface 495 is also compatible with external memory cards.
  • the electronic device 400 interacts with the network through the SIM card to implement functions such as calling and data communication.
  • the electronic device 400 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 400 and cannot be separated from the electronic device 400 .
  • FIG. 4 shows a hardware structural composition in an electronic device.
  • the electronic device 400 may also be divided from another perspective.
  • FIG. 5 another logical composition of the electronic device 400 is shown.
  • the electronic device 400 may have a layered architecture.
  • a layered architecture divides the software into layers, each with a clear role and division of labor. Layers communicate through software interfaces.
  • the software of the electronic device is described taking the Android operating system as an example.
  • the system can be divided into five layers, which from top to bottom are the application layer, the application framework layer, the Android runtime (ART) and native C/C++ libraries, the hardware abstraction layer (HAL), and the kernel layer.
  • the application program layer may include a series of application program packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
  • the application program layer may include an application program that provides the user with a multimedia stream presentation function.
  • the application program layer may include various game applications.
  • the application layer may also include various video applications.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a resource manager, a notification manager, an event manager, an input manager, and the like.
  • the window manager provides window management service (Window Manager Service, WMS).
  • WMS can be used for window management, window animation management, surface management and as a transfer station for input systems.
  • Content providers are used to store and retrieve data and make it accessible to applications. This data can include videos, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and can automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify the download completion, message reminder, etc.
  • the notification manager can also be a notification that appears on the status bar at the top of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, prompting text information in the status bar, issuing a prompt sound, vibrating the electronic device, and flashing the indicator light, etc.
  • the activity manager can provide an activity management service (Activity Manager Service, AMS). AMS can be used to start, switch and schedule system components (such as activities, services, content providers and broadcast receivers), and to manage and schedule application processes.
  • the input manager can provide input management service (Input Manager Service, IMS), and IMS can be used to manage the input of the system, such as touch screen input, key input, sensor input, etc.
  • IMS fetches events from input device nodes, and distributes events to appropriate windows through interaction with WMS.
  • the Android runtime includes the core library and the Android runtime.
  • the Android runtime is responsible for converting source code into machine code.
  • the Android runtime mainly uses ahead-of-time (AOT) compilation technology and just-in-time (JIT) compilation technology.
  • the core library is mainly used to provide basic Java class library functions, such as basic data structure, mathematics, input and output (Input Output, IO), tools, database, network and other libraries.
  • the core library provides APIs for users to develop Android applications.
  • a native C/C++ library can include multiple functional modules. For example: surface manager (surface manager), media framework (Media Framework), standard C library (Standard C library, libc), open graphics library for embedded systems (OpenGL for Embedded Systems, OpenGL ES), Vulkan, SQLite, Webkit, etc.
  • the surface manager is used to manage the display subsystem, and provides the fusion of 2D and 3D layers for multiple applications.
  • the media framework supports playback and recording of various commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: Moving Picture Experts Group 4 (MPEG4), H.264, Moving Picture Experts Group Audio Layer 3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR), Joint Photographic Experts Group (JPEG or JPG), Portable Network Graphics (PNG), etc.
  • OpenGL ES and/or Vulkan provide drawing and manipulation of 2D graphics and 3D graphics in the application. SQLite provides a lightweight relational database for applications of the electronic device 400 .
  • the hardware abstraction layer runs in user space, encapsulates the kernel layer driver, and provides a call interface to the upper layer.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
  • the processor 410 includes a CPU and a GPU as an example.
  • the CPU may be configured to receive a rendering command from an application program, and issue a corresponding rendering command to the GPU according to the rendering command, so that the GPU performs corresponding rendering according to the rendering command.
  • the render command may include a bindFrameBuffer() function and one or more glDrawElements.
  • the rendering instruction may also include the bindFrameBuffer() function and one or more glDrawElements.
  • the bindFrameBuffer() function can be used to indicate the currently bound frame buffer.
  • bindFrameBuffer(1) can indicate that the currently bound frame buffer is FB1, that is, execute subsequent glDrawElement on FB1.
  • the collection of glDrawElement in the rendering command/rendering instruction is called the rendering operation.
  • the rendering command may include frame buffers to be invoked for rendering the Nth image frame, and rendering operations to be executed in each frame buffer.
  • the rendering command can bind the corresponding rendering operation to the frame buffer through the bindFrameBuffer() function.
  • take as an example that rendering the image of the Nth frame needs to use FB0, FB1 and FB2 as frame buffers.
  • FIG. 6A shows an example of a rendering command corresponding to FB1.
  • the rendering command issued by the application may include the bindFrameBuffer(1) function, thereby realizing the binding of the current rendering operation to FB1.
  • 1 glDrawElement instruction can correspond to 1 drawcall.
  • there can be multiple glDrawElement instructions executed on FB1 and there can be multiple corresponding drawcalls executed on FB1.
  • the drawcalls executed on FB1 may be included in the rendering command, such as glDrawElement 1 through glDrawElement A.
  • the CPU can issue the corresponding rendering command to the GPU according to the frame buffer information bound by the bindFrameBuffer() function and the corresponding glDrawElement command.
  • the GPU can perform a corresponding rendering operation, and store the result of the rendering operation in the currently bound frame buffer.
  • the CPU may determine that the active (active) frame buffer is FB1 when receiving the bindFrameBuffer(1) instruction.
  • the CPU can issue a rendering instruction corresponding to glDrawElement on FB1 to the GPU.
  • the CPU can issue rendering instructions including executing glDrawElement 1-1 to glDrawElement 1-A to the GPU, so that the GPU can execute glDrawElement 1-1 to glDrawElement 1-A.
  • the CPU may issue bindFrameBuffer(1) to the GPU, so that the GPU may determine to execute the above rendering instruction on FB1 and save the result in FB1.
  • the CPU may determine that the active frame buffer is FB2 when receiving the bindFrameBuffer(2) command.
  • the CPU can issue the corresponding glDrawElement rendering command on FB2 to the GPU, thereby completing the rendering of relevant data on FB2.
  • the electronic device can render the rendering results in these frame buffers into FB0, so that the data corresponding to the image of the Nth frame can be obtained in FB0.
  • the Nth frame image takes the default frame buffer of the Nth frame image as FB0, and the Nth frame image also calls FB1 and FB2 during the rendering process.
  • corresponding data of each pixel (such as color data and/or depth data, etc.) can be stored in FB1 and FB2 respectively.
  • These data (such as the data in FB1 and FB2 ) may respectively correspond to some elements of the image of the Nth frame.
  • the electronic device can render the data in FB1 and FB2 to FB0. In this way, all the data of the Nth frame image can be obtained on FB0.
  • the CPU can swap the data corresponding to the Nth frame of image from FB0 to the current frame buffer through the swapbuffer command. Furthermore, the display screen can be displayed according to the image data of the Nth frame in the current frame buffer.
  • as an example, the full data of the Nth frame image is obtained by performing, on FB0, only rendering based on the data of FB1 and FB2.
  • the Nth frame image is an image rendered according to a rendering instruction issued by the game application.
  • the Nth frame image may include elements such as characters and trees, may also include elements such as special effects, and may also include elements such as user interface (User Interface, UI) controls.
  • the CPU can control the GPU to perform rendering of characters, trees, UI controls and other elements on FB1 according to the rendering command sent by the game application, and store the result in FB1.
  • the CPU can also control the GPU to execute the rendering of special effects and other elements on the FB2 according to the rendering commands issued by the game application, and store the results in the FB2. Then, by rendering the data stored in FB1 and the data stored in FB2 to FB0, the full data of the Nth frame image can be obtained.
  • the rendering operation performed on FB0 may also include rendering operations performed based on rendering instructions other than the data of FB1 and FB2.
  • the CPU can control the GPU to perform rendering of characters, trees and other elements on FB1 according to the rendering commands issued by the game application, and store the results in FB1.
  • the CPU can also control the GPU to execute the rendering of special effects and other elements on the FB2 according to the rendering commands issued by the game application, and store the results in the FB2.
  • the CPU can also control the GPU to execute the rendering of UI controls on FB0 according to the rendering command issued by the game application, and combine the data stored in FB1 and FB2 to render and fuse all the rendering results, so as to obtain the full data of the Nth frame image on FB0.
  • the solution provided by the embodiment of the present application can enable the electronic device to automatically identify the frame buffer (such as the main scene) that consumes a lot of resources during the rendering process of the frame image currently to be rendered.
  • the effect of reducing the computing resources consumed by the rendering operation in the main scene is achieved.
  • the electronic device may determine the main scene through the rendering process of the N-1th frame image.
  • the CPU may determine, according to the number of drawcalls executed on each frame buffer during the processing of an already-rendered frame image (such as the N-1th frame), the frame buffer that executes the largest number of drawcalls, that is, the main scene.
  • the CPU may configure a temporary frame buffer with a smaller resolution in memory for the main scene when rendering the image of the Nth frame.
  • the CPU when receiving a rendering command for the main scene, the CPU can control the GPU to perform a rendering operation corresponding to the main scene at a smaller resolution in the temporary frame buffer. In this way, during the rendering of the image of the Nth frame, a large number of rendering operations can be performed at a smaller resolution, thereby reducing the rendering load and reducing the power consumption required for rendering.
  • FIG. 7A is a schematic flowchart of an image processing method provided by an embodiment of the present application. As shown, the method may include:
  • the main scene may be the frame buffer that performs the largest number of rendering operations during the rendering process.
  • an electronic device playing a video stream is taken as an example. Scenes of successive frames of images constituting a video stream are associated. Therefore, the processor (such as CPU) of the electronic device can determine the main scene of the current frame image according to the main scene of the previous frame image.
  • the CPU may determine the main scene of the N-1th frame image according to the number of drawcalls executed on different frame buffers during the image rendering process of the previous frame (such as the N-1th frame image).
  • the Nth frame of image may also be called a first image
  • the main scene of the Nth frame of image may also be called a first main scene.
  • the N-1th frame of image may be referred to as a second image
  • the main scene of the second image may be the same as that of the first image (eg, both are the first main scene).
  • the rendering command issued by the application program corresponding to FB1 is called rendering command 1
  • the rendering command issued by the application program corresponding to FB2 is called rendering command 2.
  • the rendering command 1 can include glBindFrameBuffer(1) for binding FB1.
  • Rendering command 1 may also include one or more glDrawElements that need to be executed on FB1.
  • rendering command 1 may include A glDrawElement, that is, A drawcall.
  • Rendering command 2 may also include one or more glDrawElements that need to be executed on FB2.
  • the rendering command 2 may include B glDrawElements, that is, B drawcalls.
  • the CPU may count the number of drawcalls included in the corresponding rendering commands, so as to obtain the number of drawcalls executed on each FB.
  • the CPU can initialize counter 1 when executing glBindFrameBuffer(1) according to the rendering command issued by the application program. For example, a corresponding count field can be configured for FB1 in memory; initializing counter 1 sets the value of this field to 0.
  • the count of counter 1 is increased by 1, such as executing count1++.
  • the CPU can execute count1++ on counter 1, so that the value of the field storing the FB1 drawcall count changes from 0 to 1. That is to say, the number of drawcalls executed on FB1 is 1 at this time, and so on. Then, the CPU can determine that the number of drawcalls executed on FB1 during the rendering of the N-1th frame image is the current count of counter 1 (for example, the count can be A).
  • the count of counter 2 is increased by 1, such as executing count2++.
  • the CPU can determine that the number of drawcalls executed on FB2 is the current count of counter 2 (for example, the count can be B) during the rendering process of image frame N-1.
  • the value of the field storing the FB1 drawcall count in memory can be A, and the value of the field storing the FB2 drawcall count can be B.
  • the CPU may select the frame buffer corresponding to the larger count in A and B as the main scene. For example, when A is greater than B, the CPU may determine that the number of drawcalls executed on FB1 is more, thereby determining that FB1 is the main scene of the N-1th frame image. In contrast, when A is smaller than B, the CPU can determine that the number of drawcalls executed on FB2 is more, thereby determining that FB2 is the main scene of the N-1th frame image.
  • the execution process of S701 may be executed by the CPU during the rendering process of the N-1th frame image.
  • the main scene is the frame buffer that executes the largest number of drawcalls. Therefore, when it is necessary to reduce the pressure of rendering processing on the electronic device, the rendering processing in the main scene can be adjusted, so as to obtain a more significant effect than adjusting the rendering processing mechanisms in other frame buffers.
  • the CPU of the electronic device may configure a temporary frame buffer for the main scene, and the resolution of the temporary frame buffer may be smaller than that of the main scene.
  • the electronic device can use a smaller resolution to process a large number of drawcalls in the main scene, thereby reducing the impact of high-resolution rendering on the electronic device.
  • the CPU may determine the resolution of the main scene before configuring the temporary frame buffer in the memory for the main scene.
  • the CPU can determine the resolution of the main scene according to the size of the canvas used when rendering the N-1th frame image. It can be understood that when the application sends a rendering command to the CPU, the frame buffer can be bound through the bindFrameBuffer() function in the rendering command. Take the rendering command bound to FB1 through bindFrameBuffer(1) as an example. In this rendering command, the drawing size when performing rendering operations in FB1 can also be specified through the glViewPort(x, y) function. In this way, the CPU can control the GPU to perform subsequent rendering processing corresponding to glDrawElement in the x*y pixel area.
  • when acquiring the resolution of the main scene, the CPU can determine its pixel size according to the pixel area specified by the glViewPort(x, y) function for the main scene during the rendering of the N-1th frame image. For example, take FB1 as the main scene. The CPU may determine that glViewPort(2218, 978) was received after binding FB1 (such as receiving the bindFrameBuffer(1) function) when rendering the N-1th frame image. The CPU can then determine that a 2218×978 pixel area on FB1 was used in the rendering of the N-1th frame image, and thus that the resolution of the main scene (that is, FB1) is 2218×978.
  • the CPU may determine the resolution of the main scene by executing the glGetIntegerv command on the main scene after receiving the rendering command for the image of the Nth frame. For example, take the main scene as FB1 and execute the glGetIntegerv(GL_VIEWPORT) command to obtain the resolution as an example.
  • the CPU can execute the glGetIntegerv(GL_VIEWPORT) instruction after receiving the rendering command of the Nth frame image and binding FB1 (for example, receiving the bindFrameBuffer(1) function). Then, according to the obtained result (such as tex1 (2218×978)), the resolution of the main scene (i.e., FB1) is determined to be 2218×978.
  • the CPU may configure a temporary frame buffer with a smaller resolution for the main scene when the resolution of the main scene is determined.
  • the operation of configuring the temporary frame buffer may be performed by the CPU during the rendering process of the N-1th frame image.
  • the operation of configuring the temporary frame buffer may also be performed before the CPU completes the rendering of the image of the N-1th frame and starts to execute the rendering of the image of the Nth frame.
  • scaling parameters may be configured in the electronic device. This scaling parameter can be used to determine the resolution of the temporary framebuffer.
  • the CPU may determine the resolution of the temporary frame buffer according to the following formula (1).
  • the resolution of the temporary frame buffer = the resolution of the main scene × the scaling parameter ... Formula (1).
  • the scaling parameter may be a positive number less than 1.
  • the scaling parameter may be configured by the user, may also be preset in the electronic device, or may be acquired by the electronic device from the cloud when needed.
  • after determining the resolution of the temporary frame buffer, the CPU can configure a storage space of corresponding size for the temporary frame buffer in memory.
  • the CPU may determine the resolution of the main scene after performing the rendering process of the image of the N-1th frame. Furthermore, the resolution of the temporary frame buffer can be determined according to the above formula (1). Based on this, the CPU can configure the temporary frame buffer in memory.
  • the CPU may configure the temporary frame buffer by creating a temporary frame buffer object and binding the temporary frame buffer object to a storage space configured in memory.
  • the temporary frame buffer object may be the name of the temporary frame buffer.
  • the frame buffer object fbo can be assigned a value of 3.
  • the texture ID value is texture; the driver assigns texture to the FB, for example, texture can be assigned a value of 11.
  • glBindTexture(GL_TEXTURE_2D, texture); // bind texture 11, where GL_TEXTURE_2D indicates that the texture target is a 2D texture
  • glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 800, 600, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL); // allocate an 800*600 memory space to the texture
  • in this function, the second parameter (i.e., 0) indicates the texture level, the third parameter (i.e., GL_RGB) indicates the format in which the texture is stored, the seventh parameter (i.e., GL_RGB) indicates the format of the input texture data, and the eighth parameter (i.e., GL_UNSIGNED_BYTE) indicates the data type of the input texture.
  • glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0); // attach the created texture texture to the currently bound frame buffer object
  • take the temporary frame buffer being FB3 as an example. Since the resolution of FB3 is smaller than that of FB1, the storage space occupied by FB3 in internal memory is smaller than that occupied by FB1. Correspondingly, since rendering operations on FB3 are performed at a lower resolution (that is, the same rendering process on FB3 touches fewer pixels than on FB1), rendering operations on FB3 consume less power and generate less heat than those on FB1.
  • the CPU can complete the action of determining the main scene and configuring the corresponding temporary frame buffer for the main scene.
  • the electronic device may use a frame buffer with a smaller resolution to perform a large number of rendering operations in the main scene through the processor.
  • the CPU may control the GPU to perform a rendering operation on a corresponding frame buffer according to the rendering command.
  • FIG. 8 takes the main scene as FB1, the corresponding temporary frame buffer as FB3, and the number of drawcalls included in FB1 during rendering of the Nth frame image as 2 (such as glDrawElement1 and glDrawElement2) as an example.
  • the rendering command received by the CPU from the application program includes the following instructions as an example:
  • the CPU may replace the identifier (such as FB1 ) used to bind the main scene in the rendering command of the Nth frame image with the identifier (such as FB3 ) of the temporary frame buffer.
  • the frame buffer object of FB1 may also be called first frame buffer information
  • the frame buffer object of FB3 may also be called second frame buffer information.
  • the CPU can replace bindFrameBuffer(1) with bindFrameBuffer(3). Then, the replaced rendering command can be:
  • the CPU can replace the original rendering operation bound to the main scene (such as FB1) with the rendering operation bound to the temporary frame buffer. Furthermore, the CPU can send the rendering instruction (for example, the first rendering instruction) to the GPU, so that the GPU can execute the rendering operations of glDrawElement1 and glDrawElement2 on FB3, and store the result on FB3.
  • the rendering operation performed on the main scene (or the temporary frame buffer) may also be referred to as the first rendering operation.
  • the electronic device may execute the above solution for rendering operations of all main scenes through the CPU, for example, a temporary frame buffer with a smaller resolution is used to perform rendering operations of the main scene.
  • the electronic device may also refer to the scheme shown in FIG. 7A when power consumption control is required. For example, as a possible implementation, the electronic device may determine that power consumption control needs to be performed when the load is greater than a preset threshold. Thus, the solution shown in FIG. 7A can be triggered to reduce power consumption during image processing. As another possible implementation, the electronic device may implement the above solution in combination with the resolution of the main scene.
  • the electronic device may execute S702 when it determines that the resolution of the main scene is greater than a preset resolution and/or that the number of drawcalls executed in the main scene is greater than a preset rendering threshold, that is, configure a temporary frame buffer for the main scene and then render the main scene in the temporary frame buffer at a smaller resolution.
  • FIG. 9 shows a schematic flowchart of another image processing method provided by the embodiment of the present application.
  • the solution may be executed by the CPU in the electronic device, so as to realize the effect of dynamically adjusting the resolution corresponding to the main scene of the rendering operation, thereby reducing power consumption.
  • S703 can be specifically implemented through S901-S903.
  • the CPU may determine whether the current rendering command needs to be executed on the main scene according to whether the frame buffer object indicated by bindFrameBuffer() in the received rendering command is the same as the frame buffer object of the main scene determined in S701.
  • if the two frame buffer objects are the same, the electronic device may determine that the currently executed rendering command needs to be executed on the main scene.
  • if they are different, the electronic device can determine that the currently executed rendering command does not need to be executed on the main scene.
  • the CPU of the electronic device may continue to execute the following S902.
  • the frame buffer information may include a frame buffer object that needs to be bound to the currently executed command.
  • the CPU may replace FB1 corresponding to bindFrameBuffer(1) with a frame buffer object (such as FB3) of the temporary frame buffer.
  • the command after replacement can be: bindFrameBuffer(3).
  • the GPU may perform a corresponding rendering operation according to the received rendering instruction. For example, when the GPU receives bindFrameBuffer(3) and subsequent drawcalls, it can execute the rendering operations of these drawcalls on FB3 and store the rendering results on FB3.
  • the CPU may also execute S904 according to the judgment result after performing the judgment in S901 .
  • S904 may be executed.
  • the CPU may determine that the currently executed rendering command is not a rendering command in the main scene when the frame buffer object of the frame buffer to be bound by the currently executed rendering command is different from the frame buffer object of the main scene.
  • when the rendering command to be executed is not performed in the main scene, it is not among the most computation-intensive rendering operations of the Nth frame image. The CPU can then directly control the GPU to execute the corresponding drawcall on the frame buffer specified by the rendering command. For example, S904 is executed: sending a rendering instruction to the GPU, so that the GPU performs the rendering operation on the corresponding frame buffer.
  • the CPU currently executes the rendering of the image of the Nth frame as an example.
  • the CPU may complete the processing of S701-S702 after the rendering processing of the N-1th frame image is completed. That is, the main scene of the Nth frame image has been determined, and a corresponding temporary frame buffer has been configured for the main scene.
  • the application program that sends the rendering command is a game application as an example.
  • the CPU may execute S701-S702 at the beginning of the game.
  • if the CPU is configured to execute the scheme shown in FIG. 7A or FIG. 9 starting from the sixth frame image, then after the game starts and the fifth frame image finishes rendering, the CPU can determine the main scene according to the rendering of the fifth frame image and configure the corresponding temporary frame buffer for it. During the rendering process of the sixth frame image, the CPU can then use the temporary frame buffer to execute the drawcalls of the main scene, thereby significantly reducing power consumption during rendering.
  • the method of replacing the main scene and the temporary frame buffer is described by taking the rendering command sent by the application to point to the main scene through the bindFrameBuffer() function as an example.
  • the CPU can also replace the frame buffer object pointed to by the corresponding function through a similar scheme, for example replacing the frame buffer object of the main scene with that of the temporary frame buffer, enabling the corresponding rendering operations to be performed on the temporary frame buffer.
  • the CPU can replace the identifier of the attachment included in the texture of the current frame buffer, so that the GPU can use the correct attachment to perform corresponding rendering processing in the temporary frame buffer.
  • take the main scene as FB1 and the temporary frame buffer as FB3 as an example.
  • the identifier of the attachments (such as the color attachment, depth attachment, etc.) of FB1 can be 11.
  • the CPU can also replace the identifier of the FB1 attachment accordingly.
  • replace attachment 11 with attachment 13 (that is, the attachment ID of FB3), for example, replace glBindTexture(GL_TEXTURE_2D, 11) with glBindTexture(GL_TEXTURE_2D, 13).
  • when the electronic device executes the rendering operation of the main scene on the temporary frame buffer, it can use multisampling technology to improve the display effect of the rendering result obtained at the reduced resolution, thereby improving the quality of the image displayed to the user.
  • the CPU may instruct the GPU to use more color, depth and/or stencil information to process primitives (such as points, lines, polygons, etc.), so as to smooth the edges of the image.
  • the embodiments of the present application may divide the involved devices into functional modules according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules. It should be noted that the division of modules in the embodiment of the present application is schematic, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 10 is a schematic composition diagram of an image processing apparatus 1000 provided by an embodiment of the present application.
  • the apparatus may be set in an electronic device to implement any possible image processing method provided by the embodiments of the present application.
  • the apparatus may be applied to rendering processing of the first image by an electronic device.
  • the electronic device runs an application program.
  • the electronic device performs rendering processing on the first image, one or more frame buffers are invoked.
  • the rendering operations with which the electronic device performs rendering processing on the first image are issued by the application program.
  • the apparatus includes: a determining unit 1001 configured to determine a first main scene in a process of rendering the first image.
  • the first main scene is a frame buffer that executes the largest number of rendering operations during the rendering process of the first image by the electronic device.
  • the configuration unit 1002 is configured to configure a temporary frame buffer, where the resolution of the temporary frame buffer is smaller than the resolution of the first main scene.
  • the executing unit 1003 is configured to execute a first rendering operation on the temporary frame buffer when performing rendering processing on the first image.
  • the first rendering operation is a rendering operation instructed by the application program to be executed on the first main scene.
  • the determining unit 1001 is specifically configured to determine the first main scene based on the frame buffer that performs the largest number of rendering operations during the rendering process of the second image.
  • the rendering process of the second image is prior to the rendering process of the first image.
  • the first main scene is called when rendering processing is performed on the second image.
  • the determination unit 1001 is further configured to determine the number of draw calls (drawcalls) to be executed on each frame buffer during the rendering process of the second image, and to determine the frame buffer on which the largest number of drawcalls will be executed as the first main scene.
  • the second image is the frame immediately preceding the first image.
  • the functions of the units shown in FIG. 10 may be implemented by hardware modules as shown in FIG. 4 .
  • the function of the execution unit 1003 can be realized by the processor and the rendering processing module as shown in FIG. 4 .
  • the rendering processing module may be a module with graphics rendering function.
  • the rendering processing module may be a GPU as shown in FIG. 4 .
  • the aforementioned processor may be a CPU as shown in FIG. 4 .
  • when receiving a rendering command for the first image from the application program, the processor is configured to issue a first rendering instruction to the rendering processing module, where the first rendering instruction includes the first rendering operation and is used to instruct the rendering processing module to execute the first rendering operation on the temporary frame buffer.
  • the rendering processing module is used to execute the first rendering operation on the temporary frame buffer according to the first rendering instruction.
  • the processor is further configured to, before sending the first rendering instruction to the rendering processing module, replace the frame buffer information bound to the currently executed rendering command from the first frame buffer information with the second frame buffer information to obtain the first rendering instruction. The first frame buffer information is used to indicate that the first rendering operation is executed on the main scene, and the second frame buffer information is used to indicate that the first rendering operation is executed on the temporary frame buffer.
  • the first frame buffer information includes a first frame buffer object, which is the frame buffer object corresponding to the main scene; the second frame buffer information includes a second frame buffer object, which is the frame buffer object corresponding to the temporary frame buffer.
  • the rendering command is issued by the application program to the processor, and the rendering command includes the first rendering operation and the first frame buffer information.
  • the first rendering instruction further includes: performing multi-sampling on the image acquired by the first rendering operation.
  • FIG. 11 shows a schematic composition diagram of an electronic device 1100 .
  • the electronic device 1100 may include: a processor 1101 and a memory 1102 .
  • the memory 1102 is used to store computer-executable instructions.
  • the processor 1101 executes the instructions stored in the memory 1102
  • the electronic device 1100 may be made to execute the image processing method shown in any one of the above embodiments.
  • FIG. 12 shows a schematic composition diagram of a chip system 1200 .
  • the chip system 1200 may include: a processor 1201 and a communication interface 1202, configured to support related devices to implement the functions involved in the foregoing embodiments.
  • the chip system further includes a memory for saving necessary program instructions and data of the terminal.
  • the system-on-a-chip may consist of chips, or may include chips and other discrete devices.
  • the communication interface 1202 may also be called an interface circuit.
  • the functions or actions or operations or steps in the above-mentioned embodiments may be fully or partially implemented by software, hardware, firmware or any combination thereof.
  • a software program When implemented using a software program, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or may include one or more data storage devices such as servers and data centers that can be integrated with the medium.
  • the available medium may be a magnetic medium (such as a floppy disk, a hard disk, or a magnetic tape), an optical medium (such as a DVD), or a semiconductor medium (such as a solid state disk (solid state disk, SSD)), etc.

Abstract

本申请实施例公开了一种图像处理方法和电子设备,涉及图像处理领域,可以使用较小的分辨率执行主场景中的大量渲染操作,从而达到降低渲染功耗的效果。具体方案为:确定对该第一图像执行渲染处理过程中的第一主场景。该第一主场景是该电子设备对该第一图像的渲染处理过程中,执行渲染操作数量最多的帧缓冲。配置临时帧缓冲,该临时帧缓冲的分辨率小于该第一主场景的分辨率。在对该第一图像进行渲染处理时,在该临时帧缓冲上,执行第一渲染操作。该第一渲染操作是该应用程序指示的在该第一主场景上执行的渲染操作。

Description

一种图像处理方法和电子设备
本申请要求于2021年6月10日提交国家知识产权局、申请号为202110650009.7、发明名称为“一种图像处理方法和电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及图像处理领域,尤其涉及一种图像处理方法和电子设备。
背景技术
[援引加入(细则20.6) 07.07.2022] 
随着电子设备的发展,对电子设备显示图像的能力要求也越来越高。
[援引加入(细则20.6) 07.07.2022] 
示例性的,以电子设备为手机为例。为了能够为用户提供更加清楚的图像显示,手机可以采用更高的分辨率向用户展示图像。而手机在渲染获取具有高分辨率的图像时,产生的功耗更高。因此就会导致算力开销过大,发热严重等问题。严重时还会出现手机运行卡顿。进而严重影响用户体验。
发明内容
本申请实施例提供一种图像处理方法和电子设备,可以使用较小的分辨率执行主场景中的大量渲染操作,从而达到降低渲染功耗的效果。
为了达到上述目的,本申请实施例采用如下技术方案:
第一方面,提供一种图像处理方法,应用于电子设备对第一图像的渲染处理,该电子设备运行有应用程序,该电子设备对该第一图像执行渲染处理时调用一个或多个帧缓冲,该电子设备对该第一图像执行渲染处理的渲染操作由该应用程序下发。该方法包括:确定对该第一图像执行渲染处理过程中的第一主场景。该第一主场景是该电子设备对该第一图像的渲染处理过程中,执行渲染操作数量最多的帧缓冲。配置临时帧缓冲,该临时帧缓冲的分辨率小于该第一主场景的分辨率。在对该第一图像进行渲染处理时,在该临时帧缓冲上,执行第一渲染操作。该第一渲染操作是该应用程序指示的在该第一主场景上执行的渲染操作。
基于该方案,提供了一种降低主场景下的渲染功耗的方案。在本示例中,电子设备可以为主场景(如执行渲染操作最多的帧缓冲)配置具有较小分辨率的临时帧缓冲。电子设备还可以在临时帧缓冲上执行原先需要在主场景上执行的渲染操作。这样,就可以实现采用较小的分辨率执行渲染操作的目的,从而缓解由于高分辨率的大量渲染操作导致的渲染功耗过高的问题。可以理解的是,本示例中,配置临时帧缓冲,以及在临时帧缓冲上执行渲染操作的对象,均为主场景,即渲染操作最多的帧缓冲。在本申请的另一些实现中,电子设备还可以为其他具有较多渲染操作的帧缓冲配置对应的具有较小分辨率的临时帧缓冲,并在该对应的临时帧缓冲上进行较低分辨率的渲染操作,从而使得采用较小分辨率执行的渲染操作数量得到提升,进一步降低渲染操作的功耗。
在一种可能设计中,该确定对该第一图像的渲染处理过程中的第一主场景,包括:基于对第二图像的渲染处理过程中,执行渲染操作数量最多的帧缓冲,确定该第一主场景。该第二图像的渲染处理在该第一图像的渲染处理之前。对该第二图像执行渲染处理时调用该第一主场景。基于该方案,提供了一种第一主场景的确定方案。在本示例中,电子设备可以根据在当前帧图像(如第一图像)的渲染处理之前,对其他帧图像(如第二图像)进行的渲染处理的过程中,各个帧缓冲上的渲染操作的数量确定当前帧图像的主场景。可以理解的是,第二图像的渲染处理在第一图像之前,那么在执行第一图像的渲染时,第二图像的所有渲染操作已经完成,因此电子设备就可以知晓第二图像在渲染过程中,各个帧缓冲上执行渲染操作的数量。由于图像的连续性,因此一般情况,可以认为对第一图像而言,渲染操作最多的帧缓冲与第二图像保持一致。 因此,在本示例中,可以将第二图像渲染处理过程中,执行渲染操作最多的帧缓冲,作为当前帧图像(即第一图像)的主场景。
在一种可能设计中,在该确定对该第一图像执行渲染处理过程中的第一主场景之前,该方法还包括:在对该第二图像的渲染处理过程中,确定在每个帧缓冲上执行绘制调用(drawcall)的数量,将执行drawcall的数量最多的帧缓冲确定为该第一主场景。基于该方案,提供了一种具体的确定第一主场景的方案示例。在本示例中,可以通过drawcall的数量,来表示渲染操作的数量。比如,drawcall的数量越多,那么对应的渲染操作的数量就越多,反之亦然。由此,电子设备就可以通过确定第二图像的渲染过程中,各个帧缓冲上的drawcall的数量,确定具有最大drawcall数量的帧缓冲为第一主场景。可以理解的是,由于主场景上的drawcall最多,因此在主场景上的渲染处理最复杂,需要的功耗也就越高。因此,结合前述方案,在drawcall最多的帧缓冲(即主场景)上执行上述方案,能够达到较好的降低功耗的效果。
在一种可能设计中,该第二图像是该第一图像的上一帧图像。基于该方案,提供了一种第二图像的具体选择方案。比如,第二图像可以是第一图像的上一帧图像。这样,由于第二图像与第一图像所要显示的图像非常接近,因此根据第二图像渲染过程中的drawcall数量确定的第一主场景也就越准确。
在一种可能设计中,该电子设备配置有处理器和渲染处理模块。该在该临时帧缓冲上,执行第一渲染操作,包括:在该处理器接收来自该应用程序的对该第一图像的渲染命令的情况下,该处理器向该渲染处理模块下发第一渲染指令,该第一渲染指令包括该第一渲染操作,该第一渲染指令用于指示该渲染处理模块在该临时帧缓冲上执行该第一渲染操作。该渲染处理模块根据该第一渲染指令,在该临时帧缓冲上执行该第一渲染操作。基于该方案,提供了一种执行第一渲染操作的具体方案的示例。在本示例中,电子设备可以通过处理器和渲染处理模块配合,实现以较小的分辨率执行主场景的渲染操作的目的。比如,处理器可以向渲染处理模块下发渲染指令,指示渲染处理模块在具有较小分辨率的临时帧缓冲上执行本应在具有较大分辨率的主场景上执行的渲染操作。这样,渲染处理模块就可以根据渲染指令,在临时帧缓冲上执行对应的渲染操作,从而达到采用较小分辨率进行渲染的效果。
在一种可能设计中,在该处理器向该渲染处理模块下发第一渲染指令之前,该方法还包括:该处理器判断当前执行的渲染命令是对该主场景的渲染命令的情况下,将当前执行的该渲染命令绑定的帧缓冲信息由第一帧缓冲信息替换为第二帧缓冲信息,以得到该第一渲染指令,该第一帧缓冲信息用于指示在该主场景上执行该第一渲染操作,该第二帧缓冲信息用于指示在该临时帧缓冲上执行该第一渲染操作。基于该方案,提供了一种处理器向渲染处理模块发送的第一渲染指令的确定逻辑。在该示例中,处理器可以在确定当前来自应用程序的渲染命令是针对主场景的渲染命令的情况下,执行后续的操作。比如,处理器可以根据当前渲染命令所绑定的帧缓冲的帧缓冲对象,与主场景的帧缓冲对象进行对比,当结果一致时,则处理器确定当前的渲染命令是针对主场景的。此后,处理器可以向渲染处理模块下发渲染指令,以指示渲染处理模块进行渲染操作。在本示例中,处理器可以将渲染指令中,本指向主场景的帧缓冲信息,替换为指向临时帧缓冲的帧缓冲信息,并下发给渲染处理模块。这样,渲染处理模块 就可以根据该渲染指令,在该渲染指令指示的帧缓冲(如临时帧缓冲)上执行对应的渲染操作,由此实现在临时帧缓冲上执行本在主场景上执行的渲染操作。
在一种可能设计中,该第一帧缓冲信息包括第一帧缓冲对象,该第一帧缓冲对象是该主场景对应的帧缓冲对象,该第二帧缓冲信息包括第二帧缓冲对象,该第二帧缓冲对象是该临时帧缓冲对应的帧缓冲对象。基于该方案,提供了一种帧缓冲信息的具体实现。比如,帧缓冲信息可以包括帧缓冲对象。以替换bindFrameBuffer()函数的帧缓冲信息为例。在主场景为FB1,临时帧缓冲为FB3的情况下,渲染命令中可以包括bindFrameBuffer(1)。对应的,在不替换帧缓冲信息的情况下,处理器可以向渲染处理模块下发包括bindFrameBuffer(1)的渲染指令,以指示渲染处理模块在FB1上执行渲染指令中的drawcall。在采用本申请所述的方案时,可以将FB1的帧缓冲信息替换为FB3的帧缓冲信息,那么,渲染命令中可以包括bindFrameBuffer(1)时,处理器可以将bindFrameBuffer(1)替换为bindFrameBuffer(3),并在渲染指令中携带该bindFrameBuffer(3),从而指示渲染处理模块在FB3上执行对应的渲染操作。
在一种可能设计中,该渲染命令是该应用程序下发给该处理器的,该渲染命令包括该第一渲染操作,以及该第一帧缓冲信息。基于该方案,提供了一种渲染命令的基本组成示意。在该示例中,渲染命令可以是来自应用程序的,用于指示处理器在所指示的帧缓冲上执行第一渲染操作。
在一种可能设计中,该处理器是中央处理器(CPU)。基于该方案,提供了一种处理器的具体实现。比如,该处理器可以是电子设备中的CPU。在另一些实现中,处理器的功能还可以通过其他具有处理功能的部件或者电路实现。
在一种可能设计中,该渲染处理模块是图形处理器(GPU)。基于该方案,提供了一种渲染处理模块的具体实现。比如,该渲染处理模块可以是电子设备中的GPU。在另一些实现中,渲染处理模块的功能还可以通过其他具有图形渲染功能的部件或者电路实现。
在一种可能设计中,该第一渲染指令还包括:对该第一渲染操作获取的图像执行多重采样。基于该方案,提供了一种在降低功耗的前提下提升图像显示质量的方案示例。在本示例中,电子设备可以对采用较低分辨率执行渲染操作获取的图像,采用多重采样技术,降低图像边缘的锯齿,提升图像的整体显示质量。
第二方面,提供一种图像处理装置,应用于电子设备对第一图像的渲染处理,该电子设备运行有应用程序,该电子设备对该第一图像执行渲染处理时调用一个或多个帧缓冲,该电子设备对该第一图像执行渲染处理的渲染操作由该应用程序下发。该装置包括:确定单元,用于确定对该第一图像执行渲染处理过程中的第一主场景。该第一主场景是该电子设备对该第一图像的渲染处理过程中,执行渲染操作数量最多的帧缓冲。配置单元,用于配置临时帧缓冲,该临时帧缓冲的分辨率小于该第一主场景的分辨率。执行单元,用于在对该第一图像进行渲染处理时,在该临时帧缓冲上,执行第一渲染操作。该第一渲染操作是该应用程序指示的在该第一主场景上执行的渲染操作。
在一种可能设计中,确定单元,具体用于基于对第二图像的渲染处理过程中,执行渲染操作数量最多的帧缓冲,确定该第一主场景。该第二图像的渲染处理在该第一 图像的渲染处理之前。对该第二图像执行渲染处理时调用该第一主场景。
在一种可能设计中,确定单元,还用于在对该第二图像的渲染处理过程中,确定在每个帧缓冲上执行绘制调用(drawcall)的数量,将执行drawcall的数量最多的帧缓冲确定为该第一主场景。
在一种可能设计中,该第二图像是第一图像的上一帧图像。
在一种可能设计中,该执行单元的功能可以通过处理器和渲染处理模块实现。示例性的,在该处理器用于在接收到来自该应用程序的对该第一图像的渲染命令的情况下,向该渲染处理模块下发第一渲染指令,该第一渲染指令包括该第一渲染操作,该第一渲染指令用于指示该渲染处理模块在该临时帧缓冲上执行该第一渲染操作。该渲染处理模块用于根据该第一渲染指令,在该临时帧缓冲上执行该第一渲染操作。
在一种可能设计中,该处理器还用于,在该处理器向该渲染处理模块下发第一渲染指令之前,判断当前执行的渲染命令是对该主场景的渲染命令的情况下,将当前执行的该渲染命令绑定的帧缓冲信息由第一帧缓冲信息替换为第二帧缓冲信息,以得到该第一渲染指令,该第一帧缓冲信息用于指示在该主场景上执行该第一渲染操作,该第二帧缓冲信息用于指示在该临时帧缓冲上执行该第一渲染操作。
在一种可能设计中,该第一帧缓冲信息包括第一帧缓冲对象,该第一帧缓冲对象是该主场景对应的帧缓冲对象,该第二帧缓冲信息包括第二帧缓冲对象,该第二帧缓冲对象是该临时帧缓冲对应的帧缓冲对象。
在一种可能设计中,该渲染命令是该应用程序下发给该处理器的,该渲染命令包括该第一渲染操作,以及该第一帧缓冲信息。
在一种可能设计中,该处理器是中央处理器(CPU)。
在一种可能设计中,该渲染处理模块是图形处理器(GPU)。
在一种可能设计中,该第一渲染指令还包括:对该第一渲染操作获取的图像执行多重采样。
第三方面,提供一种电子设备,电子设备包括一个或多个处理器和一个或多个存储器;一个或多个存储器与一个或多个处理器耦合,一个或多个存储器存储有计算机指令;当一个或多个处理器执行计算机指令时,使得电子设备执行如上述第一方面以及各种可能的设计中任一种的图像处理方法。
第四方面,提供一种芯片系统,芯片系统包括接口电路和处理器;接口电路和处理器通过线路互联;接口电路用于从存储器接收信号,并向处理器发送信号,信号包括存储器中存储的计算机指令;当处理器执行计算机指令时,芯片系统执行如上述第一方面以及各种可能的设计中任一种的图像处理方法。
第五方面,提供一种计算机可读存储介质,计算机可读存储介质包括计算机指令,当计算机指令运行时,执行如上述第一方面以及各种可能的设计中任一种的图像处理方法。
第六方面,提供一种计算机程序产品,计算机程序产品中包括指令,当计算机程序产品在计算机上运行时,使得计算机可以根据指令执行如上述第一方面以及各种可能的设计中任一种的图像处理方法。
应当理解的是,上述第二方面,第三方面,第四方面,第五方面以及第六方面提 供的技术方案,其技术特征均可对应到第一方面及其可能的设计中提供的图像处理方法,因此能够达到的有益效果类似,此处不再赘述。
附图说明
图1为一种视频流的组成示意图;
图2为一种图像显示的示意图;
图3为又一种图像显示的示意图;
图4为本申请实施例提供的一种电子设备的组成示意图;
图5为本申请实施例提供的一种电子设备的软件组成示意图;
图6A为本申请实施例提供的一种渲染命令的下发示意图;
图6B为本申请实施例提供的一种渲染命令的下发示意图;
图6C为本申请实施例提供的一种图像渲染的示意图;
图7A为本申请实施例提供的一种图像处理方法的流程示意图;
图7B为本申请实施例提供的一种不同帧缓冲上drawcall数量的确定示意图;
图7C为本申请实施例提供的一种不同帧缓冲上drawcall数量的确定示意图;
图8为本申请实施例提供的一种渲染命令的下发示意图;
图9为本申请实施例提供的一种图像处理方法的流程示意图;
图10为本申请实施例提供的一种图像处理装置的组成示意图;
图11为本申请实施例提供的一种电子设备的组成示意图;
图12为本申请实施例提供的一种芯片系统的组成示意图。
具体实施方式
在用户使用电子设备的过程中,电子设备通过显示屏可以向用户展示多媒体流文件,以便于向用户提供丰富的视觉体验。
以多媒体流为视频流为例。该视频流可以包括多个帧图像。示例性的,结合图1,该视频流可以包括N帧图像,如第1帧,第2帧,……第N帧。电子设备可以在展示该视频流时,分别在显示屏上显示第1帧,第2帧,……第N帧。在电子设备切换显示帧图像的频率高于人眼能够分辨的频率时,就可以实现用户对于不同帧图像的切换展示的不感知,从而获取观看连续的观看效果。
想要播放视频流的应用程序,可以向电子设备发出针对不同帧图像的渲染命令。电子设备可以根据这些渲染命令,进行对各个帧图像的渲染,并基于渲染结果进行显示。
在一些实现中,电子设备可以在显示当前帧图像时,根据应用程序下发的渲染命令,进行后续帧图像的渲染。以便于在需要显示后续帧图像时,可以将渲染完成的需要显示的帧图像替换当前显示的帧图像,展示给用户。
示例性的,以当前显示的帧图像为第N-1帧为例。如图2所示,电子设备可以在需要显示第N-1帧图像时,控制显示屏从当前帧缓冲(如图2所示的当前帧缓冲)中读取第N-1帧图像的数据进行显示。其中,当前帧缓冲可以是电子设备在内存中为当前需要显示的帧图像配置的存储空间,该当前帧缓冲可以用于存储当前需要显示的帧图像的数据(如各个像素的颜色数据,深度数据等)。也就是说,在需要显示第N-1帧图像时,电子设备可以根据当前帧缓冲中存储的第N-1帧图像的数据,控制显示屏 显示第N-1帧图像。
在一些实现中,在电子设备显示第N-1帧图像时,可以根据应用程序下发的渲染命令,对后续将要显示的图像(如第N帧)进行渲染,从而获取用于显示第N帧图像的数据。
请参考图3,电子设备在显示第N-1帧图像时,可以执行对第N帧的渲染。可以理解的是,电子设备在渲染帧图像的过程中,需要将渲染的结果存储起来以便后续使用。因此,在本示例中,电子设备可以在对第N帧图像进行渲染之前,在内存中配置帧缓冲。帧缓冲可以对应到内存中的存储空间。在电子设备进行渲染时,可以将渲染结果存储在对应的帧缓冲中。
在一些实施例中,为了避免在一个帧缓冲上执行过多的操作,电子设备可以在执行1个帧图像的渲染时,调用多个帧缓冲。示例性的,电子设备可以在执行第N帧图像的渲染时,调用如图3所示的3个帧缓冲(如FB0,FB1,以及FB2)。这样,在执行对第N帧图像的渲染时,FB1可以用于存储第N帧图像的一部分元素的渲染结果(如称为渲染结果1),FB2可以用于存储第N帧图像的另一部分元素的渲染结果(如称为渲染结果2)。在分别获取渲染结果1和渲染结果2之后,电子设备可以将渲染结果1以及渲染结果2融合(或称为渲染)到FB0中。由此即可在FB0中获取第N帧图像的完整的渲染结果。
电子设备可以在需要显示第N帧图像时,控制FB0中的数据交换(swap)到当前帧缓冲中,进而根据交换之后的当前帧缓冲中的数据(如第N帧的数据),控制显示屏显示第N帧图像。
需要说明的是,为了能够对帧图像进行完整的渲染,电子设备可以在创建帧缓冲时,为帧缓冲配置对应的分辨率。分辨率越高,帧缓冲对应的存储空间就越大,这样就能够实现对较高分辨率的图像进行渲染。
可以理解的是,随着电子设备的发展,用户对于电子设备提供的显示分辨率要求越来越高。各个帧图像的渲染都需要进行较高分辨率的渲染,这也对电子设备的图像处理能力提出了较高的要求。由此也会引入电子设备的功耗和发热的显著增加,从而影响用户体验。
为了解决上述问题,本申请实施例提供一种图像处理方法,可以使得电子设备能够根据已经完成渲染的帧图像的渲染情况,灵活调整后续帧图像的渲染机制,比如调整后续帧图像渲染过程中使用的帧缓冲的分辨率,从而降低帧图像渲染过程中对内存和算力造成的较大压力,进而避免由此产生的电子设备的发热和功耗的上升。
以下结合附图对本申请实施例提供的方案进行详细说明。
需要说明的是,本申请实施例提供的图像处理方法,可以应用在用户的电子设备中。该电子设备可以是能够提供网络接入能力的设备。比如,该电子设备可以是手机、平板电脑、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、媒体播放器等具备拍摄功能的便携式移动设备,该电子设备也可以是智能手表等可穿戴电子设备。本申请实施例对该设备的具体形态不作特殊限制。在一些实施例中,该电子设备可以具有显示功能。比如,电子设备可以根据应用程序下发的渲染命令进行图像的渲染,并根据渲染获取的渲染结果向用户展示对应的图像。
请参考图4,为本申请实施例提供的一种电子设备400的组成示意图。如图4所示,该电子设备400可以包括处理器410,外部存储器接口420,内部存储器421,通用串行总线(universal serial bus,USB)接口430,充电管理模块440,电源管理模块441,电池442,天线1,天线2,移动通信模块450,无线通信模块460,音频模块470,传感器模块480,按键490,马达491,指示器492,摄像头493,显示屏494,以及用户标识模块(subscriber identification module,SIM)卡接口495等。其中,传感器模块480可以包括压力传感器,陀螺仪传感器,气压传感器,磁传感器,加速度传感器,距离传感器,接近光传感器,指纹传感器,温度传感器,触摸传感器,环境光传感器,骨传导传感器等。在一些实施例中,该电子设备400还可以包括扬声器,受话器,麦克风,耳机接口等器件用于实现电子设备的音频相关功能。
可以理解的是,本实施例示意的结构并不构成对电子设备400的具体限定。在另一些实施例中,电子设备400可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器410可以包括一个或多个处理单元,例如:处理器410可以包括中央处理器(central processing unit,CPU),应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以是电子设备400的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器410中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器410中的存储器为高速缓冲存储器。该存储器可以保存处理器410刚用过或循环使用的指令或数据。如果处理器410需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器410的等待时间,因而提高了系统的效率。
在一些实施例中,处理器410可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
电子设备400可以通过ISP,摄像头493,视频编解码器,GPU,显示屏494以及应用处理器410等实现拍摄功能。
ISP用于处理摄像头493反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头493感光元件上,光信号转换为电信号,摄像头493感光元件将所述 电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头493中。
摄像头493用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备400可以包括1个或N个摄像头493,N为大于1的正整数。
数字信号处理器410用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备400在频点选择时,数字信号处理器410用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备400可以支持一种或多种视频编解码器。这样,电子设备400可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器410,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备400的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
充电管理模块440用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块440可以通过USB接口430接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块440可以通过电子设备400的无线充电线圈接收无线充电输入。充电管理模块440为电池442充电的同时,还可以通过电源管理模块441为电子设备400供电。电源管理模块441用于连接电池442,充电管理模块440与处理器410。电源管理模块441接收电池442和/或充电管理模块440的输入,为处理器410,内部存储器421,外部存储器,显示屏494,摄像头493,和无线通信模块460等供电。电源管理模块441还可以用于监测电池442容量,电池442循环次数,电池442健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块441也可以设置于处理器410中。在另一些实施例中,电源管理模块441和充电管理模块440也可以设置于同一个器件中。
电子设备400的无线通信功能可以通过天线1,天线2,移动通信模块450,无线通信模块460,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备400中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块450可以提供应用在电子设备400上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块450可以包括至少一个滤波器,开关,功率放大器,低 噪声放大器(low noise amplifier,LNA)等。移动通信模块450可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块450还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块450的至少部分功能模块可以被设置于处理器410中。在一些实施例中,移动通信模块450的至少部分功能模块可以与处理器410的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器,受话器等)输出声音信号,或通过显示屏494显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器410,与移动通信模块450或其他功能模块设置在同一个器件中。
无线通信模块460可以提供应用在电子设备400上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块460可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块460经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器410。无线通信模块460还可以从处理器410接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备400的天线1和移动通信模块450耦合,天线2和无线通信模块460耦合,使得电子设备400可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备400通过GPU,显示屏494,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏494和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器410可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏494用于显示图像,视频等。显示屏494包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备400可以包括1个或N个显示屏494,N为大于1的正整数。
外部存储器接口420可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备400的存储能力。外部存储卡通过外部存储器接口420与处理器410通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器421可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器410通过运行存储在内部存储器421的指令,从而执行电子设备400的各种功能应用以及数据处理。内部存储器421可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备400使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器421可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
在本申请实施例中,内部存储器421也可以称为内存。在一些实施例中,处理器(如CPU)可以在内存中为不同的帧图像的渲染处理创建对应的帧缓冲。比如,结合图3,CPU可以在应用程序的命令的控制下,在内存中创建FB0,FB1以及FB2用于对第N帧图像的渲染操作。
电子设备400可以通过音频模块470,扬声器,受话器,麦克风,耳机接口,以及应用处理器410等实现音频功能。例如音乐播放,录音等。
音频模块470用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块470还可以用于对音频信号编码和解码。在一些实施例中,音频模块470可以设置于处理器410中,或将音频模块470的部分功能模块设置于处理器410中。扬声器,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备400可以通过扬声器收听音乐,或收听免提通话。受话器,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备400接听电话或语音信息时,可以通过将受话器靠近人耳接听语音。麦克风,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息或需要通过语音助手触发电子设备400执行某些功能时,用户可以通过人嘴靠近麦克风发声,将声音信号输入到麦克风。电子设备400可以设置至少一个麦克风。在另一些实施例中,电子设备400可以设置两个麦克风,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备400还可以设置三个,四个或更多麦克风,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。耳机接口用于连接有线耳机。耳机接口可以是USB接口430,也可以是3.5mm的开放移动终端平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
触摸传感器,也称“触控面板”。触摸传感器可以设置于显示屏494,由触摸传感器与显示屏494组成触摸屏,也称“触控屏”。触摸传感器用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器410,以确定触摸事件类型。在一些实施例中,可以通过显示屏494提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器也可以设置于电子设备400的表面,与显示屏494所处的位置不同。
压力传感器用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器可以设置于显示屏494。压力传感器的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器,电极之间的电容改变。电子设备400根据电容的变化确定压力的强度。当有触摸操作作用于显示屏494,电子设备400根据压力传感器检测所述触摸操作强度。电子设备400也可以根据压力传感器的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器可以用于确定电子设备400的运动姿态。在一些实施例中,可以通过陀螺仪传感器确定电子设备400围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器检测电子设备400抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备400的抖动,实现防抖。陀螺仪传感器还可以用于导航,体感游戏场景。
气压传感器用于测量气压。在一些实施例中,电子设备400通过气压传感器测得的气压值计算海拔高度,辅助定位和导航。
磁传感器包括霍尔传感器。电子设备400可以利用磁传感器检测翻盖皮套的开合。在一些实施例中,当电子设备400是翻盖机时,电子设备400可以根据磁传感器检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器可检测电子设备400在各个方向上(一般为三轴)加速度的大小。当电子设备400静止时可检测出重力的大小及方向。还可以用于识别电子设备400姿态,应用于横竖屏切换,计步器等应用。
距离传感器,用于测量距离。电子设备400可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备400可以利用距离传感器测距以实现快速对焦。
接近光传感器可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备400通过发光二极管向外发射红外光。电子设备400使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备400附近有物体。当检测到不充分的反射光时,电子设备400可以确定电子设备400附近没有物体。电子设备400可以利用接近光传感器检测用户手持电子设备400贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器用于感知环境光亮度。电子设备400可以根据感知的环境光亮度自适应调节显示屏494亮度。环境光传感器也可用于拍照时自动调节白平衡。环境光传感器还可以与接近光传感器配合,检测电子设备400是否在口袋里,以防误触。
指纹传感器用于采集指纹。电子设备400可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器用于检测温度。在一些实施例中,电子设备400利用温度传感器检测的温度,执行温度处理策略。例如,当温度传感器上报的温度超过阈值,电子设备400执行降低位于温度传感器附近的处理器410的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备400对电池442加热,以避免低温导致电子设备400异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备400对电池442的输出电压执行升压,以避免低温导致的异常关机。
骨传导传感器可以获取振动信号。在一些实施例中,骨传导传感器可以获取人体声部振动骨块的振动信号。骨传导传感器也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器也可以设置于耳机中,结合成骨传导耳机。音频模块470可以基于所述骨传导传感器获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器410可以基于所述骨传导传感器获取的血压跳动信号解析心率信息,实现心率检测功能。
按键490包括开机键,音量键等。按键490可以是机械按键490。也可以是触摸式按键490。电子设备400可以接收按键490输入,产生与电子设备400的用户设置以及功能控制有关的键信号输入。
马达491可以产生振动提示。马达491可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏494不同区域的触摸操作,马达491也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器492可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口495用于连接SIM卡。SIM卡可以通过插入SIM卡接口495,或从SIM卡接口495拔出,实现和电子设备400的接触和分离。电子设备400可以支持1个或N个SIM卡接口495,N为大于1的正整数。SIM卡接口495可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口495可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口495也可以兼容不同类型的SIM卡。SIM卡接口495也可以兼容外部存储卡。电子设备400通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备400采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备400中,不能和电子设备400分离。
应当理解的是,上述图4示出了电子设备中的一种硬件结构组成。本申请中,还可以从另一个角度对电子设备400进行划分。比如,参考图5,示出了电子设备400的另一种逻辑组成。
示例性的,在如图5所示的示例中,电子设备400可以具有分层架构。在该示例中,分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,以电子设备运行有安卓(Android)操作系统为例。系统可以分为五层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime,ART)和原生C/C++库,硬件抽象层(hardware abstraction layer,HAL)以及内核层。
其中,应用程序层可以包括一系列应用程序包。如图5所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
在本申请的一些实施例中,应用程序层可以包括向用户提供多媒体流展示功能的应用程序。比如,应用程序层中可以包括各类游戏类应用。又如,应用程序层中还可以包括各类视频类应用。在这些应用程序运行时,可以下发渲染命令,由此使得CPU可以根据该渲染命令,控制GPU进行对应的渲染,以获取各个帧图像的数据。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。如图5所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,资源管理器,通知管理器,活动管理器,输入管理器等。窗口管理器提供窗口管理服务(Window Manager Service,WMS),WMS可以用于窗口管理、窗口动画管理、surface管理以及作为输入系统的中转站。内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。该数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。活动管理器可以提供活动管理服务(Activity Manager Service,AMS),AMS可以用于系统组件(例如活动、服务、内容提供者、广播接收器)的启动、切换、调度以及应用进程的管理和调度工作。输入管理器可以提供输入管理服务(Input Manager Service,IMS),IMS可以用于管理系统的输入,例如触摸屏输入、按键输入、传感器输入等。IMS从输入设备节点取出事件,通过和WMS的交互,将事件分配至合适的窗口。
安卓运行时包括核心库和安卓运行时。安卓运行时负责将源代码转换为机器码。安卓运行时主要包括采用提前(ahead or time,AOT)编译技术和及时(just in time,JIT)编译技术。核心库主要用于提供基本的Java类库的功能,例如基础数据结构、数学、输入输出(Input Output,IO)、工具、数据库、网络等库。核心库为用户进行安卓应用开发提供了API。
原生C/C++库可以包括多个功能模块。例如:表面管理器(surface manager),媒体框架(Media Framework),标准C库(Standard C library,libc),嵌入式系统的开放图形库(OpenGL for Embedded Systems,OpenGL ES)、Vulkan、SQLite、Webkit等。
其中,表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。媒体框架支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:动态图像专家组4(Moving Picture Experts Group 4,MPEG4),H.264,动态影像专家压缩标准音频层面3(Moving Picture Experts Group Audio Layer 3,MP3),高级音频编码(Advanced Audio Coding,AAC),自适应多码解码(Adaptive Multi-Rate,AMR),联合图像专家组(Joint Photographic Experts Group,JPEG,或称为JPG),便携式网络图形(Portable Network Graphics,PNG)等。OpenGL ES和/或Vulkan提供应用程序中2D图形和3D图形的绘制和操作。SQLite为电子设备400的应用程序提供轻量级关系型数据库。
硬件抽象层运行于用户空间(user space),对内核层驱动进行封装,向上层提供调用接口。内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
以下结合图4和图5所示的组成,对图像渲染过程中,软件和硬件的处理机制进行示例性说明。其中以处理器410中包括CPU和GPU为例。在本申请实施例中,CPU可以用于接收来自应用程序的渲染命令,并根据该渲染命令,向GPU下发对应的渲染指令,以便于GPU根据渲染指令执行对应的渲染。作为一种示例,渲染命令中可以包括bindFrameBuffer()函数,以及一个或多个glDrawElement。对应的,渲染指令中也可以包括bindFrameBuffer()函数,以及一个或多个glDrawElement。其中,bindFrameBuffer()函数可以用于指示当前绑定的帧缓冲。如,bindFrameBuffer(1)可以指示当前绑定的帧缓冲为FB1,即在FB1上执行后续的glDrawElement。为了便于说明,以下示例中,将渲染命令/渲染指令中,glDrawElement的集合称为渲染操作。
在应用程序层中的应用程序需要渲染某个图像(如第N帧图像)时,可以向下发起渲染命令。在该渲染命令中,可以包括渲染该第N帧图像所需要调用的帧缓冲,以及在各个帧缓冲中需要执行的渲染操作。在一些示例中,渲染命令可以通过bindFrameBuffer()函数,实现对应渲染操作与帧缓冲的绑定。比如,以第N帧图像需要使用FB0,FB1以及FB2作为帧缓冲为例。结合图6A,为FB1对应的渲染命令的一种示例。应用程序下发的渲染命令中,可以包括bindFrameBuffer(1)函数,由此实现当前渲染操作与FB1的绑定。在绑定FB1之后,可以通过glDrawElement指令,实现在该FB1上的渲染操作指示。在本示例中,1个glDrawElement指令可以对应1个drawcall。在不同实现中,在FB1上执行的glDrawElement指令可以为多个,则对应的在FB1上执行的drawcall就可以是多个。例如,结合图6A,在该示例中,渲染命令中可以包括A个在FB1上执行的drawcall,如glDrawElement 1-glDrawElement A。
类似的,对于需要使用FB2的渲染操作,在渲染指令中,可以通过bindFrameBuffer(2)函数绑定FB2。此后可以通过不同的glDrawElement指令,实现在该FB2上的渲 染操作指示。
在接收到上述命令之后,CPU可以根据bindFrameBuffer()函数所绑定的帧缓冲信息,以及对应的glDrawElement指令,向GPU下发对应的渲染指令。从而使得GPU可以进行对应的渲染操作,并将该渲染操作的结果存储在当前绑定的帧缓冲中。
示例性的,CPU可以在接收到bindFrameBuffer(1)的指令时,确定激活(active)的帧缓冲为FB1。CPU可以向GPU下发在FB1上进行对应glDrawElement的渲染指令。比如,结合图6A,CPU可以向GPU下发包括执行glDrawElement 1-1到glDrawElement 1-A的渲染指令,以便于GPU执行glDrawElement 1-1到glDrawElement 1-A。在一些实施例中,CPU可以向GPU下发bindFrameBuffer(1),以便于GPU可以确定在FB1上执行上述渲染指令,并将结果保存在FB1中。
类似的,结合图6B,CPU可以在接收到bindFrameBuffer(2)的命令时,确定激活(active)的帧缓冲为FB2。CPU可以向GPU下发在FB2上进行对应glDrawElement渲染指令,由此完成对FB2上相关数据的渲染。
如此往复,即可完成对第N帧图像对应的各个帧缓冲的渲染。
这样,电子设备可以将这些帧缓冲中的渲染结果渲染到FB0中,由此即可在FB0中获取第N帧图像对应的数据。比如,以第N帧图像的默认帧缓冲为FB0,第N帧图像在进行渲染处理过程中还调用FB1和FB2为例。结合图6A以及图6B的示例,参考图6C,在GPU完成FB1和FB2上的渲染处理之后,即可在FB1和FB2中分别存储对应的数据(如各个像素点的颜色数据和/或深度数据等)。这些数据(如FB1和FB2中的数据)可以分别对应第N帧图像的部分元素。那么,结合图6C,电子设备可以将FB1和FB2中的数据渲染到FB0上。这样,在FB0上就可以获取第N帧图像所有的数据。在电子设备需要显示第N帧图像时,结合图3所示的方案,CPU可以将第N帧图像对应的数据,通过swapbuffer的指令,从FB0中交换到当前帧缓冲中。进而使得显示屏可以根据当前帧缓冲中的第N帧图像的数据进行显示。
需要说明的是,上述示例中,是以在FB0上只进行基于FB1和FB2的数据的渲染结果进行渲染获取第N帧图像的全量数据为例进行说明的。比如,以第N帧图像是根据游戏应用下发的渲染指令进行渲染的图像为例。在该第N帧图像中可以包括人物,树木等元素,还可以包括特效等元素,还可以包括用户界面(User Interface,UI)控件等元素。那么在一些实施例中,CPU可以根据游戏应用下发的渲染命令,控制GPU在FB1上执行人物,树木,UI控件等元素的渲染,并将结果存储在FB1。CPU还可以根据游戏应用下发的渲染命令,控制GPU在FB2上执行特效等元素的渲染,并将结果存储在FB2。接着就可以通过将FB1存储的数据和FB2存储的数据渲染到FB0上,即可获取第N帧图像的全量数据。
在本申请的另一些实施例中,在FB0上执行的渲染操作,还可以包括基于FB1,FB2的数据之外的渲染指令执行的渲染操作。比如,继续结合上述示例,CPU可以根据游戏应用下发的渲染命令,控制GPU在FB1上执行人物,树木等元素的渲染,并将结果存储在FB1。CPU还可以根据游戏应用下发的渲染命令,控制GPU在FB2上执行特效等元素的渲染,并将结果存储在FB2。接着,CPU还可以根据游戏应用下发的渲染命令,控制GPU在FB0上执行UI控件的渲染,并结合FB1和FB2存储的数据,将所有渲染结 果进行渲染融合,从而在FB0上获取第N帧图像的全量数据。
可以理解的是,在对第N帧图像的渲染过程中,在不同的帧缓冲上执行的drawcall数量是不同的。执行的drawcall数量越多,则表明在该帧缓冲上执行的渲染操作越多,那么在该帧缓冲的渲染操作过程中就会消耗更多的算力资源,也就会产生更高的功耗以及发热。
本申请实施例提供的方案,能够使得电子设备可以自动识别当前需要渲染的帧图像的渲染过程中,消耗资源较高的帧缓冲(如称为主场景)。通过采用较低的分辨率,执行该主场景中的渲染操作,达到降低在主场景中的渲染操作消耗的算力资源的效果。
示例性的,基于连续帧图像的相关性,电子设备可以通过第N-1帧图像的渲染过程,确定主场景。在一些实施例中,CPU可以根据已经完成渲染的帧图像(如第N-1帧)在处理过程中,在各个帧缓冲上执行的drawcall个数,确定在执行该第N-1帧图像在渲染过程中,需要执行渲染操作最多的帧缓冲(即主场景)。也就是说,在对当前帧图像(如第N帧)的渲染时,该主场景的渲染操作的数量也可能是最多的。在该示例中,CPU可以在对第N帧图像进行渲染处理时,为该主场景在内存中配置具有较小分辨率的临时帧缓冲。以便于在收到对主场景的渲染命令的情况下,CPU可以控制GPU在该临时帧缓冲中,以较小的分辨率执行主场景对应的渲染操作。由此就可以使得在对第N帧图像的渲染过程中,大量的渲染操作可以以较小的分辨率执行,从而减少渲染负载,降低渲染所需功耗。
以下结合示例,对本申请实施例提供的方案进行详细说明。示例性的,请参考图7A,为本申请实施例提供的一种图像处理方法的流程示意图。如图所示,该方法可以包括:
S701、确定上一帧图像渲染过程中的主场景。
其中,主场景可以是渲染过程中,执行渲染操作数量最多的帧缓冲。
可以理解的是,以电子设备播放视频流为例。组成视频流的连续帧图像的场景具有关联性。因此,电子设备的处理器(如CPU)可以根据上一帧图像的主场景确定当前帧图像的主场景。
示例性的,以当前渲染的帧图像为第N帧图像为例。CPU可以根据上一帧(如第N-1帧图像)图像渲染过程中,在不同帧缓冲上执行drawcall的个数,确定第N-1帧图像的主场景。在一些实施例中,该第N帧图像也可以称为第一图像,第N帧图像的主场景也可以称为第一主场景。对应的,第N-1帧图像可以称为第二图像,第二图像的主场景可以与第一图像的主场景相同(如都是第一主场景)。
作为一种可能的实现方式,以执行渲染帧图像的默认帧缓冲为FB0,第N-1帧图像的渲染过程中调用了FB1,FB2为例。为了便于说明,以下将应用程序下发的与FB1对应的渲染命令称为渲染命令1,将应用程序下发的与FB2对应的渲染命令称为渲染命令2。那么,渲染命令1中就可以包括glBindFrameBuffer(1)用于绑定FB1。渲染命令1中还可以包括一个或多个需要在FB1上执行的glDrawElement。比如,渲染命令1中可以包括A个glDrawElement,即A个drawcall。类似的,渲染命令2中就可以包括glBindFrameBuffer(2)用于绑定FB2。渲染命令2中还可以包括一个或多个需要在FB2上执行的glDrawElement。比如,渲染命令2中可以包括B个glDrawElement, 即B个drawcall。
在接收到对FB1和FB2的渲染命令时,CPU可以对对应的渲染命令中包括的drawcall数量进行计数,从而获取各个FB上执行的drawcall数量。
结合图7B,以FB1为例。CPU可以在根据应用程序下发的渲染命令,执行glBindFrameBuffer(1)时,初始化计数器1。比如在内存中为FB1配置对应的计数帧位。通过初始化该计数器1,即可将该帧位的值初始化为0。后续每在FB1上执行glDrawElement时,计数器1计数加1,如执行count1++。比如,在执行glDrawElement1-1后,CPU可以对计数器1执行count1++,从而使得存储FB1 drawcall数量的帧位的数值由0变为1。也就是说,此时在FB1上执行的drawcall数量为1。以此类推。那么,CPU就可以确定在进行第N-1帧图像的渲染过程中,FB1上执行drawcall的数量为计数器1的当前计数(比如,该计数可以为A)。
与FB1类似的,CPU还可以在根据应用程序下发的渲染命令,调用glBindFrameBuffer(2)时,初始化计数器2。如初始化count2=0。后续每在FB2上执行glDrawElement时,计数器2计数加1,如执行count2++。在完成FB2上图像的渲染处理之后,CPU就可以确定在进行第N-1帧图像的渲染过程中,FB2上执行drawcall的数量为计数器2的当前计数(比如,该计数可以为B)。如参考图7C。在完成FB1和FB2的渲染处理之后,内存中存储FB1 drawcall数量的帧位的数值可以为A,存储FB2 drawcall数量的帧位可以为B。
在本示例中,CPU可以选取A和B中较大的计数对应的帧缓冲作为主场景。比如,在A大于B时,则CPU可以确定FB1上执行的drawcall个数更多,由此确定FB1为第N-1帧图像的主场景。相对的,在A小于B时,则CPU可以确定FB2上执行的drawcall个数更多,由此确定FB2为第N-1帧图像的主场景。
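上述按帧缓冲统计drawcall并选取计数最大者作为主场景的逻辑,可以用如下C语言示意代码表达(命令流的结构体定义、帧缓冲id上限等均为便于说明而做的假设,并非真实的OpenGL接口):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_FB 16  /* 示意性假设:帧缓冲id的上限 */

/* 渲染命令流的简化模型:由"绑定帧缓冲"和"drawcall"两类命令组成,
 * 每个drawcall都落在当前绑定的帧缓冲上 */
typedef enum { CMD_BIND_FB, CMD_DRAWCALL } CmdType;
typedef struct { CmdType type; int fb; /* 仅CMD_BIND_FB使用 */ } Cmd;

/* 统计每个帧缓冲上执行的drawcall数量,返回计数最多的帧缓冲id,即主场景。
 * 对应正文中glBindFrameBuffer时初始化计数器、每个glDrawElement使计数加一的逻辑 */
int find_main_scene(const Cmd *cmds, size_t n) {
    int count[MAX_FB] = {0};
    int current_fb = 0;                  /* 默认绑定FB0 */
    for (size_t i = 0; i < n; ++i) {
        if (cmds[i].type == CMD_BIND_FB)
            current_fb = cmds[i].fb;     /* 等价于glBindFrameBuffer(fb) */
        else
            count[current_fb]++;         /* 等价于countN++ */
    }
    int main_scene = 0;
    for (int fb = 1; fb < MAX_FB; ++fb)
        if (count[fb] > count[main_scene])
            main_scene = fb;
    return main_scene;
}
```

例如,对于在FB1上执行3个drawcall、在FB2上执行1个drawcall的命令流,该函数返回1,即FB1为主场景。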
在本申请的一些实施例中,该S701的执行过程,可以是CPU在进行第N-1帧图像的渲染处理的过程中执行的。
S702、为主场景配置临时帧缓冲。
根据上述S701中的说明,主场景是执行drawcall数量最多的帧缓冲。因此,在需要降低渲染处理对电子设备造成的压力时,可以对主场景下的渲染处理进行调整,从而获取相较于调整其他帧缓冲中的渲染处理机制更加显著的效果。
在本申请实施例中,电子设备的CPU可以为主场景配置临时帧缓冲,该临时帧缓冲的分辨率可以小于主场景的分辨率。由此使得电子设备可以采用较小的分辨率,对主场景中的大量drawcall进行处理,从而降低高分辨率的渲染对电子设备造成的影响。
示例性的,在一些实施例中,CPU可以在内存中为主场景配置临时帧缓冲之前,确定主场景的分辨率。
作为一种可能的实现,CPU可以根据第N-1帧图像渲染时,使用的画布大小,确定该主场景的分辨率。可以理解的是,在应用程序向CPU下发渲染命令时,在渲染命令中可以通过bindFrameBuffer()函数绑定帧缓冲。以渲染命令通过bindFrameBuffer(1)绑定FB1为例。在该渲染命令中还可以通过glViewPort(x,y)函数指定在FB1中执行渲染操作时的绘制大小。这样,CPU就可以控制GPU在该x*y的像素区域中执行后续的glDrawElement对应的渲染处理。在本示例中,CPU可以在获取主场景的分 辨率时,根据该主场景在第N-1帧图像渲染过程中,glViewPort(x,y)函数指定的像素区域,确定主场景的像素大小。比如,以FB1为主场景为例。CPU可以确定在执行第N-1帧图像的渲染时,绑定FB1(如接收到bindFrameBuffer(1)函数)后,接收到glViewPort(2218,978)。那么CPU就可以确定在第N-1帧图像的渲染过程中,使用了FB1上2218×978的像素区域。由此,CPU就可以确定主场景(即FB1)的分辨率为2218×978。
作为又一种可能的实现,CPU可以在接收到对第N帧图像的渲染命令后,通过对主场景执行glGetIntegerv指令,确定主场景的分辨率。例如,以主场景为FB1,执行glGetIntegerv(GL_VIEWPORT)指令获取分辨率为例。CPU可以在接收到第N帧图像中,绑定FB1(如接收到bindFrameBuffer(1)函数)后,执行glGetIntegerv(GL_VIEWPORT)指令。从而根据获取的结果(如tex1(2218×978)),确定主场景(即FB1)的分辨率为2218×978。
在本申请实施例中,CPU可以在确定主场景的分辨率的情况下,为主场景配置具有较小分辨率的临时帧缓冲。其中,该配置临时帧缓冲的操作,可以是CPU在执行第N-1帧图像的渲染过程中执行的。在另一些实现中,该配置临时帧缓冲的操作,还可以是CPU在执行完成第N-1帧图像的渲染,开始执行第N帧图像的渲染之前进行的。
需要说明的是,在本申请的一些实施例中,在电子设备中可以配置有缩放参数。该缩放参数可以用于确定临时帧缓冲的分辨率。例如,CPU可以根据如下公式(1)确定临时帧缓冲的分辨率。
临时帧缓冲的分辨率=主场景的分辨率×缩放参数……公式(1)。
其中,缩放参数可以是小于1的正数。该缩放参数可以是用户配置的,也可以是电子设备中预置的,也可以是在需要时电子设备从云端获取的。
在确定临时帧缓冲的分辨率之后,CPU就可以在内存中为该临时帧缓冲配置对应大小的存储空间。
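结合公式(1),临时帧缓冲分辨率的计算及其带来的存储空间节省,可以用如下示意代码体会(像素四舍五入的取整方式和RGBA8附件格式均为示意性假设):

```c
#include <assert.h>

/* 分辨率(像素) */
typedef struct { int width; int height; } Resolution;

/* 按公式(1)计算临时帧缓冲的分辨率:临时分辨率 = 主场景分辨率 × 缩放参数。
 * scale为小于1的正数;四舍五入到整数像素 */
Resolution scaled_resolution(Resolution main_scene, double scale) {
    Resolution r;
    r.width  = (int)(main_scene.width  * scale + 0.5);
    r.height = (int)(main_scene.height * scale + 0.5);
    return r;
}

/* RGBA8颜色附件的近似内存占用(字节),用于体会临时帧缓冲节省的存储空间 */
long rgba8_bytes(Resolution r) {
    return 4L * r.width * r.height;
}
```

例如,主场景为2218×978、缩放参数取0.5时,临时帧缓冲为1109×489,其RGBA8附件占用的内存约为主场景的四分之一。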
在本申请的一些实施例中,CPU可以在执行完成第N-1帧图像的渲染处理之后,确定主场景的分辨率。进而根据上述公式(1)即可确定临时帧缓冲的分辨率。基于此,CPU就可以在内存中完成对临时帧缓冲的配置。
在一些实现中,CPU可以通过创建临时帧缓冲对象,并将该临时帧缓冲对象绑定到内存中配置的存储空间上,实现临时帧缓冲的配置。其中,临时帧缓冲对象可以是该临时帧缓冲的名称。
作为一种示例,以下给出一种创建帧缓冲的典型指令流的示例:
unsigned int fbo;//定义一个FB的ID,变量名定义为fbo
glGenFramebuffers(1,&fbo);//创建一个FB,ID值为fbo,驱动会为该FB分配ID,比如fbo可以被赋值为3
glBindFramebuffer(GL_FRAMEBUFFER,fbo);//绑定帧缓冲3
unsigned int texture;//定义一个纹理ID,变量名定义为texture
glGenTextures(1,&texture);//创建一个纹理,ID值为texture,驱动会为该纹理分配ID,比如texture可以被赋值为11
glBindTexture(GL_TEXTURE_2D,texture);//绑定纹理11,其中GL_TEXTURE_2D 用于指示纹理目标为2D纹理
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,800,600,0,GL_RGB,GL_UNSIGNED_BYTE,NULL);//给纹理分配一个800*600大小的内存空间;其中函数中的第2个因子(即0)可以用于指示纹理级别,函数中的第3个因子(即GL_RGB)可以用于指示目标纹理格式,函数中的第7个因子(即GL_RGB)可以用于指示入参的纹理格式,函数中的第8个因子(即GL_UNSIGNED_BYTE)可以用于指示入参纹理数据类型
glFramebufferTexture2D(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,GL_TEXTURE_2D,texture,0);//将创建出来的纹理texture附加到当前绑定的帧缓冲对象
可以理解的是,以临时帧缓冲为FB3为例。由于该FB3的分辨率小于FB1的分辨率,因此FB3在内存中占用的存储空间小于FB1在内存中占用的存储空间。对应的,由于在FB3上执行的渲染操作的分辨率更低,也就是说在FB3上执行相同渲染处理时,需要处理的像素数量小于FB1,因此在FB3上执行渲染操作时相较于FB1上执行渲染操作所需功耗更低,发热更小。
这样,在完成第N-1帧图像的渲染处理之后,执行第N帧图像的渲染处理之前,CPU就可以完成确定主场景,并为主场景配置对应的临时帧缓冲的动作。
S703、在临时帧缓冲上执行主场景的渲染操作。
在本申请实施例中,电子设备可以通过处理器,使用具有较小分辨率的帧缓冲进行主场景下的大量渲染处理的操作。
示例性的,CPU可以在接收到来自应用程序的渲染命令之后,根据该渲染命令控制GPU在对应的帧缓冲上执行渲染操作。
比如,请参考图8。以主场景为FB1,对应的临时帧缓冲为FB3,在执行第N帧图像的渲染的过程中,在FB1中包括的drawcall数量为2(如glDrawElement1,以及glDrawElement2)为例。
CPU接收到来自应用程序的渲染命令包括如下指令为例:
bindFrameBuffer(1);
glDrawElement1;
glDrawElement2。
在本示例中,CPU可以将第N帧图像的渲染命令中,用于绑定主场景的标识(如FB1)替换为临时帧缓冲的标识(如FB3)。在一些实施例中,该FB1的帧缓冲对象也可以称为第一帧缓冲信息,该FB3的帧缓冲对象也可以称为第二帧缓冲信息。示例性的,CPU可以将bindFrameBuffer(1)替换为bindFrameBuffer(3)。那么,替换后的渲染命令可以为:
bindFrameBuffer(3);
glDrawElement1;
glDrawElement2。
也就是说,CPU可以将原先绑定主场景(如FB1)的渲染操作替换为绑定临时帧缓冲的渲染操作。进而使得CPU可以将该渲染指令(如称为第一渲染指令)下发给GPU,这样,GPU就可以在FB3上执行glDrawElement1以及glDrawElement2的渲染操作, 并将结果存储在FB3上。在一些实施例中,在第N帧图像的渲染过程中,在主场景(或临时帧缓冲)上执行的渲染操作也可以称为第一渲染操作。
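上述将渲染命令中绑定主场景的帧缓冲信息替换为临时帧缓冲信息的过程,可以用如下示意代码表达(命令结构体为便于说明而做的假设,真实实现中替换的是bindFrameBuffer()的入参):

```c
#include <assert.h>
#include <stddef.h>

/* 渲染命令的简化模型:is_bind为1表示绑定帧缓冲命令,fb为其入参;
 * is_bind为0表示drawcall */
typedef struct { int is_bind; int fb; } RenderCmd;

/* 将指令流中绑定主场景的命令改绑临时帧缓冲,其余命令原样保留。
 * 等价于正文中把bindFrameBuffer(1)替换为bindFrameBuffer(3)后再下发给GPU */
void redirect_stream(RenderCmd *cmds, size_t n, int main_scene_fb, int temp_fb) {
    for (size_t i = 0; i < n; ++i)
        if (cmds[i].is_bind && cmds[i].fb == main_scene_fb)
            cmds[i].fb = temp_fb;
}
```

这样,drawcall本身无需改动,仅替换其绑定目标,GPU即可在临时帧缓冲上执行原本属于主场景的渲染操作。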
可以理解的是,由于FB3的分辨率小于FB1,因此,在FB3上执行相同的drawcall(如glDrawElement1以及glDrawElement2),所需功耗更低,发热更少。
需要说明的是,在一些实施例中,电子设备可以通过CPU对所有主场景的渲染操作执行上述方案,如采用更小分辨率的临时帧缓冲执行主场景的渲染操作。在另一些实施例中,电子设备还可以在需要进行功耗控制时,执行如图7A所示的方案。比如,作为一种可能的实现,电子设备可以在负载大于预设阈值的情况下,确定需要进行功耗控制。由此即可触发如图7A所示方案降低图像处理过程中的功耗。作为另一种可能的实现,电子设备可以结合主场景的分辨率执行上述方案。比如,电子设备可以在确定主场景的分辨率大于预设分辨率,和/或在主场景中执行的drawcall数量大于预设的渲染阈值的情况下,执行S702,即为主场景配置临时帧缓冲,进而在具有较小分辨率的临时帧缓冲中执行主场景的渲染操作。
作为一种实现方式,结合图7A所示的方案,图9示出了本申请实施例提供的又一种图像处理方法的流程示意图。在一些实施例中,该方案可以由电子设备中的CPU执行,实现动态调整渲染操作的主场景对应的分辨率,进而降低功耗开销的效果。
如图9所示,在如图7A所示方案的基础上,S703具体的可以通过S901-S903实现。
S901、判断当前要执行的渲染命令是否是在主场景中的渲染命令。
示例性的,CPU可以在根据接收到的渲染命令中,bindFrameBuffer()所指示的需要绑定的帧缓冲对象与S701中确定的主场景的帧缓冲对象是否相同,确定当前渲染命令是否需要在主场景中执行。
例如,在当前执行的渲染命令为bindFrameBuffer(1),在S701中确定的主场景为FB1的情况下,电子设备就可以确定当前执行的渲染命令需要在主场景上执行。
又如在当前执行的渲染命令为bindFrameBuffer(2),在S701中确定的主场景为FB1的情况下,电子设备就可以确定当前执行的渲染命令不需要在主场景上执行。
在确定当前执行的渲染命令是在主场景中执行的情况下,电子设备的CPU可以继续执行以下S902。
S902、将当前执行的渲染命令的帧缓冲信息替换为临时帧缓冲信息。
其中,帧缓冲信息可以包括当前执行的命令需要绑定的帧缓冲对象。
示例性的,结合前述对S703的说明,CPU可以将bindFrameBuffer(1)对应的FB1,替换为临时帧缓冲的帧缓冲对象(如FB3)。比如,替换之后的命令可以为:bindFrameBuffer(3)。
S903、向GPU下发渲染指令,以便GPU在临时帧缓冲上执行主场景的渲染操作。
在本示例中,GPU可以根据接收到的渲染指令执行对应的渲染操作。比如,GPU在接收到bindFrameBuffer(3),以及后续的drawcall时,就可以在FB3上执行这些drawcall的渲染操作,并在FB3上存储渲染结果。
可以理解的是,由于FB3的分辨率小于FB1,因此GPU可以使用更少的算力资源完成相同数量的drawcall。而FB1中的drawcall可以是第N帧图像的渲染过程中, drawcall数量最多的。因此通过该方案(如图7A或图9)能够显著降低第N帧图像的渲染过程对算力的消耗。
在一些实施例中,如图9所示,CPU还可以在执行S901的判断之后,根据判断结果执行S904。
示例性的,在CPU确定当前要执行的渲染命令并非在主场景中的渲染命令的情况下,可以执行S904。其中,参考S901中的说明,CPU可以在当前要执行的渲染命令需要绑定的帧缓冲的帧缓冲对象不同于主场景的帧缓冲对象的情况下,确定当前要执行的渲染命令并非在主场景中的渲染命令。
可以理解的是,由于当前要执行的渲染命令并非在主场景中进行,因此该渲染命令对算力的消耗并不是第N帧图像中的渲染操作过程中消耗最大的。那么CPU就可以直接控制GPU在该渲染命令所指定的帧缓冲上执行对应的drawcall。如执行S904:向GPU下发渲染指令,以便GPU在对应帧缓冲上执行渲染操作。
如图9所示,以CPU当前执行对第N帧图像的渲染为例。在一些实施例中,CPU可以在第N-1帧图像的渲染处理完成之后,即完成S701-S702的处理,即确定第N帧图像的主场景,并为该主场景配置对应的临时帧缓冲。比如,以下发渲染命令的应用程序为游戏应用为例。CPU可以在游戏刚开始时,执行S701-S702。比如,CPU可以被配置为从第6帧图像开始执行如图7A或图9所示的方案,那么,CPU就可以在游戏开始后,第5帧图像完成渲染处理后,根据第5帧图像的渲染情况,确定主场景,并为该主场景配置对应的临时帧缓冲。从而使得CPU可以在第6帧图像的渲染处理过程中,使用该临时帧缓冲执行主场景的drawcall,由此达到显著的降低渲染过程中功耗开销的效果。
需要说明的是,上述示例中,是以应用程序下发的渲染命令中通过bindFrameBuffer()函数指向主场景为例,对主场景和临时帧缓冲的替换方法进行说明的。在本申请的另一些实现中,在应用程序下发的渲染命令中还包括其他函数指向主场景时,CPU还可以通过类似的方案,替换对应函数中指向的帧缓冲对象,比如由主场景替换为临时帧缓冲,从而使得对应的渲染操作能够在临时帧缓冲上执行。
示例性的,CPU可以对当前帧缓冲的纹理中包括的附件的标识进行替换,从而使得GPU可以使用正确的附件在临时帧缓冲进行对应的渲染处理。比如,以主场景为FB1,临时帧缓冲为FB3为例。FB1的附件(如颜色附件,深度附件等)的标识可以是11。那么,在将FB1的标识替换为FB3之后,CPU可以将FB1的附件的标识也进行对应的替换。比如,将附件11替换为附件13(即FB3的附件标识),例如,将glBindTexture(GL_TEXTURE_2D,11)替换为glBindTexture(GL_TEXTURE_2D,13)。这样,在FB3上执行渲染处理时,就可以根据附件13上的信息,进行正确的渲染处理。
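附件标识的配套替换,可以抽象为一个简单的映射查找(映射表结构为示意性假设,id取值沿用正文示例):

```c
#include <assert.h>

/* 纹理附件标识的映射表项:帧缓冲重定向后,主场景附件的纹理id
 * 需要替换为临时帧缓冲附件的纹理id */
typedef struct { int from_tex; int to_tex; } TexRemap;

/* 查表替换纹理id;不在表中的纹理保持不变。
 * 等价于把glBindTexture(GL_TEXTURE_2D,11)替换为glBindTexture(GL_TEXTURE_2D,13) */
int remap_texture(int tex, const TexRemap *map, int n) {
    for (int i = 0; i < n; ++i)
        if (map[i].from_tex == tex)
            return map[i].to_tex;
    return tex;
}
```

这样,不属于主场景的纹理绑定不受影响,只有主场景的附件被改指向临时帧缓冲的附件。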
通过上述方案说明,即可实现降低主场景渲染分辨率的效果。
在本申请实施例的另一些实现中,电子设备可以在临时帧缓冲上执行主场景的渲染操作时,采用多重采样的技术,提升降低分辨率渲染获取的渲染结果的显示效果,进一步提升向用户展示的图像的质量。
示例性的,CPU可以在向GPU下发在临时帧缓冲中执行渲染操作的渲染指令时,指示GPU使用更多的颜色,深度和/或模板信息对图元(如点,直线,多边形等元素) 进行处理,从而达到虚化图像边缘的效果。这样,在该临时帧缓冲中进行渲染操作后获取的结果在显示时,其图像边缘就不会由于分辨率较低产生锯齿的观感。由此即可实现提升图像质量的目的。
上述主要从电子设备的角度对本申请实施例提供的方案进行了介绍。为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对其中涉及的设备进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
请参考图10,为本申请实施例提供的一种图像处理装置1000的组成示意图。该装置可以设置在电子设备中,用于实现本申请实施例提供的任一种可能的图像处理方法。
示例性的,该装置可以应用于电子设备对第一图像的渲染处理,该电子设备运行有应用程序,该电子设备对该第一图像执行渲染处理时调用一个或多个帧缓冲,该电子设备对该第一图像执行渲染处理的渲染操作由该应用程序下发。
如图10所示,该装置包括:确定单元1001,用于确定对该第一图像执行渲染处理过程中的第一主场景。该第一主场景是该电子设备对该第一图像的渲染处理过程中,执行渲染操作数量最多的帧缓冲。配置单元1002,用于配置临时帧缓冲,该临时帧缓冲的分辨率小于该第一主场景的分辨率。执行单元1003,用于在对该第一图像进行渲染处理时,在该临时帧缓冲上,执行第一渲染操作。该第一渲染操作是该应用程序指示的在该第一主场景上执行的渲染操作。
在一种可能设计中,确定单元1001,具体用于基于对第二图像的渲染处理过程中,执行渲染操作数量最多的帧缓冲,确定该第一主场景。该第二图像的渲染处理在该第一图像的渲染处理之前。对该第二图像执行渲染处理时调用该第一主场景。
在一种可能设计中,确定单元1001,还用于在对该第二图像的渲染处理过程中,确定在每个帧缓冲上执行绘制调用(drawcall)的数量,将执行drawcall的数量最多的帧缓冲确定为该第一主场景。
在一种可能设计中,该第二图像是第一图像的上一帧图像。
需要说明的是,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
In a possible implementation, the functions of the units shown in FIG. 10 may be implemented by the hardware modules shown in FIG. 4. For example, take the execution unit 1003 shown in FIG. 10. The functions of the execution unit 1003 may be implemented by the processor and the rendering processing module shown in FIG. 4, where the rendering processing module may be a module having a graphics rendering function. In some embodiments, the rendering processing module may be the GPU shown in FIG. 4. In some embodiments, the processor may be the CPU shown in FIG. 4.
As an example, the implementation of the functions of the execution unit 1003 by the processor in cooperation with the rendering processing module is described below.
For example, when a rendering command for the first image is received from the application, the processor is configured to issue a first rendering instruction to the rendering processing module, where the first rendering instruction includes the first rendering operation and instructs the rendering processing module to execute the first rendering operation on the temporary frame buffer; and the rendering processing module is configured to execute the first rendering operation on the temporary frame buffer according to the first rendering instruction.
In a possible design, the processor is further configured to: before issuing the first rendering instruction to the rendering processing module, when it is determined that the currently executed rendering command is a rendering command for the main scene, replace the frame buffer information bound by the currently executed rendering command from first frame buffer information with second frame buffer information to obtain the first rendering instruction, where the first frame buffer information indicates that the first rendering operation is to be executed on the main scene, and the second frame buffer information indicates that the first rendering operation is to be executed on the temporary frame buffer.
In a possible design, the first frame buffer information includes a first frame buffer object, which is the frame buffer object corresponding to the main scene, and the second frame buffer information includes a second frame buffer object, which is the frame buffer object corresponding to the temporary frame buffer.
In a possible design, the rendering command is issued by the application to the processor, and the rendering command includes the first rendering operation and the first frame buffer information.
In a possible design, the first rendering instruction further includes: performing multisampling on the image obtained by the first rendering operation.
FIG. 11 is a schematic composition diagram of an electronic device 1100. As shown in FIG. 11, the electronic device 1100 may include a processor 1101 and a memory 1102. The memory 1102 is configured to store computer-executable instructions. For example, in some embodiments, when the processor 1101 executes the instructions stored in the memory 1102, the electronic device 1100 may be caused to perform the image processing method shown in any of the foregoing embodiments.
It should be noted that all related content of the steps involved in the foregoing method embodiments may be cited in the functional descriptions of the corresponding functional modules, and details are not repeated here.
FIG. 12 is a schematic composition diagram of a chip system 1200. The chip system 1200 may include a processor 1201 and a communication interface 1202, configured to support a related device in implementing the functions involved in the foregoing embodiments. In a possible design, the chip system further includes a memory configured to store the program instructions and data necessary for the terminal. The chip system may consist of chips, or may include a chip and other discrete components. It should be noted that in some implementations of this application, the communication interface 1202 may also be referred to as an interface circuit.
It should be noted that all related content of the steps involved in the foregoing method embodiments may be cited in the functional descriptions of the corresponding functional modules, and details are not repeated here.
The functions, actions, operations, steps, and the like in the foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by a software program, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
Although this application has been described with reference to specific features and the embodiments thereof, it is apparent that various modifications and combinations may be made without departing from the spirit and scope of this application. Accordingly, the specification and accompanying drawings are merely illustrative descriptions of this application as defined by the appended claims, and are deemed to cover any and all modifications, variations, combinations, or equivalents within the scope of this application. Apparently, a person skilled in the art can make various changes and variations to this application without departing from its spirit and scope. Thus, if these modifications and variations of this application fall within the scope of the claims of this application and their equivalent technologies, this application is also intended to include them.

Claims (20)

  1. An image processing method, applied to an electronic device, wherein the electronic device runs an application and executes rendering operations issued by the application, the method comprising:
    executing rendering operations of a first image on a first frame buffer and a second frame buffer, wherein the first frame buffer and the second frame buffer are different;
    executing rendering operations of a second image on a third frame buffer and the second frame buffer, wherein the first image and the second image are two consecutive frames, the rendering operations of the first image are not executed on the third frame buffer, the rendering operations of the second image are not executed on the first frame buffer, a resolution of the third frame buffer is smaller than a resolution of the first frame buffer, the third frame buffer is different from the second frame buffer, and the third frame buffer is different from the first frame buffer.
  2. The method according to claim 1, wherein the number of frame buffers used to draw the first image is the same as the number of frame buffers used to draw the second image.
  3. The method according to claim 1 or 2, wherein the numbers of the rendering operations of the first image executed on the first frame buffer and on the third frame buffer are the same.
  4. The method according to any one of claims 1 to 3, wherein the rendering operations of the first image executed on the first frame buffer and on the third frame buffer are the same.
  5. The method according to any one of claims 1 to 4, wherein the method further comprises: creating the third frame buffer.
  6. The method according to any one of claims 1 to 4, wherein the first frame buffer is the frame buffer on which the largest number of rendering operations are executed in the rendering processing of the first image by the electronic device.
  7. The method according to claim 5, wherein the method further comprises: determining the frame buffer on which the largest number of rendering operations are executed in the rendering processing of the first image by the electronic device.
  8. The method according to claim 6, wherein the method further comprises:
    in the rendering processing of the first image, determining the number of draw calls (drawcalls) executed on each frame buffer, and
    determining the frame buffer on which the largest number of drawcalls are executed as the first frame buffer.
  9. The method according to any one of claims 1 to 7, wherein the rendering operation is glDrawElement.
  10. The method according to claim 1, wherein the rendering operations of the second image follow the rendering operations of the first image, the first frame buffer is used to render the main scene of the first image, and the third frame buffer is used to render the main scene of the second image.
  11. The method according to any one of claims 1 to 10, wherein the electronic device is provided with a processor and a rendering processing module;
    the executing rendering operations of a first image on a first frame buffer and a second frame buffer comprises:
    when the processor receives the rendering operations of the first image from the application, issuing, by the processor, a first rendering instruction stream to the rendering processing module, wherein the first rendering instruction stream instructs the rendering processing module to execute the rendering operations of the first image on the first frame buffer and the second frame buffer; and
    executing, by the rendering processing module according to the first rendering instruction stream, the rendering operations of the first image on the first frame buffer and the second frame buffer.
  12. The method according to claim 11, wherein
    the executing rendering operations of a second image on a third frame buffer and the second frame buffer comprises:
    when the processor receives the rendering operations of the second image from the application, issuing, by the processor, a second rendering instruction stream to the rendering processing module, wherein the rendering operations of the application for the second image instruct that the rendering operations of the second image be executed on the first frame buffer and the second frame buffer, and the second rendering instruction stream instructs the rendering processing module to execute the rendering operations of the second image on the third frame buffer and the second frame buffer; and
    executing, by the rendering processing module according to the second rendering instruction stream, the rendering operations of the second image on the third frame buffer and the second frame buffer.
  13. The method according to claim 12, wherein
    before the processor issues the second rendering instruction stream to the rendering processing module, the method further comprises:
    determining, by the processor according to the rendering operations of the first image, that the frame buffer on which the most rendering operations are executed is the first frame buffer; and
    creating, by the processor, the third frame buffer, wherein the third frame buffer is used to execute the rendering operations of the first frame buffer.
  14. The method according to claim 13, wherein after the processor creates the third frame buffer, the method further comprises:
    replacing, by the processor, in the rendering operations of the application for the second image, the frame buffer pointed to by the rendering operations executed on the first frame buffer with the third frame buffer, to obtain the third rendering instruction stream.
  15. The method according to any one of claims 11 to 14, wherein the processor is a central processing unit (CPU).
  16. The method according to any one of claims 11 to 15, wherein the rendering processing module is a graphics processing unit (GPU).
  17. The method according to any one of claims 11 to 16, wherein the third rendering instruction stream further comprises: performing multisampling on the second image.
  18. An electronic device, wherein the electronic device comprises one or more processors and one or more memories; the one or more memories are coupled to the one or more processors and store computer instructions; and
    when the one or more processors execute the computer instructions, the electronic device is caused to perform the image processing method according to any one of claims 1 to 17.
  19. A computer-readable storage medium, comprising computer instructions, wherein when the computer instructions are run on an electronic device, the electronic device is caused to perform the image processing method according to any one of claims 1 to 17.
  20. A chip system, wherein the chip system comprises an interface circuit and a processor; the interface circuit and the processor are interconnected by lines; the interface circuit is configured to receive a signal from a memory and send the signal to the processor, the signal comprising computer instructions stored in the memory; and when the processor executes the computer instructions, the chip system performs the image processing method according to any one of claims 1 to 17.
PCT/CN2022/097931 2021-06-10 2022-06-09 Image processing method and electronic device WO2022258024A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US18/252,920 US20230419570A1 (en) 2021-06-10 2022-06-09 Image Processing Method and Electronic Device
EP22819621.8A EP4224831A1 (en) 2021-06-10 2022-06-09 Image processing method and electronic device
CN202280024721.9A CN117063461A (zh) 2021-06-10 2022-06-09 Image processing method and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110650009.7 2021-06-10
CN202110650009.7A CN113726950B (zh) 2021-06-10 2021-06-10 Image processing method and electronic device

Publications (1)

Publication Number Publication Date
WO2022258024A1 true WO2022258024A1 (zh) 2022-12-15

Family

ID=78672859

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/097931 WO2022258024A1 (zh) 2021-06-10 2022-06-09 一种图像处理方法和电子设备

Country Status (4)

Country Link
US (1) US20230419570A1 (zh)
EP (1) EP4224831A1 (zh)
CN (3) CN113726950B (zh)
WO (1) WO2022258024A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116389898A (zh) * 2023-02-27 2023-07-04 荣耀终端有限公司 图像处理方法、设备及存储介质

Families Citing this family (10)

Publication number Priority date Publication date Assignee Title
CN113726950B (zh) * 2021-06-10 2022-08-02 Honor Device Co., Ltd. Image processing method and electronic device
CN116414493A (zh) * 2021-12-30 2023-07-11 Honor Device Co., Ltd. Image processing method, electronic device, and storage medium
CN114581684A (zh) * 2022-01-14 2022-06-03 Shandong University Active target tracking method, system, and device based on semantic spatio-temporal representation learning
CN116563083A (zh) * 2022-01-29 2023-08-08 Huawei Technologies Co., Ltd. Image rendering method and related apparatus
CN114669047B (zh) * 2022-02-28 2023-06-02 Honor Device Co., Ltd. Image processing method, electronic device, and storage medium
CN114708369B (zh) * 2022-03-15 2023-06-13 Honor Device Co., Ltd. Image rendering method and electronic device
CN114626975A (zh) * 2022-03-21 2022-06-14 Beijing Zitiao Network Technology Co., Ltd. Data processing method, apparatus, device, storage medium, and program product
CN116704075A (zh) * 2022-10-14 2023-09-05 Honor Device Co., Ltd. Image processing method, device, and storage medium
CN117917683A (zh) * 2022-10-20 2024-04-23 Huawei Technologies Co., Ltd. Image rendering method and apparatus
CN117114964A (zh) * 2023-02-13 2023-11-24 Honor Device Co., Ltd. Method for caching image frames, electronic device, and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN106204700A (zh) * 2014-09-01 2016-12-07 三星电子株式会社 渲染设备和方法
WO2017092019A1 (zh) * 2015-12-03 2017-06-08 华为技术有限公司 根据场景改变图形处理分辨率的方法和便携电子设备
CN109064538A (zh) * 2018-08-01 2018-12-21 Oppo广东移动通信有限公司 视图渲染方法、装置、存储介质及智能终端
CN113726950A (zh) * 2021-06-10 2021-11-30 荣耀终端有限公司 一种图像处理方法和电子设备

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
US8547378B2 (en) * 2008-08-28 2013-10-01 Adobe Systems Incorporated Time-based degradation of images using a GPU
CN103500463B (zh) * 2013-10-17 2016-04-27 北京大学 一种gpu上多层形状特征融合的可视化方法
CN104157004B (zh) * 2014-04-30 2017-03-29 常州赞云软件科技有限公司 一种融合gpu与cpu计算辐射度光照的方法
US9940858B2 (en) * 2016-05-16 2018-04-10 Unity IPR ApS System and method for assymetric rendering to eyes in augmented reality and virtual reality devices
CN109559270B (zh) * 2018-11-06 2021-12-24 华为技术有限公司 一种图像处理方法及电子设备
CN111508055B (zh) * 2019-01-30 2023-04-11 华为技术有限公司 渲染方法及装置
CN111696186B (zh) * 2019-02-27 2023-09-26 杭州海康威视系统技术有限公司 界面渲染方法及装置
CN110471731B (zh) * 2019-08-09 2022-08-05 网易(杭州)网络有限公司 显示界面绘制方法、装置、电子设备及计算机可读介质
CN112686981B (zh) * 2019-10-17 2024-04-12 华为终端有限公司 画面渲染方法、装置、电子设备及存储介质
CN112533059B (zh) * 2020-11-20 2022-03-08 腾讯科技(深圳)有限公司 图像渲染方法、装置、电子设备以及存储介质


Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116389898A (zh) * 2023-02-27 2023-07-04 荣耀终端有限公司 图像处理方法、设备及存储介质
CN116389898B (zh) * 2023-02-27 2024-03-19 荣耀终端有限公司 图像处理方法、设备及存储介质

Also Published As

Publication number Publication date
CN113726950A (zh) 2021-11-30
CN113726950B (zh) 2022-08-02
CN115473957B (zh) 2023-11-14
CN117063461A (zh) 2023-11-14
US20230419570A1 (en) 2023-12-28
EP4224831A1 (en) 2023-08-09
CN115473957A (zh) 2022-12-13

Similar Documents

Publication Publication Date Title
WO2022258024A1 (zh) Image processing method and electronic device
WO2020259452A1 (zh) Full-screen display method and device for a mobile terminal
CN114397979B (zh) Application display method and electronic device
WO2020224485A1 (zh) Screenshot method and electronic device
CN109559270B (zh) Image processing method and electronic device
WO2021036770A1 (zh) Split-screen processing method and terminal device
WO2022007862A1 (zh) Image processing method, system, electronic device, and computer-readable storage medium
WO2021258814A1 (zh) Video synthesis method and apparatus, electronic device, and storage medium
WO2022001258A1 (zh) Multi-screen display method and apparatus, terminal device, and storage medium
WO2020155875A1 (zh) Display method for electronic device, graphical user interface, and electronic device
WO2022222924A1 (zh) Method for adjusting screen projection display parameters
CN113961157A (zh) Display interaction system, display method, and device
CN115756268A (zh) Cross-device interaction method and apparatus, screen projection system, and terminal
CN112437341B (zh) Video stream processing method and electronic device
WO2023071482A1 (zh) Video editing method and electronic device
WO2022143310A1 (zh) Dual-channel screen projection method and electronic device
WO2022135195A1 (zh) Method, apparatus, and device for displaying a virtual reality interface, and readable storage medium
WO2022078116A1 (zh) Brush effect image generation method, image editing method, device, and storage medium
WO2021204103A1 (zh) Photo preview method, electronic device, and storage medium
CN115686403A (zh) Display parameter adjustment method, electronic device, chip, and readable storage medium
CN113495733A (zh) Theme package installation method and apparatus, electronic device, and computer-readable storage medium
WO2023124149A1 (zh) Image processing method, electronic device, and storage medium
WO2023005900A1 (zh) Screen projection method, electronic device, and system
WO2024083031A1 (zh) Display method, electronic device, and system
WO2024066834A1 (zh) Vsync signal control method, electronic device, storage medium, and chip

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 22819621; Country of ref document: EP; Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2022819621; Country of ref document: EP; Effective date: 20230505
WWE Wipo information: entry into national phase
    Ref document number: 18252920; Country of ref document: US
WWE Wipo information: entry into national phase
    Ref document number: 202280024721.9; Country of ref document: CN
NENP Non-entry into the national phase
    Ref country code: DE