WO2021008373A1 - Display method and apparatus, electronic device, and computer-readable medium - Google Patents
Display method and apparatus, electronic device, and computer-readable medium
- Publication number
- WO2021008373A1 WO2021008373A1 PCT/CN2020/099807 CN2020099807W WO2021008373A1 WO 2021008373 A1 WO2021008373 A1 WO 2021008373A1 CN 2020099807 W CN2020099807 W CN 2020099807W WO 2021008373 A1 WO2021008373 A1 WO 2021008373A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- layer
- layers
- interface
- mixed
- barrage
- Prior art date
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/37—Details of the operation on graphic patterns
- G09G5/377—Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/003—Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
- G09G5/006—Details of the interface to the display terminal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/3243—Power saving in microcontroller unit
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/325—Power saving in peripheral device
- G06F1/3265—Power saving in display device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/39—Control of the bit-mapped memory
- G09G5/393—Arrangements for updating the contents of the bit-mapped memory
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/426—Internal components of the client ; Characteristics thereof
- H04N21/42653—Internal components of the client ; Characteristics thereof for processing graphics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/433—Content storage operation, e.g. storage operation in response to a pause request, caching operations
- H04N21/4331—Caching operations, e.g. of an advertisement for later insertion during playback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/02—Improving the quality of display appearance
- G09G2320/0252—Improving the response speed
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/10—Special adaptations of display systems for operation with variable images
- G09G2320/103—Detection of image changes, e.g. determination of an index representative of the image change
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2330/00—Aspects of power supply; Aspects of display protection and defect management
- G09G2330/02—Details of power systems and of start or stop of display operation
- G09G2330/021—Power management, e.g. power saving
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/10—Mixing of images, i.e. displayed pixel being the result of an operation, e.g. adding, on the corresponding input pixels
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/12—Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
- G09G2340/125—Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels wherein one of the images is motion video
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/18—Use of a frame buffer in a display terminal, inclusive of the display panel
Definitions
- This application relates to the field of video processing technology, and more specifically, to a display method, device, electronic device, and computer-readable medium.
- This application provides a display method, an apparatus, an electronic device, and a computer-readable medium to remedy the above defects.
- In a first aspect, an embodiment of the present application provides a display method applied to an electronic device that includes a multimedia display processor and a graphics processor. The method includes: acquiring layers to be synthesized, where the layers to be synthesized include interface layers of a video playback interface and a multimedia layer corresponding to the video played on the video playback interface; calling the graphics processor to synthesize the interface layers to obtain a layer to be mixed; calling the multimedia display processor to synthesize the multimedia layer and the layer to be mixed to obtain a target image; and displaying the target image on the video playback interface.
- In a second aspect, an embodiment of the present application further provides a display apparatus applied to an electronic device that includes a multimedia display processor and a graphics processor. The apparatus includes: a layer acquisition module configured to acquire layers to be synthesized, the layers to be synthesized including interface layers of a video playback interface and a multimedia layer corresponding to the video played on the video playback interface; a first synthesis module configured to call the graphics processor to synthesize the interface layers to obtain a layer to be mixed; a second synthesis module configured to call the multimedia display processor to synthesize the multimedia layer and the layer to be mixed to obtain a target image; and an image display module configured to display the target image on the video playback interface.
- In a third aspect, an embodiment of the present application further provides an electronic device, including: one or more processors; a memory; a graphics processor; a multimedia display processor; and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the above method.
- In a fourth aspect, an embodiment of the present application further provides a computer-readable medium having program code stored therein, the program code being callable by a processor to perform the above method.
- With the display method, apparatus, electronic device, and computer-readable medium provided by this application, when the layers to be synthesized are acquired, different processors are called for synthesis according to the layer types of the layers to be synthesized. Specifically, the graphics processor is called to synthesize the interface layers to obtain a layer to be mixed, and the multimedia display processor is called to synthesize the multimedia layer and the layer to be mixed to obtain the target image. Because the multimedia display processor
- consumes less power, calling different processors to synthesize the layers to be synthesized saves the power consumed by multi-layer overlay and increases the layer synthesis speed, making the image synthesis approach more reasonable.
- Figure 1 shows a logical framework diagram of image processing provided by an embodiment of the present application
- FIG. 2 shows a schematic flowchart of a display method provided by an embodiment of the present application
- FIG. 3 shows a schematic diagram of a layer to be synthesized in a display method provided by an embodiment of the present application
- FIG. 4 shows a schematic flowchart of a display method provided by another embodiment of the present application.
- FIG. 5 shows a schematic flowchart of a display method provided by another embodiment of the present application.
- FIG. 6 shows a schematic flow chart of invoking a multimedia display processor for synthesis according to another embodiment of the present application
- FIG. 7 shows a schematic diagram of a multimedia layer of a display method provided by an embodiment of the present application.
- FIG. 8 shows a block diagram of a display device provided by an embodiment of the present application.
- FIG. 9 shows a structural block diagram of an electronic device provided by an embodiment of the present application.
- Fig. 10 shows a storage unit for storing or carrying the program code for implementing the display method according to the embodiment of the present application.
- FIG. 1 shows a logical framework diagram of image processing provided by an embodiment of the present application.
- An application can create a window through the window manager (WindowManager).
- The window manager creates a Surface for each window, on which the various elements that need to be displayed are drawn.
- Each surface corresponds to one layer; that is, one layer can be drawn on each surface.
- Each layer is drawn on its corresponding surface; specifically, layer drawing can be performed on the canvas provided by the surface through the hardware-accelerated renderer (HWUI) and/or the Skia graphics library.
- The system uses the SurfaceFlinger service to composite the surfaces, that is, to composite the layers.
- The SurfaceFlinger service runs in the system process and manages the system's frame buffer (FrameBuffer) in a unified manner.
- SurfaceFlinger obtains all the layers and can use the graphics processing unit (GPU) to composite them, saving the result to the frame buffer.
- The GPU can composite all or only some of the layers.
- When the GPU composites only some of the layers of a display interface, the hardware composer (HWC) can composite the result that SurfaceFlinger produced through the GPU together with the other layers.
- Specifically, the HWC can call the multimedia display processor (MDP) to composite the layer obtained after GPU composition in the frame buffer with the other layers that have not yet been composited, finally forming a Buffer in the BufferQueue; under the control of the display driver, the image composited into the Buffer is then used for display.
- Therefore, the electronic device can composite layers through the MDP, through the GPU, or through a combination of the MDP and the GPU.
- the power consumption of multi-layer overlay is high.
- An embodiment of the present application provides a display method, which is applied to an electronic device, and the method is used to reasonably set an image synthesis strategy, thereby saving the power consumption of multi-layer overlay.
- the method includes: S101 to S104.
- The layers to be synthesized are the layers corresponding to the image that currently needs to be displayed on the screen of the electronic device.
- The layers to be synthesized include the interface layer of the video playback interface and the multimedia layer corresponding to the video played on the video playback interface.
- As shown in Figure 3, the interface currently displayed on the screen is the video playback interface 10 of a video client, and the client's interface layer and multimedia layer are displayed in the video playback interface.
- The interface layer is the layer used to display the operation interface of the client; specifically, it may be a layer that includes the various UI elements of the client, such as layer 11.
- The multimedia layer is the layer corresponding to the video played on the video playback interface; specifically, it may be a layer that includes the various parts of the video playback content, such as bullet screens, subtitles, and video images, such as layer 12.
- Normally, the interface layer is a static layer and the multimedia layer is a dynamic layer; that is, under the current interface of the client, the image in the multimedia layer may change while the image in the interface layer generally does not.
- For example, if the current client is a video APP, the image of the video playback content is displayed in the region of the client's multimedia layer, and the data in that image changes as the video playback content changes.
- As a result, the multimedia layer in the next frame displayed on the screen may differ from the multimedia layer of the previous frame; for instance, the video frame played in the previous frame may differ from the video frame played in the next frame.
- The interface layer in the next frame, however, is the same as the interface layer of the previous frame.
- When the client determines the content to be displayed, it can determine the multiple layers corresponding to that content, so that the layers can be rendered and composited for display.
- The graphics processor may be a GPU.
- The GPU is a general-purpose graphics processor with powerful graphics capabilities: in addition to 2D image processing it can also perform 3D image processing, special effects, and so on, and it can overlay many layers at once. Because the amount of data in the interface layers is small, the power the graphics processor needs to composite them is also small; by calling the GPU to composite the interface layers first, the number of layers that must be overlaid in the next composition step, and thus the number of layers that later have to be overlaid at one time, is reduced without incurring much power consumption.
- S103 Invoke the multimedia display processor to synthesize the multimedia layer and the layer to be mixed to obtain a target image.
- The target image is the image obtained after the layers to be synthesized are composited.
- The layers to be synthesized are the layers corresponding to the image that currently needs to be displayed on the screen of the electronic device.
- The target image is therefore the image that currently needs to be displayed on the electronic device.
- In other words, the image displayed on the screen is the image obtained after the layers to be synthesized have been composited.
- The image currently displayed on the screen is the video playback interface of a video client.
- The interface layer and the multimedia layer of the video client are displayed in the video playback interface.
- By compositing the layers of the video playback interface, the target image corresponding to the video playback interface is obtained.
- The MDP is a dedicated display image processing unit that can perform conventional 2D image processing. Its main advantage is low power consumption, but its cost is relatively high.
- The more layers an MDP can overlay at one time, the more named pipes (FIFO pipes) it needs internally.
- The 2D image that the user finally sees is actually composited from multiple layers, such as a wallpaper layer, a status bar layer, a navigation bar layer, an APP layer, a floating-ball layer, a video layer, and so on.
- The number of these layers differs across application scenarios.
- Some applications display an interface with only 4 layers, so the MDP can be chosen to perform the composition.
- The interfaces displayed by other applications have 7 layers or even more, and relying on a single MDP alone to do the overlay composition cannot complete the task.
- The GPU is a general-purpose graphics processor whose graphics capabilities are much stronger than those of the MDP: in addition to 2D image processing it can also perform 3D image processing, special effects, and so on, but its power consumption is relatively high, and it can overlay many layers at once.
- By first calling the higher-power GPU to composite the simple interface layers and then calling the lower-power MDP to composite the video layer and the layer to be mixed, the power consumed by multi-layer overlay is saved. Moreover, because some of the layers to be synthesized, such as the interface layers, are composited by the GPU first and the resulting layer to be mixed and the video layer are then composited by the MDP, the number of layers the MDP has to composite is reduced, which lowers the requirement on the number of layers the MDP must overlay at one time. When the number of FIFO pipes available to the MDP is limited, this increases the layer composition speed, makes the layer composition approach more reasonable, and, with the lower power consumption, also improves the battery life of the electronic device and the user experience.
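To make the two-stage strategy above concrete, the following C++ sketch partitions the layers to be synthesized by type, flattens the interface layers first, and only then overlays the result with the multimedia layer. It is a minimal illustration, not an implementation of the claimed method: the `Layer` structure and the `composeOnGpu`/`composeOnMdp` functions are hypothetical placeholders (both are modelled here as a naive top-down overlay so the sketch stays self-contained), whereas on a real device they would drive the GPU and the multimedia display processor respectively.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative layer model: each layer carries a type tag and a pixel buffer.
enum class LayerType { Interface, Multimedia };

struct Layer {
    LayerType type;
    std::vector<uint32_t> pixels;   // placeholder pixel buffer, 0 = transparent
};

// Naive overlay used as a stand-in for both hardware back ends.
static Layer overlay(const std::vector<Layer>& layers, LayerType resultType) {
    Layer out{resultType, {}};
    for (const Layer& l : layers) {
        if (out.pixels.size() < l.pixels.size()) out.pixels.resize(l.pixels.size(), 0);
        for (std::size_t i = 0; i < l.pixels.size(); ++i)
            if (l.pixels[i] != 0) out.pixels[i] = l.pixels[i];   // later layers on top
    }
    return out;
}

// Hypothetical back ends: higher-power GPU vs. lower-power, pipe-limited MDP.
Layer composeOnGpu(const std::vector<Layer>& layers) { return overlay(layers, LayerType::Interface); }
Layer composeOnMdp(const std::vector<Layer>& layers) { return overlay(layers, LayerType::Multimedia); }

// S101-S104: the UI layers are flattened by the GPU first, so the MDP only has
// to overlay a small number of layers (the layer to be mixed plus the video content).
Layer composeTargetImage(const std::vector<Layer>& layersToSynthesize) {
    std::vector<Layer> interfaceLayers, multimediaLayers;
    for (const Layer& l : layersToSynthesize)
        (l.type == LayerType::Interface ? interfaceLayers : multimediaLayers).push_back(l);

    Layer layerToBeMixed = composeOnGpu(interfaceLayers);   // S102
    std::vector<Layer> mdpInputs = multimediaLayers;
    mdpInputs.push_back(layerToBeMixed);
    return composeOnMdp(mdpInputs);                         // S103; S104 displays the result
}
```

The point of the split is simply that the second call only ever receives the multimedia layer(s) plus one pre-flattened UI layer, which is what keeps the MDP within its FIFO-pipe budget.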
- The target image is the image obtained by compositing the layers to be synthesized and is used for display on the video playback interface.
- Displaying the composited image realizes playback of the video content on the video playback interface.
- The user can obtain the corresponding control effects by operating the video playback interface; for example, during playback the user can tap the video playback interface to pause playback, or tap the lock control to lock the video playback interface so that it no longer responds to the user's taps and other operations.
- In the display method of this embodiment, the higher-power GPU is first called to composite the simple interface layers, and the lower-power MDP is then called to composite the video layer and the layer to be mixed. This not only saves the power consumed by multi-layer overlay, but, because some of the layers to be synthesized, such as the interface layers, are composited by the GPU first and the resulting layer to be mixed and the video layer are then composited by the MDP, also reduces the number of layers the MDP needs to composite, lowering the requirement on the number of layers the MDP can overlay at one time.
- The layer composition speed can thus be increased, the layer composition approach becomes more reasonable, and the lower power consumption also improves
- the battery life of the electronic device and the user experience.
- FIG. 4 shows a display method provided by an embodiment of the present application.
- the method is applied to an electronic device.
- The method is used to set the image composition strategy reasonably, thereby saving the power consumed by multi-layer overlay and increasing the image composition speed.
- Specifically, the method includes S201 to S205.
- The frame buffer module is a frame buffer (FrameBuffer).
- After the graphics processor is called to composite layers, the composited layers can be stored in the frame buffer module.
- Specifically, the graphics processor is called to composite the interface layers, and
- the resulting layer to be mixed is stored in the frame buffer module.
- S204 Invoke the multimedia display processor to synthesize the multimedia layer and the layer to be mixed to obtain a target image.
- the MDP synthesizes the to-be-mixed layer obtained after the GPU synthesis in the frame buffer module and other unsynthesized multimedia layers to obtain the target image.
- the target image in the frame buffer module is displayed on the video playback interface.
- The frame buffer module may include a temporary frame buffer module.
- The graphics processor is called to composite the interface layers to obtain the layer to be mixed.
- The layer to be mixed is stored in the temporary frame buffer module, and the target image is displayed on the video playback interface.
- After display, the layer to be mixed stored in the temporary frame buffer module is retained, so that when the interface layer does not change between the previous frame and the next frame, the layer to be mixed last stored in the temporary frame buffer module can be determined as
- the layer to be mixed for this composition, without having to composite again an interface layer identical to that of the previous frame; this further saves the power of layer overlay and increases the layer composition speed.
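A minimal sketch of this temporary-frame-buffer idea follows, assuming the "same interface layer as last time" test can be reduced to comparing a fingerprint of the interface pixels; the hash, the structure fields, and the GPU stub are illustrative only and are not part of the application.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

using Pixels = std::vector<uint32_t>;

// The composed UI (layer to be mixed) is retained after display, together with a
// fingerprint of the interface layers it was built from, so an unchanged UI does
// not have to be re-composed on the GPU for the next frame.
struct TemporaryFrameBuffer {
    std::optional<Pixels> retainedMixedLayer;
    uint64_t retainedInterfaceHash = 0;
};

static uint64_t fingerprint(const Pixels& px) {           // FNV-1a style hash
    uint64_t h = 1469598103934665603ull;
    for (uint32_t v : px) { h ^= v; h *= 1099511628211ull; }
    return h;
}

static Pixels gpuComposeInterface(const Pixels& interfacePixels) {
    return interfacePixels;   // stand-in for the real GPU composition of the UI layers
}

// Returns the layer to be mixed for this frame, reusing the cached one when possible.
Pixels layerToBeMixedForThisFrame(TemporaryFrameBuffer& fb, const Pixels& interfacePixels) {
    const uint64_t h = fingerprint(interfacePixels);
    if (fb.retainedMixedLayer && h == fb.retainedInterfaceHash)
        return *fb.retainedMixedLayer;                     // UI unchanged: skip the GPU
    Pixels mixed = gpuComposeInterface(interfacePixels);   // UI changed: recompose
    fb.retainedMixedLayer = mixed;                         // store and keep after display
    fb.retainedInterfaceHash = h;
    return mixed;
}
```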
- Figure 5 shows a display method provided by an embodiment of the present application. The method is applied to an electronic device and is used to set the image composition strategy reasonably, thereby saving the power consumed by multi-layer overlay and increasing the image composition speed.
- the method includes S301-S307:
- S302 Determine whether the interface layer is the same as the interface layer acquired last time.
- When the interface layer was acquired last time, it was stored in a specific location, such as the temporary frame buffer module, so that it could be compared with the interface layer acquired this time; likewise, the interface layer stored this time is used for comparison with the interface layer acquired next time. Before calling the GPU to composite the interface layer, whether the interface layer acquired this time needs to be composited can therefore be determined by judging whether it is the same as the interface layer acquired last time.
- The specific location may be any location in the memory of the electronic device, which is not limited here; it can be understood that the specific location has a specific address. Specifically, after the interface layer is acquired this time, the interface layer acquired last time is fetched from the specific location and compared with the interface layer acquired this time, so as to determine whether they are the same.
- In some implementations, the electronic device may store not only the interface layer acquired last time but also the interface layers acquired in several earlier acquisitions, so that before calling the GPU to composite the interface layer, whether the interface layer acquired this time needs to be composited can be determined by judging whether it is the same as any of those previously acquired interface layers. Specifically, as one approach, the electronic device may determine whether the interface layer among the layers to be synthesized (i.e., the interface layer acquired this time) is the same as a first interface layer; if it differs from the first interface layer, the device continues to determine whether it is the same as a second interface layer.
- If the interface layer also differs from the second interface layer, the graphics processor can be called to composite the interface layer to obtain the layer to be mixed.
- The first interface layer is the interface layer acquired before the interface layer acquired this time,
- and the second interface layer is the interface layer acquired before the first interface layer. Thus, if the interface layer acquired this time differs from the interface layer acquired last time (i.e., the first interface layer) but is the same as the interface layer acquired the time before that (i.e., the second interface layer), there is still no need to composite the interface layer acquired this time, which saves the power that layer composition would consume.
- In this embodiment, after determining whether the interface layer is the same as the interface layer acquired last time: if they are the same, S303 may be executed;
- if they are different, S304 may be executed.
- S303: Determine the layer to be mixed last stored in the temporary frame buffer module as the layer to be mixed for this composition.
- The layer to be mixed stored last time remains in the temporary frame buffer module after the previous composition, that is, it is not deleted from the temporary frame buffer module before this composition.
- When the interface layer acquired this time is the same as the one acquired last time, the layer to be mixed stored last time is determined as the layer to be mixed for this composition and used for the subsequent composition. Therefore, when the interface layer is unchanged, determining the last-stored layer to be mixed as the layer to be mixed for this composition reduces the number of times the same interface is composited and correspondingly reduces
- the power required for composition, further saving the power consumed by multi-layer overlay.
- S304: Call the graphics processor to composite the interface layers to obtain the layer to be mixed for this composition. S305: Store the layer to be mixed in the temporary frame buffer module. After the target image is displayed on the video playback interface, the layer to be mixed stored in the temporary frame buffer module is retained, so that it
- is still available when this composition ends; at the next composition, if the interface layer is the same as the one acquired last time, that layer to be mixed can be determined as the layer to be mixed for that composition, without compositing the newly acquired interface layer. Because the interface layer is stored in the temporary frame buffer module, when the interface layer does not change, determining the last-stored layer to be mixed as the layer to be mixed for compositing reduces the number of times the same interface is composited, correspondingly reduces the power required for composition, and further saves the power consumed by multi-layer overlay.
- As one implementation, if the interface layer acquired this time differs from the one acquired last time, the graphics processor is called to composite the interface layer to obtain the layer to be mixed for this composition, and
- the layer to be mixed previously stored in the temporary frame buffer module is deleted; replacing the previously stored layer to be mixed with the one used for this composition when the interface layer has changed
- optimizes storage, reduces storage pressure, and ensures the operating efficiency of layer composition.
- As another implementation, the graphics processor is called to composite the interface layer to obtain the layer to be mixed for this composition, but the layer to be mixed last stored in the temporary frame buffer module is not deleted, so that the interface layer acquired next time can be compared with both the last and the current interface layers; if the interface layer acquired next time is the same as the interface layer acquired last time,
- the previously obtained layer to be mixed can still be determined as the layer to be mixed for the next composition.
- In an exemplary implementation, the temporary frame buffer module can store two or more layers to be mixed at the same time.
- When the interface layer acquired this time differs from the first interface layer but is the same as the second interface layer, the layer to be mixed corresponding to the second interface layer can also be determined as the layer to be mixed for this composition. The interface layer can thus be compared not only with the interface layer acquired last time but also with earlier interface layers, so that when the interface layer changes for only a few frames and then reverts to its original state, a new layer to be mixed does not have to be re-composited, which further saves the power of layer overlay and improves the efficiency of layer composition.
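Extending the same idea to the case where two or more layers to be mixed are retained, the following sketch keeps a small history of composed UI layers and checks the most recent entry first; the hash-based lookup and all names are assumptions of the sketch, not part of the application.

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <optional>
#include <utility>
#include <vector>

using Pixels = std::vector<uint32_t>;

// Small history of composed UI layers so that an interface which changes for a
// few frames and then reverts does not have to be re-composed. Entries are
// checked newest-first, since consecutive frames are the most likely to match.
class MixedLayerCache {
public:
    explicit MixedLayerCache(std::size_t capacity) : capacity_(capacity) {}

    std::optional<Pixels> lookup(uint64_t interfaceHash) const {
        for (const auto& entry : entries_)           // newest entries are at the front
            if (entry.first == interfaceHash) return entry.second;
        return std::nullopt;                         // UI changed: GPU composition needed
    }

    void store(uint64_t interfaceHash, Pixels mixedLayer) {
        entries_.emplace_front(interfaceHash, std::move(mixedLayer));
        if (entries_.size() > capacity_) entries_.pop_back();  // e.g. capacity 2 keeps the
    }                                                          // "first" and "second" UI layers

private:
    std::size_t capacity_;
    std::deque<std::pair<uint64_t, Pixels>> entries_;
};
```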
- S306 Invoke the multimedia display processor to synthesize the multimedia layer and the layer to be mixed to obtain the target image.
- The multimedia layer includes a bullet-screen layer and a video layer. Because the amount of data corresponding to a bullet-screen layer is small, the power needed to composite bullet-screen layers is small, yet the number of bullet-screen layers is often large while the number of layers the MDP can overlay at one time is limited. Therefore, when the number of bullet-screen layers is large, the GPU can be called to composite the bullet-screen layers to obtain a bullet-screen layer to be mixed, which guarantees the layer composition speed without incurring excessive power consumption. Specifically, please refer to Figure 6, which shows a method of selecting different composition strategies according to the number of bullet-screen layers when the multimedia layer and the layer to be mixed are composited; the method includes S401 to S405:
- S401 Determine whether the number of barrage layers is greater than a preset number.
- the multimedia layer includes the bullet screen layer and the video layer.
- the bullet screen layer is the layer containing the bullet screen on the video playback interface.
- the video layer is the layer corresponding to the video content of the video playback interface.
- the multimedia layer 12 includes a bullet screen layer 121 and a video layer 122.
- The number of bullet-screen (barrage) layers is the number of bullet screens corresponding to the acquired video image at this moment; that is, one bullet screen corresponds to one bullet-screen layer.
- The more bullet screens there are, the more bullet-screen layers there are. When the number of bullet screens is large, compositing them all with the multimedia display processor makes it difficult to output the target image for display in time, because the number of layers that can be overlaid at one time is limited. Therefore, by determining whether the number of bullet screens exceeds the preset number and, when it does, calling the GPU, which can overlay more layers at once, to composite the bullet-screen layers, the layer composition speed can be increased; and because the amount of data in the bullet-screen layers
- is small, this does not cause excessive power consumption.
- In one implementation, the preset number can be set according to the number of FIFO pipes of the MDP. Specifically, the preset number can be equal to or smaller than the number of FIFO pipes of the MDP, so that, by determining whether the number of bullet-screen layers is greater than the preset number,
- the GPU can be called to composite the bullet-screen layers when their number is greater than, or close to, the number of FIFO pipes; in this way, when the number of layers the MDP can overlay at one time is limited, layer composition is performed more reasonably and the composition speed is guaranteed, avoiding a situation in which the MDP cannot finish compositing a frame in time and the user experience suffers.
- In another implementation, the preset number may be determined according to the number of free FIFO pipes in the MDP. Specifically, the preset number can be equal to or smaller than the number of free FIFO pipes in the MDP, so that it can be determined whether the number of bullet-screen layers is greater than the preset number.
- When the number is greater than, or close to, the number of free FIFO pipes in the MDP, the GPU is called to composite the bullet-screen layers;
- otherwise the MDP is still called to composite the bullet-screen layers. The preset number can also be set arbitrarily, for example to 8 or 3, which is not limited here.
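As a small illustration of this threshold check, the sketch below derives the preset number from the MDP's free FIFO pipes (one of the options described above; a fixed value such as 8 or 3 would work the same way). The `MdpInfo` fields are hypothetical.

```cpp
#include <cstddef>

// "barrage" = bullet-screen layers.
struct MdpInfo {
    std::size_t totalFifoPipes;  // e.g. 8 on a high-end platform, 4 on a mid/low-end one
    std::size_t busyFifoPipes;   // pipes already claimed by other layers this frame
};

std::size_t presetNumber(const MdpInfo& mdp) {
    // Use the free pipe count (or anything not larger than it) as the threshold.
    return mdp.totalFifoPipes - mdp.busyFifoPipes;
}

bool shouldUseGpuForBarrage(std::size_t barrageLayerCount, const MdpInfo& mdp) {
    // Fall back to the GPU only when the bullet-screen layers would not fit
    // into the MDP's one-shot overlay capacity.
    return barrageLayerCount > presetNumber(mdp);
}
```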
- After determining whether the number of bullet-screen layers is greater than the preset number, the method may include:
- S402: Call the multimedia display processor to synthesize the multimedia layer and the layer to be mixed to obtain a target image.
- If the number of bullet-screen layers is not greater than the preset number, only the multimedia display processor may be called to composite the multimedia layer and the layer to be mixed to obtain the target image; that is, when the number of bullet-screen layers is small, the MDP can still be called to perform the composition, saving the power of layer overlay.
- As one implementation, the composition strategy for the bullet-screen layers can be selected according to the number of FIFO pipes of the multimedia display processor: if the number of free FIFO pipes in the MDP is not smaller than the number of bullet-screen layers, only the MDP may be called to composite the bullet-screen layers; if the number of free FIFO pipes in the MDP is smaller than the number of bullet-screen layers, the GPU can be called to composite the bullet-screen layers.
- As one approach, the GPU can composite all the bullet-screen layers, thereby increasing the layer composition speed; as another approach, the GPU can be called to composite only the bullet-screen layers that exceed the number of free FIFO pipes,
- while the MDP is called to composite the remaining bullet-screen layers. For example, if the number of free pipes is 2 and the number of bullet-screen layers is 3, the GPU can be called to composite the extra one, and the MDP can be called to composite the other two bullet-screen layers. This makes full use of the idle FIFO pipes and saves overlay power, and because the MDP is used for as much of the overlay as possible, excessive power consumption is also avoided.
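The overflow split described in this paragraph might be sketched as follows; layers are represented by opaque integer handles, and accounting for the pipes needed by the video layer and the GPU result is deliberately left out so the example stays aligned with the 2-pipes/3-layers example in the text.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Layers that fit into the MDP's free FIFO pipes stay on the MDP; only the
// overflow is pre-composed by the GPU into one layer to be mixed.
struct BarrageSplit {
    std::vector<int> forMdp;  // composed directly by the MDP
    std::vector<int> forGpu;  // pre-composed by the GPU
};

BarrageSplit splitBarrageLayers(const std::vector<int>& barrageLayers,
                                std::size_t freeFifoPipes) {
    BarrageSplit out;
    const std::size_t keepOnMdp = std::min(barrageLayers.size(), freeFifoPipes);
    out.forMdp.assign(barrageLayers.begin(),
                      barrageLayers.begin() + static_cast<std::ptrdiff_t>(keepOnMdp));
    out.forGpu.assign(barrageLayers.begin() + static_cast<std::ptrdiff_t>(keepOnMdp),
                      barrageLayers.end());
    return out;
}
// Example from the text: splitBarrageLayers({1, 2, 3}, 2) keeps {1, 2} on the MDP
// and sends {3} to the GPU.
```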
- S403 Invoke the graphics processor to synthesize the barrage layer to obtain the barrage layer to be mixed.
- the graphics processor can be called to synthesize the barrage layers to obtain the barrage layers to be mixed.
- If the preset number is set according to the number of MDP FIFO pipes (for example 4 or 7) and the number of bullet-screen layers is greater than the preset number, the MDP may be unable to overlay the images in time, making it difficult to display them on the video playback interface in time for the user to watch. Therefore, when the number of bullet-screen layers is greater than the preset number, calling the GPU, which can overlay more layers at once, increases the composition speed, so that the target image is composited in time and displayed on the video playback interface, ensuring the user's video-watching experience; and because the amount of data in the bullet-screen layers is small, calling the GPU for composition does not cause excessive power consumption.
- S404 Store the barrage layer to be mixed in the non-temporary frame buffer module.
- the frame buffer module may also include a non-temporary frame buffer module.
- The layers stored in the non-temporary frame buffer module are deleted after this composition.
- The bullet-screen layer changes dynamically: for example, a new bullet screen is generated, or a bullet screen scrolls across the video playback interface, so the bullet-screen layers of the previous frame and the next frame are usually different. Therefore, by storing the bullet-screen layer to be mixed in the non-temporary frame buffer module and deleting the composited bullet-screen layer to be mixed from the non-temporary frame buffer module after this composition, the storage requirements on the frame buffer module can be reduced and the operating efficiency of the MDP improved.
- S405: Call the multimedia display processor to synthesize the bullet-screen layer to be mixed, the video layer, and the layer to be mixed to obtain a target image.
- The layer to be mixed is obtained from the temporary frame buffer module, and the bullet-screen layer to be mixed in the non-temporary frame buffer module, the video layer, and the layer to be mixed are composited to obtain the target image to be displayed, which is then displayed on the video playback page.
- The bullet-screen layer and the video layer are thus handled separately: the GPU is called to process the bullet-screen layers, whose data volume is small, to obtain the bullet-screen layer to be mixed, and the MDP is then called to composite the bullet-screen layer to be mixed, the video
- layer, and the layer to be mixed to obtain the target image, so that, on the basis of the foregoing embodiments, the layer composition speed is guaranteed without excessive power consumption.
- The frame buffer module may also include a non-temporary frame buffer module.
- The GPU is called to composite the bullet-screen layers, and the
- resulting bullet-screen layer to be mixed is stored in the non-temporary frame buffer module; after the target image is displayed on the video playback interface, the layers stored in the non-temporary frame buffer module are deleted. Because the bullet-screen layers of the previous frame and the next frame are usually different, deleting the bullet-screen layer to be mixed that was stored for the previous frame during this composition reduces the storage requirements on the frame buffer module and improves the operating efficiency of the MDP.
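A compact way to picture the two retention policies is a frame buffer module with one slot that survives display and one that does not; the structure below is purely illustrative, not a real HAL interface.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

using Pixels = std::vector<uint32_t>;

// The temporary slot keeps the composed UI (layer to be mixed) across frames so an
// unchanged interface can be reused; the non-temporary slot holds the composed
// bullet-screen layer for the current frame only.
struct FrameBufferModule {
    std::optional<Pixels> temporaryMixedUi;          // retained after display
    std::optional<Pixels> nonTemporaryMixedBarrage;  // dropped after display

    void onTargetImageDisplayed() {
        // Bullet screens scroll or change almost every frame, so keeping last
        // frame's composed barrage layer would only waste storage.
        nonTemporaryMixedBarrage.reset();
        // temporaryMixedUi is deliberately kept for the next frame's comparison.
    }
};
```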
- the display device 800 may include: an image acquisition module 810, a first synthesis module 820, a second synthesis module 830, and an image display module 840, of which:
- the image acquisition module 810 is configured to acquire a layer to be synthesized, and the layer to be synthesized includes an interface layer of a video playback interface and a multimedia layer corresponding to a video played on the video playback interface.
- the first synthesis module 820 is configured to call the graphics processor to synthesize the interface layers to obtain the layers to be mixed.
- the second synthesis module 830 is configured to call the multimedia display processor to synthesize the multimedia layer and the layer to be mixed to obtain a target image.
- the image display module 840 is configured to display the target image on the video playback interface.
- the display device 800 further includes: a first storage module, wherein:
- the first storage module is used to store the layer to be mixed in the frame buffer module.
- the frame buffer module includes a temporary frame buffer module
- the first storage module includes: a first storage unit and a layer reservation unit, wherein:
- the first storage unit is configured to store the layer to be mixed in the temporary frame buffer module
- the layer retention unit is configured to retain the layer to be mixed stored in the temporary frame buffer module after the target image is displayed on the video playback interface.
- the first synthesis module includes: a first judgment unit and a first synthesis unit, wherein:
- the first judgment unit is used to judge whether the interface layer is the same as the interface layer acquired last time
- the first compositing unit is configured to, if different, call the graphics processor to synthesize the interface layer to obtain the layer to be mixed.
- the first synthesis module further includes: a first determining unit, wherein:
- the first determining unit is configured to, if the interface layer is the same as the interface layer acquired last time, determine the layer to be mixed last stored in the temporary frame buffer module as the layer to be mixed for this composition.
- the multimedia layer includes a bullet screen layer and a video layer
- the second composition module includes: a second judgment unit, a second composition unit, and a third composition unit, wherein:
- the second judgment unit is used to determine whether the number of bullet-screen layers is greater than a preset number.
- the second compositing unit is configured to, if the number is greater than the preset number, call the graphics processor to synthesize the bullet-screen layers to obtain the bullet-screen layer to be mixed.
- the third synthesis unit is configured to call the multimedia display processor to synthesize the barrage layer to be mixed, the video layer and the layer to be mixed to obtain a target image.
- the frame buffer module includes a non-temporary frame buffer module
- the second synthesis unit includes:
- the second storage subunit is used for storing the barrage layer to be mixed in the non-temporary frame buffer module.
- the layer deletion subunit is used to delete the layer stored in the non-temporary frame buffer module after the target image is displayed on the video playback interface.
- the coupling between the modules may be electrical, mechanical or other forms of coupling.
- each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
- the above-mentioned integrated modules can be implemented in the form of hardware or software function modules.
- the electronic device 900 may be an electronic device capable of running application programs, such as a smart phone, a tablet computer, or an e-book.
- the electronic device 900 in this application may include one or more of the following components: a processor 910, a memory 920, a graphics processor 930, a multimedia display processor 940, and one or more application programs, where the one or more application programs may be stored in the memory 920 and configured to be executed by the one or more processors 910, and the one or more programs are configured to perform the method described in the foregoing method embodiments.
- the processor 910 may include one or more processing cores.
- the processor 910 uses various interfaces and lines to connect the various parts of the entire electronic device 900, and performs various functions of the electronic device 900 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 920 and by calling the data stored in the memory 920.
- the processor 910 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA).
- the processor 910 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
- the CPU mainly processes the operating system, user interface, and application programs;
- the GPU is used for rendering and drawing of display content;
- the modem is used for processing wireless communication. It is understandable that the above-mentioned modem may not be integrated into the processor 910, but may be implemented by a communication chip alone.
- the memory 920 may include random access memory (RAM) or read-only memory (ROM).
- the memory 920 may be used to store instructions, programs, codes, code sets or instruction sets.
- the memory 920 may include a program storage area and a data storage area.
- the program storage area may store instructions for implementing the operating system and instructions for implementing at least one function (such as touch function, sound playback function, image playback function, etc.) , Instructions for implementing the following method embodiments, etc.
- the data storage area can also store data (such as phone book, audio and video data, chat record data) created by the terminal 100 during use.
- the graphics processor 930 is a general-purpose graphics processor, which is much more powerful than the MDP in graphics processing: in addition to 2D image processing, it can also perform 3D image processing, special effects, and so on, but its power consumption is relatively high, and it can overlay multiple layers at one time.
- the multimedia display processor 940 is a dedicated display image processing unit that can perform conventional 2-dimensional image processing. Its main advantage is low power consumption but high cost. The more layers an MDP can superimpose at one time, the more FIFO pipelines are needed inside it. On Qualcomm's high-end platform, an MDP has 8 FIFO pipelines, and up to 8 layers can be synthesized and superimposed at one time. On Qualcomm's low-end and mid-range platforms, an MDP has only 4 FIFO pipelines, and up to 4 layers can be synthesized at a time.
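Turning the pipe counts quoted above into a capability check could look like the following sketch; only the 8-pipe and 4-pipe figures come from the text, while the platform enum and the decision rule are assumptions of the sketch.

```cpp
#include <cstddef>

enum class Platform { QualcommHighEnd, QualcommMidLow };

std::size_t mdpFifoPipes(Platform p) {
    switch (p) {
        case Platform::QualcommHighEnd: return 8;  // up to 8 layers overlaid at once
        case Platform::QualcommMidLow:  return 4;  // up to 4 layers overlaid at once
    }
    return 4;
}

bool mdpAloneCanCompose(std::size_t layerCount, Platform p) {
    return layerCount <= mdpFifoPipes(p);
}
```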
- FIG. 10 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
- the computer-readable storage medium 1000 stores program codes, and the program codes can be invoked by a processor to execute the methods described in the foregoing method embodiments.
- the computer-readable storage medium 1000 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
- the computer-readable storage medium 1000 includes a non-transitory computer-readable storage medium.
- the computer-readable storage medium 1000 has storage space for the program code 1010 for executing any method steps in the above-mentioned methods. These program codes can be read out from or written into one or more computer program products.
- the program code 1010 may, for example, be compressed in an appropriate form.
Abstract
A display method, an apparatus, an electronic device, and a computer-readable storage medium, relating to the technical field of image processing. The method includes: acquiring layers to be synthesized (S101), where the layers to be synthesized include interface layers of a video playback interface and a multimedia layer corresponding to the video played on the video playback interface; calling a graphics processor to synthesize the interface layers to obtain a layer to be mixed (S102); calling a multimedia display processor to synthesize the multimedia layer and the layer to be mixed to obtain a target image (S103); and displaying the target image on the video playback interface (S104). By calling different processors to synthesize the layers to be synthesized, the power consumption caused by multi-layer overlay can be reduced and the layer synthesis speed increased, making the image synthesis approach more reasonable.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese patent application No. CN201910647562.8, entitled "Display method, apparatus, electronic device, and computer-readable medium", filed with the Chinese Patent Office on July 17, 2019, the entire contents of which are incorporated herein by reference.
This application relates to the field of video processing technology and, more specifically, to a display method, an apparatus, an electronic device, and a computer-readable medium.
With the development of electronic and information technology, more and more devices are able to play video. During video playback, a device needs to decode, render, and composite the video before displaying it on the screen; the image composition operation may be performed by a graphics processing unit (GPU) or a multimedia display processor (MDP). In existing video playback technology, however, the image composition strategy is not reasonable enough.
SUMMARY
This application provides a display method, an apparatus, an electronic device, and a computer-readable medium to remedy the above defects.
In a first aspect, an embodiment of the present application provides a display method applied to an electronic device, the electronic device including a multimedia display processor and a graphics processor. The method includes: acquiring layers to be synthesized, the layers to be synthesized including interface layers of a video playback interface and a multimedia layer corresponding to the video played on the video playback interface; calling the graphics processor to synthesize the interface layers to obtain a layer to be mixed; calling the multimedia display processor to synthesize the multimedia layer and the layer to be mixed to obtain a target image; and displaying the target image on the video playback interface.
In a second aspect, an embodiment of the present application further provides a display apparatus applied to an electronic device, the electronic device including a multimedia display processor and a graphics processor. The apparatus includes: a layer acquisition module configured to acquire layers to be synthesized, the layers to be synthesized including interface layers of a video playback interface and a multimedia layer corresponding to the video played on the video playback interface; a first synthesis module configured to call the graphics processor to synthesize the interface layers to obtain a layer to be mixed; a second synthesis module configured to call the multimedia display processor to synthesize the multimedia layer and the layer to be mixed to obtain a target image; and an image display module configured to display the target image on the video playback interface.
In a third aspect, an embodiment of the present application further provides an electronic device, including: one or more processors; a memory; a graphics processor; a multimedia display processor; and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the above method.
In a fourth aspect, an embodiment of the present application further provides a computer-readable medium having program code stored therein, the program code being callable by a processor to perform the above method.
Compared with the prior art, with the display method, apparatus, electronic device, and computer-readable medium provided by this application, when the layers to be synthesized are acquired, different processors are called for synthesis according to the layer types of the layers to be synthesized. Specifically, the graphics processor is called to synthesize the interface layers to obtain a layer to be mixed, and the multimedia display processor is called to synthesize the multimedia layer and the layer to be mixed to obtain a target image. Because the multimedia display processor consumes less power, calling different processors to synthesize the layers to be synthesized saves the power consumed by multi-layer overlay and increases the layer synthesis speed, making the image synthesis approach more reasonable.
In order to explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 shows a logical framework diagram of image processing provided by an embodiment of this application;
Fig. 2 shows a schematic flowchart of a display method provided by an embodiment of this application;
Fig. 3 shows a schematic diagram of layers to be synthesized in a display method provided by an embodiment of this application;
Fig. 4 shows a schematic flowchart of a display method provided by another embodiment of this application;
Fig. 5 shows a schematic flowchart of a display method provided by yet another embodiment of this application;
Fig. 6 shows a schematic flowchart of calling the multimedia display processor for synthesis according to yet another embodiment of this application;
Fig. 7 shows a schematic diagram of a multimedia layer of a display method provided by an embodiment of this application;
Fig. 8 shows a block diagram of a display apparatus provided by an embodiment of this application;
Fig. 9 shows a structural block diagram of an electronic device provided by an embodiment of this application;
Fig. 10 shows a storage unit, provided by an embodiment of this application, for storing or carrying program code that implements the display method according to the embodiments of this application.
In order to enable those skilled in the art to better understand the solutions of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application.
Please refer to Fig. 1, which shows a logical framework diagram of image processing provided by an embodiment of this application.
An application can create a window through the window manager; the window manager creates a Surface for each window on which the various elements to be displayed are drawn. Each surface corresponds to one layer, that is, one layer can be drawn on each surface.
A display interface (such as an Activity) may include multiple layers, such as a navigation bar, a status bar, and a program interface. Each layer is drawn on its corresponding surface; specifically, layer drawing can be performed on the canvas provided by the surface through the hardware-accelerated renderer (HWUI) and/or the Skia graphics library.
The system uses the SurfaceFlinger service to composite the surfaces, that is, to composite the layers. The SurfaceFlinger service runs in the system process and manages the system's frame buffer (FrameBuffer) in a unified manner; SurfaceFlinger obtains all the layers and can use the graphics processing unit (GPU) to composite them, saving the result to the frame buffer.
In the embodiments of this application, the GPU may composite all or some of the layers. When the GPU composites only some of the layers of a display interface to be displayed, the hardware composer (HWC) can composite the result produced by SurfaceFlinger through the GPU together with the other layers. Specifically, as shown in Fig. 1, the HWC can call the multimedia display processor (MDP) to composite the layer obtained after GPU composition in the frame buffer with the other layers that have not yet been composited, finally forming a Buffer in the BufferQueue; under the control of the display driver, the image composited into the Buffer is then used for display.
Therefore, an electronic device can composite layers through the MDP, through the GPU, or through a combination of the MDP and the GPU.
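As a small illustration of the three composition paths just mentioned (MDP only, GPU only, or a mixture of the two), the sketch below chooses a path from the number of layers in a frame and the MDP's one-shot overlay capacity; the decision rule itself is an assumption of the sketch, not something the application mandates.

```cpp
#include <cstddef>

enum class CompositionPath { MdpOnly, GpuOnly, Mixed };

// Hypothetical rule: if everything fits into the MDP's FIFO pipes, use the MDP
// alone; if a video layer is involved, pre-compose the remaining layers on the
// GPU and mix them on the MDP; otherwise fall back to GPU-only composition.
CompositionPath choosePath(std::size_t layerCount, std::size_t mdpFifoPipes,
                           bool hasVideoLayer) {
    if (layerCount <= mdpFifoPipes) return CompositionPath::MdpOnly;
    return hasVideoLayer ? CompositionPath::Mixed : CompositionPath::GpuOnly;
}
```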
At present, the power consumption of multi-layer overlay during layer composition is relatively high. In their research, the inventors found that the reason is that the GPU is a general-purpose graphics processor capable of 2D image processing as well as 3D image processing, special effects, and so on. Image composition is therefore commonly performed with the GPU, whose power consumption is high, which in turn affects the battery life of the electronic device.
Therefore, to overcome the above defects, and referring to Fig. 2, an embodiment of this application provides a display method applied to an electronic device. The method is used to set the image composition strategy reasonably, thereby saving the power consumed by multi-layer overlay and increasing the image composition speed. Specifically, the method includes S101 to S104.
S101: Acquire the layers to be synthesized.
The layers to be synthesized are the layers corresponding to the image that currently needs to be displayed on the screen of the electronic device. Specifically, the layers to be synthesized include interface layers of a video playback interface and a multimedia layer corresponding to the video played on the video playback interface. As shown in Fig. 3, the interface currently displayed on the screen is the video playback interface 10 of a video client, and the client's interface layer and multimedia layer are displayed in the video playback interface. The interface layer is the layer used to display the client's operation interface; specifically, it may be a layer that includes the various UI elements of the client, such as layer 11. The multimedia layer is the layer corresponding to the video played on the video playback interface; specifically, it may be a layer that includes the various parts of the video playback content, such as bullet screens, subtitles, and video images, such as layer 12. Normally, the interface layer is a static layer and the multimedia layer is a dynamic layer; that is, under the current interface of the client, the image in the multimedia layer may change while the image in the interface layer generally does not. For example, if the current client is a video APP, the image of the video playback content is displayed in the region of the client's multimedia layer, and the data in that image changes as the video playback content changes, so the multimedia layer in the next frame displayed on the screen differs from the multimedia layer of the previous frame (for instance, the video frame played in the previous frame may differ from the video frame played in the next frame), whereas the interface layer in the next frame is the same as the interface layer of the previous frame.
Therefore, when the client determines the content to be displayed, it can determine the multiple layers corresponding to that content, so that the layers can be rendered and composited for display.
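To make the distinction between the (normally static) interface layer and the (dynamic) multimedia layer concrete, a minimal illustrative data model might look as follows; the fields, including the per-frame change flag, are assumptions for the sketch only.

```cpp
#include <cstdint>
#include <vector>

// Illustrative layer record for one frame of the video playback interface of Fig. 3:
// layer 11 corresponds to Interface content, layer 12 to Video and Barrage content.
enum class LayerKind { Interface, Video, Barrage };

struct DisplayLayer {
    LayerKind kind;
    bool changedSinceLastFrame;        // interface layers are usually false here,
    std::vector<uint32_t> pixels;      // multimedia layers are usually true
};

struct FrameToCompose {
    std::vector<DisplayLayer> layers;  // the "layers to be synthesized" of S101
};
```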
S102: Call the graphics processor to synthesize the interface layers to obtain a layer to be mixed.
The graphics processor may be a GPU. The GPU is a general-purpose graphics processor with powerful graphics capabilities: in addition to 2D image processing it can also perform 3D image processing, special effects, and so on, and it can overlay many layers at once. Because the amount of data in the interface layers is small, the power the graphics processor needs to composite them is also small, and by calling the GPU to composite the interface layers first, the number of layers that must be overlaid in the next composition step, and thus the number of layers that later have to be overlaid at one time, is reduced without incurring much power consumption.
S103: Call the multimedia display processor to synthesize the multimedia layer and the layer to be mixed to obtain a target image.
The target image is the image obtained after the layers to be synthesized are composited. Since the layers to be synthesized are the layers corresponding to the image that currently needs to be displayed on the screen of the electronic device, the target image is that image, i.e., the image obtained by compositing the layers to be synthesized. As shown in Fig. 3, the image currently displayed on the screen is the video playback interface of a video client in which the client's interface layer and multimedia layer are displayed; by compositing the layers of the video playback interface, the target image corresponding to the video playback interface is obtained.
The MDP is a dedicated display image processing unit capable of conventional 2D image processing; its main advantage is low power consumption, but its cost is relatively high. The more layers an MDP can overlay at one time, the more named pipes (FIFO pipes) it needs internally. On Qualcomm's high-end platforms, an MDP has 8 FIFO pipes and can overlay up to 8 layers at once; on Qualcomm's mid- and low-end platforms, an MDP has only 4 FIFO pipes and can overlay up to 4 layers at once.
In the Android system, the 2D image the user finally sees is actually composited from multiple layers, such as a wallpaper layer, a status bar layer, a navigation bar layer, an APP layer, a floating-ball layer, a video layer, and so on. The number of these layers differs across application scenarios: some applications display an interface with only 4 layers, so the MDP can be chosen for composition, while the interfaces displayed by other applications have 7 layers or even more, and relying on a single MDP alone to do the overlay composition cannot complete the task.
The GPU, in contrast, is a general-purpose graphics processor whose graphics capabilities are far stronger than those of the MDP: in addition to 2D image processing it can also perform 3D image processing, special effects, and so on, but its power consumption is relatively high, and it can overlay many layers at once.
Because the MDP consumes less power than the GPU but can overlay fewer layers at one time, handling the different types of layers to be synthesized differently, by first calling the higher-power GPU to composite the simple interface layers and then calling the lower-power MDP to composite the video layer and the layer to be mixed, not only saves the power consumed by multi-layer overlay but also reduces the number of layers the MDP has to composite, since some of the layers to be synthesized (such as the interface layers) are composited by the GPU first and the resulting layer to be mixed and the video layer are then composited by the MDP. This lowers the requirement on the number of layers the MDP must overlay at one time, increases the layer composition speed when the number of FIFO pipes available to the MDP is limited, makes the layer composition approach more reasonable, and, thanks to the lower power consumption, also improves the battery life of the electronic device and the user experience.
S104: Display the target image on the video playback interface.
The target image is the image obtained by compositing the layers to be synthesized and is used for display on the video playback interface. By compositing and then displaying the image to be shown on the video playback interface, playback of the video content on the video playback interface is achieved, and the user can obtain the corresponding control effects by operating the video playback interface; for example, during playback the user can tap the video playback interface to pause playback, or tap the lock control to lock the video playback interface so that it no longer responds to the user's taps and other operations.
In the display method provided by this embodiment, the higher-power GPU is first called to composite the simple interface layers, and the lower-power MDP is then called to composite the video layer and the layer to be mixed. This not only saves the power consumed by multi-layer overlay but also reduces the number of layers the MDP needs to composite, since some of the layers to be synthesized (such as the interface layers) are composited by the GPU first and the resulting layer to be mixed and the video layer are then composited by the MDP. It thus lowers the requirement on the number of layers the MDP can overlay at one time, increases the layer composition speed when the number of FIFO pipes available to the MDP is limited, makes the layer composition approach more reasonable, and, with the lower power consumption, improves the battery life of the electronic device and the user experience.
Referring to Fig. 4, an embodiment of the present application provides a display method applied to an electronic device. The method sets the image composition strategy reasonably, saving the power of multi-layer overlay and increasing composition speed. Specifically, the method includes S201 to S205.
S201: Obtain the layers to be composed.
S202: Invoke the graphics processor to compose the interface layer to obtain a to-be-mixed layer.
S203: Store the to-be-mixed layer in a frame buffer module.
The frame buffer module is the frame buffer (FrameBuffer). After the graphics processor composes layers, the composed layer can be stored in the frame buffer module; specifically, after the graphics processor composes the interface layer to obtain the to-be-mixed layer, the to-be-mixed layer is stored in the frame buffer module.
S204: Invoke the multimedia display processor to compose the multimedia layer and the to-be-mixed layer to obtain a target image.
The MDP composes the to-be-mixed layer obtained from the GPU composition in the frame buffer module with the multimedia layers that have not yet been composed, obtaining the target image.
S205: Display the target image on the video playback interface.
Driven by the display driver, the target image in the frame buffer module is displayed on the video playback interface.
For parts of the above steps not described in detail, reference may be made to the foregoing embodiments, and details are not repeated here.
In addition, the frame buffer module may include a temporary frame buffer module. After the graphics processor composes the interface layer to obtain the to-be-mixed layer, the to-be-mixed layer is stored in the temporary frame buffer module, and after the target image is displayed on the video playback interface, the to-be-mixed layer stored in the temporary frame buffer module is retained. Thus, when the interface layer does not change between the previous frame and the next frame, the to-be-mixed layer previously stored in the temporary frame buffer module can be used as the to-be-mixed layer for the current composition, and the interface layer, being identical to the previous one, does not need to be composed again, further saving overlay power and increasing composition speed. Specifically, referring to Fig. 5, an embodiment of the present application provides a display method applied to an electronic device, which sets the image composition strategy reasonably, saving the power of multi-layer overlay and increasing composition speed. The method includes S301 to S307:
S301: Obtain the layers to be composed.
S302: Determine whether the interface layer is the same as the interface layer obtained last time.
After the interface layer was obtained last time, it was stored in a specific location, such as the temporary frame buffer module, so that it can be compared with the interface layer obtained this time; the interface layer stored this time is likewise used for comparison with the interface layer obtained next time. Thus, before invoking the GPU to compose the interface layer, whether the interface layer obtained this time needs to be composed can be decided by checking whether it is the same as the one obtained last time. The specific location may be any location in the memory of the electronic device and is not limited here; it is understood that the specific location has a specific address. Specifically, after the current interface layer is obtained, the previously obtained interface layer is fetched from the specific location and compared with the current one to determine whether they are the same.
In some implementations, the electronic device may store not only the interface layer obtained last time but also interface layers obtained in several earlier acquisitions, so that before invoking the GPU it can decide whether the current interface layer needs to be composed by checking whether it matches any of them. Specifically, as one approach, the electronic device may determine whether the interface layer among the layers to be composed (i.e., the interface layer obtained this time) is the same as a first interface layer; if not, it further determines whether it is the same as a second interface layer; if it differs from the second interface layer as well, the graphics processor is invoked to compose the interface layer to obtain the to-be-mixed layer. Here, the first interface layer is the interface layer obtained before the current one, and the second interface layer is the interface layer obtained before the first interface layer. In this way, if the current interface layer differs from the last one (the first interface layer) but matches the one obtained before that (the second interface layer), the current interface layer still does not need to be composed, saving the power of composition. In this embodiment, after determining whether the interface layer is the same as the interface layer obtained last time, the method may include:
If they are the same, S303 may be executed;
If they are different, S304 may be executed.
S303: Use the to-be-mixed layer previously stored in the temporary frame buffer module as the to-be-mixed layer for the current composition.
The previously stored to-be-mixed layer remains in the temporary frame buffer module after the last composition, i.e., it is not deleted from the temporary frame buffer module before the current composition. When the current interface layer is the same as the last one, the previously stored to-be-mixed layer is therefore used as the to-be-mixed layer for the current composition. By reusing the stored to-be-mixed layer when the interface layer is unchanged, the number of times the same interface is composed is reduced, the corresponding composition power is reduced accordingly, and the power of multi-layer overlay is further saved.
S304: Invoke the graphics processor to compose the interface layer to obtain a to-be-mixed layer.
S305: Store the to-be-mixed layer in the temporary frame buffer module.
The to-be-mixed layer is stored in the temporary frame buffer module and retained there after the target image is displayed on the video playback interface, so that it survives the current composition. In the next composition, if the interface layer is the same as the one obtained last time, the previously obtained to-be-mixed layer can be used as the to-be-mixed layer for that composition without composing the interface layer again. Because the to-be-mixed layer is retained in the temporary frame buffer module, reusing it when the interface layer is unchanged reduces the number of times the same interface is composed and the corresponding power consumption, further saving the power of multi-layer overlay.
As one implementation, if the current interface layer differs from the one obtained last time, the graphics processor is invoked to compose the interface layer to obtain the to-be-mixed layer for the current composition, and the to-be-mixed layer previously stored in the temporary frame buffer module is deleted. Replacing the previously stored to-be-mixed layer with the current one when the interface layer has changed optimizes storage, reduces storage pressure, and keeps composition running efficiently.
As another implementation, if the current interface layer differs from the one obtained last time, the graphics processor is invoked to compose the interface layer to obtain the to-be-mixed layer for the current composition, and the previously stored to-be-mixed layer may instead be kept rather than deleted, so that the interface layer obtained next time can be compared with both the last and the current interface layers; then, if the interface layer obtained next time matches the last one, the previously obtained to-be-mixed layer can still be used as the to-be-mixed layer for that composition.
In an exemplary implementation, the temporary frame buffer module may store two or more to-be-mixed layers simultaneously. When the current interface layer differs from the first interface layer but matches the second interface layer, the to-be-mixed layer corresponding to the second interface layer can be used as the to-be-mixed layer for the current composition. The interface layer can then be compared not only with the one obtained last time but also with those obtained several acquisitions earlier, so that when the interface layer changes for only a few frames and then reverts to its earlier appearance, no new to-be-mixed layer has to be composed, further saving overlay power and improving composition efficiency. Furthermore, when the interface layer obtained next time is compared with the last and the current interface layers, it can first be compared with the most recently obtained one (i.e., the current one) and only compared with the earlier one if they differ. Since the interface layer is more likely to stay unchanged across consecutive frames than across non-consecutive frames, comparing with the most recent interface layer first further improves composition efficiency, as shown in the sketch below.
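A minimal Kotlin sketch of the reuse logic in S302 to S305 follows. It keeps the last few to-be-mixed layers and compares against the most recent interface layer first; the cache depth, the use of a content hash as the comparison key, and all names are assumptions made for illustration only.

```kotlin
// Sketch of interface-layer change detection with a small temporary frame buffer cache.
// Keys, cache depth and names are illustrative assumptions, not real Android objects.

data class InterfaceLayer(val contentHash: Int)
data class ToBeMixedLayer(val source: InterfaceLayer)

class TempFrameBufferCache(private val capacity: Int = 2) {
    // Most recent entry first, so unchanged consecutive frames hit on the first comparison.
    private val entries = ArrayDeque<ToBeMixedLayer>()

    fun lookup(current: InterfaceLayer): ToBeMixedLayer? =
        entries.firstOrNull { it.source.contentHash == current.contentHash }

    fun store(layer: ToBeMixedLayer) {
        entries.addFirst(layer)
        while (entries.size > capacity) entries.removeLast()
    }
}

fun composeInterfaceOnGpu(layer: InterfaceLayer): ToBeMixedLayer = ToBeMixedLayer(layer)

fun toBeMixedLayerForThisFrame(current: InterfaceLayer, cache: TempFrameBufferCache): ToBeMixedLayer {
    // S302/S303: reuse the cached result if the interface layer is unchanged.
    cache.lookup(current)?.let { return it }
    // S304/S305: otherwise compose on the GPU and retain the result for later frames.
    return composeInterfaceOnGpu(current).also { cache.store(it) }
}
```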
S306: Invoke the multimedia display processor to compose the multimedia layer and the to-be-mixed layer to obtain a target image.
S307: Display the target image on the video playback interface.
For parts of the above steps not described in detail, reference may be made to the foregoing embodiments, and details are not repeated here.
In addition, the multimedia layer may include barrage layers (bullet-comment layers) and a video layer. Because the data volume of a barrage layer is small, composing barrage layers consumes little power, yet the number of barrage layers is often large while the number of layers the MDP can overlay at once is limited. Therefore, when the number of barrage layers is large, the GPU can be invoked to compose them into a to-be-mixed barrage layer, which maintains composition speed without excessive power consumption. Specifically, referring to Fig. 6, a method is shown for selecting different composition strategies according to the number of barrage layers when composing the multimedia layer and the to-be-mixed layer; the method includes S401 to S405:
S401: Determine whether the number of barrage layers is greater than a preset number.
The multimedia layer includes barrage layers and a video layer. A barrage layer is a layer of the video playback interface that contains barrage, and the video layer is the layer corresponding to the video content of the video playback interface. Specifically, referring to Fig. 7, the multimedia layer 12 includes, for example, a barrage layer 121 and a video layer 122.
As one implementation, the number of barrage layers is the number of barrages corresponding to the video image obtained this time at the current moment, i.e., one barrage corresponds to one barrage layer. The more barrages there are, the more barrage layers there are; and when there are many barrages, composing all of them with the multimedia display processor makes it difficult to output the target image in time for display, because the number of layers the MDP can overlay at once is limited. By checking whether the number of barrages exceeds the preset number, the GPU, which can overlay more layers at once, can be invoked to compose the barrage layers when the number is exceeded, increasing composition speed; and because the data volume of barrage layers is small, this does not cause excessive power consumption.
In one implementation, the preset number may be set according to the number of FIFO pipes of the MDP; specifically, the preset number may be equal to or less than the number of FIFO pipes of the MDP. By checking whether the number of barrage layers is greater than the preset number, the GPU can be invoked to compose the barrage layers when that number exceeds or approaches the number of FIFO pipes, so that composition is arranged more reasonably given the limited number of layers the MDP can overlay at once, composition speed is guaranteed, and the MDP is prevented from failing to finish composing a frame in time and harming the user experience.
In another implementation, the preset number may be determined according to the number of free FIFO pipes in the MDP; specifically, the preset number may be equal to or less than the number of free FIFO pipes in the MDP. By checking whether the number of barrage layers is greater than the preset number, the GPU can be invoked to compose the barrage layers when that number exceeds or approaches the number of free FIFO pipes, while the MDP is still invoked to compose the barrage layers when their number is smaller than the number of free FIFO pipes. Composition is thus arranged more reasonably given the limited number of layers the MDP can overlay at once, guaranteeing composition speed at low composition power.
In yet another implementation, the preset number may also be set arbitrarily, e.g. 8 or 3, which is not limited here.
In this embodiment, after determining whether the number of barrage layers is greater than the preset number, the method may include:
If it is not greater, S402 may be executed;
If it is greater, S403 may be executed.
S402: Invoke the multimedia display processor to compose the multimedia layer and the to-be-mixed layer to obtain a target image.
As one implementation, if the number of barrage layers is not greater than the preset number, the multimedia display processor alone may be invoked to compose the multimedia layer and the to-be-mixed layer to obtain the target image; i.e., when there are not many barrage layers, the MDP can still be used for composition, saving overlay power.
As another implementation, if the number of barrage layers is not greater than the preset number, the composition strategy for the barrage layers may be selected according to the number of FIFO pipes of the multimedia display processor. For example, if the number of free FIFO pipes in the MDP is not less than the number of barrage layers, the MDP alone may be invoked to compose the barrage layers; if the number of free FIFO pipes in the MDP is less than the number of barrage layers, the GPU may be invoked to compose the barrage layers.
Further, as one approach, the GPU may be invoked to compose all the barrage layers, increasing composition speed. As another approach, the GPU may be invoked to compose only the barrage layers in excess of the number of free FIFO pipes, while the MDP composes the remaining ones; for example, with 2 free pipes and 3 barrage layers, the GPU composes the 1 extra layer and the MDP composes the other 2. This makes full use of the free FIFO pipes and saves overlay power, and because the MDP is used whenever it can actually perform the overlay, excessive power consumption is also avoided. A sketch of this selection logic is given after this paragraph.
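The choice between keeping all barrage layers on the MDP, handing them all to the GPU, or splitting them can be sketched as follows. This is a hedged Kotlin illustration: the names, the BarragePlan type, and the pipe counts are assumptions, and a real implementation would obtain the free pipe count from the platform's composer.

```kotlin
// Sketch of the barrage composition strategy selection (illustrative names and counts).

data class Layer(val name: String)

sealed class BarragePlan {
    data class MdpOnly(val onMdp: List<Layer>) : BarragePlan()
    data class GpuOnly(val onGpu: List<Layer>) : BarragePlan()
    data class Split(val onMdp: List<Layer>, val onGpu: List<Layer>) : BarragePlan()
}

fun planBarrageComposition(barrageLayers: List<Layer>, freeFifoPipes: Int): BarragePlan =
    when {
        // Few barrages: the MDP's free pipes suffice, keep everything on the low-power MDP.
        barrageLayers.size <= freeFifoPipes -> BarragePlan.MdpOnly(barrageLayers)
        // No free pipes at all: the GPU composes every barrage layer into one to-be-mixed barrage layer.
        freeFifoPipes <= 0 -> BarragePlan.GpuOnly(barrageLayers)
        // Otherwise fill the free pipes with barrage layers and hand the excess to the GPU.
        else -> BarragePlan.Split(
            onMdp = barrageLayers.take(freeFifoPipes),
            onGpu = barrageLayers.drop(freeFifoPipes)
        )
    }

fun main() {
    val barrages = listOf(Layer("b1"), Layer("b2"), Layer("b3"))
    println(planBarrageComposition(barrages, freeFifoPipes = 2))
    // Split(onMdp=[Layer(name=b1), Layer(name=b2)], onGpu=[Layer(name=b3)])
}
```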
S403: Invoke the graphics processor to compose the barrage layers to obtain a to-be-mixed barrage layer.
If the number of barrage layers is greater than the preset number, the graphics processor may be invoked to compose the barrage layers to obtain the to-be-mixed barrage layer. In one implementation, the preset number is set according to the number of FIFO pipes of the MDP, e.g. 4 or 7. If the number of barrage layers exceeds the preset number, the MDP may be unable to overlay the layers in time, so the result cannot be shown on the video playback interface in time for the user to watch. Therefore, when the number of barrage layers exceeds the preset number, invoking the GPU, which can overlay more layers at once, increases composition speed so that the target image is composed and displayed on the video playback interface in time, preserving the viewing experience; and since the data volume of barrage layers is small, using the GPU does not cause excessive power consumption.
S404: Store the to-be-mixed barrage layer in a non-temporary frame buffer module.
The frame buffer module may further include a non-temporary frame buffer module, whose stored layers are deleted after the current composition. Because barrage layers change dynamically, e.g. new barrages appear or barrages scroll across the video playback interface, the barrage layers of the previous and next frames usually differ. Storing the to-be-mixed barrage layer in the non-temporary frame buffer module therefore allows it to be deleted from that module after the current composition, which lowers the storage requirement on the temporary frame buffer module and improves MDP operating efficiency.
S405: Invoke the multimedia display processor to compose the to-be-mixed barrage layer, the video layer and the to-be-mixed layer to obtain a target image.
In one implementation, the to-be-mixed layer is fetched from the temporary frame buffer module and composed with the to-be-mixed barrage layer in the non-temporary frame buffer module and the video layer, yielding the final target image to be displayed, which is then displayed on the video playback interface.
By processing barrage layers and the video layer separately, this embodiment invokes the GPU to handle the barrage layers, whose data volume is small, to obtain the to-be-mixed barrage layer, and then invokes the MDP to compose the to-be-mixed barrage layer, the video layer and the to-be-mixed layer to obtain the target image. Building on the foregoing embodiments, composition speed is thus guaranteed without adding much power consumption.
For parts of the above steps not described in detail, reference may be made to the foregoing embodiments, and details are not repeated here.
In addition, the frame buffer module may further include a non-temporary frame buffer module: when the number of barrage layers is greater than the preset number, the GPU is invoked to compose the barrage layers to obtain the to-be-mixed barrage layer, which is stored in the non-temporary frame buffer module, and after the target image is displayed on the video playback interface, the layers stored in the non-temporary frame buffer module are deleted. Because the barrage layers of the previous and next frames usually differ, deleting the to-be-mixed barrage layer stored for the previous frame at the current composition lowers the storage requirement on the temporary frame buffer module and improves MDP operating efficiency.
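A short Kotlin sketch of the two buffer lifecycles just described: the to-be-mixed interface layer survives the frame so it can be reused, while the to-be-mixed barrage layer is dropped once the frame is displayed. The module names are assumptions and do not correspond to real Android buffer objects.

```kotlin
// Sketch of the per-frame lifecycle of the temporary and non-temporary frame buffer modules.

data class Layer(val name: String)

class FrameBufferModules {
    var temporary: Layer? = null       // retained across frames (cached to-be-mixed interface layer)
    var nonTemporary: Layer? = null    // valid for the current frame only (to-be-mixed barrage layer)

    fun onFrameDisplayed() {
        // After S307 / S405: keep the temporary buffer, clear the non-temporary one,
        // because the barrage layers of consecutive frames usually differ.
        nonTemporary = null
    }
}

fun main() {
    val buffers = FrameBufferModules()
    buffers.temporary = Layer("to-be-mixed interface layer")
    buffers.nonTemporary = Layer("to-be-mixed barrage layer")
    buffers.onFrameDisplayed()
    println("retained: ${buffers.temporary}, cleared: ${buffers.nonTemporary}")
}
```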
Referring to Fig. 8, a structural block diagram of a display apparatus provided by an embodiment of the present application is shown. The display apparatus 800 may include an image acquisition module 810, a first composition module 820, a second composition module 830 and an image display module 840, where:
the image acquisition module 810 is configured to obtain the layers to be composed, which include an interface layer of a video playback interface and a multimedia layer corresponding to the video played on the video playback interface;
the first composition module 820 is configured to invoke the graphics processor to compose the interface layer to obtain a to-be-mixed layer;
the second composition module 830 is configured to invoke the multimedia display processor to compose the multimedia layer and the to-be-mixed layer to obtain a target image;
the image display module 840 is configured to display the target image on the video playback interface.
Further, the display apparatus 800 also includes a first storage module, where:
the first storage module is configured to store the to-be-mixed layer in a frame buffer module.
Further, the frame buffer module includes a temporary frame buffer module, and the first storage module includes a first storage unit and a layer retention unit, where:
the first storage unit is configured to store the to-be-mixed layer in the temporary frame buffer module;
the layer retention unit is configured to retain the to-be-mixed layer stored in the temporary frame buffer module after the target image is displayed on the video playback interface.
Further, the first composition module includes a first judging unit and a first composition unit, where:
the first judging unit is configured to determine whether the interface layer is the same as the interface layer obtained last time;
the first composition unit is configured to, if they are different, invoke the graphics processor to compose the interface layer to obtain the to-be-mixed layer.
Further, the first composition module also includes a first determining unit, where:
the first determining unit is configured to, if they are the same, use the to-be-mixed layer previously stored in the temporary frame buffer module as the to-be-mixed layer for the current composition.
Further, the multimedia layer includes barrage layers and a video layer, and the second composition module includes a second judging unit, a second composition unit and a third composition unit, where:
the second judging unit is configured to determine whether the number of barrage layers is greater than a preset number;
the second composition unit is configured to, if it is greater, invoke the graphics processor to compose the barrage layers to obtain a to-be-mixed barrage layer;
the third composition unit is configured to invoke the multimedia display processor to compose the to-be-mixed barrage layer, the video layer and the to-be-mixed layer to obtain a target image.
Further, the frame buffer module includes a non-temporary frame buffer module, and the second composition unit includes:
a second storage subunit configured to store the to-be-mixed barrage layer in the non-temporary frame buffer module; and
a layer deletion subunit configured to delete the layers stored in the non-temporary frame buffer module after the target image is displayed on the video playback interface.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the apparatus and modules described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in the present application, the coupling between modules may be electrical, mechanical, or of other forms.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Referring to Fig. 9, a structural block diagram of an electronic device provided by an embodiment of the present application is shown. The electronic device 900 may be a smartphone, tablet computer, e-book reader or other electronic device capable of running applications. The electronic device 900 of the present application may include one or more of the following components: a processor 910, a memory 920, a graphics processor 930, a multimedia display processor 940, and one or more applications, where the one or more applications may be stored in the memory 920 and configured to be executed by the one or more processors 910, the one or more programs being configured to perform the methods described in the foregoing method embodiments.
The processor 910 may include one or more processing cores. The processor 910 connects the various parts of the electronic device 900 through various interfaces and lines, and performs the various functions of the electronic device 900 and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 920 and invoking data stored in the memory 920. Optionally, the processor 910 may be implemented in at least one hardware form among digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 910 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem and the like, where the CPU mainly handles the operating system, user interface and applications, the GPU renders and draws display content, and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 910 and may instead be implemented by a separate communication chip.
The memory 920 may include random access memory (RAM) or read-only memory (ROM). The memory 920 may be used to store instructions, programs, code, code sets or instruction sets. The memory 920 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as touch, audio playback and image playback), instructions for implementing the following method embodiments, and so on; the data storage area may also store data created by the terminal in use (such as contacts, audio and video data, and chat records).
The graphics processor 930 is a general-purpose graphics processor, far more capable than the MDP: besides 2D image processing it can also perform 3D processing, special effects, and so on, and it can overlay many layers in one pass, but its power consumption is relatively high.
The multimedia display processor 940 is a dedicated display image processing unit capable of conventional 2D image processing; its main advantage is low power consumption, but its cost is higher. The more layers an MDP can overlay in one pass, the more FIFO pipes it needs internally. On Qualcomm's high-end platforms an MDP has 8 FIFO pipes and can compose and overlay at most 8 layers at once; on Qualcomm's mid-range and low-end platforms an MDP has only 4 FIFO pipes and can compose at most 4 layers at once.
Referring to Fig. 10, a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application is shown. The computer-readable storage medium 1000 stores program code that can be invoked by a processor to perform the methods described in the foregoing method embodiments.
The computer-readable storage medium 1000 may be electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. Optionally, the computer-readable storage medium 1000 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 1000 has storage space for program code 1010 that performs any of the method steps of the above methods. The program code can be read from or written to one or more computer program products. The program code 1010 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate, rather than limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (20)
- A display method, applied to an electronic device, the electronic device comprising a multimedia display processor and a graphics processor, the method comprising: obtaining layers to be composed, the layers to be composed comprising an interface layer of a video playback interface and a multimedia layer corresponding to a video played on the video playback interface; invoking the graphics processor to compose the interface layer to obtain a to-be-mixed layer; invoking the multimedia display processor to compose the multimedia layer and the to-be-mixed layer to obtain a target image; and displaying the target image on the video playback interface.
- The method according to claim 1, wherein after invoking the graphics processor to compose the interface layer to obtain the to-be-mixed layer, the method further comprises: storing the to-be-mixed layer in a frame buffer module.
- The method according to claim 2, wherein the frame buffer module comprises a temporary frame buffer module, and storing the to-be-mixed layer in the frame buffer module comprises: storing the to-be-mixed layer in the temporary frame buffer module; and after the target image is displayed on the video playback interface, retaining the to-be-mixed layer stored in the temporary frame buffer module.
- The method according to claim 3, wherein invoking the graphics processor to compose the interface layer to obtain the to-be-mixed layer comprises: determining whether the interface layer is the same as an interface layer obtained last time; and if not, invoking the graphics processor to compose the interface layer to obtain the to-be-mixed layer.
- The method according to claim 3, wherein invoking the graphics processor to compose the interface layer to obtain the to-be-mixed layer comprises: determining whether the interface layer is the same as any interface layer obtained in a specified number of previous acquisitions; and if it is different from all of them, invoking the graphics processor to compose the interface layer to obtain the to-be-mixed layer.
- The method according to claim 5, wherein determining whether the interface layer is the same as any interface layer obtained in a specified number of previous acquisitions and, if it is different from all of them, invoking the graphics processor to compose the interface layer to obtain the to-be-mixed layer, comprises: determining whether the interface layer is the same as a first interface layer, the first interface layer being an interface layer obtained before the interface layer; if not, determining whether the interface layer is the same as a second interface layer, the second interface layer being an interface layer obtained before the first interface layer; and if not, invoking the graphics processor to compose the interface layer to obtain the to-be-mixed layer.
- The method according to any one of claims 4 to 6, wherein after invoking the graphics processor to compose the interface layer to obtain the to-be-mixed layer, the method further comprises: deleting, from the temporary frame buffer module, the layers preceding the to-be-mixed layer obtained this time.
- The method according to claim 6, wherein after determining whether the interface layer is the same as the second interface layer, the method further comprises: if the interface layer is the same as the second interface layer, using the to-be-mixed layer corresponding to the second interface layer as the to-be-mixed layer for the current composition.
- The method according to claim 4, wherein after determining whether the interface layer is the same as the interface layer obtained last time, the method further comprises: if they are the same, using the to-be-mixed layer previously stored in the temporary frame buffer module as the to-be-mixed layer for the current composition.
- The method according to claim 2, wherein the multimedia layer comprises barrage layers and a video layer, and invoking the multimedia display processor to compose the multimedia layer and the to-be-mixed layer to obtain the target image comprises: determining whether the number of barrage layers is greater than a preset number; if so, invoking the graphics processor to compose the barrage layers to obtain a to-be-mixed barrage layer; and invoking the multimedia display processor to compose the to-be-mixed barrage layer, the video layer and the to-be-mixed layer to obtain the target image.
- The method according to claim 10, wherein after determining whether the number of barrage layers is greater than the preset number, the method further comprises: if not, invoking the multimedia display processor to compose the multimedia layer and the to-be-mixed layer to obtain the target image.
- The method according to claim 10, wherein after determining whether the number of barrage layers is greater than the preset number, the method further comprises: if not, determining whether the number of free FIFO pipes in the multimedia display processor is less than the number of barrage layers; if the number of free FIFO pipes in the multimedia display processor is not less than the number of barrage layers, invoking the multimedia display processor to compose the barrage layers to obtain a to-be-mixed barrage layer; and if the number of free FIFO pipes in the multimedia display processor is less than the number of barrage layers, invoking the graphics processor to compose the barrage layers to obtain the to-be-mixed barrage layer.
- The method according to claim 12, wherein, if the number of free FIFO pipes in the multimedia display processor is less than the number of barrage layers, invoking the graphics processor to compose the barrage layers to obtain the to-be-mixed barrage layer comprises: if the number of free FIFO pipes in the multimedia display processor is less than the number of barrage layers, invoking the multimedia display processor to compose a number of barrage layers equal to the number of free FIFO pipes to obtain a first to-be-mixed barrage layer; invoking the graphics processor to compose the remaining barrage layers exceeding the number of free FIFO pipes to obtain a second to-be-mixed barrage layer; and composing the first to-be-mixed barrage layer and the second to-be-mixed barrage layer to obtain the to-be-mixed barrage layer.
- The method according to any one of claims 10 to 13, wherein the number of barrage layers is the number of barrages corresponding to the video image obtained this time.
- The method according to claim 10 or 11, wherein the preset number is determined by the number of FIFO pipes of the multimedia display processor.
- The method according to claim 10 or 11, wherein the preset number is determined by the number of free FIFO pipes in the multimedia display processor.
- The method according to claim 10, wherein the frame buffer module comprises a non-temporary frame buffer module, and, after invoking the graphics processor to compose the barrage layers to obtain the to-be-mixed barrage layer when the number is greater, the method further comprises: storing the to-be-mixed barrage layer in the non-temporary frame buffer module; and after the target image is displayed on the video playback interface, deleting the layers stored in the non-temporary frame buffer module.
- A display apparatus, applied to an electronic device, the electronic device comprising a multimedia display processor and a graphics processor, the apparatus comprising: a layer acquisition module configured to obtain layers to be composed, the layers to be composed comprising an interface layer of a video playback interface and a multimedia layer corresponding to a video played on the video playback interface; a first composition module configured to invoke the graphics processor to compose the interface layer to obtain a to-be-mixed layer; a second composition module configured to invoke the multimedia display processor to compose the multimedia layer and the to-be-mixed layer to obtain a target image; and an image display module configured to display the target image on the video playback interface.
- An electronic device, comprising: one or more processors; a memory; a graphics processor; a multimedia display processor; and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method according to any one of claims 1 to 17.
- A computer-readable medium, wherein program code is stored in the computer-readable storage medium, and the program code can be invoked by a processor to perform the method according to any one of claims 1 to 17.