WO2023174322A1 - Layer processing method and electronic device - Google Patents

Layer processing method and electronic device

Info

Publication number: WO2023174322A1
Authority: WO (WIPO PCT)
Prior art keywords: layer, synthesized, layers, target, hwc
Application number: PCT/CN2023/081564
Other languages: English (en), French (fr)
Inventor: 何书杰
Original assignee: Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2023174322A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00: Details not covered by groups G06F 3/00 - G06F 13/00 and G06F 21/00
    • G06F 1/26: Power supply means, e.g. regulation thereof
    • G06F 1/32: Means for saving power
    • G06F 1/3203: Power management, i.e. event-based initiation of a power-saving mode
    • G06F 1/3234: Power saving characterised by the action undertaken
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 1/20: Processor architectures; Processor configuration, e.g. pipelining
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00: Indexing scheme for image data processing or generation, in general
    • G06T 2200/24: Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00: Indexing scheme for image data processing or generation, in general
    • G06T 2200/28: Indexing scheme for image data processing or generation, in general involving image processing hardware

Definitions

  • the present application relates to the technical field of electronic equipment, and in particular, to a layer processing method and electronic equipment.
  • this application provides a layer processing method and an electronic device for reducing the power consumption of the electronic device.
  • In a first aspect, a layer processing method is provided, including:
  • determining the synthesis method of each layer to be synthesized, wherein the synthesis method of the target layer among the layers to be synthesized is the HWC synthesis method, and the target layer is a layer corresponding to a virtual display screen, and/or the target layer includes a surface view control with rounded corners;
  • synthesizing each layer to be synthesized using its corresponding synthesis method to obtain a target interface.
  • The layer processing method provided by the embodiments of the present application optimizes the layer synthesis process: layers containing surface view controls with rounded corners, and/or layers corresponding to a virtual display screen, are synthesized using the HWC synthesis method, which reduces the power consumption of the electronic device and mitigates the phenomenon of the device becoming hot and draining power quickly.
  • the target layer is a layer drawn by a target application, and the target application supports the HWC synthesis method.
  • In this way, target layers from target applications that support the HWC synthesis method are synthesized using the HWC synthesis method, which better ensures the synthesis quality of the layers.
  • a target identifier is set in the target application, and the target identifier is used to indicate that the target application supports the HWC synthesis mode.
  • the target application is an application in a predetermined application set.
  • In one implementation, the method further includes: updating the application set. In this way, a more comprehensive set of target applications can be obtained.
  • the method further includes: displaying the target interface.
  • In one implementation, the method further includes: sending the target interface to the target screen projection device.
  • In one implementation, synthesizing each layer to be synthesized using its corresponding synthesis method to obtain a target interface includes:
  • synthesizing each first layer through the GPU to obtain an intermediate layer, and synthesizing the intermediate layer and each second layer among the layers to be synthesized through the HWC to obtain the target interface;
  • or, synthesizing each layer to be synthesized through the HWC to obtain the target interface;
  • wherein the first layer is a layer among the layers to be synthesized whose synthesis method is the GPU synthesis method, and the second layer is a layer whose synthesis method is the HWC synthesis method.
  • In one implementation, after determining the synthesis method of each layer to be synthesized, and before synthesizing each layer using its corresponding synthesis method, the method further includes:
  • if the number of second layers among the layers to be synthesized is greater than the maximum number of layers supported by the HWC, adjusting the synthesis method of some of the second layers so that the number of second layers is less than or equal to the maximum number, wherein the second layer is a layer among the layers to be synthesized whose synthesis method is the HWC synthesis method.
  • In this way, the HWC can complete the synthesis of the second layers in a single synthesis pass, improving the synthesis speed.
  • In a further aspect, a layer processing device is provided, including:
  • a processing module, used to determine the synthesis method of each layer to be synthesized, wherein the synthesis method of the target layer among the layers to be synthesized is the HWC synthesis method, and the target layer is a layer corresponding to a virtual display screen, and/or the target layer includes a surface view control with rounded corners;
  • and to synthesize each layer to be synthesized using its corresponding synthesis method to obtain a target interface.
  • the target layer is a layer drawn by a target application, and the target application supports the HWC synthesis method.
  • a target identifier is set in the target application, and the target identifier is used to indicate that the target application supports the HWC synthesis mode.
  • the target application is an application in a predetermined application set.
  • the processing module is further configured to: update the application collection.
  • the device further includes: a display module configured to display the target interface.
  • the device further includes: a communication module, configured to send the target interface to the target screen projection device.
  • In one implementation, the processing module is specifically used to:
  • synthesize each first layer through the GPU to obtain an intermediate layer, and synthesize the intermediate layer and each second layer among the layers to be synthesized through the HWC to obtain the target interface;
  • or, synthesize each layer to be synthesized through the HWC to obtain the target interface;
  • wherein the first layer is a layer among the layers to be synthesized whose synthesis method is the GPU synthesis method, and the second layer is a layer whose synthesis method is the HWC synthesis method.
  • In one implementation, the processing module is further configured to: after determining the synthesis method of each layer to be synthesized, and before synthesizing each layer using its corresponding synthesis method,
  • if the number of second layers among the layers to be synthesized is greater than the maximum number of layers supported by the HWC, adjust the synthesis method of some of the second layers so that the number of second layers is less than or equal to the maximum number, wherein the second layer is a layer among the layers to be synthesized whose synthesis method is the HWC synthesis method.
  • embodiments of the present application provide an electronic device, including: a memory and a processor.
  • the memory is used to store a computer program; and the processor is used to, when calling the computer program, cause the electronic device to execute the method of the first aspect or any embodiment of the first aspect.
  • embodiments of the present application provide a computer-readable storage medium on which a computer program is stored.
  • When the computer program is executed by a processor, the method described in the first aspect or any embodiment of the first aspect is implemented.
  • embodiments of the present application provide a computer program product, which when the computer program product is run on an electronic device, causes the electronic device to execute the method described in the first aspect or any embodiment of the first aspect.
  • embodiments of the present application provide a chip system, including a processor, where the processor is coupled to a memory, and the processor executes a computer program stored in the memory to implement the method of the first aspect or any embodiment of the first aspect.
  • the chip system may be a single chip or a chip module composed of multiple chips.
  • Figure 1 is a schematic diagram of the system architecture of an electronic device provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of layer synthesis provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of some user interfaces provided by embodiments of the present application.
  • Figure 4 is a schematic diagram of the screencasting process provided by the embodiment of the present application.
  • Figures 5-8 are schematic diagrams of some layer processing processes provided by embodiments of the present application.
  • Figure 9 is a schematic flow chart of a layer processing method provided by an embodiment of the present application.
  • Figure 10 is a schematic flowchart of another layer processing method provided by an embodiment of the present application.
  • Figure 11 is a schematic structural diagram of a layer processing device provided by an embodiment of the present application.
  • Figure 12 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Figure 1 shows a schematic system architecture diagram of an electronic device.
  • Electronic devices can be mobile phones, tablets (pads), laptops, ultra-mobile personal computers (UMPCs), netbooks or game consoles and other devices with display and processing functions.
  • A tablet can be a conventional tablet device, or a 2-in-1 device that integrates at least some of the functions of a laptop.
  • the embodiments of the present application do not place special restrictions on the specific type of the electronic device.
  • the layer processing method provided by the embodiments of the present application can be applied to the above-mentioned electronic devices.
  • the electronic device may include a hardware system and a software system, where the hardware system may include a graphics processing unit (GPU), a memory, a camera, a display screen, and a display controller, as well as other components not shown in Figure 1.
  • Software systems of electronic devices can adopt layered architecture, event-driven architecture, microkernel architecture, microservice architecture or cloud architecture.
  • the software system of the electronic device can be Android system, Linux system, Windows system, Hongmeng system or iOS system, etc.
  • the embodiment of this application takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device.
  • the software system of electronic equipment can be divided into several layers, and the layers communicate through software interfaces.
  • the Android system can be divided from top to bottom into application layer, application framework layer, Android runtime (Android runtime) and system libraries, and kernel layer.
  • the application layer can include a series of applications (sometimes referred to as apps below). As shown in Figure 1, applications can include camera, gallery, calendar, calling, WLAN, Bluetooth, music, video, short message, map, screen mirroring and other applications.
  • the application framework layer provides an application programming interface (API) and programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include the activity manager, window manager, view system, input manager, display manager and decision manager shown in Figure 1, as well as content providers, resource managers, notification managers, etc., which are not shown.
  • the window manager can provide window management service (window manager service, WMS).
  • WMS can be used for window management, window animation management, surface management, and as a transfer station for the input system.
  • the window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, etc.
  • a view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the input manager can provide input management service (IMS), which can be used to manage system input, such as touch screen input, key input, sensor input, etc.
  • IMS takes out events from the input device node and distributes the events to appropriate windows through interaction with WMS.
  • the display manager can provide display management service (display manager service, DMS).
  • DMS manages the global life cycle of displays, determines how to control the logical displays based on the currently connected physical display devices, and sends notifications to the system and applications when the display status changes.
  • the decision manager can provide decision management services, which can determine the horizontal/vertical screen display strategy and display position of the window; and can adjust the horizontal/vertical screen display strategy and display position of the window based on input events fed back by the input manager.
  • Content providers are used to store and retrieve data and make this data accessible to applications.
  • The data can include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager allows applications to display notification information in the status bar, which can be used to convey notification-type messages and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also present notifications that appear in the status bar at the top of the system in the form of charts or scroll-bar text, such as notifications for applications running in the background, or notifications that appear on the screen in the form of dialog windows; for example, text information is prompted in the status bar, a beep sounds, the electronic device vibrates, or the indicator light flashes.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and application framework layer run in virtual machines.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform object life cycle management, stack management, thread management, security and exception management, and garbage collection and other functions.
  • System libraries can include multiple functional modules, for example: a surface manager (surfaceflinger), media libraries, three-dimensional (3D) graphics processing libraries (for example, OpenGL ES), two-dimensional (2D) graphics engines (for example, SGL), a hardware composer (HWC), etc.
  • the surface manager is used to manage the display subsystem, which can provide the fusion of 2D and 3D layers for multiple applications.
  • HWC can call the display controller to perform layer synthesis, and then transfer the synthesized layer to the display screen for display.
  • the maximum number of layers supported by HWC for synthesis corresponds to the number of hardware channels in the display controller.
  • For example, if the display controller includes 8 hardware channels, the maximum number of layers supported by the HWC for synthesis is 8; that is, the HWC can synthesize up to 8 layers at a time.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition and layer processing.
  • 2D Graphics Engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software and is used to provide core system services for the Android kernel, such as security services, network services, memory management services, detection management services, and driver models.
  • the kernel layer can include display drivers, camera drivers, audio drivers, sensor drivers, GPU drivers, etc.
  • A layer contains elements such as text or graphics.
  • the interface displayed by an electronic device is generally composed of multiple layers.
  • the desktop display interface of the electronic device shown in Figure 2 includes four layers: status bar, navigation bar, wallpaper and icons; these four layers are sent to the display screen for display after layer synthesis.
  • the synthesis methods of layers include: GPU synthesis method and HWC synthesis method.
  • in the GPU synthesis method, the contents of each layer are first rendered into a temporary buffer by the GPU, and the contents of the temporary buffer are then sent to the display screen through the HWC for display;
  • the HWC synthesis method is to use HWC to synthesize each layer and send it to the display screen for display.
  • in the GPU synthesis method, there is more interaction between the GPU and memory; therefore, compared with the GPU synthesis method, the HWC synthesis method generally consumes less power.
  • the surface manager (surfaceflinger) in Figure 1 above can collect the layers drawn by each application and provide the layer list to the HWC; the HWC can determine the synthesis method of each layer in the layer list based on its hardware capabilities.
  • for layers whose synthesis method is the GPU synthesis method, surfaceflinger can first have them synthesized into one layer (referred to here as the intermediate layer) through the GPU; the HWC then further synthesizes the intermediate layer with the other layers (whose synthesis method is the HWC synthesis method), and the resulting target interface is sent to the display screen for display.
  • surfaceflinger determines the composition method of each layer together with the HWC: surfaceflinger can set a desired composition method for each layer (for example, it can request that every layer be synthesized through the HWC composition method); the HWC can adjust the composition method of certain layers according to the capabilities of the display controller; and surfaceflinger can decide whether to accept the adjusted composition method for the corresponding layers and then notify the HWC.
  • Each window (window) created by the application corresponds to a drawing surface (surface).
  • the view can be drawn on the surface.
  • Each drawing surface corresponds to a layer; that is, a layer can be drawn on each surface.
  • Each window can include multiple view controls.
  • the main difference between surfaceview and ordinary view is that ordinary views in the same window share the surface corresponding to the window.
  • A surfaceview does not share a surface with its host window; it has an independent surface, that is, the surfaceview corresponds to an independent layer (hereinafter referred to as the surfaceview layer).
  • the drawing and refreshing process of the surfaceview layer is not restricted by the host window and can be controlled freely. Therefore, the surfaceview layer is suitable for drawing user interfaces with high refresh rate requirements such as videos or games.
  • the user opens a game application in a floating window.
  • the game application has high refresh rate requirements, so the surfaceview control can be used in the floating window to draw the user interface of the game application.
  • the surfaceview layer displayed in the floating window can use a right-angle style; to improve aesthetics, as shown in (b) in Figure 3, the surfaceview layer displayed in the floating window can also use rounded corners.
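As a minimal illustrative sketch (not part of the patent text), an application can request rounded corners for a view on Android through ViewOutlineProvider; note that whether the outline clip actually affects a SurfaceView's independently composited buffer varies across Android versions:

```java
import android.graphics.Outline;
import android.view.SurfaceView;
import android.view.View;
import android.view.ViewOutlineProvider;

public final class RoundedCorners {
    // Clip a SurfaceView (or any View) to a rounded-rect outline.
    // Caveat: on older Android releases the outline clip may not apply to the
    // SurfaceView's own layer, which is composited independently of the window.
    public static void apply(SurfaceView view, final float radiusPx) {
        view.setOutlineProvider(new ViewOutlineProvider() {
            @Override
            public void getOutline(View v, Outline outline) {
                outline.setRoundRect(0, 0, v.getWidth(), v.getHeight(), radiusPx);
            }
        });
        view.setClipToOutline(true);
    }
}
```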
  • the displays supported by the system include physical displays and virtual displays.
  • the physical displays can include built-in displays of electronic devices and external displays.
  • The built-in display screen may be, for example, a liquid crystal display (LCD), and the external display may be, for example, a television connected through a high-definition multimedia interface (HDMI).
  • the virtual display screen does not have an actual physical device. It can be created through the display manager to implement functions such as screen recording or screen casting.
  • the mobile phone can establish a short-range communication connection with the laptop and project the opened video to the laptop, so that the user can watch the video on the laptop.
  • When casting the screen, the mobile phone can create a virtual display screen, draw the video interface to be cast onto the virtual display screen, and then transfer it to the computer for display.
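By way of a hedged sketch (the name, dimensions and flag are placeholders, not taken from the patent), an application-level virtual display can be created through DisplayManager with a Surface, here backed by an ImageReader; real screen-casting stacks typically use MediaProjection instead, as sketched further below:

```java
import android.content.Context;
import android.graphics.PixelFormat;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.ImageReader;

public final class VirtualDisplayDemo {
    // Creates a virtual display whose composed output lands in an ImageReader.
    public static VirtualDisplay create(Context context) {
        DisplayManager dm =
                (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
        ImageReader reader =
                ImageReader.newInstance(1080, 1920, PixelFormat.RGBA_8888, 2);
        return dm.createVirtualDisplay(
                "demo-virtual-display",           // placeholder name
                1080, 1920, 320,                  // width, height, densityDpi
                reader.getSurface(),              // composed layers are written here
                DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY);
    }
}
```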
  • Currently, both the surfaceview layer using the rounded-corner style and the layer corresponding to the virtual display screen shown in Figure 4 are synthesized using the GPU synthesis method during layer synthesis.
  • the surfaceview layer is usually used to draw user interfaces with high refresh rate requirements.
  • so the power consumption of the electronic device is relatively high; the electronic device also consumes more power when drawing to a virtual display screen for screen casting; and the GPU synthesis method further increases power consumption. Therefore, in actual use, electronic devices are prone to becoming hot and draining power quickly in these two scenarios.
  • embodiments of the present application provide a technical solution that mainly reduces device power consumption by optimizing the layer processing process, mitigating the phenomenon of electronic devices becoming hot and draining power quickly.
  • the layer processing process in the embodiment of this application is described below.
  • Figure 5 exemplarily shows a schematic diagram of a layer processing process.
  • the layers to be synthesized corresponding to the mobile phone display include: layer 11, layer 12, layer 13 and layer 14, among which layer 14 is the surfaceview layer, and the surfaceview control in this surfaceview layer has rounded corners.
  • For layer 14, its synthesis method can be set to the HWC synthesis method.
  • relevant synthesis strategies can be used to determine the synthesis method of the layer.
  • the synthesis method of the layer can be determined based on factors such as the size of the layer and the degree of content change.
  • for example, a layer with a relatively small amount of data or whose content rarely changes can use the GPU synthesis method, while a layer with a large amount of data or a high refresh-rate requirement can use the HWC synthesis method.
  • the synthesis mode of layer 11 and layer 12 can be set to the GPU synthesis mode, and the synthesis mode of layer 13 can be set to the HWC synthesis mode.
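The patent does not pin down a concrete selection policy, so the following is only a hypothetical sketch of such a strategy; the Layer fields and the threshold are invented for illustration:

```java
enum CompositionMethod { GPU, HWC }

final class Layer {
    // Hypothetical fields standing in for the factors the text mentions.
    long bufferBytes;            // amount of data in the layer
    boolean contentChanging;     // whether content changes frame to frame
    boolean roundedSurfaceView;  // surfaceview control with rounded corners
    boolean onVirtualDisplay;    // layer belongs to a virtual display
}

final class CompositionPolicy {
    // Target layers always take the HWC path; small or static layers take GPU.
    static CompositionMethod choose(Layer l) {
        if (l.roundedSurfaceView || l.onVirtualDisplay) {
            return CompositionMethod.HWC;   // target layer per the method above
        }
        if (!l.contentChanging || l.bufferBytes < (1 << 20)) {
            return CompositionMethod.GPU;   // invented 1 MiB threshold
        }
        return CompositionMethod.HWC;
    }
}
```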
  • surfaceflinger can hand over the layers whose synthesis method is the GPU synthesis method (hereinafter referred to as first layers) to the GPU for synthesis, hand over the layers whose synthesis method is the HWC synthesis method (hereinafter referred to as second layers) to the HWC for synthesis, and also hand the GPU synthesis result (hereinafter referred to as the intermediate layer) over to the HWC.
  • After the HWC synthesizes the final target interface, it is transmitted to the display screen for display.
  • When surfaceflinger transfers a layer, it can specifically transfer the address information of the layer to the GPU or HWC, and the GPU and HWC can read the corresponding layer based on the address information.
  • the GPU can first synthesize the intermediate layer, after which the HWC synthesizes the intermediate layer and each second layer together to obtain the target interface; or the HWC and GPU can synthesize at the same time, that is, while the GPU synthesizes the first layers, the HWC synthesizes the second layers, and the HWC then synthesizes the GPU's intermediate layer with its own synthesis result to obtain the target interface.
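Putting the pieces together, here is a hypothetical sketch of the two-stage flow described above, reusing the Layer and CompositionPolicy types from the previous sketch; gpuCompose and hwcCompose are stand-ins for the real GPU render pass and the display-controller programming:

```java
import java.util.ArrayList;
import java.util.List;

final class Compositor {
    Layer composeFrame(List<Layer> toSynthesize) {
        List<Layer> first = new ArrayList<>();   // GPU synthesis method
        List<Layer> second = new ArrayList<>();  // HWC synthesis method
        for (Layer l : toSynthesize) {
            if (CompositionPolicy.choose(l) == CompositionMethod.GPU) {
                first.add(l);
            } else {
                second.add(l);
            }
        }
        if (first.isEmpty()) {
            return hwcCompose(second);           // all layers composed by HWC at once
        }
        Layer intermediate = gpuCompose(first);  // stage 1: GPU pre-composition
        second.add(intermediate);                // intermediate layer joins the HWC input
        return hwcCompose(second);               // stage 2: HWC final composition
    }

    private Layer gpuCompose(List<Layer> layers) {
        return new Layer();                      // stand-in for a GPU render pass
    }

    private Layer hwcCompose(List<Layer> layers) {
        return new Layer();                      // stand-in for display-controller channels
    }
}
```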
  • HWC may not fully support the layers drawn by some applications.
  • the target application may be an application in which a target identifier is set, where the target identifier is used to indicate that the target application supports the HWC synthesis method.
  • the target application can declare the "hw.hwc_support" field (that is, the target identifier) in the program to indicate that it supports the HWC synthesis method.
  • target identifier is not limited to the above fields, and may also be other identifiers, which is not particularly limited in this embodiment.
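The patent does not say where the "hw.hwc_support" field lives; one plausible mechanism, assumed here purely for illustration, is an application-manifest meta-data entry read via PackageManager:

```java
import android.content.Context;
import android.content.pm.ApplicationInfo;
import android.content.pm.PackageManager;

final class HwcSupportCheck {
    // Assumption: "hw.hwc_support" is declared as <meta-data> in the manifest.
    static boolean declaresHwcSupport(Context context, String packageName) {
        try {
            ApplicationInfo info = context.getPackageManager()
                    .getApplicationInfo(packageName, PackageManager.GET_META_DATA);
            return info.metaData != null
                    && info.metaData.getBoolean("hw.hwc_support", false);
        } catch (PackageManager.NameNotFoundException e) {
            return false;   // unknown package: treat as a non-target application
        }
    }
}
```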
  • the target application may also be an application in a predetermined application collection.
  • the application set may record the identity number (ID) or other identifier of each target application, and the electronic device can identify target applications based on these identifiers.
  • the electronic device can obtain the application collection from the server and update the application collection.
  • the server can deliver the updated application set to the electronic device, or issue an update notification instructing the electronic device to obtain the updated application set from the server; alternatively, the electronic device can also periodically obtain the application set from the server to update the local application set.
  • In some embodiments, the target application can be determined based on the target identifier or the application set; in other embodiments, it can be determined based on both the target identifier and the application set, that is, applications with the target identifier and applications located in the application set are both target applications. In other words, a target application may have the target identifier, may be located in the application set, or may both have the target identifier and be located in the application set.
  • the layers generated by the target application can be flagged when created, that is, an HWC synthesis flag is set; after surfaceflinger obtains such a layer, it can identify that the layer supports the HWC synthesis method based on the HWC synthesis flag.
  • For layers generated by non-target applications, the GPU composition flag can be set, or no flag need be set. For such layers, the GPU synthesis method can be used; or the synthesis method is not limited, that is, either the GPU synthesis method or the HWC synthesis method can be used.
  • the layers to be synthesized corresponding to the display screen on the tablet computer include: layer 21, layer 22, layer 23 and layer 24, where layer 23 and layer 24 are surfaceview layers, and the surfaceview controls in layer 23 and layer 24 both have rounded corners.
  • Layer 23 is the layer drawn by the non-target application
  • layer 24 is the layer drawn by the target application.
  • Layer 24 has the HWC composition flag. After surfaceflinger obtains these layers, based on the HWC composition flag of layer 24, it can determine that layer 24 supports the HWC composition method and set it to the HWC composition method. Layer 23 does not have an HWC composition flag, so it is not forced to use the HWC synthesis method; instead, the commonly used GPU synthesis method can be used. That is, layer 23 is determined to be a first layer, and layer 24 a second layer.
  • surfaceflinger can determine the first layer and the second layer according to the aforementioned synthesis strategy.
  • For example, layer 21 can be determined as a first layer and layer 22 as a second layer.
  • surfaceflinger can hand over the first layers to the GPU for synthesis, and hand over the second layers together with the intermediate layer synthesized by the GPU to the HWC for synthesis; after the HWC synthesizes the final target interface, it is transferred to the display screen for display.
  • the following takes the screen projection scene as an example to illustrate the processing process of the layer corresponding to the virtual display screen.
  • the layers to be synthesized corresponding to the virtual display screen of the mobile phone include: layer 31, layer 32, layer 33 and layer 34. After surfaceflinger obtains these layers, it can set their synthesis method to the HWC synthesis method; the HWC then synthesizes these layers and transmits the synthesized target interface to the virtual display screen.
  • the virtual display screen is associated with the screen projection application of the mobile phone.
  • the screen projection application can project the content of the virtual display screen (i.e., the target interface) to the laptop computer, which displays it on its display screen.
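For reference only (this is the public Android capture API, not necessarily the mechanism the patent's screen projection application uses), a casting app typically obtains a MediaProjection and backs the virtual display with a video encoder's input surface:

```java
import android.content.Intent;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.projection.MediaProjection;
import android.media.projection.MediaProjectionManager;
import android.view.Surface;

final class CastSession {
    // resultCode/data come from the user approving createScreenCaptureIntent();
    // encoderSurface would typically be MediaCodec.createInputSurface().
    static VirtualDisplay start(MediaProjectionManager mpm,
                                int resultCode, Intent data,
                                Surface encoderSurface) {
        MediaProjection projection = mpm.getMediaProjection(resultCode, data);
        return projection.createVirtualDisplay(
                "cast-display", 1280, 720, 320,   // placeholder size and dpi
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
                encoderSurface, /* callback */ null, /* handler */ null);
    }
}
```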
  • a certain synthesis strategy can also be used to determine the synthesis method of each layer to be synthesized corresponding to the virtual display screen. For example, using a method similar to the aforementioned synthesis strategy, a layer with a small amount of data or unchanging content is determined as a first layer (using the GPU synthesis method), and a layer with a large amount of data or a high refresh-rate requirement is determined as a second layer (using the HWC synthesis method).
  • the layers to be synthesized corresponding to the virtual display screen of the mobile phone include: layer 41, layer 42, layer 43 and layer 44, where layer 44 is a surfaceview layer whose surfaceview control has rounded corners; layer 41 is drawn by a non-target application, while layer 42, layer 43 and layer 44 are drawn by target applications; and the content of layer 43 generally does not change. Accordingly, layer 41 and layer 43 can be determined as first layers, and layer 42 and layer 44 as second layers.
  • surfaceflinger can hand over the first layers to the GPU for synthesis, and hand over the second layers together with the intermediate layer synthesized by the GPU to the HWC for synthesis; after the HWC synthesizes the final target interface, it is transmitted to the virtual display screen. The screen casting application then sends the content of the virtual display screen (i.e., the target interface) to the laptop computer, which displays it on its display screen.
  • Generally, the number of layers to be synthesized is not very large, and the number of second layers determined by surfaceflinger will generally not exceed the maximum number of layers supported by the HWC for synthesis; the above examples are described on this basis.
  • If the number of second layers exceeds the maximum number of layers supported by the HWC, the HWC can synthesize them in multiple passes, that is, synthesize part of the second layers first, then synthesize another part, and finally synthesize the results of each pass; or, part of the second layers can be adjusted to first layers and synthesized by the GPU.
  • For example, one or more second layers with a smaller amount of data can be adjusted to first layers, so that the number of second layers does not exceed the maximum number of layers supported by the HWC for composition.
  • If the determination result of the layer synthesis methods includes first layers, the HWC waits for the GPU to complete its synthesis when compositing, and then synthesizes the intermediate layer synthesized by the GPU together with each second layer. Since the intermediate layer occupies a hardware channel, when the number of second layers is greater than or equal to the maximum number of layers supported by the HWC for compositing, the layers can be adjusted so that the number of second layers is smaller than that maximum number.
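As a hypothetical sketch of this adjustment (the demotion order and the channel bookkeeping are assumptions, not specified by the patent), reusing the Layer type from the earlier sketch:

```java
import java.util.Comparator;
import java.util.List;

final class HwcCapacityGuard {
    // Demote the smallest "second" layers to GPU composition until the HWC
    // channel budget is respected. One channel is reserved for the GPU's
    // intermediate layer whenever first layers exist (or will exist).
    static void enforceLimit(List<Layer> second, List<Layer> first, int maxHwcLayers) {
        int budget = (first.isEmpty() && second.size() <= maxHwcLayers)
                ? maxHwcLayers          // no intermediate layer needed
                : maxHwcLayers - 1;     // intermediate layer occupies a channel
        // Assumption: layers with the least data are the cheapest to demote.
        second.sort(Comparator.comparingLong((Layer l) -> l.bufferBytes));
        while (second.size() > budget) {
            first.add(second.remove(0));   // becomes a GPU-composed first layer
        }
    }
}
```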
  • Figure 9 is a schematic flowchart of a layer processing method provided by an embodiment of the present application. As shown in Figure 9, the layer processing method provided by this embodiment may include the following steps:
  • S110: Determine the synthesis method of the target layer among the layers to be synthesized as the HWC synthesis method, and determine the synthesis methods of the other layers to be synthesized.
  • wherein the target layer includes a surface view control with rounded corners and is drawn by the target application.
  • Layer 14 (where the surfaceview control has rounded corners) can be determined as the target layer, and its synthesis method set to the HWC synthesis method; for the other layers, factors such as layer size and content change can determine the composition method.
  • Layer 24 drawn by the target application (in which the surfaceview control has rounded corners) can be determined as the target layer and synthesized using the HWC synthesis method; layer 23 drawn by the non-target application (in which the surfaceview control also has rounded corners) can be synthesized using the GPU synthesis method to better ensure layer synthesis quality; for the other layers to be synthesized, factors such as layer size and content change determine the composition method.
  • the target application has a target identifier, and/or the target application is located in a predetermined application set; the electronic device can update the application set.
  • Each first layer can be synthesized through the GPU, and each second layer together with the intermediate layer from the GPU synthesis can then be synthesized through the HWC to obtain the final target interface.
  • the target interface can be transmitted to the display screen for display.
  • FIG 10 is a schematic flowchart of another layer processing method provided by an embodiment of the present application. As shown in Figure 10, the layer processing method provided by this embodiment may include the following steps:
  • S210: Determine the synthesis method of the target layer among the layers to be synthesized corresponding to the virtual display screen as the HWC synthesis method, and determine the synthesis methods of the other layers to be synthesized, where the target layer is drawn by the target application.
  • Alternatively, each layer to be synthesized corresponding to the virtual display screen can be determined as a target layer, with its synthesis method set to the HWC synthesis method.
  • For each layer to be synthesized corresponding to the virtual display screen: if it is drawn by the target application, it can be synthesized using the HWC synthesis method or the GPU synthesis method; if it is drawn by a non-target application, it can be synthesized using the GPU synthesis method.
  • If the number of second layers (whose synthesis method is the HWC synthesis method) among the layers to be synthesized is greater than or equal to the maximum number of layers supported by the HWC for composition, the HWC can synthesize them in multiple passes, or some of the second layers can be adjusted to first layers and synthesized by the GPU.
  • If the layers to be synthesized contain no first layers, each second layer can be synthesized through the HWC to obtain the final target interface. Otherwise, each first layer can be synthesized through the GPU, and the HWC can then synthesize each second layer with the intermediate layer synthesized by the GPU to obtain the final target interface.
  • After synthesis, the target interface can be transmitted to the virtual display screen, and the screen projection application then transmits the target interface in the virtual display screen to the target screen projection device for display.
  • The layer processing method provided in this embodiment optimizes the layer synthesis process: layers containing surface view controls with rounded corners, and/or layers corresponding to the virtual display screen, are synthesized using the HWC synthesis method, which reduces the power consumption of the electronic device and mitigates the phenomenon of the device becoming hot and draining power quickly.
  • the embodiment of the present application provides a layer processing device.
  • The device embodiment corresponds to the foregoing method embodiment. For ease of reading, the details of the foregoing method embodiment are not repeated one by one here, but it should be clear that the device in this embodiment can correspondingly implement all the contents of the foregoing method embodiment.
  • Figure 11 is a schematic structural diagram of a layer processing device provided by an embodiment of the present application. As shown in Figure 11, the device provided by this embodiment includes:
  • a display module 210, an input module 220, a processing module 230 and a communication module 240.
  • the display module 210 is used to support the electronic device to perform the interface display operation in the above embodiments and/or other processes for the technology described herein.
  • the display module may be a touch screen or other hardware or a combination of hardware and software.
  • the input module 220 is used to receive user input on the display interface of the electronic device, such as touch input, voice input, gesture input, etc.
  • the input module is used to support the electronic device in performing the steps of receiving user operations in the above embodiments and/or other processes for the techniques described herein.
  • the input module may be a touch screen or other hardware or a combination of hardware and software.
  • the processing module 230 is used to support the electronic device to perform processing operations in each method step in the above embodiments and/or other processes for the technology described herein.
  • the communication module 240 is used to support the electronic device to perform operations related to the communication process between the electronic device and other devices in the above embodiments and/or other processes for the technology described herein.
  • the device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar and will not be described again here.
  • As needed, the internal structure of the device can be divided into different functional units or modules to complete all or part of the functions described above.
  • Each functional unit and module in the embodiment can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • The above-mentioned integrated unit can be implemented in the form of hardware or in the form of software functional units.
  • the specific names of each functional unit and module are only for the convenience of distinguishing each other and are not used to limit the scope of protection of the present application.
  • For the specific working processes of the units and modules in the above system, please refer to the corresponding processes in the foregoing method embodiments; they will not be described again here.
  • Figure 12 is a schematic structural diagram of the electronic device provided by an embodiment of the present application.
  • the electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the electronic device may include more or fewer components than shown in the figures, or some components may be combined, some components may be separated, or different components may be arranged.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units can be independent devices or integrated in one or more processors.
  • the controller can be the nerve center and command center of the electronic device.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in processor 110 is a cache memory, which may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, avoiding repeated access and reducing the waiting time of the processor 110, thereby improving system efficiency.
  • processor 110 may include one or more interfaces.
  • Interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the I2S interface can be used for audio communication.
  • the PCM interface can also be used for audio communications to sample, quantize and encode analog signals.
  • the UART interface is a universal serial data bus used for asynchronous communication; the bus can be a bidirectional communication bus, which converts the data to be transmitted between serial communication and parallel communication.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193; MIPI interfaces include a camera serial interface (CSI), a display serial interface (DSI), etc.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the USB interface 130 is an interface that complies with the USB standard specification, and may be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge the electronic device, and can also be used to transmit data between the electronic device and peripheral devices. It can also be used to connect headphones to play audio through them. This interface can also be used to connect other electronic devices, such as AR devices, etc.
  • the interface connection relationships between the modules illustrated in the embodiments of the present application are only schematic illustrations and do not constitute structural limitations on the electronic equipment.
  • In other embodiments, the electronic device may also adopt interface connection methods different from those in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device. While the charging management module 140 charges the battery 142, it can also provide power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, internal memory 121, external memory, display screen 194, camera 193, wireless communication module 160, etc.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device can be realized through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in an electronic device can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be reused as a diversity antenna for a wireless LAN.
  • antennas may be used in conjunction with tuning switches.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied to electronic devices.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be disposed in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs sound signals through audio devices (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194.
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110 and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), millimeter wave (mmWave) and ultra-wideband (UWB) technologies, among others.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 may also receive a signal to be sent from the processor 110, frequency modulate and amplify it, and convert it into electromagnetic waves for radiation through the antenna 2.
  • the antenna 1 of the electronic device is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS) and/or satellite based augmentation systems (SBAS).
  • the electronic device implements display functions through the GPU, display screen 194, and application processor.
  • the GPU is an image processing microprocessor and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 194 is used to display images, videos, etc.
  • Display 194 includes a display panel.
  • the display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device can realize the shooting function through ISP, camera 193, video codec, GPU, display screen 194 and application processor.
  • the ISP is used to process the data fed back by the camera 193.
  • Camera 193 is used to capture still images or video.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals.
  • Video codecs are used to compress or decompress digital video.
  • The NPU is a neural-network (NN) computing processor.
  • Intelligent cognitive applications of electronic devices can be realized through NPU, such as image recognition, face recognition, speech recognition, text understanding, etc.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes instructions stored in the internal memory 121 to execute various functional applications and data processing of the electronic device.
  • the internal memory 121 may include a program storage area and a data storage area.
  • the program storage area can store the operating system and at least one application program required by a function (such as a sound playback function or an image playback function).
  • the storage data area can store data created during the use of electronic equipment (such as audio data, phone books, etc.).
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc.
  • the external memory interface 120 can be used to connect an external memory, such as a Micro SD card, to expand the storage capacity of the electronic device.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, such as saving music and video files in the external memory card.
  • the electronic device can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor.
  • the audio module 170 is used to convert digital audio information into an analog audio signal output, and is also used to convert analog audio input into digital audio signals. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110. Speaker 170A, also called "loudspeaker", is used to convert audio electrical signals into sound signals. Receiver 170B, also called "earpiece", is used to convert audio electrical signals into sound signals. Microphone 170C, also called "mic", is used to convert sound signals into electrical signals. The headphone interface 170D is used to connect wired headphones; it may be the USB interface 130, a 3.5mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the electronic device can also send ultrasonic waves through the speaker 170A and receive the ultrasonic waves through the microphone 170C to implement ultrasonic technology.
  • the buttons 190 include a power button, a volume button, etc.
  • Key 190 may be a mechanical key or a touch key.
  • the electronic device can receive key input and generate key signal input related to user settings and function control of the electronic device.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for vibration prompts for incoming calls and can also be used for touch vibration feedback.
  • the indicator 192 may be an indicator light, which may be used to indicate charging status and battery level changes, or to indicate messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to realize contact and separation from the electronic device.
  • the electronic device can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card, etc.
  • the electronic device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be described again here.
  • Embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the method described in the above method embodiment is implemented.
  • An embodiment of the present application also provides a computer program product.
  • when the computer program product is run on an electronic device, the electronic device implements the method described in the above method embodiments.
  • An embodiment of the present application also provides a chip system, including a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory to implement the method described in the above method embodiment.
  • the chip system may be a single chip or a chip module composed of multiple chips.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in or transmitted over a computer-readable storage medium.
  • the computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the available media may be magnetic media (such as floppy disks, hard disks or tapes), optical media (such as DVDs), or semiconductor media (such as solid state disks (Solid State Disk, SSD)), etc.
  • all or part of the processes in the above method embodiments may be completed by a computer program instructing related hardware; the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments.
  • the aforementioned storage media may include: ROM, random access memory (RAM), magnetic disks, optical disks and other media that can store program codes.
  • the disclosed devices/devices and methods can be implemented in other ways.
  • the apparatus/equipment embodiments described above are only illustrative.
  • the division of modules or units is only a logical function division; in actual implementation there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • "/" indicates an "or" relationship between associated objects; for example, A/B can mean A or B. "and/or" in this application merely describes an association relationship between associated objects, indicating that three relationships may exist.
  • A and/or B can mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B can be singular or plural.
  • "plural" means two or more than two.
  • "at least one of the following items" or similar expressions refer to any combination of these items, including any combination of single items or plural items.
  • at least one of a, b, or c can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can each be single or multiple.
  • the term "if" may be interpreted as "when", "once", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrase "if determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application provides a layer processing method and an electronic device, relating to the technical field of electronic devices. The method includes: determining a composition mode of each layer to be composited, where the composition mode of a target layer among the layers to be composited is the HWC composition mode, and the target layer is a layer corresponding to a virtual display and/or contains a surface view control with rounded corners; and compositing each layer to be composited using its corresponding composition mode, to obtain a target interface. The technical solution provided by this application can reduce the power consumption of the electronic device and mitigate device heating and rapid battery drain.

Description

Layer processing method and electronic device
This application claims priority to the Chinese patent application No. 202210265367.0, entitled "Layer processing method and electronic device", filed with the China National Intellectual Property Administration on March 17, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of electronic devices, and in particular to a layer processing method and an electronic device.
Background
With the rapid development of the information society, portable smart products such as mobile phones and tablets have become indispensable electronic devices in people's daily lives.
For portable electronic devices such as mobile phones and tablets, power consumption has always been an important concern for users. Excessive power consumption not only shortens battery life but also causes the device to heat up. To better meet user needs, these portable devices offer more and more functions, and their power consumption grows accordingly; how to reduce power consumption has therefore become a problem to be solved.
Summary
In view of this, this application provides a layer processing method and an electronic device for reducing the power consumption of the electronic device.
To achieve the above objective, in a first aspect, an embodiment of this application provides a layer processing method, including:
determining a composition mode of each layer to be composited, where the composition mode of a target layer among the layers to be composited is the HWC composition mode, and the target layer is a layer corresponding to a virtual display, and/or the target layer contains a surface view control with rounded corners;
compositing each of the layers to be composited using the composition mode corresponding to each of the layers to be composited, to obtain a target interface.
The layer processing method provided by the embodiments of this application optimizes the layer composition process: layers that contain a surface view control clipped with rounded corners, and/or layers corresponding to a virtual display, are composited in the HWC composition mode, which can reduce the power consumption of the electronic device and mitigate device heating and rapid battery drain.
In a possible implementation of the first aspect, the target layer is a layer drawn by a target application, and the target application supports the HWC composition mode.
In the above implementation, compositing the target layers of target applications that support the HWC composition mode in that mode better guarantees layer composition quality.
In a possible implementation of the first aspect, a target flag is set in the target application, and the target flag is used to indicate that the target application supports the HWC composition mode.
In a possible implementation of the first aspect, the target application is an application in a predetermined application set.
In a possible implementation of the first aspect, the method further includes: updating the application set. In this way, a more complete set of target applications can be obtained.
In a possible implementation of the first aspect, the method further includes: displaying the target interface.
In a possible implementation of the first aspect, the method further includes: sending the target interface to a destination projection device.
In a possible implementation of the first aspect, compositing each of the layers to be composited using its corresponding composition mode to obtain the target interface includes:
when the layers to be composited include first layers, compositing the first layers by the GPU to obtain an intermediate layer;
compositing the intermediate layer and each second layer among the layers to be composited by the HWC to obtain the target interface;
when the layers to be composited do not include a first layer, compositing each of the layers to be composited by the HWC to obtain the target interface;
where a first layer is a layer among the layers to be composited whose composition mode is the GPU composition mode, and a second layer is a layer among the layers to be composited whose composition mode is the HWC composition mode.
In a possible implementation of the first aspect, after determining the composition mode of each layer to be composited and before compositing the layers using their corresponding composition modes, the method further includes:
when the number of second layers among the layers to be composited is greater than the maximum number of layers the HWC supports compositing, adjusting the composition mode of some of the second layers so that the number of second layers is less than or equal to the maximum number, where a second layer is a layer among the layers to be composited whose composition mode is the HWC composition mode.
Through the above implementation, the HWC can complete the composition of the second layers in a single composition pass, which improves composition speed.
In a second aspect, an embodiment of this application provides a layer processing apparatus, including:
a processing module, configured to determine a composition mode of each layer to be composited, where the composition mode of a target layer among the layers to be composited is the HWC composition mode, and the target layer is a layer corresponding to a virtual display, and/or the target layer contains a surface view control with rounded corners;
and configured to composite each of the layers to be composited using its corresponding composition mode, to obtain a target interface.
In a possible implementation of the second aspect, the target layer is a layer drawn by a target application, and the target application supports the HWC composition mode.
In a possible implementation of the second aspect, a target flag is set in the target application, and the target flag is used to indicate that the target application supports the HWC composition mode.
In a possible implementation of the second aspect, the target application is an application in a predetermined application set.
In a possible implementation of the second aspect, the processing module is further configured to: update the application set.
In a possible implementation of the second aspect, the apparatus further includes: a display module, configured to display the target interface.
In a possible implementation of the second aspect, the apparatus further includes: a communication module, configured to send the target interface to a destination projection device.
In a possible implementation of the second aspect, the processing module is specifically configured to:
when the layers to be composited include first layers, composite the first layers by the GPU to obtain an intermediate layer;
composite the intermediate layer and each second layer among the layers to be composited by the HWC to obtain the target interface;
when the layers to be composited do not include a first layer, composite each of the layers to be composited by the HWC to obtain the target interface;
where a first layer is a layer among the layers to be composited whose composition mode is the GPU composition mode, and a second layer is a layer among the layers to be composited whose composition mode is the HWC composition mode.
In a possible implementation of the second aspect, the processing module is further configured to: after the composition mode of each layer to be composited is determined and before the layers are composited using their corresponding composition modes, when the number of second layers among the layers to be composited is greater than the maximum number of layers the HWC supports compositing, adjust the composition mode of some of the second layers so that the number of second layers is less than or equal to the maximum number, where a second layer is a layer among the layers to be composited whose composition mode is the HWC composition mode.
In a third aspect, an embodiment of this application provides an electronic device, including a memory and a processor, the memory being configured to store a computer program, and the processor being configured to, when calling the computer program, cause the electronic device to perform the method described in the first aspect or any implementation of the first aspect.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the method described in the first aspect or any implementation of the first aspect.
In a fifth aspect, an embodiment of this application provides a computer program product that, when run on an electronic device, causes the electronic device to perform the method described in the first aspect or any implementation of the first aspect.
In a sixth aspect, an embodiment of this application provides a chip system, including a processor coupled to a memory; the processor executes a computer program stored in the memory to implement the method described in the first aspect or any implementation of the first aspect. The chip system may be a single chip or a chip module composed of multiple chips.
It can be understood that, for the beneficial effects of the second to sixth aspects above, reference may be made to the related description in the first aspect, which is not repeated here.
Brief Description of Drawings
FIG. 1 is a schematic diagram of the system architecture of an electronic device according to an embodiment of this application;
FIG. 2 is a schematic diagram of layer composition according to an embodiment of this application;
FIG. 3 shows some user interfaces according to an embodiment of this application;
FIG. 4 is a schematic diagram of a screen projection process according to an embodiment of this application;
FIGS. 5-8 are schematic diagrams of some layer processing procedures according to embodiments of this application;
FIG. 9 is a schematic flowchart of a layer processing method according to an embodiment of this application;
FIG. 10 is a schematic flowchart of another layer processing method according to an embodiment of this application;
FIG. 11 is a schematic structural diagram of a layer processing apparatus according to an embodiment of this application;
FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of this application.
Detailed Description
The following describes the embodiments of this application with reference to the accompanying drawings. It should be understood that the terms used in the implementation part are only intended to explain specific embodiments of this application and are not intended to limit it. The specific embodiments below may be combined with each other, and identical or similar concepts or processes may not be repeated in some embodiments.
First, the electronic device involved in the embodiments of this application is introduced. FIG. 1 shows a schematic diagram of the system architecture of an electronic device.
The electronic device may be a device with display and processing capabilities, such as a mobile phone, tablet (pad), laptop, ultra-mobile personal computer (UMPC), netbook, or game console; the tablet may be a conventional tablet device or a two-in-one device integrating at least some laptop functions. The embodiments of this application place no particular restriction on the specific type of the electronic device, and the layer processing method provided by the embodiments can be applied to the above electronic devices.
The electronic device may include a hardware system and a software system, where the hardware system may include a graphics processing unit (GPU), memory, a camera, a display screen, and a display controller, as well as other components not shown in FIG. 1.
The software system of the electronic device may adopt a layered architecture, event-driven architecture, microkernel architecture, microservice architecture, or cloud architecture, and may be the Android system, Linux system, Windows system, HarmonyOS system, iOS system, etc.
The embodiments of this application take the Android system with a layered architecture as an example to describe the software structure of the electronic device.
As shown in FIG. 1, the software system of the electronic device can be divided into several layers that communicate through software interfaces. In some embodiments, the Android system is divided, from top to bottom, into an application layer, an application framework layer, Android runtime and system libraries, and a kernel layer.
The application layer may include a series of application packages (sometimes simply called applications below). As shown in FIG. 1, the applications may include Camera, Gallery, Calendar, Phone, WLAN, Bluetooth, Music, Video, Messages, Maps, screen projection, and other applications.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer, and includes some predefined functions.
The application framework layer may include the activity manager, window manager, view system, input manager, display manager, and decision manager shown in FIG. 1, as well as a content provider, resource manager, notification manager, etc. that are not shown.
The activity manager can provide the activity manager service (AMS), which can be used for starting, switching, and scheduling system components (such as activities, services, content providers, and broadcast receivers) and for managing and scheduling application processes.
The window manager can provide the window manager service (WMS), which can be used for window management, window animation management, surface management, and as a relay station of the input system. The window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, etc.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures, and can be used to build applications. A display interface may be composed of one or more views; for example, a display interface including an SMS notification icon may include a text-display view and a picture-display view.
The input manager can provide the input manager service (IMS), which can be used to manage the system's input, such as touchscreen input, key input, and sensor input. The IMS takes events from input device nodes and, through interaction with the WMS, dispatches the events to suitable windows.
The display manager can provide the display manager service (DMS), which manages the global lifecycle of displays, decides how to control their logical displays according to the currently connected physical display devices, and sends notifications to the system and applications when states change.
The decision manager can provide a decision management service, which can determine the landscape/portrait display policy and display position of windows, and can adjust them according to input events fed back by the input manager.
The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The resource manager provides applications with various resources, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar. It can be used to convey informational messages that disappear automatically after a short stay without user interaction, e.g., to notify of a completed download or remind of a message. The notification manager may also present notifications in the top status bar as charts or scrolling text, such as notifications of applications running in the background, or as dialog windows on the screen, e.g., prompting text in the status bar, emitting a prompt tone, vibrating the device, or blinking the indicator light.
Android Runtime includes core libraries and a virtual machine, and is responsible for scheduling and management of the Android system. The core libraries consist of two parts: functions that the java language needs to call, and the core libraries of Android.
The application layer and the application framework layer run in the virtual machine, which executes the java files of those layers as binary files. The virtual machine performs object lifecycle management, stack management, thread management, security and exception management, garbage collection, and other functions.
The system libraries may include multiple functional modules, for example: a surface manager (surfaceflinger), media libraries, a three-dimensional (3D) graphics processing library (e.g., OpenGL ES), a two-dimensional (2D) graphics engine (e.g., SGL), and a hardware composer (HWC).
The surface manager manages the display subsystem and can provide the fusion of 2D and 3D layers for multiple applications.
The HWC can call the display controller to composite layers and then deliver the composited layers to the display screen. The maximum number of layers the HWC supports compositing corresponds to the number of hardware channels in the display controller; for example, if the display controller includes 8 hardware channels, the HWC supports compositing at most 8 layers, i.e., the HWC can composite at most 8 layers at a time.
The media libraries support playback and recording of many common audio and video formats, as well as static image files, etc., and can support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The 3D graphics processing library implements 3D graphics drawing, image rendering, composition, layer processing, etc. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer lies between hardware and software and provides core system services for the Android kernel, such as security services, network services, memory management services, detection management services, and driver models. The kernel layer may contain the display driver, camera driver, audio driver, sensor driver, GPU driver, etc.
To facilitate understanding of the content of the embodiments of this application, some technical terms involved are explained first.
Taking Android as an example, the following introduces the layer, the surfaceview, and the virtual display.
Layer: contains elements such as text or graphics. The interface displayed by an electronic device is generally composited from multiple layers. For example, the desktop interface shown in FIG. 2 includes four layers — the status bar, the navigation bar, the wallpaper, and the icons — which are sent to the display screen for display after layer composition.
Layer composition modes include the GPU composition mode and the HWC composition mode. In the GPU composition mode, the contents of the layers are first rendered by the GPU into a scratch buffer, and the contents of the scratch buffer are then delivered to the display through the HWC; in the HWC composition mode, the HWC composites the layers and delivers the result to the display. Because the GPU composition mode involves more interaction between the GPU and memory, the HWC composition mode generally consumes less power.
Specifically, the surface manager (surfaceflinger) in FIG. 1 can collect the layers drawn by the applications and provide a layer list to the HWC. The HWC can determine the composition mode of each layer in the list according to its hardware capability: layers whose mode is the GPU composition mode can be composited by the surfaceflinger service through the GPU into one layer (referred to here as the intermediate layer), after which the HWC further composites the intermediate layer with the other layers (those whose mode is the HWC composition mode) and delivers the resulting target interface to the display for display.
When determining the composition modes through the HWC, surfaceflinger can set a desired composition mode for each layer, for example requesting that every layer be composited in the HWC composition mode; the HWC may adjust the mode of certain layers according to the capability of the display controller; surfaceflinger can then decide, based on the HWC's adjustments, whether to adjust the corresponding layers' modes, and notify the HWC. A minimal sketch of this negotiation follows.
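The sketch below is written in Java with invented names (CompositionNegotiation, hwcValidate); the 8-layer budget is the example figure from the text above. It models only the request/adjust/accept handshake, not the real SurfaceFlinger or HWC interfaces.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

final class CompositionNegotiation {
    enum Mode { GPU, HWC }

    static Map<String, Mode> negotiate(List<String> layerIds) {
        Map<String, Mode> requested = new LinkedHashMap<>();
        for (String id : layerIds) {
            requested.put(id, Mode.HWC);          // e.g. request HWC composition for every layer
        }
        Map<String, Mode> changes = hwcValidate(requested); // the HWC's adjustments
        requested.putAll(changes);                 // surfaceflinger accepts the adjustments
        return requested;                          // final plan used for the composition pass
    }

    // Stand-in for the HWC: demote layers beyond its channel budget to GPU composition.
    private static Map<String, Mode> hwcValidate(Map<String, Mode> requested) {
        final int maxHwcLayers = 8;                // e.g. 8 hardware channels, as in the text
        Map<String, Mode> changes = new LinkedHashMap<>();
        int kept = 0;
        for (Map.Entry<String, Mode> e : requested.entrySet()) {
            if (e.getValue() == Mode.HWC && ++kept > maxHwcLayers) {
                changes.put(e.getKey(), Mode.GPU);
            }
        }
        return changes;
    }
}
```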
Surfaceview: each window created by an application has a corresponding drawing surface; views can be drawn on the surface, and each drawing surface corresponds to one layer, i.e., one layer can be drawn in each surface. A window may contain multiple view controls. The main difference between a surfaceview and an ordinary view is that ordinary views of the same window share that window's surface, whereas a surfaceview does not share a surface with its host window: it has an independent surface and thus corresponds to an independent layer (hereinafter, a surfaceview layer). Drawing and refreshing of a surfaceview layer are not constrained by the host window and can be controlled freely, so surfaceview layers are suited to drawing user interfaces with high refresh-rate requirements, such as video or games.
Referring to FIG. 3, when a user opens a game application in a floating window and the game demands a high refresh rate, a surfaceview control can be used in the floating window to draw the game's user interface. As shown in (a) of FIG. 3, the surfaceview layer displayed in the floating window may use a square-corner style; to improve aesthetics, as shown in (b) of FIG. 3, it may also use a rounded-corner style, as in the sketch below.
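For illustration only, the following Java sketch shows one way an application could request a rounded-corner surfaceview like the one in (b) of FIG. 3, using the public ViewOutlineProvider API. This is an app-side sketch rather than the patent's composition logic, and whether the clip is applied to the surfaceview's own layer (rather than only the host view) depends on the platform version.

```java
import android.content.Context;
import android.graphics.Outline;
import android.view.SurfaceView;
import android.view.View;
import android.view.ViewOutlineProvider;

public final class RoundedSurfaceViewFactory {
    // Returns a SurfaceView whose output is clipped to rounded corners,
    // matching the rounded floating-window style shown in FIG. 3(b).
    public static SurfaceView create(Context context, final float cornerRadiusPx) {
        SurfaceView surfaceView = new SurfaceView(context);
        surfaceView.setOutlineProvider(new ViewOutlineProvider() {
            @Override
            public void getOutline(View view, Outline outline) {
                outline.setRoundRect(0, 0, view.getWidth(), view.getHeight(), cornerRadiusPx);
            }
        });
        surfaceView.setClipToOutline(true); // apply the rounded-corner clip
        return surfaceView;
    }
}
```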
Virtual display: during layer composition, the displays supported by the system include physical displays and virtual displays. Physical displays can include the device's built-in display and external displays; the built-in display may be, for example, a liquid crystal display (LCD), and an external display may be, for example, a television connected through a high definition multimedia interface (HDMI). A virtual display has no actual physical device; it can be created through the display manager and is used to implement functions such as screen recording or screen projection.
For example, as shown in FIG. 4, a mobile phone can establish a short-range communication connection with a laptop and project an opened video onto the laptop so the user can watch the video there. When projecting, the phone can create a virtual display, draw the video interface to be projected into the virtual display, and then transmit it to the computer for display. A sketch of creating such a virtual display follows.
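As an illustration, the Java sketch below creates a virtual display backed by an ImageReader surface, using the public DisplayManager API. Real projection stacks typically obtain the display through MediaProjection after a user permission grant; the name "cast-display" and the flag choice here are assumptions made for the sketch.

```java
import android.content.Context;
import android.graphics.PixelFormat;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.ImageReader;

public final class CastDisplayHelper {
    // Creates a virtual display backed by an ImageReader surface; frames
    // composited into the virtual display can later be read back and sent
    // to the destination projection device.
    public static VirtualDisplay createCastDisplay(Context context, int width, int height, int dpi) {
        ImageReader reader = ImageReader.newInstance(width, height, PixelFormat.RGBA_8888, 2);
        DisplayManager dm = context.getSystemService(DisplayManager.class);
        return dm.createVirtualDisplay(
                "cast-display",          // assumed name for the virtual display
                width, height, dpi,
                reader.getSurface(),     // composited frames land in this surface
                DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION);
    }
}
```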
For the rounded-corner surfaceview layer in the user interface shown in (b) of FIG. 3, and the layer corresponding to the virtual display shown in FIG. 4, the GPU composition mode is currently used during layer composition. As noted above, surfaceview layers are usually used to draw user interfaces with high refresh-rate requirements, so the device's power consumption is already high at that point; drawing a virtual display also consumes considerable power, and projection produces more still. The GPU composition mode further increases power consumption, so in practice these two scenarios are prone to device heating and rapid battery drain.
To this end, the embodiments of this application provide a technical solution that reduces device power consumption mainly by optimizing the layer processing procedure, mitigating device heating and rapid battery drain. The layer processing procedure of the embodiments of this application is described below.
FIG. 5 exemplarily shows a schematic diagram of a layer processing procedure. As shown in FIG. 5, suppose the layers to be composited for the phone's display include layer 11, layer 12, layer 13, and layer 14, where layer 14 is a surfaceview layer whose surfaceview control has been clipped with rounded corners.
After these layers are drawn by the applications, they can be passed to surfaceflinger for layer composition. Having collected these layers, surfaceflinger can first determine the composition mode of each layer to be composited.
Specifically, for the rounded-corner-clipped surfaceview layer (i.e., layer 14), its composition mode can be set to the HWC composition mode.
For the other layers, a relevant composition policy can be used to determine the composition mode. For example, the mode can be determined from factors such as layer size and degree of content change: layers with a small amount of data can be set to the GPU composition mode and layers with a large amount of data to the HWC composition mode; layers whose content changes little — especially layers whose content does not change at all — can be set to the GPU composition mode, so that the GPU composition result held in the scratch buffer can be reused, reducing the amount of composited data and lowering power consumption. The policy sketch below illustrates this.
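The following Java sketch illustrates that policy; the threshold constant and all names are invented for the example, and the rounded-surfaceview rule from above is folded in as the highest-priority case.

```java
// A sketch of the composition-policy decision described above, under assumed names:
// small or static layers go to the GPU (so the cached GPU result in the scratch
// buffer can be reused), large or frequently-changing layers go to the HWC.
enum CompositionMode { GPU, HWC }

final class CompositionPolicy {
    private static final long SMALL_LAYER_BYTES = 256 * 1024; // assumed threshold

    static CompositionMode choose(long layerBytes, boolean contentChanged,
                                  boolean roundedSurfaceView, boolean fromTargetApp) {
        if (roundedSurfaceView && fromTargetApp) {
            return CompositionMode.HWC;   // the optimization this application proposes
        }
        if (!contentChanged || layerBytes < SMALL_LAYER_BYTES) {
            return CompositionMode.GPU;   // small / static layers: reuse cached result
        }
        return CompositionMode.HWC;       // large, changing layers: cheaper in HWC
    }
}
```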
Based on the above composition policy, exemplarily, layer 11 and layer 12 can be set to the GPU composition mode and layer 13 to the HWC composition mode.
After the composition mode of each layer to be composited is determined, surfaceflinger can hand the layers whose mode is the GPU composition mode (hereinafter, first layers) to the GPU for composition, hand the layers whose mode is the HWC composition mode (hereinafter, second layers) to the HWC for composition, and also hand the GPU's composition result (hereinafter, the intermediate layer) to the HWC; the HWC composites the final target interface and delivers it to the display screen for display.
When passing a layer, surfaceflinger may specifically pass the layer's address information to the GPU or HWC, which can read the corresponding layer according to the address information.
During layer composition, the GPU may first composite the intermediate layer, after which the HWC composites the intermediate layer together with the second layers to obtain the target interface; alternatively, the HWC and GPU may composite simultaneously, i.e., while the GPU composites the first layers, the HWC composites the second layers, and the HWC then composites the GPU's intermediate layer with its own result to obtain the target interface.
Considering that the HWC may not fully support the layers drawn by some applications, in this embodiment, for a rounded-corner-clipped surfaceview layer, it can be determined whether the layer was drawn by a target application: if so, the HWC composition mode can be used; if not, i.e., the layer was drawn by a non-target application, the GPU composition mode can be used, to improve layer composition quality.
The target application may be an application with a target flag set, the target flag being used to indicate that the target application supports the HWC composition mode. For example, a target application may declare an "hw.hwc_support" field (i.e., the target flag) in its program to indicate that it supports the HWC composition mode.
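One conceivable way for the system side to test such a declaration is to look it up as manifest meta-data, as in the Java sketch below; the key "hw.hwc_support" comes from the example above, while the meta-data mechanism itself is an assumption, since the patent does not fix how the flag is carried.

```java
import android.content.Context;
import android.content.pm.ApplicationInfo;
import android.content.pm.PackageManager;

final class HwcSupportChecker {
    // Reads a hypothetical "hw.hwc_support" flag from the application's manifest
    // meta-data; apps without the flag are treated as non-target applications.
    static boolean supportsHwcComposition(Context context, String packageName) {
        try {
            ApplicationInfo info = context.getPackageManager()
                    .getApplicationInfo(packageName, PackageManager.GET_META_DATA);
            return info.metaData != null && info.metaData.getBoolean("hw.hwc_support", false);
        } catch (PackageManager.NameNotFoundException e) {
            return false; // unknown package: not a target application
        }
    }
}
```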
It can be understood that the specific implementation form of the target flag is not limited to the above field; it may also be another identifier, and this embodiment places no particular limitation on it.
The target application may also be an application in a predetermined application set, which may record the identity (ID) numbers or other identifiers of target applications; the electronic device can recognize target applications according to these identifiers.
In specific implementations, the electronic device can obtain the application set from a server and can update the application set. The server may deliver an updated application set to the electronic device, or deliver an update notification instructing the device to fetch the updated set from the server; alternatively, the device may periodically fetch the application set from the server to update its local copy.
In some embodiments, target applications can be determined according to the target flag or the application set; in other embodiments, the target flag and the application set can be combined, i.e., applications with the target flag and applications in the application set are all target applications — a target application may have the target flag, may be in the application set, or may both have the flag and be in the set.
Layers produced by a target application can be marked when created, i.e., given an HWC composition flag; after obtaining a layer, surfaceflinger can recognize from this flag that the layer supports the HWC composition mode. Layers produced by non-target applications may be given a GPU composition flag, or may be left unmarked.
In addition, for a layer produced by a target application that is not a rounded-corner-clipped surfaceview layer, either the GPU composition mode or the HWC composition mode may be used. Layers produced by non-target applications may all use the GPU composition mode, or their mode may be left unrestricted, i.e., either the GPU or the HWC composition mode may be used.
Exemplarily, as shown in FIG. 6, the layers to be composited for a tablet's display include layer 21, layer 22, layer 23, and layer 24, where layer 23 and layer 24 are surfaceview layers whose surfaceview controls have both been clipped with rounded corners; layer 23 was drawn by a non-target application and layer 24 by a target application, so, correspondingly, layer 24 carries the HWC composition flag.
After obtaining these layers, surfaceflinger can determine from layer 24's HWC composition flag that layer 24 supports the HWC composition mode, and can set it to that mode. Layer 23 has no HWC composition flag, so the HWC composition mode need not be forced; the currently common GPU composition mode can be used instead. That is, layer 23 is determined to be a first layer and layer 24 a second layer.
For layer 21 and layer 22, surfaceflinger can determine first and second layers according to the aforementioned composition policy, e.g., layer 21 as a first layer and layer 22 as a second layer.
After the composition mode of each layer to be composited is determined, surfaceflinger can hand the first layers to the GPU for composition, hand the second layers and the GPU-composited intermediate layer to the HWC for composition, and, after the HWC composites the final target interface, deliver it to the display screen for display.
The following uses a screen projection scenario as an example to describe the processing of the layers corresponding to a virtual display.
Suppose a phone has established a projection connection with a laptop, the phone being the source projection device and the laptop the destination projection device. As shown in FIG. 7, the layers to be composited for the phone's virtual display include layer 31, layer 32, layer 33, and layer 34. After obtaining these layers, surfaceflinger can set all their composition modes to the HWC composition mode; the HWC then composites these layers and delivers the resulting target interface to the virtual display.
The virtual display is associated with the phone's projection application, which can project the content of the virtual display (i.e., the target interface) to the laptop for display on the laptop's screen, for example as sketched below.
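Continuing the FIG. 4 scenario, a projection application could pull composited frames off an ImageReader-backed virtual display as in the Java sketch below; FrameSender is a hypothetical transport stub, and encoding/transmission to the laptop is out of scope here.

```java
import android.media.Image;
import android.media.ImageReader;
import java.nio.ByteBuffer;

final class CastFramePump {
    // Hypothetical transport abstraction; a real projection app would encode
    // the frame and send it over the short-range connection.
    interface FrameSender { void send(ByteBuffer pixels); }

    // Each time a new target interface is committed to the virtual display's
    // surface, pull the latest frame and hand it to the projection transport.
    static void attach(ImageReader reader, FrameSender sender) {
        reader.setOnImageAvailableListener(r -> {
            try (Image frame = r.acquireLatestImage()) {
                if (frame != null) {
                    sender.send(frame.getPlanes()[0].getBuffer());
                }
            }
        }, null); // null handler: callback runs on the calling thread's looper
    }
}
```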
In some embodiments, a certain composition policy may also be used to determine the composition mode of each layer to be composited for the virtual display. For example, similarly to the aforementioned policy, layers with little data or unchanged content can be determined as first layers (using the GPU composition mode), and layers with much data or high refresh-rate requirements as second layers (using the HWC composition mode).
In addition, similarly to the aforementioned surfaceview layers, for each layer to be composited for the virtual display it can also be determined whether the layer was drawn by a target application: if so, the HWC composition mode or the GPU composition mode can be used; if not, i.e., the layer was drawn by a non-target application, the GPU composition mode can be used, to improve layer composition quality.
Exemplarily, as shown in FIG. 8, the layers to be composited for the phone's virtual display include layer 41, layer 42, layer 43, and layer 44, where layer 44 is a surfaceview layer whose surfaceview control has been clipped with rounded corners; layer 41 was drawn by a non-target application, while layers 42, 43, and 44 were drawn by target applications; and the content of layer 43 generally does not change.
After obtaining these layers, surfaceflinger can set the composition modes of layers 42 and 44 to the HWC composition mode and those of layers 41 and 43 to the GPU composition mode; that is, layers 41 and 43 are first layers and layers 42 and 44 are second layers.
After the composition mode of each layer to be composited is determined, surfaceflinger can hand the first layers to the GPU for composition, hand the second layers and the GPU-composited intermediate layer to the HWC for composition, and, after the HWC composites the final target interface, deliver it to the virtual display; the projection application then sends the content of the virtual display (i.e., the target interface) to the laptop for display on the laptop's screen.
Normally the number of layers to be composited is not excessive, so the number of second layers determined by surfaceflinger generally does not exceed the maximum number of layers the HWC supports compositing; the above examples are described on this basis. In some embodiments, if the number of second layers exceeds this maximum, the HWC can composite in multiple passes, i.e., composite one part of the second layers first, then another part, and then composite the per-pass results together; alternatively, some second layers can be adjusted to first layers and composited by the GPU — e.g., one or more second layers with less data can be adjusted to first layers so that the number of second layers does not exceed the maximum number the HWC supports compositing.
It should be noted that if the determined composition modes include first layers, and the HWC, when compositing, waits for the GPU to finish and then composites the GPU-composited intermediate layer together with the second layers, the intermediate layer occupies one hardware channel; the layer adjustment can then be performed when the number of second layers is greater than or equal to the maximum, so that the number of second layers is less than the maximum number the HWC supports compositing. The sketch below captures this adjustment.
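The following Java sketch captures both rules, with invented types: demote the smallest second layers to GPU composition until the remainder fits the channel budget, reserving one channel when the GPU's intermediate layer will also go through the HWC.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

final class HwcOverflowPolicy {
    enum Mode { GPU, HWC }

    static final class Layer {
        final long bytes;        // approximate data size of the layer
        Mode mode = Mode.HWC;
        Layer(long bytes) { this.bytes = bytes; }
    }

    // Demote the smallest HWC ("second") layers to GPU composition until they
    // fit the display controller's channel budget.
    static void demoteExcess(List<Layer> secondLayers, int maxHwcLayers, boolean hasGpuIntermediate) {
        // The GPU's intermediate layer occupies one hardware channel of its own.
        int budget = hasGpuIntermediate ? maxHwcLayers - 1 : maxHwcLayers;
        if (secondLayers.size() <= budget) {
            return; // already fits in one HWC composition pass
        }
        List<Layer> bySize = new ArrayList<>(secondLayers);
        bySize.sort(Comparator.comparingLong(l -> l.bytes)); // smallest first
        for (Layer l : bySize) {
            if (secondLayers.size() <= budget) break;
            l.mode = Mode.GPU;        // becomes a "first" layer
            secondLayers.remove(l);   // no longer composited directly by the HWC
        }
    }
}
```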
FIG. 9 is a schematic flowchart of a layer processing method provided by an embodiment of this application. As shown in FIG. 9, the layer processing method of this embodiment may include the following steps:
S110: determine the composition mode of a target layer among the layers to be composited as the HWC composition mode, and determine the composition modes of the other layers to be composited, where the target layer contains a surface view control with rounded corners and is drawn by a target application.
For example, as shown in FIG. 5, layer 14 (whose surfaceview control has been clipped with rounded corners) can be determined as the target layer and its composition mode set to the HWC composition mode; for the other layers, the composition mode can be determined from factors such as layer size and degree of content change.
As another example, as shown in FIG. 6, layer 24 drawn by a target application (whose surfaceview control has been clipped with rounded corners) can be determined as the target layer and composited in the HWC composition mode; layer 23 drawn by a non-target application (whose surfaceview control has also been clipped with rounded corners) can be composited in the GPU composition mode, to improve layer composition quality; for the other layers to be composited, the composition mode can be determined from factors such as layer size and content change.
The target application has the target flag, and/or the target application is in the predetermined application set; the electronic device can update the application set.
In some embodiments, if the number of second layers (whose composition mode is the HWC composition mode) among the layers to be composited is greater than or equal to the maximum number the HWC supports compositing, the HWC can, as described above, composite in multiple passes or adjust some second layers to first layers for composition by the GPU.
S120: composite each layer to be composited using its corresponding composition mode to obtain the target interface.
As shown in FIGS. 5 and 6 above, once the composition mode of each layer to be composited is determined, the first layers can be composited by the GPU, and the second layers together with the GPU-composited intermediate layer can be composited by the HWC, to obtain the final target interface, as sketched below.
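A compact sketch of this composition pass, with invented types standing in for the GPU, the HWC, and their buffers:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the composition pass in S120: the GPU merges the first layers into
// one intermediate layer, then the HWC merges that intermediate layer with the
// second layers into the final target interface.
final class CompositionPass {
    interface Gpu { Buffer composite(List<Buffer> layers); }
    interface Hwc { Buffer composite(List<Buffer> layers); }
    static final class Buffer { /* pixel storage elided */ }

    static Buffer compose(Gpu gpu, Hwc hwc, List<Buffer> firstLayers, List<Buffer> secondLayers) {
        List<Buffer> hwcInput = new ArrayList<>(secondLayers);
        if (!firstLayers.isEmpty()) {
            hwcInput.add(0, gpu.composite(firstLayers)); // the intermediate layer
        }
        return hwc.composite(hwcInput);                   // the target interface
    }
}
```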
S130: display the target interface.
As shown in FIGS. 5 and 6 above, after the target interface is composited, it can be delivered to the display screen for display.
FIG. 10 is a schematic flowchart of another layer processing method provided by an embodiment of this application. As shown in FIG. 10, the layer processing method of this embodiment may include the following steps:
S210: determine the composition mode of a target layer among the layers to be composited for a virtual display as the HWC composition mode, and determine the composition modes of the other layers to be composited, where the target layer is drawn by a target application.
For example, as shown in FIG. 7, all the layers to be composited for the virtual display can be determined as target layers, with their composition modes set to the HWC composition mode.
As another example, as shown in FIG. 8, for each layer to be composited for the virtual display, if it was drawn by a target application, the HWC composition mode or the GPU composition mode can be used; if it was drawn by a non-target application, the GPU composition mode can be used.
Likewise, if the number of second layers (whose composition mode is the HWC composition mode) among the layers to be composited is greater than or equal to the maximum number the HWC supports compositing, the HWC can composite in multiple passes or adjust some second layers to first layers for composition by the GPU.
S220: composite each layer to be composited using its corresponding composition mode to obtain the target interface.
As shown in FIG. 7 above, once the composition mode of each layer to be composited is determined, if the layers to be composited include no first layer, the second layers can be composited by the HWC to obtain the final target interface.
As shown in FIG. 8 above, once the composition mode of each layer to be composited is determined, if the layers to be composited include first layers, the first layers can be composited by the GPU, and the second layers together with the GPU-composited intermediate layer can be composited by the HWC, to obtain the final target interface.
S230: send the target interface to the destination projection device for display.
As shown in FIGS. 7 and 8 above, after the target interface is composited, it can be delivered to the virtual display, and the projection application then transmits the target interface in the virtual display to the destination projection device for display.
Those skilled in the art can understand that the above embodiments are exemplary and not intended to limit this application. Where possible, the execution order of one or more of the above steps can be adjusted, or the steps can be selectively combined, to obtain one or more other embodiments. Those skilled in the art can select and combine the above steps as needed; any combination that does not depart from the essence of the solution of this application falls within its protection scope.
The layer processing method of this embodiment optimizes the layer composition process: layers containing a surface view control that has been clipped with rounded corners, and/or layers corresponding to a virtual display, are composited in the HWC composition mode, which can reduce the power consumption of the electronic device and mitigate device heating and rapid battery drain.
Based on the same concept, as an implementation of the above methods, an embodiment of this application provides a layer processing apparatus. This apparatus embodiment corresponds to the foregoing method embodiments; for ease of reading, the details of the method embodiments are not repeated one by one, but it should be clear that the apparatus of this embodiment can correspondingly implement all the content of the foregoing method embodiments.
FIG. 11 is a schematic structural diagram of the layer processing apparatus provided by an embodiment of this application. As shown in FIG. 11, the apparatus of this embodiment includes:
a display module 210, an input module 220, a processing module 230, and a communication module 240.
The display module 210 is configured to support the electronic device in performing the interface display operations in the above embodiments and/or other processes of the technology described herein. The display module may be a touchscreen, other hardware, or a combination of hardware and software.
The input module 220 is configured to receive user input on the display interface of the electronic device, such as touch input, voice input, and gesture input, and to support the device in performing the steps of receiving user operations in the above embodiments and/or other processes of the technology described herein. The input module may be a touchscreen, other hardware, or a combination of hardware and software.
The processing module 230 is configured to support the electronic device in performing the processing operations in the method steps of the above embodiments and/or other processes of the technology described herein.
The communication module 240 is configured to support the electronic device in performing the operations related to the communication processes between the electronic device and other devices in the above embodiments and/or other processes of the technology described herein.
The apparatus of this embodiment can perform the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example. In practical applications, the above functions can be assigned to different functional units and modules as needed, i.e., the internal structure of the apparatus can be divided into different functional units or modules to complete all or some of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, may each exist physically alone, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for distinguishing them from each other and are not used to limit the protection scope of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Based on the same concept, an embodiment of this application further provides an electronic device; see FIG. 12, which is a schematic structural diagram of the electronic device provided by an embodiment of this application.
The electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device. In other embodiments of this application, the electronic device may include more or fewer components than shown, combine some components, split some components, or use a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the electronic device. The controller can generate operation control signals according to instruction operation codes and timing signals, to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can hold instructions or data the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can be called directly from this memory, avoiding repeated access, reducing the waiting time of the processor 110, and thus improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a serial clock line (SCL). The I2S interface can be used for audio communication. The PCM interface can also be used for audio communication, sampling, quantizing, and encoding analog signals. The UART interface is a universal serial data bus used for asynchronous communication; the bus can be a bidirectional communication bus that converts the data to be transmitted between serial and parallel communication. The MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193; the MIPI interface includes the camera serial interface (CSI), the display serial interface (DSI), etc. The GPIO interface can be configured through software, as a control signal or as a data signal. The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, etc. The USB interface 130 can be used to connect a charger to charge the electronic device, or to transfer data between the electronic device and peripheral devices; it can also be used to connect headphones to play audio, and to connect other electronic devices such as AR devices.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment are only schematic and do not constitute a structural limitation on the electronic device. In other embodiments of this application, the electronic device may also adopt interface connection manners different from those of the above embodiments, or a combination of multiple interface connection manners.
The charging management module 140 is configured to receive charging input from a charger, which may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 can receive the charging input of a wired charger through the USB interface 130; in some wireless charging embodiments, it can receive wireless charging input through the wireless charging coil of the electronic device. While charging the battery 142, the charging management module 140 can also supply power to the electronic device through the power management module 141.
The power management module 141 is configured to connect the battery 142, the charging management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, etc. The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle count, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be provided in the processor 110; in still other embodiments, the power management module 141 and the charging management module 140 may be provided in the same device.
The wireless communication function of the electronic device can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, etc.
Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device can be used to cover one or more communication frequency bands, and different antennas can be multiplexed to improve antenna utilization; for example, antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In some other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied to the electronic device. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc. It can receive electromagnetic waves via antenna 1, filter and amplify the received electromagnetic waves, and pass them to the modem processor for demodulation; it can also amplify signals modulated by the modem processor and radiate them via antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110, or may be provided in the same device as at least some modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator modulates low-frequency baseband signals to be sent into medium/high-frequency signals; the demodulator demodulates received electromagnetic wave signals into low-frequency baseband signals and then passes them to the baseband processor. After processing by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs sound signals through audio devices (not limited to the speaker 170A and the receiver 170B) or displays images or video through the display screen 194. In some embodiments, the modem processor may be an independent device; in other embodiments, it may be independent of the processor 110 and provided in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 can provide solutions for wireless communication applied to the electronic device, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), millimeter wave (mmWave), and ultra wide band (UWB) technology. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. It receives electromagnetic waves via antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110; it can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and radiate them via antenna 2.
In some embodiments, antenna 1 of the electronic device is coupled to the mobile communication module 150 and antenna 2 to the wireless communication module 160, so that the electronic device can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include GSM, GPRS, CDMA, WCDMA, TD-SCDMA, LTE, BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc. The GNSS may include the global positioning system (GPS), global navigation satellite system (GNSS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device implements display functions through the GPU, the display screen 194, the application processor, etc. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor, and performs mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, video, etc., and includes a display panel. The display panel may use an LCD, OLED, AMOLED, FLED, Mini LED, Micro LED, QLED, etc. In some embodiments, the electronic device may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device can implement shooting functions through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, etc.
The ISP is used to process the data fed back by the camera 193. The camera 193 is used to capture still images or video. The digital signal processor is used to process digital signals; besides digital image signals, it can also process other digital signals. The video codec is used to compress or decompress digital video.
The NPU is a neural-network (NN) computing processor that processes input information quickly by drawing on the structure of biological neural networks, e.g., the transfer mode between human brain neurons, and can also continuously self-learn. Intelligent cognition applications of the electronic device, such as image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The internal memory 121 can be used to store computer-executable program code, which includes instructions. The processor 110 executes the instructions stored in the internal memory 121 to perform the various functional applications and data processing of the electronic device. The internal memory 121 may include a program storage area, which can store the operating system and at least one application required for a function (such as a sound playback function or an image playback function), and a data storage area, which can store data created during the use of the electronic device (such as audio data and phone books). In addition, the internal memory 121 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or universal flash storage (UFS).
The external memory interface 120 can be used to connect an external memory, such as a Micro SD card, to expand the storage capacity of the electronic device. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, e.g., saving music and video files in the external memory card.
The electronic device can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, the application processor, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output, and is also used to convert analog audio input into digital audio signals; it can also be used to encode and decode audio signals. In some embodiments, the audio module 170, or some of its functional modules, may be provided in the processor 110. The speaker 170A, also called "loudspeaker", is used to convert audio electrical signals into sound signals. The receiver 170B, also called "earpiece", is used to convert audio electrical signals into sound signals. The microphone 170C, also called "mic", is used to convert sound signals into electrical signals. The headphone interface 170D is used to connect wired headphones and may be the USB interface 130, a 3.5mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The electronic device can also send ultrasonic waves through the speaker 170A and receive ultrasonic waves through the microphone 170C to implement ultrasonic technology.
The buttons 190 include a power button, volume buttons, etc., and may be mechanical buttons or touch buttons; the electronic device can receive button input and generate key signal input related to the user settings and function control of the electronic device. The motor 191 can generate vibration prompts, and can be used for incoming-call vibration prompts as well as touch vibration feedback. The indicator 192 may be an indicator light, which can be used to indicate charging status and battery level changes, or to indicate messages, missed calls, notifications, etc. The SIM card interface 195 is used to connect a SIM card; a SIM card can be inserted into or removed from the SIM card interface 195 to contact or separate from the electronic device. The electronic device can support 1 or N SIM card interfaces, N being a positive integer greater than 1, and the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
The electronic device of this embodiment can perform the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
An embodiment of this application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the methods described in the above method embodiments are implemented.
An embodiment of this application further provides a computer program product; when the computer program product runs on an electronic device, the electronic device is caused to implement the methods described in the above method embodiments.
An embodiment of this application further provides a chip system including a processor coupled to a memory; the processor executes a computer program stored in the memory to implement the methods described in the above method embodiments. The chip system may be a single chip or a chip module composed of multiple chips.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product, which includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted through a computer-readable storage medium; they can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, or tapes), optical media (e.g., DVDs), or semiconductor media (e.g., a solid state disk (SSD)), etc.
A person of ordinary skill in the art can understand that all or part of the processes in the above method embodiments may be completed by a computer program instructing related hardware; the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The aforementioned storage medium may include: ROM, random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.
The naming or numbering of steps in this application does not mean that the steps of the method flow must be executed in the temporal/logical order indicated by the naming or numbering; the named or numbered process steps may be executed in a different order according to the technical objective to be achieved, as long as the same or a similar technical effect is achieved.
The descriptions of the embodiments each have their own emphasis; for parts not detailed or recorded in one embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/device and method may be implemented in other manners. For example, the apparatus/device embodiments described above are merely illustrative: the division into modules or units is only a logical function division, and in actual implementation there may be other division manners, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
It should be understood that, in the description of this specification and the appended claims, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or modules is not necessarily limited to those steps or modules clearly listed, but may include other steps or modules not clearly listed or inherent to the process, method, product, or device.
In the description of this application, unless otherwise stated, "/" indicates an "or" relationship between the associated objects, e.g., A/B can mean A or B; "and/or" in this application merely describes an association relationship between associated objects, indicating that three relationships may exist, e.g., A and/or B can mean that A exists alone, both A and B exist, or B exists alone, where A and B can be singular or plural.
Moreover, in the description of this application, unless otherwise stated, "multiple" means two or more than two. "At least one of the following items" or similar expressions refer to any combination of these items, including any combination of single items or plural items; for example, at least one of a, b, or c can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can each be single or multiple.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
In addition, in the description of this specification and the appended claims, the terms "first", "second", "third", etc. are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described herein can be implemented in an order other than that illustrated or described herein.
Reference to "one embodiment" or "some embodiments" in this specification means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of this application. Thus, the statements "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", etc. appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless specifically emphasized otherwise.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some or all of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.

Claims (13)

  1. A layer processing method, characterized by comprising:
    determining a composition mode of each layer to be composited, wherein the composition mode of a target layer among the layers to be composited is the HWC composition mode, and the target layer is a layer corresponding to a virtual display, and/or the target layer contains a surface view control with rounded corners;
    compositing each of the layers to be composited using the composition mode corresponding to each of the layers to be composited, to obtain a target interface.
  2. The method according to claim 1, characterized in that the target layer is a layer drawn by a target application, and the target application supports the HWC composition mode.
  3. The method according to claim 2, characterized in that a target flag is set in the target application, the target flag being used to indicate that the target application supports the HWC composition mode.
  4. The method according to claim 2 or 3, characterized in that the target application is an application in a predetermined application set.
  5. The method according to claim 4, characterized in that the method further comprises:
    updating the application set.
  6. The method according to any one of claims 1-5, characterized in that the method further comprises:
    displaying the target interface.
  7. The method according to any one of claims 1-5, characterized in that the method further comprises:
    sending the target interface to a destination projection device.
  8. The method according to any one of claims 1-7, characterized in that compositing each of the layers to be composited using the composition mode corresponding to each of the layers to be composited, to obtain the target interface, comprises:
    when the layers to be composited comprise first layers, compositing the first layers by a GPU to obtain an intermediate layer;
    compositing the intermediate layer and each second layer among the layers to be composited by an HWC to obtain the target interface;
    when the layers to be composited comprise no first layer, compositing each of the layers to be composited by the HWC to obtain the target interface;
    wherein a first layer is a layer among the layers to be composited whose composition mode is the GPU composition mode, and a second layer is a layer among the layers to be composited whose composition mode is the HWC composition mode.
  9. The method according to any one of claims 1-8, characterized in that, after the determining of the composition mode of each layer to be composited and before the compositing of the layers using their corresponding composition modes, the method further comprises:
    when the number of second layers among the layers to be composited is greater than the maximum number of layers the HWC supports compositing, adjusting the composition mode of some of the second layers so that the number of second layers is less than or equal to the maximum number, wherein a second layer is a layer among the layers to be composited whose composition mode is the HWC composition mode.
  10. An electronic device, characterized by comprising: a memory and a processor, the memory being configured to store a computer program, and the processor being configured to, when calling the computer program, cause the electronic device to perform the method according to any one of claims 1-9.
  11. A computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the method according to any one of claims 1-9 is implemented.
  12. A computer program product, characterized in that, when the computer program product runs on an electronic device, the electronic device is caused to perform the method according to any one of claims 1-9.
  13. A chip system, characterized in that the chip system comprises a processor coupled to a memory, the processor executing a computer program stored in the memory to implement the method according to any one of claims 1-9.
PCT/CN2023/081564 2022-03-17 2023-03-15 图层处理方法和电子设备 WO2023174322A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210265367.0 2022-03-17
CN202210265367.0A CN116795197A (zh) 2022-03-17 2022-03-17 图层处理方法和电子设备

Publications (1)

Publication Number Publication Date
WO2023174322A1 true WO2023174322A1 (zh) 2023-09-21

Family

ID=88022374

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/081564 WO2023174322A1 (zh) 2022-03-17 2023-03-15 图层处理方法和电子设备

Country Status (2)

Country Link
CN (1) CN116795197A (zh)
WO (1) WO2023174322A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9317175B1 (en) * 2013-09-24 2016-04-19 Amazon Technologies, Inc. Integration of an independent three-dimensional rendering engine
CN110362186A (zh) * 2019-07-17 2019-10-22 Oppo广东移动通信有限公司 图层处理方法、装置、电子设备及计算机可读介质
CN110363831A (zh) * 2019-07-17 2019-10-22 Oppo广东移动通信有限公司 图层合成方法、装置、电子设备及存储介质
CN113986002A (zh) * 2021-12-31 2022-01-28 荣耀终端有限公司 帧处理方法、装置及存储介质
CN113986162A (zh) * 2021-09-22 2022-01-28 荣耀终端有限公司 图层合成方法、设备及计算机可读存储介质

Also Published As

Publication number Publication date
CN116795197A (zh) 2023-09-22

Similar Documents

Publication Publication Date Title
WO2020238774A1 (zh) 一种通知消息的预览方法及电子设备
JP7473101B2 (ja) アプリケーション表示方法及び電子デバイス
WO2021027747A1 (zh) 一种界面显示方法及设备
WO2021129326A1 (zh) 一种屏幕显示方法及电子设备
WO2020134872A1 (zh) 一种消息处理的方法、相关装置及系统
WO2021036770A1 (zh) 一种分屏处理方法及终端设备
WO2020093988A1 (zh) 一种图像处理方法及电子设备
CN113553014A (zh) 多窗口投屏场景下的应用界面显示方法及电子设备
EP4060475A1 (en) Multi-screen cooperation method and system, and electronic device
WO2022143082A1 (zh) 一种图像处理方法和电子设备
WO2020155875A1 (zh) 电子设备的显示方法、图形用户界面及电子设备
CN114489529A (zh) 电子设备的投屏方法及其介质和电子设备
WO2023005900A1 (zh) 一种投屏方法、电子设备及系统
WO2021042881A1 (zh) 消息通知方法及电子设备
WO2024001810A1 (zh) 设备交互方法、电子设备及计算机可读存储介质
WO2023174322A1 (zh) 图层处理方法和电子设备
CN114281440B (zh) 一种双系统中用户界面的显示方法及电子设备
WO2022237387A1 (zh) 一种界面显示方法、系统及电子设备
WO2024109481A1 (zh) 窗口控制方法及电子设备
WO2024032430A1 (zh) 管理内存的方法和电子设备
WO2024099212A1 (zh) 空间位置确定方法、系统及其设备
WO2023051036A1 (zh) 加载着色器的方法和装置
WO2022252805A1 (zh) 显示方法及电子设备
US12019942B2 (en) Multi-screen collaboration method and system, and electronic device
WO2024022211A1 (zh) 一种图像处理方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23769824

Country of ref document: EP

Kind code of ref document: A1