CN116700655A - Interface display method and electronic equipment

Info

Publication number: CN116700655A
Application number: CN202211145487.3A
Authority: CN (China)
Prior art keywords: attribute information, display, display direction, module, instruction
Legal status: Granted; currently Active
Inventors: 张威 (Zhang Wei), 夏兵 (Xia Bing)
Assignee (original and current): Honor Device Co Ltd
Other versions: CN116700655B (granted publication)
Original language: Chinese (zh)
Application filed by Honor Device Co Ltd; priority to CN202211145487.3A; publication of CN116700655A; application granted; publication of CN116700655B.

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/147: Digital output to display device using display panels
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces


Abstract

The application discloses an interface display method and an electronic device. The electronic device includes a graphics preprocessing module, a surface compositor, a hardware composer (HWC), and a display, where different display directions of the display correspond to different attribute information. The interface display method includes the following steps: the electronic device acquires a trigger event; the electronic device acquires, through the graphics preprocessing module, first attribute information for a second display direction; when the second display direction of the first attribute information differs from the default display direction, the electronic device generates, through the graphics preprocessing module, a preprocessing instruction based on the first attribute information, inserts the preprocessing instruction into a first drawing instruction to obtain a second drawing instruction, and draws based on the second drawing instruction to obtain a first image set; the electronic device renders the first image set to obtain a second image set; and the electronic device controls, through the surface compositor, the HWC to perform layer composition on the second image set. The embodiments of the application improve layer-processing efficiency and mitigate frame stutter and jitter.

Description

Interface display method and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an interface display method and an electronic device.
Background
Before a window interface is displayed on the screen of an electronic device, the frame to be displayed must first be drawn and rendered. During rendering, the image is rendered by the graphics processing unit (GPU); a hardware composer then composites the rendered images, and the display of the electronic device shows the composited image stored in a buffer. However, when the display direction of the screen changes, the efficiency of the drawing, rendering, and compositing processes decreases, which can cause jitter and frame stutter on the display interface.
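As context for this problem, the following minimal sketch illustrates the conventional composition decision the background describes; the types are illustrative only (not Android framework API), under the stated assumption that a leftover rotation transform forces a GPU fallback:

```java
// Illustrative sketch of the conventional composition decision: a layer whose
// buffer still carries a rotation transform cannot be scanned out directly by
// the hardware composer, so an extra GPU ("client") composition pass is used.
enum CompositionPath { HWC, GPU }

final class ConventionalCompositor {
    /** Chooses how a layer is composited, given its remaining rotation. */
    CompositionPath choosePath(int rotationDegrees) {
        // 0 degrees: the buffer already matches the panel orientation.
        // Any other angle: fall back to a GPU pass that rotates the buffer.
        return rotationDegrees == 0 ? CompositionPath.HWC : CompositionPath.GPU;
    }
}
```

That extra GPU pass on every frame during a direction switch is the overhead the method below removes.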
Disclosure of Invention
The embodiments of the present application disclose an interface display method and an electronic device, which can improve layer-composition efficiency and alleviate frame stutter and jitter in display.
In a first aspect, the present application provides an interface display method applied to an electronic device, where the electronic device includes a graphics preprocessing module, a graphics drawing module, a surface compositor, a hardware composer (HWC), and a display; the display supports different display directions, different display directions correspond to different attribute information, and the attribute information describes a display window. The method includes the following steps: the electronic device acquires a trigger event, where the trigger event indicates that the display direction of the electronic device is switched from a first display direction to a second display direction; the electronic device acquires, through the graphics preprocessing module, first attribute information for the second display direction; when the second display direction corresponding to the first attribute information differs from a default display direction, the electronic device generates, through the graphics preprocessing module, a preprocessing instruction based on the first attribute information and inserts the preprocessing instruction into a first drawing instruction to obtain a second drawing instruction, where the first drawing instruction instructs the graphics drawing module to draw; the electronic device draws, through the graphics drawing module, based on the second drawing instruction to obtain a first image set, where the first image set is the image set drawn for the display-direction switch; the electronic device renders the first image set through the graphics drawing module to obtain a second image set; the electronic device controls, through the surface compositor, the HWC to perform layer composition on the second image set to obtain a third image; and the electronic device displays the third image on the display.
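To make the claimed flow concrete, the following is a minimal sketch of the preprocessing step; GraphicsPreprocessor, DrawInstruction, and the rotate factory are hypothetical names introduced for illustration, not the patent's actual structures:

```java
// A minimal sketch of "generate a preprocessing instruction based on the first
// attribute information and insert it into the first drawing instruction".
import java.util.ArrayDeque;
import java.util.Deque;

final class GraphicsPreprocessor {
    private static final int DEFAULT_ROTATION_DEGREES = 0;

    /** Turns the first drawing instructions into the second drawing instructions. */
    Deque<DrawInstruction> preprocess(int targetRotationDegrees,
                                      Deque<DrawInstruction> firstInstructions) {
        if (targetRotationDegrees == DEFAULT_ROTATION_DEGREES) {
            return firstInstructions;  // default direction: nothing to insert
        }
        // The inserted instruction makes rotation (and any resizing or
        // repositioning) happen during drawing and rendering on the CPU/GPU,
        // so the rendered buffers need no further adjustment at composition time.
        Deque<DrawInstruction> second = new ArrayDeque<>(firstInstructions);
        second.addFirst(DrawInstruction.rotate(targetRotationDegrees));
        return second;
    }
}

/** Illustrative stand-in for one recorded drawing command. */
final class DrawInstruction {
    final String op;
    final int degrees;

    private DrawInstruction(String op, int degrees) {
        this.op = op;
        this.degrees = degrees;
    }

    static DrawInstruction rotate(int degrees) {
        return new DrawInstruction("rotate", degrees);
    }
}
```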
In the embodiments of the present application, when the display direction of the screen is switched, the electronic device completes rotation, resizing, repositioning, and similar processing on the GPU or CPU during drawing and rendering. This avoids the situation in which, after rendering is finished, SurfaceFlinger must hand to the GPU the specialized layer processing that the hardware composer cannot perform; it therefore improves the efficiency of drawing, rendering, and displaying images, reduces the probability of frame stutter on the interface, alleviates frame stutter and jitter in display, and improves user experience.
In one possible implementation, the electronic device includes a first application and a window manager (WMS), and the graphics drawing module includes a drawing management module and a rendering module. Acquiring the trigger event specifically includes: the electronic device acquires the trigger event through the first application. After the electronic device acquires the trigger event through the first application, the method further includes: the electronic device acquires the first attribute information based on the trigger event through the first application; and the electronic device sends the first attribute information to the WMS through the first application. Acquiring the first attribute information for the second display direction through the graphics preprocessing module specifically includes: the electronic device sends the first attribute information to the graphics preprocessing module through the WMS. In this way, the graphics preprocessing module can intercept the first attribute information that the WMS would otherwise send to SurfaceFlinger, making it convenient to change the first attribute information into the second attribute information during preprocessing; this ensures that SurfaceFlinger needs no further judgment, avoids layer composition on the GPU, improves the efficiency of drawing, rendering, and displaying images, and reduces the probability of frame stutter on the interface.
In a possible implementation, when the second display direction corresponding to the first attribute information differs from the default display direction, the method further includes: the electronic device sends second attribute information to the WMS through the graphics preprocessing module, where the display direction corresponding to the second attribute information is the same as the default display direction; and the electronic device sends the second attribute information to the surface compositor through the WMS. Controlling, through the surface compositor, the HWC to perform layer composition on the second image set to obtain a third image specifically includes: the electronic device controls, through the surface compositor, the HWC to perform layer composition on the second image set based on the second attribute information to obtain the third image. In this way, during preprocessing the electronic device sends SurfaceFlinger second attribute information whose display direction matches the default, ensuring that SurfaceFlinger does not invoke the GPU for layer composition and that composition is completed by the HWC, which improves the efficiency of drawing, rendering, and displaying images and reduces the probability of frame stutter on the interface.
In a possible implementation, controlling, through the surface compositor, the HWC to perform layer composition on the second image set based on the second attribute information to obtain a third image specifically includes: the electronic device judges, through the surface compositor, whether picture adjustment is needed based on the second attribute information; when the display direction corresponding to the second attribute information is the same as the default display direction, the electronic device determines through the surface compositor that no picture adjustment is needed; when the display direction corresponding to the second attribute information differs from the default display direction, the electronic device determines through the surface compositor that picture adjustment is needed; and, when no picture adjustment is needed, the electronic device performs layer composition on the second image set through the HWC to obtain the third image. Because the display direction of the second attribute information is the same as the default display direction, layer composition based on the second attribute information is completed by the HWC rather than the GPU, which improves the efficiency of drawing, rendering, and displaying images and reduces the probability of frame stutter on the interface.
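The compositor-side effect of reporting default-direction attributes can be sketched as follows (hypothetical names; the real SurfaceFlinger decision weighs many more factors):

```java
// Illustrative demo: the first attribute information carries the real target
// direction, while the second attribute information reported after
// preprocessing carries the default direction, which keeps composition on HWC.
final class CompositionDemo {
    static String path(int reportedRotationDegrees) {
        // The "picture adjustment needed?" check made by the surface compositor.
        return reportedRotationDegrees == 0 ? "HWC" : "GPU";
    }

    public static void main(String[] args) {
        int firstAttributeRotation = 90;   // real target direction
        int secondAttributeRotation = 0;   // what preprocessing reports instead

        System.out.println(path(firstAttributeRotation));   // GPU (conventional fallback)
        System.out.println(path(secondAttributeRotation));  // HWC (no picture adjustment)
    }
}
```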
In a possible implementation, before the electronic device generates, through the graphics preprocessing module, a preprocessing instruction based on the first attribute information and inserts the preprocessing instruction into a first drawing instruction to obtain a second drawing instruction, the method further includes: the electronic device sends the first drawing instruction to the drawing management module through the first application, and the first drawing instruction is stored in an instruction buffer; and the electronic device reads the first drawing instruction from the instruction buffer through the graphics preprocessing module. Inserting the preprocessing instruction into the first drawing instruction through the graphics preprocessing module to obtain the second drawing instruction specifically includes: the electronic device inserts the preprocessing instruction into the first drawing instruction through the graphics preprocessing module to obtain the second drawing instruction, and writes the second drawing instruction back into the instruction buffer; the electronic device then acquires the second drawing instruction through the drawing management module, and the second drawing instruction instructs the drawing management module to perform image drawing and preprocessing. In this way, the preprocessing instruction is inserted into the drawing instructions before the graphics are drawn and rendered, so rotation, resizing, repositioning, and similar operations are completed during drawing and rendering, and the preprocessing is finished in advance by the CPU or GPU; this ensures that the second image set received by SurfaceFlinger needs no further picture adjustment and that layer composition is completed by the HWC rather than the GPU, which improves the efficiency of drawing, rendering, and displaying images and reduces the probability of frame stutter on the interface.
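As a concrete analogue using real Android drawing APIs (the patent's instruction buffer is internal to its graphics preprocessing module, so this is only a sketch of the idea): prepending a rotate/translate to the Canvas before replaying the application's recorded drawing plays the role of the inserted preprocessing instruction, and leaves the rendered buffer already in the panel's native orientation.

```java
import android.graphics.Canvas;
import android.graphics.Picture;

final class PreRotatedRenderer {
    /**
     * Replays the app's recorded drawing with a 90-degree pre-rotation so the
     * resulting buffer already matches the panel orientation; contentHeightPx
     * is the height of the recorded content in pixels.
     */
    static void render(Canvas target, Picture recordedAppDrawing, int contentHeightPx) {
        target.save();
        target.rotate(90f);                      // the inserted "preprocessing instruction"
        target.translate(0f, -contentHeightPx);  // keep the rotated content on-screen
        recordedAppDrawing.draw(target);         // the original "first drawing instruction"
        target.restore();
    }
}
```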
In one possible implementation, after the electronic device acquires the first attribute information for the second display direction through the graphics preprocessing module, and when the first attribute information includes a rotation attribute, the method further includes: the electronic device judges, through the graphics preprocessing module and based on the default display direction, whether the second display direction corresponding to the rotation attribute in the first attribute information differs from the default display direction; when the rotation angle of the rotation attribute is not 0 degrees, the electronic device determines through the graphics preprocessing module that the second display direction differs from the default display direction; and when the rotation angle of the rotation attribute is 0 degrees, the electronic device determines through the graphics preprocessing module that the second display direction is the same as the default display direction. The rotation attribute describes the display angle of the layer, and a rotation angle of 0 degrees indicates that the screen is in the default display direction. Because the electronic device can judge directly from the rotation attribute whether preprocessing is needed, and the rotation attribute is directly related to the display direction, the need for preprocessing can be determined quickly and accurately, which improves judging efficiency.
In one possible implementation, the first attribute information includes a rotation attribute, a width-height attribute, and a position attribute; the rotation attribute describes the display angle of the layer, the width-height attribute describes the size of the layer, and the position attribute describes the order of the layers. Because the first attribute information contains exactly the attributes that change when the display direction is switched, the accuracy of the first attribute information and the second attribute information can be ensured, and so can the accuracy of the preprocessing.
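One possible shape for this attribute information is sketched below; the field names are illustrative, not the patent's actual data structures:

```java
// Illustrative container for the first/second attribute information:
// rotation (display angle), width/height (layer size), z (layer order).
final class LayerAttributes {
    final int rotationDegrees;  // 0 means the default display direction
    final int widthPx;          // width-height attribute: layer size
    final int heightPx;
    final int z;                // position attribute: stacking order of the layer

    LayerAttributes(int rotationDegrees, int widthPx, int heightPx, int z) {
        this.rotationDegrees = rotationDegrees;
        this.widthPx = widthPx;
        this.heightPx = heightPx;
        this.z = z;
    }

    /** The second attribute information reports the default display direction. */
    LayerAttributes asDefaultDirection() {
        return new LayerAttributes(0, widthPx, heightPx, z);
    }
}
```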
In one possible implementation, the screen of the electronic device supports at least two display directions, the trigger event includes a screen-display-direction switching event, and the switching event includes one or more of the following: the electronic device detects a change in its physical orientation, the electronic device detects an operation for switching the display direction, or the electronic device detects a change in the direction of a face relative to the screen. Because different electronic devices use different trigger events for switching the display direction, each device can detect the trigger events it requires, which keeps the scheme complete.
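As one concrete illustration of the first kind of switching event, the sketch below uses the real Android DisplayManager and Display.getRotation() APIs to observe a display-direction change; how such an event would reach the patent's graphics preprocessing module is an assumption, not something the claims specify.

```java
import android.content.Context;
import android.hardware.display.DisplayManager;
import android.view.Display;
import android.view.Surface;

final class RotationTrigger implements DisplayManager.DisplayListener {
    private final DisplayManager displayManager;
    private int lastRotation = Surface.ROTATION_0;

    RotationTrigger(Context context) {
        displayManager = (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
        displayManager.registerDisplayListener(this, /* handler= */ null);
    }

    @Override public void onDisplayAdded(int displayId) {}
    @Override public void onDisplayRemoved(int displayId) {}

    @Override
    public void onDisplayChanged(int displayId) {
        Display display = displayManager.getDisplay(displayId);
        if (display == null) {
            return;
        }
        int rotation = display.getRotation();  // Surface.ROTATION_0/90/180/270
        if (rotation != lastRotation) {
            lastRotation = rotation;
            // Trigger event: the display direction switched from the first
            // display direction to the second display direction.
        }
    }
}
```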
For the processing procedure of the electronic device in the foregoing embodiments, refer to the related descriptions of FIG. 7 and FIG. 9; details are not repeated here.
In a second aspect, the present application provides an electronic device, including: a graphics preprocessing module, a graphics drawing module, a surface compositor, one or more processors, and one or more memories. The one or more processors are coupled to the one or more memories, and the one or more memories store computer program code including computer instructions that, when executed by the one or more processors, cause the electronic device to perform: acquiring a trigger event, where the trigger event indicates that the display direction of the electronic device is switched from a first display direction to a second display direction; acquiring, through the graphics preprocessing module, first attribute information for the second display direction; when the second display direction corresponding to the first attribute information differs from a default display direction, generating, through the graphics preprocessing module, a preprocessing instruction based on the first attribute information and inserting the preprocessing instruction into a first drawing instruction to obtain a second drawing instruction, where the first drawing instruction instructs the graphics drawing module to draw; drawing, through the graphics drawing module, based on the second drawing instruction to obtain a first image set, where the first image set is the image set drawn for the display-direction switch; and rendering the first image set through the graphics drawing module to obtain a second image set.
In the embodiments of the present application, when the display direction of the screen is switched, the electronic device completes rotation, resizing, repositioning, and similar processing on the GPU or CPU during drawing and rendering. This avoids the situation in which, after rendering is finished, SurfaceFlinger must hand to the GPU the specialized layer processing that the hardware composer cannot perform; it therefore improves the efficiency of drawing, rendering, and displaying images, reduces the probability of frame stutter on the interface, alleviates frame stutter and jitter in display, and improves user experience.
In a possible implementation, the electronic device further includes an HWC and a display; the HWC is configured to perform layer composition on the second image set to obtain a third image, and the display is configured to display the third image.
In one possible implementation, the electronic device includes a first application and a window manager (WMS), and the graphics drawing module includes a drawing management module and a rendering module. Acquiring the trigger event is specifically performed as: acquiring the trigger event through the first application. After acquiring the trigger event through the first application, the electronic device further performs: acquiring the first attribute information based on the trigger event through the first application; and sending the first attribute information to the WMS through the first application. Acquiring the first attribute information for the second display direction through the graphics preprocessing module is specifically performed as: sending the first attribute information to the graphics preprocessing module through the WMS. In this way, the graphics preprocessing module can intercept the first attribute information that the WMS would otherwise send to SurfaceFlinger, making it convenient to change the first attribute information into the second attribute information during preprocessing; this ensures that SurfaceFlinger needs no further judgment, avoids layer composition on the GPU, improves the efficiency of drawing, rendering, and displaying images, and reduces the probability of frame stutter on the interface.
In one possible implementation, when the second display direction corresponding to the first attribute information differs from the default display direction, the electronic device further performs: sending second attribute information to the WMS through the graphics preprocessing module, where the display direction corresponding to the second attribute information is the same as the default display direction; and sending the second attribute information to the surface compositor through the WMS. Controlling, through the surface compositor, the HWC to perform layer composition on the second image set to obtain a third image specifically includes: controlling, through the surface compositor, the HWC to perform layer composition on the second image set based on the second attribute information to obtain the third image. In this way, during preprocessing the electronic device sends SurfaceFlinger second attribute information whose display direction matches the default, ensuring that SurfaceFlinger does not invoke the GPU for layer composition and that composition is completed by the HWC, which improves the efficiency of drawing, rendering, and displaying images and reduces the probability of frame stutter on the interface.
In a possible implementation, controlling, through the surface compositor, the HWC to perform layer composition on the second image set based on the second attribute information to obtain a third image is specifically performed as: judging, through the surface compositor, whether picture adjustment is needed based on the second attribute information; determining through the surface compositor that no picture adjustment is needed when the display direction corresponding to the second attribute information is the same as the default display direction; determining through the surface compositor that picture adjustment is needed when the display direction corresponding to the second attribute information differs from the default display direction; and, when no picture adjustment is needed, performing layer composition on the second image set through the HWC to obtain the third image. Because the display direction of the second attribute information is the same as the default display direction, layer composition based on the second attribute information is completed by the HWC rather than the GPU, which improves the efficiency of drawing, rendering, and displaying images and reduces the probability of frame stutter on the interface.
In a possible implementation, before generating, through the graphics preprocessing module, a preprocessing instruction based on the first attribute information and inserting the preprocessing instruction into a first drawing instruction to obtain a second drawing instruction, the electronic device further performs: sending the first drawing instruction to the drawing management module through the first application, where the first drawing instruction is stored in an instruction buffer; and reading the first drawing instruction from the instruction buffer through the graphics preprocessing module. Inserting the preprocessing instruction into the first drawing instruction through the graphics preprocessing module to obtain the second drawing instruction is specifically performed as: inserting the preprocessing instruction into the first drawing instruction through the graphics preprocessing module to obtain the second drawing instruction, and writing the second drawing instruction back into the instruction buffer; the second drawing instruction is then acquired through the drawing management module and instructs the drawing management module to perform image drawing and preprocessing. In this way, the preprocessing instruction is inserted into the drawing instructions before the graphics are drawn and rendered, so rotation, resizing, repositioning, and similar operations are completed during drawing and rendering, and the preprocessing is finished in advance by the CPU or GPU; this ensures that the second image set received by SurfaceFlinger needs no further picture adjustment and that layer composition is completed by the HWC rather than the GPU, which improves the efficiency of drawing, rendering, and displaying images and reduces the probability of frame stutter on the interface.
In one possible implementation, after acquiring the first attribute information for the second display direction through the graphics preprocessing module, and when the first attribute information includes a rotation attribute, the electronic device further performs: judging, through the graphics preprocessing module and based on the default display direction, whether the second display direction corresponding to the rotation attribute in the first attribute information differs from the default display direction; determining through the graphics preprocessing module that the second display direction differs from the default display direction when the rotation angle of the rotation attribute is not 0 degrees; and determining through the graphics preprocessing module that the second display direction is the same as the default display direction when the rotation angle of the rotation attribute is 0 degrees. The rotation attribute describes the display angle of the layer, and a rotation angle of 0 degrees indicates that the screen is in the default display direction. Because the electronic device can judge directly from the rotation attribute whether preprocessing is needed, and the rotation attribute is directly related to the display direction, the need for preprocessing can be determined quickly and accurately, which improves judging efficiency.
In one possible implementation, the first attribute information includes a rotation attribute, a width-height attribute, and a position attribute; the rotation attribute describes the display angle of the layer, the width-height attribute describes the size of the layer, and the position attribute describes the order of the layers. Because the first attribute information contains exactly the attributes that change when the display direction is switched, the accuracy of the first attribute information and the second attribute information can be ensured, and so can the accuracy of the preprocessing.
In one possible implementation, the screen of the electronic device supports at least two display directions, the trigger event includes a screen-display-direction switching event, and the switching event includes one or more of the following: the electronic device detects a change in its physical orientation, the electronic device detects an operation for switching the display direction, or the electronic device detects a change in the direction of a face relative to the screen. Because different electronic devices use different trigger events for switching the display direction, each device can detect the trigger events it requires, which keeps the scheme complete.
In a third aspect, the present application provides an electronic device including a graphics preprocessing module, a graphics drawing module, a surface compositor, a hardware composer (HWC), and a display, where: the electronic device acquires a trigger event, and the trigger event indicates that the display direction of the electronic device is switched from a first display direction to a second display direction; the graphics preprocessing module is configured to acquire first attribute information for the second display direction; the graphics preprocessing module is further configured to, when the second display direction corresponding to the first attribute information differs from a default display direction, generate a preprocessing instruction based on the first attribute information and insert the preprocessing instruction into a first drawing instruction to obtain a second drawing instruction, where the first drawing instruction instructs the graphics drawing module to draw; the graphics drawing module is configured to draw based on the second drawing instruction to obtain a first image set, where the first image set is the image set drawn for the display-direction switch; the graphics drawing module is further configured to render the first image set to obtain a second image set; the surface compositor is configured to control the HWC to perform layer composition on the second image set to obtain a third image; and the display is configured to display the third image.
In the embodiments of the present application, when the display direction of the screen is switched, the electronic device completes rotation, resizing, repositioning, and similar processing on the GPU or CPU during drawing and rendering. This avoids the situation in which, after rendering is finished, SurfaceFlinger must hand to the GPU the specialized layer processing that the hardware composer cannot perform; it therefore improves the efficiency of drawing, rendering, and displaying images, reduces the probability of frame stutter on the interface, alleviates frame stutter and jitter in display, and improves user experience.
In one possible implementation, the electronic device includes a first application and a window manager (WMS), and the graphics drawing module includes a drawing management module and a rendering module. The first application is configured to acquire the trigger event; the first application is further configured to acquire the first attribute information based on the trigger event; and the first application is further configured to send the first attribute information to the WMS. The graphics preprocessing module acquires the first attribute information for the second display direction in that the graphics preprocessing module is configured to receive the first attribute information from the WMS. In this way, the graphics preprocessing module can intercept the first attribute information that the WMS would otherwise send to SurfaceFlinger, making it convenient to change the first attribute information into the second attribute information during preprocessing; this ensures that SurfaceFlinger needs no further judgment, avoids layer composition on the GPU, improves the efficiency of drawing, rendering, and displaying images, and reduces the probability of frame stutter on the interface.
In a possible implementation, when the second display direction corresponding to the first attribute information differs from the default display direction: the graphics preprocessing module is further configured to send second attribute information to the WMS, where the display direction corresponding to the second attribute information is the same as the default display direction; the WMS is further configured to send the second attribute information to the surface compositor; and the surface compositor controlling the HWC to perform layer composition on the second image set to obtain a third image specifically means that the surface compositor is further configured to control the HWC to perform layer composition on the second image set based on the second attribute information to obtain the third image. In this way, during preprocessing the electronic device sends SurfaceFlinger second attribute information whose display direction matches the default, ensuring that SurfaceFlinger does not invoke the GPU for layer composition and that composition is completed by the HWC, which improves the efficiency of drawing, rendering, and displaying images and reduces the probability of frame stutter on the interface.
In a possible implementation, in controlling the HWC to perform layer composition on the second image set based on the second attribute information to obtain a third image: the surface compositor is configured to judge whether picture adjustment is needed based on the second attribute information, to determine that no picture adjustment is needed when the display direction corresponding to the second attribute information is the same as the default display direction, and to determine that picture adjustment is needed when the display direction corresponding to the second attribute information differs from the default display direction; and, when no picture adjustment is needed, the HWC is configured to perform layer composition on the second image set to obtain the third image. Because the display direction of the second attribute information is the same as the default display direction, layer composition based on the second attribute information is completed by the HWC rather than the GPU, which improves the efficiency of drawing, rendering, and displaying images and reduces the probability of frame stutter on the interface.
In one possible implementation, before the graphics preprocessing module generates a preprocessing instruction based on the first attribute information and inserts it into a first drawing instruction to obtain a second drawing instruction: the first application is further configured to send the first drawing instruction to the drawing management module, where the first drawing instruction is stored in an instruction buffer; and the graphics preprocessing module is further configured to read the first drawing instruction from the instruction buffer. Inserting the preprocessing instruction into the first drawing instruction to obtain the second drawing instruction specifically means that the graphics preprocessing module is configured to insert the preprocessing instruction into the first drawing instruction to obtain the second drawing instruction and to write the second drawing instruction back into the instruction buffer; the drawing management module is configured to acquire the second drawing instruction, and the second drawing instruction instructs the drawing management module to perform image drawing and preprocessing. In this way, the preprocessing instruction is inserted into the drawing instructions before the graphics are drawn and rendered, so rotation, resizing, repositioning, and similar operations are completed during drawing and rendering, and the preprocessing is finished in advance by the CPU or GPU; this ensures that the second image set received by SurfaceFlinger needs no further picture adjustment and that layer composition is completed by the HWC rather than the GPU, which improves the efficiency of drawing, rendering, and displaying images and reduces the probability of frame stutter on the interface.
In one possible implementation, after the graphics preprocessing module acquires the first attribute information for the second display direction, and when the first attribute information includes a rotation attribute: the graphics preprocessing module is further configured to judge, based on the default display direction, whether the second display direction corresponding to the rotation attribute in the first attribute information differs from the default display direction; the graphics preprocessing module is configured to determine that the second display direction differs from the default display direction when the rotation angle of the rotation attribute is not 0 degrees, and that the second display direction is the same as the default display direction when the rotation angle is 0 degrees. The rotation attribute describes the display angle of the layer, and a rotation angle of 0 degrees indicates that the screen is in the default display direction. Because the electronic device can judge directly from the rotation attribute whether preprocessing is needed, and the rotation attribute is directly related to the display direction, the need for preprocessing can be determined quickly and accurately, which improves judging efficiency.
In one possible implementation, the first attribute information includes a rotation attribute, a width-height attribute, and a position attribute; the rotation attribute describes the display angle of the layer, the width-height attribute describes the size of the layer, and the position attribute describes the order of the layers. Because the first attribute information contains exactly the attributes that change when the display direction is switched, the accuracy of the first attribute information and the second attribute information can be ensured, and so can the accuracy of the preprocessing.
In one possible implementation, the screen of the electronic device supports at least two display directions, the trigger event includes a screen-display-direction switching event, and the switching event includes one or more of the following: the electronic device detects a change in its physical orientation, the electronic device detects an operation for switching the display direction, or the electronic device detects a change in the direction of a face relative to the screen. Because different electronic devices use different trigger events for switching the display direction, each device can detect the trigger events it requires, which keeps the scheme complete.
In a fourth aspect, the present application provides an electronic device including one or more functional modules, where the one or more functional modules are configured to perform the interface display method in any possible implementation of the foregoing aspects.
In a fifth aspect, an embodiment of the present application provides a computer storage medium, including computer instructions, which when executed on an electronic device, cause the electronic device to perform the interface display method in any one of the possible implementation manners of the foregoing aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, which when run on a computer causes the computer to perform the interface display method in any one of the possible implementations of the above aspect.
Drawings
FIG. 1 is a schematic diagram of the hardware structure of an electronic device 100 according to an embodiment of this application;
FIG. 2 is a schematic diagram of the software structure of an electronic device 100 according to an embodiment of this application;
FIG. 3A to FIG. 3F are schematic diagrams of a set of folding-screen user interfaces according to an embodiment of this application;
FIG. 4 is a flowchart of an interface display method according to an embodiment of this application;
FIG. 5 is a schematic diagram of layer drawing and composition according to an embodiment of this application;
FIG. 6 is a schematic diagram of the software structure of another electronic device 100 according to an embodiment of this application;
FIG. 7 is a flowchart of another interface display method according to an embodiment of this application;
FIG. 8 is an interaction diagram of an image drawing module according to an embodiment of this application;
FIG. 9 is a flowchart of a graphics preprocessing method according to an embodiment of this application;
FIG. 10A and FIG. 10B are schematic diagrams of a set of interface display flow effects according to an embodiment of this application.
Detailed Description
In the embodiments of the present application, the words "first", "second", and the like are used to distinguish between identical or similar items that have substantially the same function and effect. For example, a first chip and a second chip are distinguished merely as different chips, and no order is implied between them. Those skilled in the art will appreciate that the words "first", "second", and the like do not limit quantity or execution order, and that items labeled "first" and "second" are not necessarily different.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to denote examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of" the following items means any combination of those items, including a single item or any combination of multiple items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be singular or plural.
The embodiment will be specifically described below taking the electronic device 100 as an example. It should be understood that the electronic device 100 shown in fig. 1 is only one example, and that the electronic device 100 may have more or fewer components than shown in fig. 1, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the electronic device 100. The controller can generate an operation control signal according to an instruction operation code and a timing signal, to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instruction or data again, it can be fetched directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be separately coupled to the touch sensor 180K, a charger, a flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface to implement the touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. It can also be used to connect a headset and play audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices via wireless communication technology. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a BeiDou navigation satellite system (beidou navigation satellite system, BDS), a quasi-zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 may include the inward-folding screen described above. In some embodiments, the display 194 includes the inward-folding screen and the C screen (secondary screen) described above.
The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened and light is transmitted through the lens to the camera photosensitive element, where the optical signal is converted into an electrical signal; the photosensitive element transmits the electrical signal to the ISP, which processes it and converts it into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin tone of the image, as well as parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are various types of pressure sensor 180A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of conductive material; when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but with different touch operation intensities may correspond to different operation instructions.
The gyro sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance the lens module needs to compensate according to the angle, and lets the lens counteract the shake of the electronic device 100 through reverse motion, thereby realizing anti-shake. The gyro sensor 180B may also be used in navigation and motion-sensing game scenarios.
In an embodiment of the present application, the display screen 194 of the electronic device 100 may include a plurality of screens that can be folded. A gyro sensor 180B may be provided in each screen for measuring the orientation of the corresponding screen (i.e., a direction vector or direction angle of the orientation). The electronic device 100 may determine the display direction of each screen according to the change in its orientation angle measured by the gyro sensor 180B. In addition, the camera can capture a face photo, and the display direction of the screen can be determined based on the face orientation.
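As a minimal sketch of this determination, the snippet below quantizes a measured orientation angle into one of the four standard display directions. The class and method names are illustrative assumptions, as is the premise that an angle in [0, 360) has already been derived from the gyro sensor data; only the Surface.ROTATION_* constants are standard Android.

```java
import android.view.Surface;

// Illustrative sketch (not from the patent): map a measured screen-orientation
// angle, assumed to be in [0, 360), to one of the four display directions.
final class DisplayDirectionResolver {
    static int resolve(float orientationDegrees) {
        // Quantize to the nearest quarter turn.
        int quadrant = Math.round(orientationDegrees / 90f) % 4;
        switch (quadrant) {
            case 1:  return Surface.ROTATION_90;
            case 2:  return Surface.ROTATION_180;
            case 3:  return Surface.ROTATION_270;
            default: return Surface.ROTATION_0;
        }
    }
}
```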
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 2 is a block diagram of a software architecture of an electronic device 100 according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor; the layers communicate with each other through software interfaces. In some embodiments, the Android system is divided, from top to bottom, into an application layer, an application framework layer, the Android runtime (Android Runtime) and system libraries, a hardware abstraction layer, a kernel layer, and a hardware layer.
An Application (APP) layer may include a series of application packages. As shown in fig. 2, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
An application Framework layer (Framework) provides an application programming interface (application programming interface, API) and programming Framework for the application of the application layer. The application framework layer includes a number of predefined functions. As shown in fig. 2, the framework layer may include a window manager (window manager service, WMS). Optionally, an activity manager (activity manager service, AMS), a content provider, a view system, a resource manager, a notification manager, etc. (not shown in the figures) may also be included.
The window manager is used for managing window programs. The window manager may obtain the display screen size, determine if there is a status bar, lock the screen, touch the screen, drag the screen, intercept the screen, etc.
In the embodiment of the present application, the window manager acts as the manager of views (View). When the current picture of an application needs to be displayed on the display screen, the application can issue a window task to the window manager; that is, the WMS receives window tasks from applications in the application layer. A window task may include attribute information of the display interface, and the WMS manages windows by acquiring the attribute information of the various aspects of the display interface, such as the direction, position, and size of the window. The WMS then passes this window attribute information to the SurfaceFlinger. A window (Window) is a container of views, and each window contains a display interface Surface.
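For illustration only, the sketch below shows how an application can hand a view together with such attribute information (size, position, window type) to the window manager. WindowManager.LayoutParams is the standard Android carrier for these attributes; the helper class and method are assumptions, and a real application would normally attach its views through an activity rather than raw addView calls.

```java
import android.content.Context;
import android.graphics.PixelFormat;
import android.view.Gravity;
import android.view.View;
import android.view.WindowManager;

final class WindowTaskSketch {
    // Hand a view plus layout attributes to the WindowManager, which
    // forwards the resulting window task to WMS.
    static void showWindow(Context context, View content) {
        WindowManager wm = context.getSystemService(WindowManager.class);
        WindowManager.LayoutParams lp = new WindowManager.LayoutParams(
                WindowManager.LayoutParams.WRAP_CONTENT,   // width attribute
                WindowManager.LayoutParams.WRAP_CONTENT,   // height attribute
                WindowManager.LayoutParams.TYPE_APPLICATION,
                WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE,
                PixelFormat.TRANSLUCENT);
        lp.gravity = Gravity.TOP | Gravity.START;          // position attribute
        wm.addView(content, lp);                           // window task reaches WMS
    }
}
```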
In addition, the content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The activity manager is used for managing activity-related transactions such as start, state, life cycle of an application. Wherein activity is an application component that can provide an interface through which a user can interact with an electronic device to perform a task.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, to give message alerts, and so on. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications in the form of a dialog window on the screen. For example, text information is prompted in the status bar, a prompt tone sounds, the electronic device vibrates, an indicator light blinks, and so on.
Android runtime (Android Runtime) includes a core library and virtual machines. The Android runtime is responsible for the scheduling and management of the Android system.

The core library consists of two parts: one part comprises the functional interfaces that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules, for example: a surface manager (surface manager), media libraries (Media Libraries), a graphics drawing module (hwui), and a surface mixer (SurfaceFlinger), among others.
The graphics drawing module is the module that draws and renders the image before the interface is displayed. It may include a drawing management module and a rendering module. The drawing management module can draw graphics on a canvas, i.e., drawing can be performed by the CPU; the rendering module can render the drawn image through the GPU.
The graphics drawing module may invoke a three-dimensional graphics processing library, a two-dimensional (2D) graphics engine, and the like. The three-dimensional graphics processing library may be the open graphics library (Open Graphics Library, OpenGL), used to implement image rendering and the like. The 2D graphics engine, for example SGL, is a drawing engine for 2D drawing. OpenGL is a specialized graphics program interface that defines a cross-language, cross-platform programming interface specification; it is used mainly for three-dimensional images (and can also handle two-dimensional images) and is a powerful, conveniently invoked low-level graphics library. OpenGL ES is a subset of the OpenGL three-dimensional graphics API designed for embedded devices such as mobile phones and game consoles.
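As a brief, hedged illustration of calling into OpenGL ES from Java, the minimal renderer below uses Android's standard GLSurfaceView API; it only clears the frame on each draw call, and the class name is an assumption.

```java
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

// Minimal OpenGL ES sketch: real rendering would issue draw commands
// between the clear and the buffer swap that GLSurfaceView performs.
final class ClearRenderer implements GLSurfaceView.Renderer {
    @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        GLES20.glClearColor(0f, 0f, 0f, 1f);
    }
    @Override public void onSurfaceChanged(GL10 gl, int width, int height) {
        GLES20.glViewport(0, 0, width, height);  // match the layer canvas size
    }
    @Override public void onDrawFrame(GL10 gl) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    }
}
```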
The surface mixer SurfaceFlinger is a system service responsible for processing and compositing upper-layer data and for interacting with the display screen. That is, it can composite images based on the information provided by the WMS and submit the composited result to the screen's buffer, where it waits for the display to map it onto the screen. SurfaceFlinger uses a hardware compositor (HardwareComposer, HWC) or the GPU to composite the display layers.
In the process of drawing, rendering, and compositing images, data can be exchanged through a buffer (Buffer). Illustratively, OpenGL places the data whose drawing and rendering is complete into a buffer queue (BufferQueue), and SurfaceFlinger takes the data from the BufferQueue, composites it, and places the result into a buffer for display.
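The following is a deliberately simplified, hypothetical Java model of that producer/consumer handoff. The real BufferQueue is a native Android component with a much richer dequeue/queue/acquire/release protocol; this sketch only captures the idea that the renderer queues finished frames and the compositor takes them.

```java
import java.util.concurrent.ArrayBlockingQueue;

// Hypothetical, simplified stand-in for the BufferQueue handoff.
final class FrameQueue {
    // Triple buffering is a common depth; the choice here is illustrative.
    private final ArrayBlockingQueue<int[]> filled = new ArrayBlockingQueue<>(3);

    void queueBuffer(int[] renderedPixels) throws InterruptedException {
        filled.put(renderedPixels);   // producer side: drawing/rendering (OpenGL)
    }

    int[] acquireBuffer() throws InterruptedException {
        return filled.take();         // consumer side: the compositor
    }
}
```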
In addition, the surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
Included in the hardware abstraction layer (hardware abstraction layer, HAL) are a hardware synthesizer (hardware composer, HWC) and a graphics memory allocator (Graphics memory allocator, gralloc) (not shown), among others. Wherein:
The HWC can composite the surfaces delivered by SurfaceFlinger onto the screen; for example, assuming the screen orientation of the electronic device is portrait, the status bar is composited at the top. The HWC is a hardware abstraction of the display controller system, and SurfaceFlinger delegates some composition work to the HWC to reduce the load on the GPU. This consumes less power than compositing with the GPU alone.
The kernel layer is a layer between the hardware layer and the software layers. The kernel layer may include a display driver and the like. The display driver can control the display screen so as to map the data composited in SurfaceFlinger onto the screen.
The hardware layer may include the CPU, the GPU, the display screen, and the like.
The graphics processing unit (Graphics Processing Unit, GPU) is mainly used to process graphics operations and is the core component of what is commonly called the graphics card. In the embodiment of the present application, if a device supports GPU hardware-accelerated rendering, then when an Android application calls the OpenGL interface to draw its UI, the UI of the Android application is rendered by the GPU through the hardware acceleration technique.
The CPU may execute the task instructions of the window manager, the drawing management module, and SurfaceFlinger.
For electronic devices with folding screens, the user may use the screen either horizontally or vertically when the folding screen is unfolded, so the user interface may need to be displayed in different directions of the screen. The following describes display scenarios in which the same display interface is displayed in multiple directions of the screen:
Fig. 3A-3F are schematic views of a user interface for a set of folding screens disclosed in an embodiment of the present application.
Referring to fig. 3A to 3C, a schematic product form of an electronic device 100 with an inward-folding screen according to an embodiment of the application is shown. Fig. 3A is a schematic view of the fully unfolded configuration of the inward-folding screen. The inward-folding screen may be folded along the folding edge in the directions 11a and 11b shown in fig. 3A to form screen A and screen B in the semi-folded configuration shown in fig. 3B. The inward-folding screen may continue to fold along the folding edge in the directions 33a and 33b shown in fig. 3B, resulting in the fully folded configuration of the folding screen shown in fig. 3C. As shown in fig. 3C, after the folding screen of the electronic device 100 is fully folded, screen A and screen B face each other and are invisible to the user.
It will be appreciated that for an electronic device 100 having such an internal folding screen, the interface may be displayed on the secondary screen when the folding screen is in the fully folded configuration; when the folding screen is in a semi-folding state, the interface can be displayed on the screen A, the screen B and the auxiliary screen; when the folding screen is in an unfolding state, an interface can be displayed on the screen A and the screen B.
Referring to fig. 3D, a schematic product form of an electronic device 100 with an outward-folding screen according to an embodiment of the application is shown. Fig. 3A is a schematic view of the fully unfolded outward-folding screen. The outward-folding screen can be folded outward along the folding edge, i.e., in the directions 22a and 22b shown in fig. 3A. As shown in fig. 3D, after the folding screen of the electronic device 100 is fully folded, screen A and screen B are on the front and back sides and visible to the user.
It will be appreciated that for an electronic device 100 having such an external folding screen, the interface may be displayed on either the A-screen or the B-screen when the folding screen is in the fully folded configuration; the interface may be displayed on the A-screen and the B-screen when the folding screen is in the semi-folded configuration, or when the folding screen is in the unfolded configuration.
When the folding screen is in the unfolded state, the display direction of the screen can change with the placement direction of the electronic device. For example, the screen of the electronic device may be rectangular and support display in three of the four directions of the rectangle, or in all four directions; it can display in either landscape or portrait. When the placement direction of the electronic device changes, its display direction can be switched. Illustratively, the user taps the gallery application of the electronic device and enters the gallery display interface. As shown in fig. 3E, when the user holds the electronic device in landscape, the user interface 510 can be displayed on the screen; as shown in fig. 3F, when the user holds the electronic device in portrait, the user interface 520 can be displayed in portrait. Whether the display switches from landscape to portrait, from portrait to landscape, or from one landscape direction to another, the display direction, the size of the picture, and the position of the picture may all change. As shown in fig. 3E and 3F, the displayed content in the two directions is identical, but because the display direction of the screen is switched, the direction, size, position, and so on of the picture are all adjusted.
In the case where the display direction of the electronic device can be adjusted by rotation, fig. 4 is a schematic flowchart of an interface display method according to an embodiment of the present application. As shown in fig. 4, the interface display method may include, but is not limited to, the following steps:
In the embodiment of the present application, the application layer of the electronic device can comprise a first application, the framework layer comprises the WMS, the system library comprises the graphics drawing module and SurfaceFlinger, the hardware abstraction layer comprises the hardware synthesizer (HWC), the kernel layer comprises the display driver, and the hardware layer comprises the GPU and the display. For a specific description, reference may be made to the related description of fig. 2, which is not repeated.
S401: the first application obtains a first window display request.
The electronic device may obtain a first window (window) display request, after which the first application may create a window based on the first window display request. A window can be understood as an abstract container, and the views in the window can be displayed on the screen. Specifically, when the first application receives the first window display request, an activity component is started; the activity component can create a window and add views into the window.
In the embodiment of the present application, the first application may be an application program installed on the electronic device, for example a camera, a gallery, and the like, without limitation.
S402: the first application sends a first window task to a Window Manager (WMS).
After the first application creates the first window, it may send a first window task to the window manager. Correspondingly, the window manager may receive the first window task from the first application. The first window task includes attribute information for creating the window, which can describe the display window, e.g., its size, direction, and display hierarchy. After the activity of the first application creates a window, it may access the WMS through a session (the first window task).
When the attribute information is updated (changes), the first application is triggered to send a first window task to the WMS. In the process of generating the window picture, the attribute information of the window may change or remain unchanged. If the attribute information is unchanged, the window manager can generate the display picture using the attribute information determined last time; if it changes, the window manager updates the attribute information to generate the display picture, ensuring that the display can accommodate the change and the demand.
S403: the window manager obtains attribute information based on the first window task.
After the window manager receives the first window task, the attribute information for creating the window may be determined based on the first window task. The attribute information of the window includes a rotation attribute, a position attribute, a width-height attribute, and the like. The rotation attribute represents the display direction of the layer (window); the position attribute describes the stacking order of the layers; the width-height attribute describes the size of the layer canvas.
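A hypothetical container for this attribute information might look as follows; the class and field names are illustrative assumptions, not the framework's actual representation.

```java
import android.graphics.Rect;

// Hypothetical sketch of the window attribute information described above.
final class WindowAttributeInfo {
    int rotationDegrees;   // rotation attribute: 0, 90, 180 or 270
    int zOrder;            // position attribute: stacking order of the layer
    Rect bounds;           // width-height attribute: size of the layer canvas

    boolean needsRotation() {
        return rotationDegrees != 0;   // 0 degrees = the default display direction
    }
}
```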
S404: the window manager sends attribute information to a surface mixer (surfeflinger).
After the window manager obtains the attribute information, it may send the attribute information to SurfaceFlinger. Correspondingly, SurfaceFlinger may receive the attribute information from the window manager.
S405: the first application sends a drawing instruction to the graphics drawing module.
The first application may send drawing instructions to the graphics drawing module. Correspondingly, the graphics drawing module may receive the drawing instructions from the first application. The drawing instructions instruct the graphics drawing module to draw on a canvas (canvas).
When the display of the electronic device displays the window interface of the first application, the window interface is displayed according to a specific screen refresh rate, for example a refresh rate (frames per second, FPS) of 120, 90, or 60. Based on this display rhythm of the electronic device, the first application sends drawing instructions periodically, at the pace of the screen refresh rate.
The order of execution of S402 and S405 is not limited.
S406: the image drawing module draws based on the drawing instruction to obtain a first image set.
After the image drawing module receives the drawing instruction, it can draw based on the instruction to obtain the first image set. The first image set comprises the images of the N layers drawn by the image drawing module, where N is a positive integer.
Further, the graphics drawing module may create a canvas. Illustratively, when the user touches the screen, the motion position of each frame of the animation is calculated from the touch event; each view and its size are obtained and saved, and the display position of each control is determined. Once the display positions of the controls are determined, the layers in the application window are drawn on a canvas (canvas), with drawing instructions constructed accordingly. The drawing management module of the graphics drawing module may then draw based on the drawing instructions.
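As a small illustration of this drawing stage, the invented view below fills pixels on the Canvas the framework hands it; hwui records such calls into a display list before rendering.

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

// Illustrative only: a view whose onDraw fills its layer on the canvas.
final class BadgeView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    BadgeView(Context context) {
        super(context);
        paint.setColor(Color.BLUE);
    }

    @Override protected void onDraw(Canvas canvas) {
        // Position and size come from the measured layout (the view's bounds).
        canvas.drawRoundRect(0, 0, getWidth(), getHeight(), 16f, 16f, paint);
    }
}
```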
The interface drawing process mainly draws the layers of the display interface; the essence of interface drawing is filling pixels. Specifically, fig. 5 is a schematic diagram of drawing and compositing layers according to an embodiment of the present application. As shown in fig. 5, taking the Android main interface as an example, interface drawing may include the drawing of 4 layers: the status bar, the navigation bar, the application interface, and the launcher icon layer. The application interface may be provided by an application service, the launcher icon layer may be drawn by the launcher application, and the remaining layers may be completed by the system UI (user interface) thread.
S407: the image rendering module sends rendering instructions to the GPU.
After the image drawing module finishes drawing and obtains the first image set, it can generate a rendering instruction and send it to the GPU. Correspondingly, the GPU receives the rendering instruction from the image drawing module.

S408: The GPU renders the first image set based on the rendering instruction to obtain a second image set.
The first image set comprises the drawn images of the N layers; the second image set comprises the corresponding rendered images, i.e., the images of the N layers after rendering.
After the GPU receives the rendering instruction, rendering processing can be performed on the first image set, and a rendered second image set is obtained.
Specifically, in the rendering process the GPU adjusts the drawn layers, e.g., their brightness, contrast, and saturation. Illustratively, the GPU performs contrast processing on a layer and caches the second image set resulting from the contrast operation in a buffer. The rendering process does not change the state of the original layers.
Optionally, during rendering, the rendering module in the image drawing module may call OpenGL to control the GPU to perform the rendering processing.
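Purely to illustrate the kind of adjustment described (here, contrast), the sketch below expresses it with Android's color-matrix API. The helper name is an assumption, and the real GPU path would apply such a transform in shaders rather than through a Paint.

```java
import android.graphics.ColorMatrix;
import android.graphics.ColorMatrixColorFilter;
import android.graphics.Paint;

final class RenderAdjustSketch {
    // Build a Paint whose color filter scales contrast around mid-gray.
    static Paint contrastPaint(float contrast) {
        float translate = (1f - contrast) * 128f;   // keep mid-gray fixed
        ColorMatrix cm = new ColorMatrix(new float[] {
                contrast, 0, 0, 0, translate,
                0, contrast, 0, 0, translate,
                0, 0, contrast, 0, translate,
                0, 0, 0, 1, 0});
        Paint p = new Paint();
        p.setColorFilter(new ColorMatrixColorFilter(cm));
        return p;
    }
}
```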
S409: the GPU sends a second image set to the SurfaceFinger.
After the GPU renders the image, it may send the second image set to SurfaceFlinger. Correspondingly, SurfaceFlinger may receive the second image set from the GPU.
In S405 to S409 above, the electronic device draws and renders to obtain the second image set; in S410 to S417, SurfaceFlinger controls the GPU or the HWC to composite the third image, which is then cached.
S410: the SurfaceFlinger determines whether or not picture adjustment is necessary based on the attribute information. If necessary, S411 is performed; otherwise, S414 is performed.
If no picture adjustment is needed, SurfaceFlinger instructs the hardware synthesizer to perform layer composition on the second image set to obtain a third image.
The third image is the image obtained by the hardware synthesizer combining the rendered layers (the second image set).
The layer composition process is mainly one in which SurfaceFlinger instructs the hardware synthesizer to stack and combine the rendered layers (the second image set) in the proper overlap order to obtain the third image. For example, the rendered image set acquired from the buffer is subjected to layer composition: the layers are overlapped and combined according to the layer-order requirements in the attribute information to obtain a complete image frame (the third image).
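A simplified software model of this stacking step is sketched below. The HWC performs it in dedicated hardware; the bitmap loop here only illustrates the z-order idea, and all names are ours.

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import java.util.List;

final class LayerComposerSketch {
    // Stack rendered layers in z-order (back to front) onto one frame.
    static Bitmap composeFrame(List<Bitmap> layersInZOrder, int width, int height) {
        Bitmap frame = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(frame);
        for (Bitmap layer : layersInZOrder) {  // e.g. app interface, status bar...
            canvas.drawBitmap(layer, 0, 0, null);
        }
        return frame;                          // the complete image frame
    }
}
```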
S411: the SurfaceFlinger sends a first layer composition instruction to the GPU.
SurfaceFlinger may send a first layer composition instruction to the GPU. Correspondingly, the GPU may receive the first layer composition instruction from SurfaceFlinger. The first layer composition instruction instructs the GPU to composite the different layers.
Illustratively, as shown in fig. 5, the rendered status bar, navigation bar, application interface, and launcher icon layers are overlapped and combined in order to obtain the composited image frame.
S412: and the GPU adjusts the picture of the second image set based on the attribute information, and synthesizes the layers of the adjusted image set to obtain a third image.
S413: the GPU sends the third image to the display driver.
The GPU may send the third image to the display driver. Correspondingly, the display driver receives a third image from the GPU.
S411 to S413 are the flow executed when SurfaceFlinger determines that picture adjustment is needed.
S414: surfaceFlinger sends a second layer composition instruction to the HWC.
SurfaceFlinger may send a second layer composition instruction to the HWC. Correspondingly, the HWC may receive the second layer composition instruction from SurfaceFlinger; the second layer composition instruction instructs the HWC to composite the different layers.
S415: the HWC performs layer combination processing on the second image set to obtain a third image.
The HWC directly performs layer composition on the second image set to obtain the third image.
The image synthesis process may refer to the description of S412, and is not described in detail.
S416: the hardware compositor sends a third image to the display driver.
After the hardware synthesizer completes layer composition, it may send the composited third image to the display driver. Correspondingly, the display driver may receive the third image from the hardware synthesizer.
Here, S414 to S416 are the flow executed when SurfaceFlinger does not need to perform picture adjustment.
S417: the display driver buffers the third image.
After the display driver receives the third image, the third image may be buffered in a buffer FrameBuffer.
S418: the display drive controls the display screen to display the third image.
The display driver may control the display screen to sequentially display the third images in the buffer. The sending and displaying process means that the electronic device can call the display drive to send the synthesized image to the display screen for display according to the specified display area.
For example, the interface drawing, interface rendering, interface composition, and send-for-display processes may all be triggered by a vertical synchronization (vertical synchronization, VSYNC) signal. The trigger period of the VSYNC signal corresponds to the refresh rate of the screen.
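On Android, per-frame work can be paced off the VSYNC signal through the standard Choreographer API, as the minimal sketch below shows; the wrapper class and method names are assumptions.

```java
import android.view.Choreographer;

final class FrameSchedulerSketch {
    // Run one callback per display refresh (i.e., per VSYNC).
    static void scheduleFrameWork() {
        Choreographer.getInstance().postFrameCallback(new Choreographer.FrameCallback() {
            @Override public void doFrame(long frameTimeNanos) {
                // Draw/render/compose the next frame here, then re-arm
                // for the following VSYNC.
                Choreographer.getInstance().postFrameCallback(this);
            }
        });
    }
}
```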
Therefore, when the screen of the electronic device displays in different directions, in S410 to S414 of fig. 4, before SurfaceFlinger instructs the hardware synthesizer to perform layer composition on the second image set based on the second image set and the attribute information, it must be determined whether the attribute information indicates that the second image set requires processing such as rotation (S410, whether picture adjustment is required). If rotation is required, the electronic device rotates the second image set through the GPU, adjusts the picture size, icon positions, and the like, and then performs layer composition.
The hardware synthesizer is a processor specialized for layer composition. A specialized hardware processor is fast and efficient when processing its dedicated task; conversely, it is slower and less efficient at other tasks. In the above process, when the display direction of the screen changes, the direction, size, position, and so on of the picture change, and SurfaceFlinger calls the GPU to process the direction, size, position, and the like and to perform layer composition. Once the display direction switches frequently, a large portion of the layers are processed by the GPU. The GPU is not good at layer composition and is much slower at it than the hardware synthesizer, so layer composition becomes inefficient, which easily causes the displayed image frames to stutter.
In view of the above, the present application provides an interface display method in which the electronic device can determine in advance, through the image drawing module, whether the displayed image needs to be rotated. When rotation is needed, the electronic device preprocesses the positions, directions, sizes, and so on of the images of the different layers during drawing and rendering, which avoids SurfaceFlinger having to perform these operations afterwards. The hardware synthesizer can then process its own specialized task, improving processing efficiency and alleviating frame stutter or jitter during display.
In the scenario where the folding screen is unfolded and the screen displays images in different display directions, the displayed direction, size, and position differ, so the images need to be processed in advance before being sent for display.
Fig. 6 is a schematic diagram of a software architecture of another electronic device 100 according to an embodiment of the present application.
As shown in fig. 6, compared with the software architecture shown in fig. 2, a graphics preprocessing module is added to the graphics drawing module in the embodiment of the present application. When the attribute information changes, the graphics preprocessing module can determine, based on the attribute information intercepted from the window manager and the preset attributes within it, whether preprocessing is required. When preprocessing is required, the graphics preprocessing module adjusts the preset attributes in the attribute information to the default attributes and returns them to the window manager (i.e., modifies the attribute information). Further, the graphics preprocessing module may generate preprocessing instructions based on the attribute information and intervene so that the drawing management module executes the newly added preprocessing instructions according to the specific drawing situation, allowing the layers to be processed in advance (preprocessed). Preprocessing refers to adjusting the direction, size, and position of a layer (the same processing as the picture adjustment in fig. 4). The preset attributes may include at least one of the rotation attribute, the width-height attribute, and the position attribute, and the corresponding preprocessing instructions may include at least one of a rotation instruction, a width-height instruction, and a position instruction.
In an exemplary embodiment, after the image preprocessing module intercepts the first attribute information from the WMS, it may determine, based on the first attribute information, whether the image currently to be displayed needs to be rotated. If rotation is required, the image preprocessing module may add a rotation instruction to the received first drawing instruction to obtain the second drawing instruction, and the drawing management module then draws according to the second drawing instruction. In this way, the drawing and rendering process is intervened in, and the electronic device processes each layer in advance, e.g., rotation, size adjustment, or position adjustment. The image composited by SurfaceFlinger is then already preprocessed and needs no further processing, so the hardware synthesizer directly stacks the layers.
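A hypothetical sketch of that idea follows, reusing the WindowAttributeInfo sketch from earlier: when the intercepted attribute information calls for rotation, a rotate step is prepended to the original drawing so that the composited layers need no further GPU adjustment. None of these names come from the patent.

```java
import android.graphics.Canvas;

final class PreprocessSketch {
    // Wrap the original drawing with an injected "preprocessing instruction".
    static void drawPreprocessed(Canvas canvas, WindowAttributeInfo attrs,
                                 Runnable originalDrawOps) {
        canvas.save();
        if (attrs.needsRotation()) {
            // Injected rotation step: rotate about the canvas center.
            canvas.rotate(attrs.rotationDegrees,
                    canvas.getWidth() / 2f, canvas.getHeight() / 2f);
        }
        originalDrawOps.run();   // the original first drawing instruction
        canvas.restore();
    }
}
```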
The description of other parts in fig. 6 may refer to the related description in fig. 2, which is not repeated.
When the screen display of the electronic device is in a changeable state, the electronic device can enable the method and functions of fig. 7; that is, the graphics preprocessing module works when the display state is changeable and does not work when it is not. A changeable display state is one in which the display of the display screen can change, for example, the folding screen is in the unfolded state and the display direction of the electronic device can be switched, or another electronic device has enabled the screen-rotation function; the present application is not limited in this respect.
FIG. 7 is a flow chart of another method for displaying an interface according to an embodiment of the present application. As shown in fig. 7, the method may include, but is not limited to, the following steps:

The electronic device may include the various modules and components of fig. 6. For descriptions of the respective modules and devices, reference may be made to the related descriptions in fig. 6 and fig. 2, which are not repeated.
S701: the first application obtains a first window display request.
The description of S701 may refer to the above description of S401 specifically, and is not repeated.
S702: the first application sends a first window task to the window manager.
The description of S702 may refer specifically to the description related to S402.
When the attribute information is updated (changes), the first application is triggered to send a first window task to the WMS, and the first window task carries the first attribute information. In the process of generating the window picture, the attribute information of the window may change or remain unchanged. If the attribute information is unchanged, the window manager can generate the display picture using the attribute information determined last time; if it changes, the window manager updates the attribute information to generate the display picture, ensuring that the display can accommodate the display changes and requirements.
The first application may acquire a trigger event, where the trigger event indicates that the display direction of the electronic device is switched from the first display direction to the second display direction; it can be understood that the trigger event triggers the display direction switch. After the first application of the electronic device acquires the trigger event, attribute information may be generated (updated) based on the trigger event, a first window task may be generated based on the attribute information, and the first window task may then be issued to the window manager. The first application may determine the corresponding attribute information based on the acquired trigger event and the mapping relationship between trigger events and attribute information (or based on the acquired second display direction and the mapping relationship between display directions and attribute information). Different trigger events can change different attributes in the attribute information; the application does not limit this mapping relationship. In addition, the trigger event is used to trigger the update of the attribute information.
When the first application acquires the trigger event, the electronic device can determine, based on the trigger event, that the current display direction changes. The display has multiple display directions; different display directions indicate different orientations of the display screen, and the attribute information corresponding to different display directions differs. The attribute information is used to describe the display window. The display screen of the electronic device includes at least two display directions; "display screen" and "display" have the same meaning here.
Specifically, the first application of the electronic device judges whether a trigger event is acquired; if the first application acquires a trigger event, it can send a first window task to the WMS; otherwise, it does not. The trigger event may include a screen display direction switching event, a split-screen operation, and the like. The screen display direction switching event may include one or more of: 1. the electronic device detects that its placement direction changes; 2. the electronic device detects an operation for switching the display direction; 3. the electronic device detects that the direction of the face relative to the screen changes; the embodiment of the present application is not limited in this respect. The trigger event may cause at least one attribute in the attribute information to change. The trigger event and the changed attribute information have a specific mapping relationship, which the application does not limit.
In a possible implementation, a screen display direction switching event triggers the update of the attribute information (the trigger event is a screen display direction switch). When the screen of the folding-screen mobile phone is unfolded, the electronic device can display horizontally or vertically, so its display direction can be switched from landscape to portrait, from portrait to landscape, or from landscape direction 1 to landscape direction 2, and vice versa. The electronic device judges whether the display direction is switched. For example, the previous display direction was landscape (fig. 3E); the user changes the angle, standing the device up, and the display direction becomes portrait (fig. 3F). Further, the change of the screen display direction triggers changes in the rotation attribute, size attribute, and position attribute in the attribute information.
Alternatively, the electronic device may determine whether the display direction has changed through the gyroscope. The gyroscope determines the angle (direction) at which the device is placed, so the gyroscope driver can determine the display direction of the electronic device based on the measured angle. The last display direction is compared with the display direction corresponding to the current angle; if they differ, it is judged that the screen display direction is switched, i.e., the electronic device detects a screen display direction switching event.
Optionally, when the user touches a control for switching the display direction, the electronic device may acquire a screen display direction switching event in response to the above operation. That is, when the electronic device detects that the touch operation of the current user is an operation of changing the display direction of the screen, it can determine to switch the display direction.
Alternatively, the electronic device may determine that the display direction of the screen is switched when it detects that the direction of the face relative to the screen changes. The electronic device can capture a face picture through the front camera; if the face direction in the picture is the same as the screen display direction, it determines that the screen display direction is not switched; if they differ, the screen display direction is switched.
In another possible implementation, a split-screen operation triggers the update of the attribute information (the trigger event is a screen split-screen operation). When the screen of the folding-screen mobile phone is unfolded, the electronic device can display full-screen or split-screen, so it can switch from full-screen display to split-screen display, or from split-screen display to full-screen display. The electronic device judges whether the screen is split: it judges whether the user's touch operation acts on splitting the screen or returning to full-screen display, and determines that a trigger event is generated when the touch operation is a split-screen operation or an operation returning to full-screen display. The split-screen operation triggers a change of the size attribute in the attribute information.
Further, the specific execution of the first application's tasks may be performed by the CPU.
S703: the window manager obtains first attribute information based on the first window task.
The first attribute information may include one or more of a rotation attribute, a position attribute, a width-height attribute, and the like.
In addition, when the same content is displayed horizontally and vertically, the electronic device may need not only to rotate the user interface but also to adjust the width and height of the interface, the position of the layers, and so on. Therefore, the rotation attribute, width-height attribute, and position attribute in the first attribute information can instruct the electronic device to rotate the display direction of a layer, enlarge or reduce the size of the layer, adjust the position of the layer correspondingly, and the like. The first attribute information may further include other related attribute information, which is not limited.
Further, specific execution of HWC processing tasks may be performed by the CPU.
The description of S703 may refer to the above description of S403, which is not repeated.
In S704-S706, before the WMS issues the first attribute information to the SurfaceFlinger, the graphics drawing module may intercept the first attribute information and determine whether to modify the attribute information therein, and when modification is required, modify the first attribute information and return the first attribute information to the WMS.
Fig. 8 is an interaction schematic diagram of an image drawing module according to an embodiment of the present application. As shown in fig. 8, the image drawing module may include a graphic preprocessing module and a drawing rendering module including a drawing management module and a rendering module. The drawing management module can draw graphs of different layers on the canvas, and the rendering module can render the drawn images based on rendering instructions. The graphics preprocessing module may include an attribute management module and an instruction management module. Wherein the attribute management module may receive and adjust attribute information of the window manager.
S704: the window manager sends the first attribute information to the graphics rendering module.
The window manager may send the first attribute information to the graphics rendering module. Correspondingly, the graphics rendering module may receive the first attribute information from the window manager.
In particular, the graphics rendering module may include a graphics preprocessing module through which the electronic device may intercept the first attribute information of the window manager.
S705: the graphics-rendering module generates second attribute information based on the first attribute information.
The graphics preprocessing module of the graphics rendering module may generate the second attribute information based on the first attribute information.
FIG. 9 is a flow chart of a method for preprocessing graphics according to an embodiment of the present application. In connection with fig. 8 and 9, the attribute management module of the graphic preprocessing module may include an attribute identification module and an attribute generation module. In the process that the graphic rendering module generates the second attribute information based on the first attribute information, the graphic preprocessing method may include, but is not limited to, the following steps:
s901: the attribute identification module judges whether preset attributes in the first attribute information indicate that the current display direction needs to be changed. In the case where it is indicated that the current display direction needs to be changed, S902 and S903 (and S906 and S907) are performed; in the case where it is indicated that the current display direction does not need to be changed, S904 and S905 are performed.
After the graphics preprocessing module receives the first attribute information, the attribute identification module can judge whether the preset attributes in the first attribute information indicate that the display direction needs to be changed. The preset attributes may include one or more of the rotation attribute, the width-height attribute, and the position attribute, and the attribute identification module can determine from them whether the display direction needs to change. When the preset attributes indicate that the display direction changes, the attribute generation module adjusts the first attribute information based on the preset attributes; when they indicate that the display direction does not need to change, the attribute generation module keeps the first attribute information unchanged.
The electronic device stores default attribute information and the default display direction corresponding to it. The graphics preprocessing module judges whether the second display direction corresponding to the first attribute information is the same as the default display direction: if they are the same, the attribute identification module determines that no update is needed; if they differ, the attribute identification module determines that an update is needed. Specifically, the attribute identification module of the graphics preprocessing module judges, based on the default display direction, whether the second display direction corresponding to the rotation attribute in the first attribute information differs from the default display direction. When the preset attribute in the first attribute information corresponds to the default display direction, it can be determined that the preset attribute indicates the current display direction does not need to be changed; when the preset attribute indicates a direction other than the default display direction, it can be determined that the preset attribute indicates the current display direction needs to be changed.
Illustratively, if the rotation degree of the rotation attribute in the first attribute information is not 0 degrees (e.g., 90, 180, or 270 degrees), this indicates that the display direction needs to change (i.e., the second display direction differs from the default display direction); if the rotation attribute is 0 degrees (the default display direction), the display direction does not need to change (the second display direction is the same as the default display direction). The rotation attribute describes the display angle of the layer; a rotation angle of 0 degrees means the screen is in the default display direction. The preset attributes may include one or more of the rotation attribute, width-height attribute, and position attribute indicating a change, and all three may be determined as preset attributes. For example, the rotation attribute indicates a rotation angle (e.g., 180 degrees), the width-height attribute can indicate a size x*y (e.g., the pixel locations of the corresponding canvas vertices), and the position attribute indicates the position of the layer.
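Expressed against the hypothetical WindowAttributeInfo sketch from earlier, the identification step (S901) reduces to a rotation check; the class name and constant below are assumptions.

```java
final class AttributeIdentifierSketch {
    static final int DEFAULT_ROTATION_DEGREES = 0;

    // S901: does the preset (rotation) attribute indicate that the
    // current display direction needs to be changed?
    static boolean needsPreprocessing(WindowAttributeInfo attrs) {
        return attrs.rotationDegrees != DEFAULT_ROTATION_DEGREES;
    }
}
```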
In the above embodiment, the rotation attribute clearly indicates the display direction of the screen, so judging whether preprocessing is needed in the current drawing process based on the rotation attribute rests on a reliable basis, ensuring the integrity of the scheme, the efficiency of the processing procedure, and the accuracy of the judgment result.
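For concreteness, the S901 judgment can be sketched in code. The following is a minimal Java sketch under assumed types: LayerAttributes and its field names are hypothetical stand-ins for the first attribute information, since the embodiment does not prescribe concrete interfaces.

```java
// Hypothetical container for the first attribute information; field names are
// illustrative, not part of the embodiment.
final class LayerAttributes {
    int rotationDegrees;   // rotation attribute: display angle of the layer
    int widthPx, heightPx; // width-height attribute: layer size in pixels
    int x, y;              // position attribute: placement of the layer
}

final class AttributeIdentifier {
    static final int DEFAULT_ROTATION_DEGREES = 0; // default display direction

    // S901: a non-zero rotation (e.g., 90, 180, 270) means the second display
    // direction differs from the default one, so a change is indicated.
    static boolean needsDirectionChange(LayerAttributes first) {
        return first.rotationDegrees != DEFAULT_ROTATION_DEGREES;
    }
}
```

When needsDirectionChange returns true, the flow proceeds to S902 and S903 (and S906 and S907); otherwise it proceeds to S904 and S905.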
S902: the attribute identification module sends the first attribute information and the preset attribute to the attribute generation module.
When preset attributes are present, the attribute identification module may send the first attribute information together with the preset attributes after identifying the preset attributes in the first attribute information. Correspondingly, the attribute generation module may receive the first attribute information and the preset attributes from the attribute identification module.
S903: The attribute generation module adjusts the preset attributes in the first attribute information to generate second attribute information.
After the attribute generation module receives the first attribute information and the preset attributes from the attribute identification module, it can restore the preset attributes in the first attribute information to default attributes to generate the second attribute information. A default attribute indicates that no extra processing by the electronic device is required.
The graphics drawing module of the electronic device may store default attribute information and the default display direction corresponding to the default attribute information. When the second display direction corresponding to the first attribute information differs from the default display direction, the graphics drawing module determines the second attribute information as the default attribute information. The default attribute information is attribute information that requires no preprocessing or adjustment of the display direction; that is, the default display direction requires no further adjustment.
Illustratively, the first attribute information comprises a rotation attribute, a width-height attribute, and a position attribute. The first attribute information is adjusted into second attribute information in which the rotation attribute has a rotation degree of 0 degrees, and the width-height attribute and the position attribute take the default width-height parameters and position parameters.
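A companion sketch of the S903 adjustment, reusing the hypothetical LayerAttributes type from the sketch above: the preset attributes are restored to defaults, so the resulting second attribute information no longer calls for any picture adjustment downstream.

```java
final class AttributeGenerator {
    // S903: restore the preset attributes to defaults. The rotation degree is
    // set to 0 (default display direction); the width-height and position
    // attributes take the default parameters.
    static LayerAttributes toSecondAttributeInfo(LayerAttributes defaults) {
        LayerAttributes second = new LayerAttributes();
        second.rotationDegrees = 0;
        second.widthPx = defaults.widthPx;
        second.heightPx = defaults.heightPx;
        second.x = defaults.x;
        second.y = defaults.y;
        return second;
    }
}
```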
S904: the attribute identification module sends the first attribute information to the attribute generation module.
When the current display direction does not need to be changed, the attribute identification module may send the first attribute information to the attribute generation module. Correspondingly, the attribute generation module may receive the first attribute information from the attribute identification module.
S905: the attribute generation module determines the first attribute information as second attribute information.
The attribute generation module may determine the first attribute information as the second attribute information; that is, the first attribute information remains unchanged.
Alternatively, instead of executing S904 and S905, the attribute identification module and the attribute generation module may perform no processing, and the current flow ends.
S906: the attribute generation module sends preset attributes to the instruction management module.
After the attribute generation module obtains the preset attributes, it may send them to the instruction management module; correspondingly, the instruction management module may receive the preset attributes from the attribute generation module. Because the electronic device judges from the first attribute information whether to preprocess during the drawing process, when preprocessing is required it performs preprocessing operations such as rotation, size adjustment, and position adjustment while drawing. At that point, the drawn layers received by SurfaceFlinger do not need to be reprocessed. Therefore, the rotation attribute in the first attribute information can be adjusted to indicate no rotation while the position attribute and the width-height attribute remain unchanged, yielding the second attribute information.
S706: the graphics rendering module sends the second attribute information to the window manager.
The graphics drawing module may send the second attribute information to the window manager. Correspondingly, the window manager may receive the second attribute information from the graphics drawing module. That is, the attribute management module may receive the first attribute information from the window manager, generate the second attribute information based on the first attribute information, and then send the second attribute information back to the window manager.
In the above-mentioned processes S704 to S706, the graphics drawing module may intercept the attribute information from the window manager and assign new attribute information.
S707: the window manager sends the second attribute information to the surfeflinger.
The window manager sends the second attribute information to the SurfaceFlinger. Correspondingly, the SurfaceFlinger may receive the second attribute information from the window manager.
The description of S707 may refer to the above description of S404 specifically, and is not repeated.
In S708 to S709, the instruction management module may generate a preprocessing instruction based on the adjusted attribute information, and the drawing management module and the drawing module perform preprocessing based on the preprocessing instruction.
S708: the first application sends a first drawing instruction to the graphics drawing module.
The first application may send a first drawing instruction to the graphics drawing module. Correspondingly, the graphics rendering module may receive a first rendering instruction from the first application.
The second window task may be a graphics drawing instruction corresponding to the application process and may instruct the graphics drawing module to perform graphics drawing.
The description of S708 may refer to the above description of S405 specifically, which is not repeated.
S709: The graphics drawing module draws based on the first attribute information and the first drawing instruction to obtain a first image set.
The graphics preprocessing module of the graphics drawing module judges whether the layer needs to be preprocessed based on the first attribute information: when the first attribute information contains preset attributes, it determines that preprocessing is required; otherwise, preprocessing is not required.
When preprocessing is required, the attribute management module in the graphics preprocessing module may send the preset attributes in the first attribute information to the instruction management module. Upon receiving the preset attributes, the instruction management module may generate a preprocessing instruction based on them, and preprocessing is then performed based on the preprocessing instruction. The preprocessing operation is completed while the drawing management module draws, thereby obtaining the first image set.
When preprocessing is not needed, the drawing management module of the graphics drawing module draws directly based on the first drawing instruction to obtain the first image set.
In connection with fig. 8 and fig. 9, the graphics preprocessing module may further include an instruction management module, which may include a canvas generation module and an instruction generation module. In the process in which the graphics drawing module draws based on the first attribute information and the first drawing instruction to obtain the first image set, the graphics preprocessing method may include, but is not limited to, the following steps:
S907: The instruction management module generates a preprocessing instruction based on the preset attributes.
After the instruction management module receives the preset attribute from the attribute generation module, a preprocessing instruction can be generated based on the preset attribute.
For example, when the preset attributes include a rotation attribute, a width-height attribute, and a position attribute, the correspondingly generated preprocessing instructions include a rotation instruction, a width-height instruction, and a position instruction, where the rotation instruction includes the rotation direction and angle, the width-height instruction includes the width and height of the layer in pixels, and the position instruction includes the position of the layer.
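As a sketch of S907 under the same assumptions, each preset attribute maps to one preprocessing instruction. Encoding the instructions as plain strings here is purely illustrative; a real implementation would use whatever command representation the drawing back end expects.

```java
import java.util.ArrayList;
import java.util.List;

final class InstructionGenerator {
    // S907: one preprocessing instruction per preset attribute.
    static List<String> generatePreprocessing(LayerAttributes preset) {
        List<String> ops = new ArrayList<>();
        ops.add("rotate " + preset.rotationDegrees);                 // rotation instruction
        ops.add("resize " + preset.widthPx + "x" + preset.heightPx); // width-height instruction
        ops.add("translate " + preset.x + "," + preset.y);           // position instruction
        return ops;
    }
}
```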
In the above S903, S906, and S907, the graphics preprocessing module may generate a preprocessing instruction based on the first attribute information.
S908: the instruction management module inserts the preprocessing instruction into the first drawing instruction to generate a second drawing instruction.
The instruction management module stores the first drawing instruction in an instruction buffer. The instruction generation module can read the first drawing instruction from the instruction buffer, insert the preprocessing instruction into the first drawing instruction through the instruction insertion module to obtain a second drawing instruction, and input the second drawing instruction into the instruction buffer through the instruction input module.
In one possible implementation, the instruction management module inserts the preprocessing instruction into the first drawing instruction upon completion of the canvas drawing, and performs the preprocessing adjustment after the drawing.
In another possible implementation, when the canvas (layer) has not yet been drawn, the instruction management module of the electronic device replaces a preset instruction in the first drawing instruction to generate the second drawing instruction, so that preprocessing can be performed during drawing.
The instruction generation module may read the first drawing instruction from the instruction buffer and then generate the second drawing instruction. Specifically, when the first drawing instruction contains an instruction serving the same purpose as a preprocessing instruction, that instruction is replaced with the corresponding preprocessing instruction; the remaining preprocessing instructions are inserted into the first drawing instruction.
For example, when the preprocessing instructions include a rotation instruction, a width-height instruction, and a position instruction, the rotation instruction may be inserted into the first drawing instruction, while the width-height instruction and the position instruction in the first drawing instruction are replaced with those in the preprocessing instructions. In actual implementation, the electronic device often sets rules for executing instructions and executes them according to those rules.
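A sketch of the S908 replace-or-insert rule, continuing the illustrative string encoding above: a preprocessing instruction whose purpose already has a counterpart in the first drawing instruction replaces that counterpart in place, and the remaining preprocessing instructions are inserted ahead of the drawing operations.

```java
import java.util.ArrayList;
import java.util.List;

final class InstructionInserter {
    // S908: build the second drawing instruction from the first drawing
    // instruction plus the preprocessing instructions.
    static List<String> buildSecondDrawing(List<String> firstDrawing,
                                           List<String> preprocessing) {
        List<String> merged = new ArrayList<>(firstDrawing);
        List<String> result = new ArrayList<>();
        for (String pre : preprocessing) {
            String purpose = pre.split(" ")[0]; // e.g., "rotate", "resize"
            int i = indexOfPurpose(merged, purpose);
            if (i >= 0) {
                merged.set(i, pre); // same purpose exists: replace in place
            } else {
                result.add(pre);    // no counterpart: insert before drawing ops
            }
        }
        result.addAll(merged);
        return result;
    }

    private static int indexOfPurpose(List<String> ops, String purpose) {
        for (int i = 0; i < ops.size(); i++) {
            if (ops.get(i).startsWith(purpose + " ")) return i;
        }
        return -1;
    }
}
```

With a first drawing instruction of ["resize 50x50", "translate 0,0", "drawRect"] and preprocessing of ["rotate 90", "resize 100x200", "translate 10,20"], the result is ["rotate 90", "resize 100x200", "translate 10,20", "drawRect"]: the rotation is inserted, the width-height and position instructions are replaced, matching the example in the paragraph above.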
S909: the instruction management module sends a second drawing instruction to the drawing management module.
The instruction management module may send the second drawing instruction to the drawing management module; correspondingly, the drawing management module may receive the second drawing instruction from the instruction management module. The second drawing instruction includes the preprocessing instruction described above and instructs the drawing management module to perform image drawing and preprocessing (or rendering).
S910: The drawing management module draws based on the second drawing instruction to obtain the first image set.
When the canvas has not yet been drawn, the drawing management module can draw directly based on the second drawing instruction to obtain the first image set, completing the preprocessing during the drawing process. When the canvas has already been drawn, the drawing management module can preprocess the drawn result according to the preprocessing instruction in the second drawing instruction to obtain the first image set.
It should be noted that the drawing result of the first image set is an image that has undergone preprocessing.
The graphics drawing module draws based on the second window task through the drawing management module to obtain the first image set. The first image set consists of the graphics of the different layers drawn by the drawing management module on a canvas. Thereafter, the drawing management module may generate a first rendering instruction. Reference may be made to the description of S406 above.
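The following sketch illustrates S910 for the case where the canvas has not yet been drawn, assuming the Android Canvas API as the drawing back end (the embodiment does not name one): the rotation in the second drawing instruction is applied before the layer content is drawn, so the drawn result already matches the second display direction.

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;

final class DrawManager {
    // S910: preprocessing (here, rotation about the canvas center) is applied
    // inside the drawing pass, so the resulting layer needs no later adjustment.
    static Bitmap drawPreprocessed(int widthPx, int heightPx, float rotationDegrees) {
        Bitmap layer = Bitmap.createBitmap(widthPx, heightPx, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(layer);
        canvas.save();
        canvas.rotate(rotationDegrees, widthPx / 2f, heightPx / 2f);
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setColor(Color.BLUE);
        canvas.drawRect(0f, 0f, widthPx / 2f, heightPx / 2f, paint); // placeholder content
        canvas.restore();
        return layer;
    }
}
```

Because the rotation happens inside the drawing pass, SurfaceFlinger later sees layers that need no picture adjustment, which is what allows the hardware compositor to composite them directly.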
S710: the graphics rendering module sends rendering instructions to the GPU.
The graphics rendering module may send rendering instructions to the GPU. Correspondingly, the GPU may receive rendering instructions from the graphics rendering module.
The description of S710 may refer to the above description of S407 specifically, and is not repeated.
S711: The GPU renders the first image set based on the rendering instruction to obtain a second image set.
S712: the GPU sends a second image set to the SurfaceFinger.
S713: the SurfaceFlinger instructs the hardware compositor to perform layer composition processing on the first image set based on the second image set and the second attribute information to obtain a third image.
If picture adjustment is required, the SurfaceFlinger sends a first layer composition instruction to the GPU to instruct the GPU to complete the picture adjustment and perform layer composition. If it is not required, the SurfaceFlinger instructs the hardware compositor to perform layer composition processing on the second image set to obtain the third image. Since the second attribute information has been changed to the default attribute information, meaning that no picture adjustment is required, only the cases of S414 to S416 are executed in this embodiment.
In another possible case, the SurfaceFlinger directly sends a third layer composition instruction to the hardware compositor, and the HWC directly composes the second image set based on the third layer composition instruction.
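The S713 decision can be sketched as follows, with a hypothetical Compositor interface standing in for the GPU and HWC paths (and rotationDegrees reusing the hypothetical LayerAttributes from the earlier sketch). Because the second attribute information was reset to the default display direction in S903, the HWC branch is the one taken in this embodiment.

```java
final class CompositionDispatcher {
    interface Compositor { void compose(); }

    // S713: dispatch layer composition based on the second attribute information.
    static void composite(LayerAttributes secondAttrs, Compositor gpu, Compositor hwc) {
        if (secondAttrs.rotationDegrees != 0) {
            gpu.compose(); // picture adjustment needed: fall back to the GPU
        } else {
            hwc.compose(); // no adjustment: dedicated hardware compositor path
        }
    }
}
```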
S714: the hardware abstraction layer sends the third image to the display driver.
S715: the display driver buffers the third image.
S716: the display drive controls the display screen to display the third image.
The descriptions of S710 to S716 may refer to the related descriptions of S407 to S418 above and are not repeated.
In the above embodiment, in a screen display direction switching scenario where the electronic device is currently in direction 1, if the user switches to direction 2, displaying the same content in the two directions requires adjustments such as rotation, size processing, and position processing. The electronic device can complete the rotation, size, and position processing through the GPU or CPU during the rendering process, so that after rendering and processing by the SurfaceFlinger, the layer composition task dedicated to the hardware compositor is completed by the hardware compositor while the picture adjustment task is completed by the GPU. This improves the efficiency of drawing, rendering, and displaying images, reduces the probability of frame drops on the display interface, and improves the user experience.
To clearly illustrate the method embodiments of fig. 4 and fig. 7, the following describes the changes the image undergoes during processing. Fig. 10A and fig. 10B are schematic views showing a set of interface display flow effects according to an embodiment of the present application. As shown in fig. 10A, in the method flow of fig. 4, the graphics drawing module generates the images of multiple layers through drawing and rendering. The SurfaceFlinger then performs rotation processing (and possibly size and position adjustment) through the GPU, and then performs layer composition processing to output a composite image. As shown in fig. 10B, in the method flow of fig. 7, the graphics drawing module draws and renders the images of multiple layers with the rotation processing (and possibly size and position adjustment) already applied. The SurfaceFlinger then performs layer composition processing through the hardware compositor to output the composite image.
Comparing the methods of fig. 4 and fig. 7: the layer composition in fig. 4 is accomplished by the GPU, whereas in fig. 7 it is performed by the HWC. Since the hardware compositor is a device dedicated to layer composition, it is less efficient at other tasks, and when it is overloaded those tasks are transferred to the GPU for processing; the method in fig. 4 may therefore cause both the GPU and the hardware compositor to handle tasks they are not suited for. The method in fig. 7 processes the rotation, position, and size of the layers in advance through the GPU, so the hardware compositor can focus on its dedicated work instead of handing it off to the GPU; processing is therefore more efficient and faster, and the probability of frame drops and jitter during display is smaller.
It should be noted that the applicable scenarios of the above embodiments are not limited to folding-screen scenarios; the method may also be applied to other electronic devices that support display in different directions.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
Those of ordinary skill in the art will appreciate that implementing all or part of the above-described embodiment methods may be accomplished by a computer program that is stored on a computer readable storage medium and that, when executed, may comprise the steps of the above-described method embodiments. And the aforementioned storage medium includes: ROM or random access memory RAM, magnetic or optical disk, etc.

Claims (11)

1. The interface display method is characterized by being applied to electronic equipment, wherein the electronic equipment comprises a graph preprocessing module, a graph drawing module, a surface mixer, a hardware synthesizer HWC and a display, wherein the display is provided with different display directions, attribute information corresponding to the different display directions is different, and the attribute information is used for describing a display window; the method comprises the following steps:
the electronic equipment acquires a trigger event; the triggering event indicates that the display direction of the electronic equipment is switched from a first display direction to a second display direction;
the electronic equipment acquires first attribute information in the second display direction through the graphic preprocessing module;
Under the condition that the second display direction corresponding to the first attribute information is different from a default display direction, the electronic equipment generates a preprocessing instruction based on the first attribute information through the graphic preprocessing module, and inserts the preprocessing instruction into a first drawing instruction to obtain a second drawing instruction, wherein the first drawing instruction is used for indicating the graphic drawing module to draw;
the electronic equipment draws based on the second drawing instruction through the graph drawing module to obtain a first image set, wherein the first image set is an image set drawn for switching the display direction;
the electronic equipment renders the first image set through the graph drawing module to obtain a second image set;
the electronic equipment controls the HWC to perform layer synthesis on the second image set through the surface mixer to obtain a third image;
the electronic device displays the third image through the display.
2. The method according to claim 1, wherein the electronic device comprises a first application and a window manager WMS, the graphics rendering module comprises a rendering management module and a rendering module, and the electronic device, after acquiring the trigger event, specifically comprises:
The electronic equipment acquires the trigger event through the first application;
the method further comprises the steps of:
the electronic equipment acquires the first attribute information based on the trigger event through the first application;
the electronic equipment sends the first attribute information to the WMS through the first application;
the electronic device obtains the first attribute information in the second display direction through the graphic preprocessing module, and specifically includes:
and the electronic equipment sends the first attribute information to the graph preprocessing module through the WMS.
3. The method according to claim 2, wherein in a case where the second display direction corresponding to the first attribute information is different from a default display direction, the method further comprises:
the electronic equipment sends second attribute information to the WMS through the graphic preprocessing module, and the display direction corresponding to the second attribute information is the same as the default display direction;
the electronic equipment sends the second attribute information to the surface mixer through the WMS;
the electronic device controls the HWC to perform layer synthesis on the second image set through the surface mixer to obtain a third image, and the method specifically comprises the following steps:
And the electronic equipment controls the HWC to perform layer synthesis on the second image set based on the second attribute information through the surface mixer to obtain a third image.
4. The method according to claim 3, wherein the electronic device controls the HWC to perform layer composition on the second image set based on the second attribute information through the surface mixer to obtain a third image, specifically including:
the electronic equipment judges whether picture adjustment is needed or not based on the second attribute information through the surface mixer, and under the condition that the display direction corresponding to the second attribute information is the same as the default display direction, the electronic equipment determines that picture adjustment is not needed through the surface mixer; under the condition that the display direction corresponding to the second attribute information is different from the default display direction, the electronic equipment determines that picture adjustment is required through the surface mixer;
and under the condition that the image adjustment is not needed, the electronic equipment performs layer synthesis on the second image set through the HWC to obtain a third image.
5. The method of any of claims 2-4, wherein the electronic device generates, by the graphics preprocessing module, a preprocessing instruction based on the first attribute information, inserts the preprocessing instruction into a first drawing instruction, and before obtaining a second drawing instruction, the method further comprises:
The electronic equipment sends the first drawing instruction to the drawing management module through the first application, and the first drawing instruction is stored in an instruction buffer;
the electronic equipment reads the first drawing instruction in the instruction buffer through the graphic preprocessing module;
the electronic device inserts the preprocessing instruction into the first drawing instruction through the graphic preprocessing module to obtain a second drawing instruction, and the method specifically comprises the following steps:
the electronic equipment inserts the preprocessing instruction into the first drawing instruction through the graphic preprocessing module to obtain the second drawing instruction, and the second drawing instruction is input into the instruction buffer;
the electronic equipment acquires the second drawing instruction through the drawing management module, and the second drawing instruction instructs the drawing management module to conduct image drawing and preprocessing.
6. The method according to any one of claims 1-5, wherein after the electronic device obtains the first attribute information in the second display direction through the graphics preprocessing module, in a case where the first attribute information includes a rotation attribute, the method further includes:
The electronic equipment judges whether the second display direction corresponding to the rotation attribute in the first attribute information is different from the default display direction or not based on the default display direction through the graphic preprocessing module;
in the case that the rotation angle of the rotation attribute is not 0 degree, the electronic device determines that the second display direction is different from the default display direction through the graphic preprocessing module; under the condition that the rotation angle of the rotation attribute is 0 degree, the electronic equipment determines that the second display direction is the same as the default display direction through the graphic preprocessing module;
the rotation attribute describes the display angle of the layer, and the rotation angle of the rotation attribute is 0 degrees to indicate that the screen is in a default display direction.
7. The method of any one of claims 1-6, wherein the first attribute information includes a rotation attribute, a width-height attribute, and a position attribute; the rotation attribute describes the display angle of the layer; the wide-high attribute describes the size of the layer; the location attribute describes the order of the layers.
8. The method of any of claims 1-7, wherein the screen of the electronic device includes at least two display directions, and the trigger event includes a screen display direction switch event, the screen display direction switch event including one or more of: an event in which the electronic device detects a change in its placement direction, an event in which the electronic device detects an operation of switching the display direction, and an event in which the electronic device detects a change in the direction of a face relative to the screen.
9. An electronic device, comprising: a graphics preprocessing module, a graphics rendering module and a surface mixer, one or more processors and one or more memories; the one or more processors are coupled with the one or more memories, the one or more memories for storing computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform:
acquiring a trigger event; the triggering event indicates that the display direction of the electronic equipment is switched from a first display direction to a second display direction;
acquiring first attribute information in the second display direction through the graphic preprocessing module;
generating a preprocessing instruction based on the first attribute information by the graphic preprocessing module under the condition that the second display direction corresponding to the first attribute information is different from a default display direction, and inserting the preprocessing instruction into a first drawing instruction to obtain a second drawing instruction, wherein the first drawing instruction is used for indicating the graphic drawing module to draw;
drawing by the graph drawing module based on the second drawing instruction to obtain a first image set, wherein the first image set is an image set drawn for switching a display direction;
And rendering the first image set through the graph drawing module to obtain a second image set.
10. The electronic device of claim 9, wherein the electronic device further comprises a HWC and a display;
the HWC is used for performing layer synthesis on the second image set to obtain a third image;
the display is used for displaying the third image.
11. A computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-8.
