CN114756359A - Image processing method and electronic equipment

Info

Publication number: CN114756359A
Application number: CN202011600925.1A
Authority: CN (China)
Prior art keywords: rendering, information, rendering information, command, current layer
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 陈健, 吉星春, 周越海, 刘键, 李煜, 王亮
Current assignee: Huawei Technologies Co Ltd
Original assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority application: CN202011600925.1A
Related PCT application: PCT/CN2021/136805 (WO2022143082A1)

Classifications

    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G06T 1/60 Memory management
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering

Abstract

The application provides an image processing method and an electronic device. The method can consolidate GPU rendering that is otherwise scattered over multiple points of the original image processing flow, shorten the duration of the synthesizer SurfaceFlinger stage so that frames can be sent to the LCD for display in time, and reduce the performance problems of frame loss and stuttering. The method comprises the following steps: generating a first rendering command according to first rendering information; generating a second rendering command according to second rendering information, wherein the second rendering information comprises dynamic effect rendering information of a current layer, and the generation of the first rendering command according to the first rendering information and the generation of the second rendering command according to the second rendering information are completed in the same stage; issuing the first rendering command and the second rendering command to a graphics processor; setting first indication information, wherein the first indication information indicates that the synthesizer no longer renders the current layer according to the second rendering information; and performing layer composition according to the image data of a plurality of layers to obtain the current image frame.

Description

Image processing method and electronic equipment
Technical Field
The present application relates to the field of image processing and display technologies, and in particular, to an image processing method and an electronic device.
Background
A smooth and natural dynamic effect makes the interface of the electronic equipment feel more fluid and comfortable to use, improves the perceived quality of the whole device, and improves the user experience. Common dynamic effect scenes of electronic equipment currently include enlarging an application icon when the application is started, fillet cutting of icons, shrinking an application back to the desktop when it exits, and an application exiting into a multitask card, and common dynamic effect operations such as rotation, scaling and fillet cutting can easily be extracted from these scenes. In the current image processing flow, because the Display SubSystem (DSS) does not support fillet cutting, high-ratio scaling or multi-channel rotation, the processing falls back to Graphics Processing Unit (GPU) rendering; however, GPU rendering in the image composition phase lengthens the composition time, frames cannot be sent to the Liquid Crystal Display (LCD) for display in time, the dynamic effect is not smooth, and the user experience is affected.
Disclosure of Invention
The application provides an image processing method and an electronic device. The method can consolidate GPU rendering that is otherwise scattered over multiple points of the original image processing flow, which in turn shortens the duration of the synthesizer SurfaceFlinger stage, so that frames can be sent to the LCD for display in time and the performance problems of frame loss and stuttering are reduced.
In a first aspect, a method for image processing is provided, the method including: generating a first rendering command according to first rendering information, wherein the first rendering information comprises rendering information generated by a control and a layout of a current layer; generating a second rendering command according to second rendering information, wherein the second rendering information comprises dynamic effect rendering information of a current layer, and the generation of the first rendering command according to the first rendering information and the generation of the second rendering command according to the second rendering information are completed in the same stage; sending the first rendering command and the second rendering command to a graphics processor; setting first indication information, wherein the first indication information is used for indicating a synthesizer not to render the current layer according to the second rendering information;
and performing layer composition according to the image data of the plurality of layers to obtain a current image frame, wherein the image data of the plurality of layers comprises the image data of the current layer, and the image data of the current layer is obtained by the graphics processor through rendering according to the first rendering command and the second rendering command.
In the method, the first rendering command and the second rendering command are generated in the same stage, so the rendering of the dynamic effect characteristics of the current layer and the original GPU rendering (the original GPU rendering refers to the GPU rendering performed according to the first rendering command) are performed together. GPU rendering that was scattered over multiple points of the original image processing is therefore consolidated, the duration of the synthesizer SurfaceFlinger stage is reduced accordingly, the SurfaceFlinger can perform DSS on-line superposition and the result is sent to the LCD for display in time, the performance problems of frame loss and stuttering are reduced, and a continuous and smooth dynamic effect is achieved.
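The idea can be illustrated with the following C++ sketch, which is not part of the claimed method and in which every type and function name is a hypothetical placeholder: both rendering commands are generated in one render stage, issued to the graphics processor together, and the first indication information is set so that the synthesizer skips its own dynamic effect rendering.

    #include <vector>

    // Hypothetical data types; the patent does not define concrete structures.
    struct RenderInfo { /* first rendering information: controls and layout */ };
    struct EffectInfo {                 // second rendering information: dynamic effect
        float rotation = 0.0f;          // rotation in degrees
        float scale = 1.0f;             // scaling factor
        float cornerRadius = 0.0f;      // fillet radius in pixels
    };
    struct GpuCommand { /* a command the GPU can recognize */ };
    struct Compositor { bool skipEffectRendering = false; };  // the "first indication information"

    GpuCommand buildFirstCommand(const RenderInfo&) { return {}; }   // Render thread
    GpuCommand buildSecondCommand(const EffectInfo&) { return {}; }  // rendering enhancement device

    // One render stage: both commands are generated and issued together, so the
    // synthesizer no longer needs a GPU pass for the dynamic effect during composition.
    void renderStage(const RenderInfo& first, const EffectInfo& second,
                     std::vector<GpuCommand>& gpuQueue, Compositor& compositor) {
        gpuQueue.push_back(buildFirstCommand(first));    // first rendering command
        gpuQueue.push_back(buildSecondCommand(second));  // second rendering command, same stage
        compositor.skipEffectRendering = true;           // set the first indication information
    }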
With reference to the first aspect, in some implementations of the first aspect, before issuing the first rendering command and the second rendering command to a graphics processor, the method further includes: and determining the priority level of the method in the operating system according to the second indication information.
Optionally, the second indication information may be a preset level, for example, a priority level of image processing is set to be a highest level, so that the visual experience of the user can be preferentially guaranteed.
Alternatively, the second indication information may be a priority level for image processing determined according to the current system operation load.
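Purely for illustration, and assuming a hypothetical priority scale, load threshold and function name that are not part of the application, the two options above could be expressed as:

    // Hypothetical resolution of the image processing priority from the second
    // indication information; the scale and the threshold are illustrative assumptions.
    enum class SchedPriority { Low, Normal, High, Highest };

    SchedPriority resolveImageProcessingPriority(bool presetHighest, double systemLoad) {
        if (presetHighest) return SchedPriority::Highest;   // preset level: favor the visual experience
        return (systemLoad < 0.8) ? SchedPriority::Highest  // otherwise derive the level from the load
                                  : SchedPriority::High;
    }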
With reference to the first aspect, in certain implementations of the first aspect, the dynamic rendering information includes at least one of the following parameters: rotation, scaling and fillet radius.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: and determining the first rendering information and the second rendering information according to a dynamic rendering strategy.
With reference to the first aspect, in certain implementations of the first aspect, the dynamic rendering policy includes: when the first parameter included in the first rendering information is different from the first parameter included in the second rendering information, the first parameter included in the first rendering information is adjusted according to the first parameter included in the second rendering information.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: responding to a first vertical synchronous signal, and generating a first rendering command according to first rendering information, wherein the first rendering information comprises rendering information generated by a control and a layout of a current layer; generating a second rendering command according to second rendering information, wherein the second rendering information comprises dynamic effect rendering information of a current layer, and the generation of the first rendering command according to the first rendering information and the generation of the second rendering command according to the second rendering information are completed in the same stage; sending the first rendering command and the second rendering command to a graphics processor; setting first indication information, wherein the first indication information is used for indicating a synthesizer not to render the current layer according to the second rendering information any more;
and responding to a second vertical synchronization signal, performing layer composition according to image data of a plurality of layers to obtain a current image frame, wherein the image data of the plurality of layers comprises the image data of the current layer, and the image data of the current layer is obtained by the graphics processor through rendering according to the first rendering command and the second rendering command.
In a second aspect, an electronic device is provided, including: the processing unit is used for generating a first rendering command according to first rendering information, wherein the first rendering information comprises a control of a current layer and rendering information generated by layout; the processing unit is further configured to generate a second rendering command according to second rendering information, where the second rendering information includes dynamic rendering information of a current layer, and the generation of the first rendering command according to the first rendering information and the generation of the second rendering command according to the second rendering information are completed at the same stage; the transceiving unit is used for sending the first rendering command and the second rendering command to a graphic processor; the processing unit is further configured to set first indication information, where the first indication information is used to indicate that the synthesizer is not to render the current layer according to the second rendering information any more; the processing unit is further configured to perform layer composition according to image data of multiple layers to obtain a current image frame, where the image data of the multiple layers includes image data of a current layer, and the image data of the current layer is obtained by the graphics processor through rendering according to the first rendering command and the second rendering command.
With reference to the second aspect, in some implementations of the second aspect, before issuing the first rendering commands and the second rendering commands to a graphics processor, the processing unit is further configured to:
and determining the priority level of the electronic equipment in the operating system according to the second indication information.
With reference to the second aspect, in certain implementations of the second aspect, the dynamic rendering information includes at least one of the following parameters: rotation, zoom, and fillet radius.
With reference to the second aspect, in certain implementations of the second aspect, the processing unit is further configured to: and determining the first rendering information and the second rendering information according to a dynamic rendering strategy.
With reference to the second aspect, in some implementations of the second aspect, the dynamic rendering policy includes: when the first parameter included in the first rendering information is different from the first parameter included in the second rendering information, the first parameter included in the first rendering information is adjusted according to the first parameter included in the second rendering information.
With reference to the second aspect, in certain implementations of the second aspect, the processing unit is further configured to: responding to a first vertical synchronous signal, and generating a first rendering command according to first rendering information, wherein the first rendering information comprises rendering information generated by a control and a layout of a current layer; generating a second rendering command according to second rendering information, wherein the second rendering information comprises dynamic effect rendering information of a current layer, and the generation of the first rendering command according to the first rendering information and the generation of the second rendering command according to the second rendering information are completed in the same stage; sending the first rendering command and the second rendering command to a graphics processor; setting first indication information, wherein the first indication information is used for indicating a synthesizer not to render the current layer according to the second rendering information any more; and responding to a second vertical synchronization signal, performing layer composition according to image data of a plurality of layers to obtain a current image frame, wherein the image data of the plurality of layers comprises the image data of the current layer, and the image data of the current layer is obtained by the graphics processor through rendering according to the first rendering command and the second rendering command.
In a third aspect, a terminal device is provided, which includes a processor, the processor is connected to a memory, the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory, so that the terminal device executes the method in the first aspect or any possible implementation manner of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program that, when executed, implements the method of the first aspect or any possible implementation manner of the first aspect.
In a fifth aspect, a chip is provided, which includes a processor and an interface; the processor is configured to read instructions to perform the method of the first aspect or any possible implementation manner of the first aspect.
Optionally, the chip may further include a memory, the memory having instructions stored therein, and the processor being configured to execute the instructions stored in the memory or derived from other instructions.
Drawings
FIG. 1 is a schematic diagram of a software processing flow of an electronic device according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a conventional graphics system processing flow according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating the dynamic effect of an embodiment of the present application;
FIG. 4 is a schematic diagram of an in-depth breakdown of a non-smooth dynamic effect according to an embodiment of the present application;
FIG. 5 is a schematic flow chart diagram of an image processing method of an embodiment of the present application;
FIG. 6 is a schematic block diagram of a graphics system processing flow provided herein;
FIG. 7 is a schematic diagram of an image processing pipeline provided in an embodiment of the present application;
FIG. 8 is a schematic flow chart diagram of an image processing method provided by an embodiment of the present application;
FIG. 9 shows a schematic block diagram of an apparatus of an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a terminal device provided in the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
For a better understanding of the present application, terms referred to in the examples of the present application are described below:
vertical synchronization signal: such as Vsync. The vertical synchronization signal may be used to trigger drawing of one or more layers. It should be noted that in this embodiment of the present application, the "vertical synchronization signal may be used to trigger drawing of one or more image layers" specifically means: the vertical synchronization signal may be used to trigger drawing of one or more layers and to trigger rendering of the one or more layers. That is, in this embodiment of the application, the drawn layer or layers refer to the rendered layer or layers. In this embodiment of the present application, in response to a vertical synchronization signal, an electronic device may draw one or more image layers for each application through each of a plurality of drawing threads. That is, in response to the vertical synchronization signal, the electronic device may simultaneously perform a drawing task for one or more applications to draw one or more layers corresponding to each application.
Vsync may also be used to trigger layer composition of one or more layers of a drawing into an image frame, and may also be used to trigger a hardware refresh of the display image frame.
It should be noted that the names of the vertical synchronization signals may be different in different systems or architectures. For example, in some systems or architectures, the name of the vertical synchronization signal used to trigger drawing of one or more layers (i.e., vertical synchronization signal 1) described above may not be Vsync. However, whatever the name of the vertical synchronization signal, the technical idea of the method provided by the embodiments of the present application is to be covered by the protection scope of the present application as long as the vertical synchronization signal has a similar function.
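By way of illustration only (this is not the platform's actual Vsync interface; all names below are assumptions), a per-Vsync callback mechanism of the kind described above might be sketched in C++ as follows:

    #include <cstdint>
    #include <functional>
    #include <vector>

    // Hypothetical Vsync dispatcher standing in for the platform's real mechanism.
    struct VsyncScheduler {
        std::vector<std::function<void(int64_t)>> callbacks;
        void onVsync(int64_t frameTimeNanos) {   // invoked once per vertical synchronization signal
            for (auto& cb : callbacks) cb(frameTimeNanos);
        }
    };

    void drawAndRenderLayers(int64_t /*t*/) { /* draw and render one or more layers */ }
    void composeImageFrame(int64_t /*t*/)   { /* compose previously rendered layers into a frame */ }

    void registerGraphicsWork(VsyncScheduler& vsync) {
        // Drawing/rendering of layers for frame N and composition of frame N-1 are both
        // triggered by the vertical synchronization signal, in a pipelined fashion.
        vsync.callbacks.push_back(drawAndRenderLayers);
        vsync.callbacks.push_back(composeImageFrame);
    }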
A Central Processing Unit (CPU) is an operation and control core of a computer system, and is a final execution unit for information processing and program operation.
A Graphics Processing Unit (GPU) is a processor dedicated to image operation, and a core component commonly referred to as a "Graphics card" in a computer system is the GPU.
The embodiment of the application provides an image processing method which can be applied to electronic equipment comprising a touch screen. In particular, the method can be applied to the process that the electronic equipment responds to the touch operation of the user on the touch screen and displays the image on the touch screen.
Referring to fig. 1, a schematic diagram of the software processing flow of an electronic device in the process from "a user finger inputs a touch operation on the touch screen" to "the touch screen displays an image corresponding to the touch operation" is shown, taking the case where the user operation is a touch operation as an example. As shown in fig. 1, the electronic device may include: a Touch Panel (TP)/TP Driver 10, an Input framework 20, a UI framework 30, a Display framework 40, and a hardware display module 50.
As shown in fig. 1, the software process flow of the electronic device may include the following steps (1) to (5).
Step (1): after the TP in the TPIC/TP driver 10 collects the touch operation of the TP of the electronic device by the user's finger, the TP driver reports a corresponding touch Event to the Event Hub.
Step (2): the Input Reader thread of the Input framework 20 can read a touch Event from the Event Hub and then send the touch Event to the Input Dispatcher thread; the touch event is uploaded by the Input Dispatcher thread to a UI thread (e.g., DoFrame) in the UI frame 30.
And (3): drawing one or more image layers corresponding to the touch event by a UI thread in the UI frame 30; and the rendering thread (such as DrawFrame) performs layer rendering on one or more layers.
And (4): a compositing thread in the Display frame 40 performs layer compositing on the drawn layer or layers (i.e., the rendered layer or layers) to obtain an image frame.
And (5): a Liquid Crystal Display (LCD) panel of the hardware Display module 50 may receive the synthesized image frame, and the LCD panel may Display the synthesized image frame. After the LCD displays the image frame, the image displayed by the LCD can be perceived by human eyes.
After a user finger inputs a touch operation on the touch screen, the touch screen displays an image or a dynamic effect corresponding to the touch operation. For example, when the user touches the WeChat application icon on the touch screen, the WeChat application is opened; while it opens, the WeChat icon is gradually enlarged until the WeChat interface is shown, and this process of gradually enlarging the icon is called a dynamic effect. The dynamic effect is the result of image display. FIG. 2 shows a graphics system processing flow in the prior art. As shown in fig. 2, the whole graphics system first goes through a main interface drawing thread (UI Thread). The UI Thread mainly calculates the display content, such as creation, measurement, layout and drawing of views, and the UI Thread may obtain attribute information of one or more layers included in the current image, where the attribute information includes first rendering information and second rendering information; the first rendering information includes rendering information generated by the controls and layout of the one or more layers, and the second rendering information includes dynamic effect rendering information of the one or more layers. A render main thread (Render Thread) can obtain the first rendering information of the current image and convert it into a first rendering command that the GPU can recognize; the Render Thread issues the first rendering command to the Graphics Processing Unit (GPU), and the GPU renders according to the first rendering command. The synthesizer SurfaceFlinger can obtain the second rendering information of the current image and pass it to the Display SubSystem (DSS) module, and the DSS module determines, according to its own capability, whether DSS online composition or GPU rendering and composition is performed. When the DSS online composition conditions are not satisfied, for example when the scaling ratio, the fillet cutting or the number of rotation channels is large, GPU rendering and composition are required: the SurfaceFlinger converts the second rendering information of the current image into a second rendering and composition command that the GPU can recognize, the SurfaceFlinger issues the second rendering and composition command to the GPU, and the GPU renders and composes according to it.
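For illustration, the following sketch expresses this prior-art decision; the capability checks mirror the limitations described in this application (fillet cutting, scaling beyond roughly 1.8 times, multi-channel rotation), while the type and function names and the exact threshold handling are assumptions:

    // Hypothetical description of one layer's dynamic effect parameters.
    struct EffectInfo {
        float scale = 1.0f;
        float cornerRadius = 0.0f;   // fillet radius
        int   rotationChannels = 0;  // number of rotation channels
    };

    // Prior-art decision: can the Display SubSystem compose this layer online?
    bool dssCanComposeOnline(const EffectInfo& e) {
        if (e.cornerRadius > 0.0f)  return false;  // DSS does not support fillet cutting
        if (e.scale > 1.8f)         return false;  // DSS does not support high-ratio scaling
        if (e.rotationChannels > 1) return false;  // DSS does not support multi-channel rotation
        return true;
    }

    void composeCurrentLayer(const EffectInfo& e) {
        if (dssCanComposeOnline(e)) {
            // fast path: DSS online composition
        } else {
            // prior-art fallback: the synthesizer converts the second rendering information
            // into GPU commands and the GPU renders during composition, lengthening this stage
        }
    }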
Fig. 3 is a schematic block diagram of a prior-art graphics system processing flow. Because the DSS does not support fillet cutting, multi-channel rotation, or scaling by more than 1.8 times, the GPU is required to render and compose in the composition phase; in current terminal device systems the composition time then usually exceeds 16.6 ms, the frame cannot be sent to the LCD for display immediately, the dynamic effect is not smooth, and the user experience is affected.
Fig. 4 is a schematic diagram of an in-depth breakdown of a non-smooth dynamic effect. On the first Vsync clock, the UI and the Render need to perform animation logic, layout measurement, drawing, synchronization (synchronization means that resources of the UI are synchronized to the Render, for example, the first rendering information of a layer generated by the UI is synchronized to the Render), command submission (command submission means that the Render converts the first rendering information of the layer into a command that the GPU can recognize and sends the command to the GPU), and buffer swapping (buffer swapping means that, after the GPU renders the layer, the Render sends the buffer address where the GPU stores the rendering result to the SurfaceFlinger). On the second Vsync clock, the SurfaceFlinger needs to perform message processing, composition preparation, memory acquisition (acquiring the already rendered image data), layer rendering (the SurfaceFlinger calls the GPU to render the layer), command submission, layer composition and refresh display. However, because of dynamic effect characteristics such as rotation, scaling and fillet cutting, GPU rendering is required, and this GPU rendering needs to complete operations such as memory acquisition, layer rendering and command submission; the layer rendering operation is time-consuming, and the time consumed increases linearly as the number of layers and the rendering complexity increase. This causes performance problems: the dynamic effect process is not smooth and occasionally stutters, which degrades the user experience.
At present, in order to keep the dynamic effect smooth, a common approach is to raise the operating frequencies of the CPU and the GPU to speed up their processing, thereby shortening the execution time so that the frame can be sent to the LCD for display in time before the next Vsync signal arrives. However, this increases power consumption, shortens the battery life of the mobile phone, and the overall energy efficiency is not high.
A smooth and natural dynamic effect makes the interface of the electronic equipment feel more fluid and comfortable to use, improves the perceived quality of the whole device, and improves the user experience. Common dynamic effect scenes of electronic equipment currently include enlarging an application icon when the application is started, fillet cutting of icons, shrinking an application back to the desktop when it exits, and an application exiting into a multitask card, and common dynamic effect operations such as rotation, scaling and fillet cutting can easily be extracted from these scenes. Because the DSS does not support fillet cutting, high-ratio scaling or multi-channel rotation, processing falls back to GPU rendering, but GPU rendering in the composition phase results in a longer composition time and leads to a series of performance problems. How to ensure that operations such as rendering layers with specific dynamic effect characteristics such as rotation, scaling and fillet cutting, and submitting the corresponding commands, are handled at a suitable position in the pipeline is the technical problem to be solved by this application.
According to the image processing method provided by the embodiments of the application, the rendering of window dynamic effect characteristics such as rotation, scaling and fillet cutting can be performed in the Render stage; that is, the rendering of these window dynamic effect characteristics and the original GPU rendering (the original GPU rendering refers to the GPU rendering performed according to the first rendering command) are performed together. GPU rendering at multiple positions is therefore consolidated, the duration of the SurfaceFlinger stage is reduced accordingly, and the SurfaceFlinger can perform DSS on-line superposition and then send the result to the LCD for display in time, so that the performance problems of frame loss and stuttering do not occur and the dynamic effect is continuous and smooth.
For example, the electronic device in the embodiment of the present application may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a Personal Digital Assistant (PDA), an Augmented Reality (AR) \ Virtual Reality (VR) device, and other devices including a touch screen, and the embodiment of the present application is not particularly limited to the specific form of the electronic device.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
An image processing method provided by the present application is described in detail below with reference to fig. 5, where fig. 5 is a schematic flowchart of an image processing method 200 according to an embodiment of the present application, and the method 200 may be applied to the scenario shown in fig. 1, and of course, may also be applied to other touch device scenarios, and the embodiment of the present application is not limited herein.
It should also be understood that in the embodiments of the present application, the electronic device is taken as an example of an execution subject for executing the method, and the method is described. By way of example and not limitation, the execution subject of the execution method may also be a chip, a system-on-a-chip, a processor, or the like, applied to an electronic device.
As shown in fig. 5, the method 200 may include S201 to S205. The various steps in method 200 are described in detail below in conjunction with fig. 5.
S201, the electronic device generates a first rendering command according to first rendering information, wherein the first rendering information comprises a control of a current layer and rendering information generated by layout.
S202, the electronic equipment generates a second rendering command according to second rendering information, wherein the second rendering information comprises dynamic rendering information of a current layer, and the generation of the first rendering command according to the first rendering information and the generation of the second rendering command according to the second rendering information are completed in the same stage.
S203, the electronic device issues the first rendering command and the second rendering command to a graphics processor.
S204, the electronic device sets first indication information, and the first indication information is used for indicating that the synthesizer does not render the current layer according to the second rendering information any more.
And S205, the electronic device performs layer composition according to image data of multiple layers to obtain a current image frame, wherein the image data of the multiple layers comprise image data of a current layer, and the image data of the current layer is obtained by rendering by the graphics processor according to the first rendering command and the second rendering command.
It should be understood that, because the second rendering information includes the dynamic rendering information of the current layer, the second rendering information may also be referred to as dynamic rendering information.
Therefore, the dynamic effect rendering and the original GPU rendering are executed together, so that GPU rendering at multiple positions can be consolidated, the duration of the SurfaceFlinger stage is reduced accordingly, and the SurfaceFlinger can perform DSS on-line superposition and then send the result to the LCD for display in time; the performance problems of frame loss and stuttering are avoided, and the dynamic effect is continuous and smooth.
For a clearer understanding of the present application, the steps involved in the method 200 are described in detail below.
The first rendering information and the second rendering information are mainly obtained by a UI application program on the electronic device performing operations such as animation logic, measurement and layout, and drawing, which produce the first rendering information and the second rendering information of the current layer.
In S201, the electronic device generates a first rendering command according to the first rendering information, mainly a renderer generates the first rendering command according to the first rendering information, and the first rendering command is a command that can be recognized by the GPU.
In S202, the electronic device generates a second rendering command according to the second rendering information, mainly, the rendering enhancement device generates the second rendering command according to the second rendering information, where the second rendering command is a command that can be recognized by the GPU, the rendering enhancement device is a thread parallel to the Render, and the generating of the first rendering command according to the first rendering information and the generating of the second rendering command according to the second rendering information are completed at the same stage.
In S203, the electronic device issues the first rendering command and the second rendering command to the GPU, which means that the renderer sends the first rendering command to the GPU, and the rendering enhancement device sends the second rendering command to the GPU.
When the GPU receives the first rendering command and the second rendering command, it performs layer rendering according to both commands to obtain the image data. The GPU performs the dynamic effect rendering and the original GPU rendering at the same time, so GPU rendering at multiple positions is consolidated, the total GPU rendering time is reduced, and no GPU rendering is required during the SurfaceFlinger stage; the duration of the SurfaceFlinger stage is reduced accordingly, the SurfaceFlinger can perform DSS on-line superposition and then send the result to the LCD for display in time, the performance problems of frame loss and stuttering are avoided, and the dynamic effect is continuous and smooth.
In S205, the electronic device performs layer composition according to the image data of the plurality of layers to obtain the current image frame, which means that the SurfaceFlinger reads the image data from the buffer and performs layer composition.
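To make the division of work concrete, the following sketch (hypothetical types only, not SurfaceFlinger code) shows the composition step skipping the native dynamic effect GPU pass when the first indication information is set:

    #include <vector>

    struct LayerBuffer {};   // image data of one layer, already rendered by the GPU
    struct ImageFrame {};

    ImageFrame dssOnlineCompose(const std::vector<LayerBuffer>&) { return {}; }        // DSS on-line superposition
    ImageFrame gpuComposeWithEffects(const std::vector<LayerBuffer>&) { return {}; }   // prior-art GPU composition

    // S204/S205: when the first indication information is set, the dynamic effect was
    // already rendered in the render stage, so the synthesizer only composes the layers.
    ImageFrame composeCurrentFrame(const std::vector<LayerBuffer>& layers,
                                   bool firstIndicationSet) {
        return firstIndicationSet ? dssOnlineCompose(layers)
                                  : gpuComposeWithEffects(layers);
    }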
Optionally, before issuing the first rendering command and the second rendering command to the graphics processor, the method further includes: and determining the priority level of the method in the operating system according to the second indication information.
Optionally, the second indication information may be a preset level, for example, a priority level of image processing is set to be a highest level, so that the visual experience of the user can be preferentially guaranteed.
Alternatively, the second indication information may be a priority level for image processing determined according to the current system operation load.
Optionally, the dynamic rendering information includes at least one of the following parameters: rotation, zoom, and fillet radius.
It should be understood that the dynamic rendering information may also include other parameters, such as a rounded corner radian, etc.
Optionally, the method further includes: and determining the first rendering information and the second rendering information according to a dynamic rendering strategy.
Optionally, the dynamic rendering policy includes: when the first parameter included in the first rendering information is different from the first parameter included in the second rendering information, the first parameter included in the first rendering information is adjusted according to the first parameter included in the second rendering information.
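Purely as an illustration of this strategy, and assuming hypothetical field names with the fillet radius, scaling and rotation as the conflicting parameters, the adjustment could look like:

    // Hypothetical parameter sets; only the fields relevant to the example are shown.
    struct EffectParams {
        float rotationDegrees = 0.0f;  // rotation
        float scale = 1.0f;            // scaling
        float cornerRadiusPx = 0.0f;   // fillet radius
    };

    // Dynamic rendering strategy: when a parameter carried by the first rendering
    // information differs from the same parameter in the second rendering information,
    // the first is adjusted according to the second.
    void applyDynamicRenderingPolicy(EffectParams& first, const EffectParams& second) {
        if (first.cornerRadiusPx != second.cornerRadiusPx) first.cornerRadiusPx = second.cornerRadiusPx;
        if (first.scale != second.scale)                   first.scale = second.scale;
        if (first.rotationDegrees != second.rotationDegrees) first.rotationDegrees = second.rotationDegrees;
    }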
Optionally, the method further includes: responding to a first vertical synchronizing signal, and generating a first rendering command according to first rendering information, wherein the first rendering information comprises rendering information generated by a control and a layout of a current layer; generating a second rendering command according to second rendering information, wherein the second rendering information comprises dynamic effect rendering information of a current layer, and the generation of the first rendering command according to the first rendering information and the generation of the second rendering command according to the second rendering information are completed in the same stage; sending the first rendering command and the second rendering command to a graphics processor; setting first indication information, wherein the first indication information is used for indicating a synthesizer not to render the current layer according to the second rendering information any more;
and in response to a second vertical synchronization signal, performing layer composition according to image data of multiple layers to obtain a current image frame, where the image data of the multiple layers includes image data of a current layer, and the image data of the current layer is obtained by the graphics processor through rendering according to the first rendering command and the second rendering command.
FIG. 6 is a schematic block diagram of a graphics system processing flow provided herein. As shown in fig. 6, the entire graphics system goes through the UI Thread, the Render thread, the rendering enhancement device, GPU rendering, the SurfaceFlinger and DSS on-line composition and display. A rendering enhancement device is added to the original Render Thread stage. The rendering enhancement device converts window dynamic effect attribute parameters such as rotation, scaling and fillet cutting into a second rendering command that the GPU can recognize; the Render sends the first rendering command to the GPU, and the rendering enhancement device sends the second rendering command to the GPU. When the GPU receives the first rendering command and the second rendering command, it performs layer rendering according to both commands to obtain the image data. In other words, the GPU dynamic effect rendering that occurs in the prior-art composition stage is moved forward and executed together with the original GPU rendering.
Fig. 7 is a schematic diagram of an image processing pipeline according to an embodiment of the present application. As shown in fig. 7, when the first Vsync clock arrives, the UI and the Render need to perform animation logic, layout measurement, drawing, synchronization, command submission and buffer swapping, and the rendering enhancement device needs to perform synchronization (synchronization refers to synchronizing resources of the UI to the rendering enhancement device, for example synchronizing the dynamic effect attribute parameters of the layer generated by the UI to the rendering enhancement device), rendering enhancement (the rendering enhancement device makes decisions according to the rendering parameters and the rendering policy) and command submission (command submission refers to the rendering enhancement device converting the dynamic effect attribute parameters of the layer into commands that the GPU can recognize and then sending the commands to the GPU). On the second Vsync clock, the SurfaceFlinger needs to do message processing, composition preparation, enhancement layer handling, layer composition and refresh display. The image processing pipeline provided by the embodiment of the application modifies the traditional rendering pipeline by adding the rendering enhancement device, which converts window dynamic effect attribute parameters such as rotation, scaling and fillet cutting into the second command that the GPU can recognize, so that the GPU dynamic effect rendering of the prior art is moved forward and executed together with the original GPU rendering; the result is then sent, through the enhancement layer, directly to the DSS for on-line composition and is finally displayed on the LCD.
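The parallelism within the first Vsync period can be pictured with the following sketch, in which the thread handling and the shared command queue are assumptions for illustration rather than framework code:

    #include <mutex>
    #include <thread>
    #include <vector>

    struct GpuCommand {};

    // Per-frame command queue shared by the Render thread and the rendering enhancement device.
    struct FrameCommandQueue {
        std::mutex m;
        std::vector<GpuCommand> commands;
        void submit(GpuCommand c) { std::lock_guard<std::mutex> lock(m); commands.push_back(c); }
    };

    GpuCommand buildFirstCommand()  { return {}; }  // from controls and layout (Render thread)
    GpuCommand buildSecondCommand() { return {}; }  // from dynamic effect attributes (rendering enhancement device)

    void onFirstVsync(FrameCommandQueue& q) {
        std::thread renderThread([&] { q.submit(buildFirstCommand()); });    // original Render work
        std::thread enhanceThread([&] { q.submit(buildSecondCommand()); });  // rendering enhancement device, in parallel
        renderThread.join();
        enhanceThread.join();
        // both command streams are now ready for the GPU within the same Vsync period
    }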
Fig. 8 is a schematic flowchart of an image processing method 300 according to an embodiment of the present application, and in fig. 8, an interaction flow between different modules is specifically described. As shown in fig. 8, the processing method 300 may include S301 to S311. The various steps in method 300 are described in detail below in conjunction with fig. 8.
And S301, drawing the current layer corresponding to the touch event by the UI framework.
S302, the rendering module Render is responsible for converting the controls and layout of the UI framework into the first rendering command.
And S303, the window management is responsible for managing the size, position, rotation, scaling, fillet radius and the like of the current layer, where one window corresponds to one layer. The window management sends the dynamic effect attribute parameters to the rendering enhancement window management.
S304, the rendering enhancement window management is mainly responsible for forwarding the second rendering information such as rotation, scaling and fillet cutting to the rendering enhancement engine strategy module, and for notifying the SurfaceFlinger to bypass the native flow, not to perform dynamic effect rendering, and to prepare to receive the rendering-enhanced layer.
S305, the rendering enhancement engine strategy is mainly responsible for receiving the message of the rendering enhancement window management and carries out strategy fusion with the native rendering module.
The rendering enhancement engine strategy comprises:
1. processing the messages such as rotation, scaling and fillet cutting sent by the window management, and directly returning other messages, such as transparency parameters, to the SurfaceFlinger;
2. when the first parameter included in the first rendering information is different from the first parameter included in the second rendering information, the first parameter included in the first rendering information is adjusted according to the first parameter included in the second rendering information. For example, if the fillet size in the first rendering information conflicts with the fillet size in the second rendering information, the fillet size in the first rendering information is adjusted to be the same as the fillet size in the second rendering information.
The current layer is then rendered according to the fillet size of the second rendering information, so that the user experience can be ensured.
S306, the rendering enhancement engine is responsible for converting the dynamic parameters such as rotation, scaling, fillet cutting and the like into a second rendering command which can be identified by the GPU.
S307, the engine scheduling policy management is responsible for managing the scheduling priority of the whole rendering enhancement engine thread. For example, the current operating system runs a plurality of processes for images, sounds and the like; the scheduling priority of the rendering enhancement engine thread refers to how the image rendering priority ranks relative to the priorities of the other processes, and the image rendering priority can be set high.
S308, the GPU receives the image rendering priority, the first rendering command and the second rendering command to perform layer rendering.
S309, the rendering enhancement layer management receives a message from the rendering enhancement window management, for example the first indication information. If the first indication information is received, the rendering enhancement layer management does not need to execute the native second rendering process; if it is not received, the native second rendering process needs to be executed.
S310, the SurfaceFlinger acquires the image data rendered by the GPU from the buffer and sends it to the DSS for online composition.
And S311, displaying by an LCD.
It should be understood that the modules of image processing shown in fig. 8 are merely one example, and in particular embodiments, fusion or splitting may be performed, for example, window management may be compatible with rendering the content of enhanced window management.
It should also be appreciated that the above-described window management, render enhancement window, render enhancement engine policy, render enhancement engine, engine scheduling policy, etc. collectively implement the functionality of the render enhancement apparatus described above.
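Pulling S301 to S311 together, the following sketch strings the modules into one pass over a frame; the module interfaces are hypothetical placeholders intended only to mirror the textual flow above:

    #include <vector>

    // Hypothetical module interfaces mirroring S301-S311.
    struct EffectParams { float rotation = 0, scale = 1, cornerRadius = 0; };
    struct GpuCommand {};
    struct LayerBuffer {};

    GpuCommand uiAndRenderBuildFirstCommand()          { return {}; }   // S301-S302
    EffectParams windowManagementEffectParams()        { return {}; }   // S303
    void notifySurfaceFlingerBypassNativeEffect()      {}               // S304: set first indication information
    EffectParams mergeWithNativePolicy(EffectParams p) { return p; }    // S305: engine strategy fusion
    GpuCommand enhancementEngineBuildSecondCommand(const EffectParams&) { return {}; }  // S306
    int  resolveSchedulingPriority()                   { return 0; }    // S307
    LayerBuffer gpuRenderLayer(const std::vector<GpuCommand>&) { return {}; }           // S308
    void dssOnlineComposeAndDisplay(const LayerBuffer&) {}              // S310-S311

    void processOneFrame() {
        std::vector<GpuCommand> cmds;
        cmds.push_back(uiAndRenderBuildFirstCommand());
        EffectParams effect = mergeWithNativePolicy(windowManagementEffectParams());
        notifySurfaceFlingerBypassNativeEffect();
        cmds.push_back(enhancementEngineBuildSecondCommand(effect));
        (void)resolveSchedulingPriority();
        LayerBuffer layer = gpuRenderLayer(cmds);   // S308: GPU renders both command streams
        dssOnlineComposeAndDisplay(layer);          // S309-S311: no native effect rendering needed
    }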
The image processing method according to the embodiments of the present application has been described in detail above with reference to fig. 1 to fig. 8. Hereinafter, an apparatus according to an embodiment of the present application will be described in detail with reference to fig. 9 to fig. 11.
Fig. 9 shows a schematic block diagram of an apparatus 400 of an embodiment of the present application.
In some embodiments, the apparatus 400 may be a terminal device, or may be a chip or a circuit, for example, a chip or a circuit that may be disposed on a terminal device.
In one possible approach, the apparatus 400 may include a processing unit 410 (i.e., an example of a processor) and a transceiving unit 430. In some possible implementations, the processing unit 410 may also be referred to as a determination unit. In some possible implementations, the transceiving unit 430 may include a receiving unit and a transmitting unit.
In one implementation, the transceiving unit 430 may be implemented by a transceiver or transceiver-related circuitry or interface circuitry.
In one implementation, the apparatus may further include a storage unit 420. In one possible approach, the storage unit 420 is used to store instructions. In one implementation, the storage unit may also be used to store data or information. The storage unit 420 may be implemented by a memory.
In some possible designs, the processing unit 410 is configured to execute the instructions stored by the storage unit 420, so as to enable the apparatus 400 to implement the steps performed by the terminal device in the method described above. Alternatively, the processing unit 410 may be configured to call the data of the storage unit 420, so that the apparatus 400 implements the steps performed by the terminal device in the method described above.
For example, the processing unit 410, the storage unit 420, and the transceiving unit 430 may communicate with each other via internal connection paths, and may transmit control and/or data signals. For example, the storage unit 420 is used to store a computer program, and the processing unit 410 may be used to call and run the computer program from the storage unit 420 to control the transceiver unit 430 to receive and/or transmit signals, so as to complete the steps of the terminal device or the access network device in the above-mentioned method. The storage unit 420 may be integrated into the processing unit 410 or may be provided separately from the processing unit 410.
Alternatively, if the apparatus 400 is a communication device (e.g., a terminal device), the transceiving unit 430 includes a receiver and a transmitter. Wherein the receiver and the transmitter may be the same or different physical entities. When the same physical entity, may be collectively referred to as a transceiver.
When the apparatus 400 is a terminal device, the transceiving unit 430 may be a transmitting unit or a transmitter when transmitting information, and may be a receiving unit or a receiver when receiving information; the transceiving unit may be a transceiver, and the transceiver, transmitter, or receiver may be a radio frequency circuit. When the apparatus includes a storage unit, the storage unit is configured to store computer instructions, and the processor, which is communicatively connected to the memory, executes the computer instructions stored in the memory so that the apparatus can perform the method 200 or the method 300. The processor may be a general-purpose Central Processing Unit (CPU), a microprocessor, or an Application Specific Integrated Circuit (ASIC).
Optionally, if the apparatus 400 is a chip or a circuit, the transceiver unit 430 includes an input interface and an output interface.
When the apparatus 400 is a chip, the transceiving unit 430 may be an input and/or output interface, a pin or a circuit, etc. The processing unit 410 may execute computer-executable instructions stored by the memory unit to enable the apparatus to perform the method 200 or the method 300. Optionally, the storage unit is a storage unit in the chip, such as a register, a cache, and the like, and the storage unit may also be a storage unit located outside the chip in the terminal, such as a Read Only Memory (ROM) or another type of static storage device that can store static information and instructions, a Random Access Memory (RAM), and the like.
As an implementation manner, the function of the transceiving unit 430 may be considered to be implemented by a transceiving circuit or a transceiving dedicated chip. The processing unit 410 may be considered to be implemented by a dedicated processing chip, a processing circuit, a processing unit or a general-purpose chip.
As another implementation manner, it may be considered that the communication device (e.g., a terminal device or an access network device) provided in the embodiment of the present application is implemented by using a general-purpose computer. Program codes for realizing the functions of the processing unit 410 and the transceiver unit 430 are stored in the storage unit 420, and the general-purpose processing unit executes the codes in the storage unit 420 to realize the functions of the processing unit 410 and the transceiver unit 430.
In some embodiments, the apparatus 400 may be an electronic device, and the processing unit 410 is configured to generate a first rendering command according to first rendering information, where the first rendering information includes rendering information generated by a control and a layout of a current layer; the processing unit 410 is further configured to generate a second rendering command according to second rendering information, where the second rendering information includes dynamic rendering information of a current layer, and the generating of the first rendering command according to the first rendering information and the generating of the second rendering command according to the second rendering information are completed at the same stage; the transceiving unit 430 is configured to issue the first rendering command and the second rendering command to a graphics processor; the processing unit 410 is further configured to set first indication information, where the first indication information is used to indicate that a compositor does not render a current layer according to the second rendering information any more; the processing unit 410 is further configured to perform layer composition according to image data of multiple layers to obtain a current image frame, where the image data of multiple layers includes image data of a current layer, and the image data of the current layer is obtained by the graphics processor through rendering according to the first rendering command and the second rendering command.
In one implementation, before issuing the first rendering command and the second rendering command to a graphics processor, the processing unit 410 is further configured to: and determining the priority level of the electronic equipment in the operating system according to the second indication information.
In one implementation, the dynamic rendering information includes at least one of the following parameters: rotation, zoom, and fillet radius.
In one implementation, the processing unit 410 is further configured to: and determining the first rendering information and the second rendering information according to a dynamic rendering strategy.
In one implementation, the dynamic rendering strategy includes: when the first parameter included in the first rendering information is different from the first parameter included in the second rendering information, the first parameter included in the first rendering information is adjusted according to the first parameter included in the second rendering information.
In one implementation, the processing unit 410 is further configured to: responding to a first vertical synchronous signal, and generating a first rendering command according to first rendering information, wherein the first rendering information comprises rendering information generated by a control and a layout of a current layer; generating a second rendering command according to second rendering information, wherein the second rendering information comprises dynamic effect rendering information of a current layer, and the generation of the first rendering command according to the first rendering information and the generation of the second rendering command according to the second rendering information are completed in the same stage; sending the first rendering command and the second rendering command to a graphics processor; setting first indication information, wherein the first indication information is used for indicating a synthesizer not to render the current layer according to the second rendering information any more; and responding to a second vertical synchronization signal, performing layer composition according to image data of a plurality of layers to obtain a current image frame, wherein the image data of the plurality of layers comprises the image data of the current layer, and the image data of the current layer is obtained by the graphics processor through rendering according to the first rendering command and the second rendering command.
When the apparatus 400 is configured in an electronic device or is itself an electronic device, each module or unit in the apparatus 400 may be configured to perform each action or process performed by the electronic device in the above method; a detailed description is omitted here to avoid repetition.
Fig. 10 is a schematic structural diagram of an electronic device 500 according to an embodiment of the present application. As shown in fig. 10, the electronic device 500 may include a processor 510, an external memory interface 520, an internal memory 521, a Universal Serial Bus (USB) interface 530, a charging management module 540, a power management module 541, a battery 542, an antenna 1, an antenna 2, a mobile communication module 550, a wireless communication module 560, an audio module 570, a speaker 570A, a receiver 570B, a microphone 570C, an earphone interface 570D, a sensor module 580, keys 590, a motor 591, an indicator 592, a camera 593, a display screen 594, a Subscriber Identity Module (SIM) card interface 595, and the like.
The sensor module 580 may include a pressure sensor 580A, a gyro sensor 580B, an air pressure sensor 580C, a magnetic sensor 580D, an acceleration sensor 580E, a distance sensor 580F, a proximity light sensor 580G, a fingerprint sensor 580H, a temperature sensor 580J, a touch sensor 580K, an ambient light sensor 580L, a bone conduction sensor 580M, and the like.
It is to be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 500. In other embodiments, the electronic device 500 may include more or fewer components than illustrated, some components may be combined or split, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 510 may include one or more processing units. For example, the processor 510 may include an Application Processor (AP), a modem processor, a GPU, an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be a neural center and a command center of the electronic device 500. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 510 for storing instructions and data. In some embodiments, the memory in processor 510 is a cache memory. The memory may hold instructions or data that have just been used or recycled by processor 510. If the processor 510 needs to use the instruction or data again, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 510, thereby increasing the efficiency of the system.
In some embodiments, processor 510 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
It should be understood that the connection relationship between the modules illustrated in the present embodiment is only an exemplary illustration, and does not limit the structure of the electronic device 500. In other embodiments, the electronic device 500 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 540 is used to receive charging input from the charger. The charging management module 540 may also provide power to the electronic device through the power management module 541 while charging the battery 542.
The power management module 541 is used to connect the battery 542, the charging management module 540 and the processor 510. The power management module 541 receives input from the battery 542 and/or the charging management module 540, and provides power to the processor 510, the internal memory 521, the external memory, the display 594, the camera 593, the wireless communication module 560, and the like. In some other embodiments, the power management module 541 may also be disposed in the processor 510. In other embodiments, the power management module 541 and the charging management module 540 may be disposed in the same device.
The wireless communication function of the electronic device 500 may be implemented by the antenna 1, the antenna 2, the mobile communication module 550, the wireless communication module 560, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 500 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network.
The mobile communication module 550 may provide a solution including 2G/3G/4G/5G wireless communication applied on the electronic device 500. The mobile communication module 550 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 550 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 550 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 570A, the receiver 570B, etc.) or displays images or videos through the display screen 594.
The wireless communication module 560 may provide a solution for wireless communication applied to the electronic device 500, including Wireless Local Area Networks (WLANs) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite Systems (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 560 may be one or more devices integrating at least one communication processing module. The wireless communication module 560 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 510. The wireless communication module 560 may also receive a signal to be transmitted from the processor 510, frequency-modulate it, amplify it, and convert it into electromagnetic waves via the antenna 2 to radiate it.
In some embodiments, antenna 1 of the electronic device 500 is coupled to the mobile communication module 550 and antenna 2 is coupled to the wireless communication module 560 such that the electronic device 500 may communicate with networks and other devices via wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), time division code division multiple access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou satellite navigation system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 500 implements display functions via the GPU, the display screen 594, and the application processor. The GPU is a microprocessor for image processing and is connected to the display screen 594 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 510 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 594 is used for displaying images, video, and the like. The display screen 594 includes a display panel. The display panel may be a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
The display screen 594 in the embodiment of the present application may be a touch screen; that is, a touch sensor 580K is integrated in the display screen 594. The touch sensor 580K may also be referred to as a "touch panel". In other words, the display screen 594 may include a display panel and a touch panel, and the touch sensor 580K and the display screen 594 together form what is commonly called a touch screen. The touch sensor 580K is used to detect a touch operation applied on or near it. A touch operation detected by the touch sensor 580K may be passed to the upper layer by a driver of the kernel layer (for example, a TP driver) to determine the type of the touch event. Visual output related to the touch operation may be provided through the display screen 594. In other embodiments, the touch sensor 580K may be disposed on a surface of the electronic device 500 at a location different from that of the display screen 594.
The electronic device 500 may implement a capture function via the ISP, the camera 593, the video codec, the GPU, the display screen 594, and the application processor, etc. The ISP is used to process the data fed back by the camera 593. The camera 593 is used to capture still images or video. The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. Video codecs are used to compress or decompress digital video. The electronic device 500 may support one or more video codecs. In this way, the electronic device 500 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example, the transfer mode between neurons of the human brain, the NPU processes input information quickly and can also continuously perform self-learning. Applications such as intelligent recognition of the electronic device 500, for example, image recognition, face recognition, speech recognition, and text understanding, can be implemented by the NPU.
The external memory interface 520 can be used to connect an external memory card, such as a MicroSD card, to extend the storage capability of the electronic device 500. The external memory card communicates with the processor 510 through the external memory interface 520 to implement a data storage function, for example, to save files such as music and videos in the external memory card. The internal memory 521 may be used to store computer-executable program code, where the executable program code includes instructions. The processor 510 executes the instructions stored in the internal memory 521 to perform various functional applications and data processing of the electronic device 500. For example, in the embodiment of the present application, the processor 510 may execute instructions stored in the internal memory 521. The internal memory 521 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the electronic device 500 (such as audio data and a phone book), and the like. In addition, the internal memory 521 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The electronic device 500 may implement an audio function, for example, music playing and recording, through the audio module 570, the speaker 570A, the receiver 570B, the microphone 570C, the earphone interface 570D, the application processor, and the like.
The audio module 570 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal. The audio module 570 may further be used to encode and decode audio signals. The speaker 570A, also called a "horn", is used to convert audio electrical signals into sound signals. The receiver 570B, also called an "earpiece", is used to convert audio electrical signals into sound signals. The microphone 570C, also called a "mike" or "mic", is used to convert sound signals into electrical signals. The earphone interface 570D is used to connect wired earphones.
The pressure sensor 580A is used for sensing a pressure signal, which can be converted into an electrical signal. In some embodiments, pressure sensor 580A may be disposed on display screen 594. The pressure sensor 580A may be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, or the like.
The capacitive pressure sensor may include at least two parallel plates having a conductive material. When a force acts on the pressure sensor 580A, the capacitance between the electrodes changes, and the electronic device 500 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 594, the electronic device 500 detects the intensity of the touch operation according to the pressure sensor 580A. The electronic device 500 may also calculate the touched position from the detection signal of the pressure sensor 580A. In some embodiments, touch operations that are applied to the same touch position but have different touch operation intensities may correspond to different operation instructions. In this embodiment, the electronic device 500 may obtain the pressing force of a user's touch operation through the pressure sensor 580A.
The keys 590 include a power key, a volume key, and the like. The keys 590 may be mechanical keys or touch keys. The electronic device 500 may receive key input and generate key signal input related to user settings and function control of the electronic device 500. The motor 591 may generate a vibration prompt; it can be used for an incoming-call vibration prompt and also for touch vibration feedback. The indicator 592 may be an indicator light and may be used to indicate a charging status, a change in battery level, a message, a missed call, a notification, and the like. The SIM card interface 595 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 595 or removed from the SIM card interface 595 to come into contact with or be separated from the electronic device 500. The electronic device 500 may support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 595 may support a NanoSIM card, a MicroSIM card, a SIM card, and the like.
The methods in the following embodiments may be implemented in the electronic device 500 having the above-described hardware structure.
Fig. 11 is a schematic structural diagram of a terminal device 600 provided in the present application. The terminal device 600 may perform the actions performed by the electronic device in the above-described method embodiments.
For convenience of explanation, fig. 11 shows only main components of the terminal device. As shown in fig. 11, the terminal apparatus 600 includes a processor, a memory, a control circuit, an antenna, and an input-output device.
The processor is mainly configured to process a communication protocol and communication data, control the entire terminal device, execute a software program, and process data of the software program, for example, to support the terminal device in performing the actions described in the foregoing method embodiments. The memory is mainly used for storing the software program and data. The control circuit is mainly used for converting between a baseband signal and a radio frequency signal and processing the radio frequency signal. The control circuit together with the antenna, which may also be called a transceiver, is mainly used for transceiving radio frequency signals in the form of electromagnetic waves. Input and output devices, such as a touch screen, a display screen, or a keyboard, are mainly used for receiving data input by a user and outputting data to the user.
When the terminal device is turned on, the processor can read the software program in the storage unit, interpret and execute the instruction of the software program, and process the data of the software program. When data needs to be sent wirelessly, the processor outputs a baseband signal to the radio frequency circuit after performing baseband processing on the data to be sent, and the radio frequency circuit performs radio frequency processing on the baseband signal and sends the radio frequency signal outwards in the form of electromagnetic waves through the antenna. When data is sent to the terminal equipment, the radio frequency circuit receives radio frequency signals through the antenna, converts the radio frequency signals into baseband signals and outputs the baseband signals to the processor, and the processor converts the baseband signals into the data and processes the data.
Those skilled in the art will appreciate that fig. 11 shows only one memory and processor for ease of illustration. In an actual terminal device, there may be multiple processors and memories. The memory may also be referred to as a storage medium or a storage device, and the like, which is not limited in this application.
For example, the processor may include a baseband processor and a central processing unit. The baseband processor is mainly used for processing the communication protocol and the communication data, and the central processing unit is mainly used for controlling the entire terminal device, executing the software program, and processing the data of the software program. The processor in fig. 11 integrates the functions of the baseband processor and the central processing unit; those skilled in the art will understand that the baseband processor and the central processing unit may also be independent processors interconnected through a bus or the like. Those skilled in the art will appreciate that the terminal device may include a plurality of baseband processors to accommodate different network formats, the terminal device may include a plurality of central processing units to enhance its processing capability, and various components of the terminal device may be connected by various buses. The baseband processor may also be expressed as a baseband processing circuit or a baseband processing chip. The central processing unit may also be expressed as a central processing circuit or a central processing chip. The function of processing the communication protocol and the communication data may be built into the processor, or may be stored in the storage unit in the form of a software program, and the processor executes the software program to implement the baseband processing function.
For example, in the embodiment of the present application, the antenna and the control circuit having transceiving functions may be regarded as the transceiving unit 610 of the terminal device 600, and the processor having a processing function may be regarded as the processing unit 650 of the terminal device 600. As shown in fig. 11, the terminal device 600 includes the transceiving unit 610 and the processing unit 650. The transceiving unit may also be referred to as a transceiver, a transceiving device, or the like. Optionally, a component for implementing a receiving function in the transceiving unit 610 may be regarded as a receiving unit, and a component for implementing a sending function in the transceiving unit 610 may be regarded as a sending unit; that is, the transceiving unit includes a receiving unit and a sending unit. For example, the receiving unit may also be referred to as a receiver or a receiving circuit, and the sending unit may be referred to as a transmitter or a transmitting circuit.
It should be understood that in the embodiments of the present application, the processor may be a Central Processing Unit (CPU), and the processor may also be other general-purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will also be appreciated that the memory in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both a volatile memory and a non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), and a direct rambus RAM (DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions or computer programs. The procedures or functions according to the embodiments of the present application are wholly or partially generated when the computer instructions or the computer program are loaded or executed on a computer. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner or in a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium. The semiconductor medium may be a solid-state drive.
The embodiments of the present application also provide a computer-readable medium, on which a computer program is stored, where the computer program, when executed by a computer, implements the steps performed by the electronic device in any of the above embodiments.
The embodiments of the present application further provide a computer program product, where the computer program product, when executed by a computer, implements the steps performed by the electronic device in any of the above embodiments.
An embodiment of the present application further provides a system chip. The system chip includes a communication unit and a processing unit. The processing unit may be, for example, a processor. The communication unit may be, for example, a communication interface, an input/output interface, a pin, or a circuit. The processing unit can execute computer instructions, so that a chip in the electronic device performs the steps performed by the electronic device provided in the embodiments of the present application.
Optionally, the computer instructions are stored in a storage unit.
The embodiments in the present application may be used independently or jointly, and are not limited herein.
In addition, various aspects or features of the present application may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer-readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, etc.), optical disks (e.g., Compact Disk (CD), Digital Versatile Disk (DVD), etc.), smart cards, and flash memory devices (e.g., erasable programmable read-only memory (EPROM), card, stick, or key drive, etc.). In addition, various storage media described herein can represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" can include, without being limited to, wireless channels and various other media capable of storing, containing, and/or carrying instruction(s) and/or data.
It should be understood that "and/or" describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may indicate that only A exists, that both A and B exist, or that only B exists. The character "/" generally indicates an "or" relationship between the former and latter associated objects. "At least one" means one or more than one; "at least one of A and B", similar to "A and/or B", describes an association relationship between associated objects and means that three relationships may exist; for example, "at least one of A and B" may indicate that only A exists, that both A and B exist, or that only B exists.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method of image processing, comprising:
generating a first rendering command according to first rendering information, wherein the first rendering information comprises rendering information generated by a control and a layout of a current layer;
generating a second rendering command according to second rendering information, the second rendering information including dynamic rendering information of a current layer,
wherein the generating of the first rendering command according to the first rendering information and the generating of the second rendering command according to the second rendering information are completed at the same stage;
sending the first rendering command and the second rendering command to a graphics processor;
setting first indication information, wherein the first indication information is used for indicating a synthesizer not to render the current layer according to the second rendering information any more;
and performing layer composition according to the image data of the plurality of layers to obtain a current image frame, wherein the image data of the plurality of layers comprises the image data of the current layer, and the image data of the current layer is obtained by the graphics processor through rendering according to the first rendering command and the second rendering command.
2. The method of claim 1, wherein prior to issuing the first rendering command and the second rendering command to a graphics processor, the method further comprises:
and determining the priority level of the method in the operating system according to the second indication information.
3. The method according to claim 1 or 2, wherein the dynamic rendering information comprises at least one of the following parameters:
rotation, zoom, and fillet radius.
4. The method of claim 3, further comprising:
and determining the first rendering information and the second rendering information according to a dynamic rendering strategy.
5. The method of claim 4, wherein the dynamic rendering strategy comprises:
when the first parameter included in the first rendering information is different from the first parameter included in the second rendering information, the first parameter included in the first rendering information is adjusted according to the first parameter included in the second rendering information.
6. The method according to any one of claims 1 to 5, further comprising:
responding to a first vertical synchronous signal, and generating a first rendering command according to first rendering information, wherein the first rendering information comprises rendering information generated by a control and a layout of a current layer;
generating a second rendering command according to second rendering information, the second rendering information including dynamic rendering information of a current layer,
wherein the generating of the first rendering command according to the first rendering information and the generating of the second rendering command according to the second rendering information are completed at the same stage;
sending the first rendering command and the second rendering command to a graphics processor;
setting first indication information, wherein the first indication information is used for indicating a synthesizer not to render the current layer according to the second rendering information any more;
and responding to a second vertical synchronization signal, performing layer composition according to image data of a plurality of layers to obtain a current image frame, wherein the image data of the plurality of layers comprises the image data of the current layer, and the image data of the current layer is obtained by the graphics processor through rendering according to the first rendering command and the second rendering command.
7. An electronic device, comprising:
the processing unit is used for generating a first rendering command according to first rendering information, wherein the first rendering information comprises a control of a current layer and rendering information generated by layout;
the processing unit is further configured to generate a second rendering command according to second rendering information, where the second rendering information includes dynamic rendering information of a current layer,
wherein the generating of the first rendering command according to the first rendering information and the generating of the second rendering command according to the second rendering information are completed at the same stage;
the transceiving unit is used for sending the first rendering command and the second rendering command to a graphic processor;
the processing unit is further configured to set first indication information, where the first indication information is used to indicate that the synthesizer is not to render the current layer according to the second rendering information any more;
the processing unit is further configured to perform layer composition according to image data of multiple layers to obtain a current image frame, where the image data of the multiple layers includes image data of a current layer, and the image data of the current layer is obtained by the graphics processor through rendering according to the first rendering command and the second rendering command.
8. The electronic device of claim 7, wherein prior to said issuing the first rendering commands and the second rendering commands to a graphics processor, the processing unit is further configured to:
and determining the priority level of the electronic equipment in the operating system according to the second indication information.
9. The electronic device according to claim 7 or 8, wherein the dynamic rendering information comprises at least one of the following parameters:
rotation, zoom, and fillet radius.
10. The electronic device of claim 9, wherein the processing unit is further configured to:
and determining the first rendering information and the second rendering information according to a dynamic rendering strategy.
11. The electronic device of claim 10, wherein the dynamic rendering strategy comprises:
when the first parameters included in the first rendering information are different from the first parameters included in the second rendering information, the first parameters included in the first rendering information are adjusted according to the first parameters included in the second rendering information.
12. The electronic device of any of claims 7-11, wherein the processing unit is further configured to:
responding to a first vertical synchronous signal, and generating a first rendering command according to first rendering information, wherein the first rendering information comprises rendering information generated by a control and a layout of a current layer;
generating a second rendering command according to second rendering information, the second rendering information including dynamic rendering information of a current layer,
wherein the generating of the first rendering command according to the first rendering information and the generating of the second rendering command according to the second rendering information are completed at the same stage;
sending the first rendering command and the second rendering command to a graphics processor;
setting first indication information, wherein the first indication information is used for indicating a synthesizer not to render the current layer according to the second rendering information any more;
and in response to a second vertical synchronization signal, performing layer composition according to image data of multiple layers to obtain a current image frame, where the image data of the multiple layers includes image data of a current layer, and the image data of the current layer is obtained by the graphics processor through rendering according to the first rendering command and the second rendering command.
13. An electronic device comprising a processor coupled to a memory, the memory for storing a computer program, the processor for executing the computer program stored in the memory to cause the electronic device to perform the method of any of claims 1-6.
14. A computer-readable storage medium, characterized in that it stores a computer program which, when executed, implements the method according to any one of claims 1 to 6.
15. A chip comprising a processor and an interface;
the processor is configured to read instructions to perform the method of any one of claims 1 to 6.
CN202011600925.1A 2020-12-29 2020-12-29 Image processing method and electronic equipment Pending CN114756359A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011600925.1A CN114756359A (en) 2020-12-29 2020-12-29 Image processing method and electronic equipment
PCT/CN2021/136805 WO2022143082A1 (en) 2020-12-29 2021-12-09 Image processing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011600925.1A CN114756359A (en) 2020-12-29 2020-12-29 Image processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN114756359A true CN114756359A (en) 2022-07-15

Family

ID=82259039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011600925.1A Pending CN114756359A (en) 2020-12-29 2020-12-29 Image processing method and electronic equipment

Country Status (2)

Country Link
CN (1) CN114756359A (en)
WO (1) WO2022143082A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024046084A1 (en) * 2022-08-31 2024-03-07 华为技术有限公司 User interface display method and related apparatus
WO2024104116A1 (en) * 2022-11-14 2024-05-23 Oppo广东移动通信有限公司 Effect processing method and electronic device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115277924B (en) * 2022-07-26 2024-05-17 努比亚技术有限公司 Dynamic lock screen display control method, equipment and computer readable storage medium
CN117557701A (en) * 2022-08-03 2024-02-13 荣耀终端有限公司 Image rendering method and electronic equipment
CN116684677B (en) * 2022-09-20 2024-06-11 荣耀终端有限公司 Electronic equipment dynamic effect playing method, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103268620A (en) * 2013-04-28 2013-08-28 华为技术有限公司 Graphic processing method, graphic processing device and terminal device
CN106600521A (en) * 2016-11-30 2017-04-26 宇龙计算机通信科技(深圳)有限公司 Image processing method and terminal device
US20200201512A1 (en) * 2018-12-20 2020-06-25 Microsoft Technology Licensing, Llc Interactive editing system
CN111400024B (en) * 2019-01-03 2023-10-10 百度在线网络技术(北京)有限公司 Resource calling method and device in rendering process and rendering engine
CN111611031A (en) * 2019-02-26 2020-09-01 华为技术有限公司 Graph drawing method and electronic equipment
CN110503708A (en) * 2019-07-03 2019-11-26 华为技术有限公司 A kind of image processing method and electronic equipment based on vertical synchronizing signal

Also Published As

Publication number Publication date
WO2022143082A1 (en) 2022-07-07

Similar Documents

Publication Publication Date Title
CN110351422B (en) Notification message preview method, electronic equipment and related products
US11567623B2 (en) Displaying interfaces in different display areas based on activities
CN114756359A (en) Image processing method and electronic equipment
WO2021115194A1 (en) Application icon display method and electronic device
US20220300129A1 (en) Split-screen processing method and terminal device
CN112291764A (en) Content connection method, system and electronic equipment
CN111694475B (en) Terminal control method and device and terminal equipment
EP4060475A1 (en) Multi-screen cooperation method and system, and electronic device
CN113721785B (en) Method for adjusting sampling rate of touch screen and electronic equipment
CN111147660B (en) Control operation method and electronic equipment
US20240192835A1 (en) Display method and related apparatus
CN113672133A (en) Multi-finger interaction method and electronic equipment
US20230186013A1 (en) Annotation method and electronic device
WO2023005711A1 (en) Service recommendation method and electronic device
WO2023029985A1 (en) Method for displaying dock bar in launcher and electronic device
CN114327198A (en) Control function pushing method and device
WO2023169276A1 (en) Screen projection method, terminal device, and computer-readable storage medium
WO2023030057A1 (en) Screen recording method, electronic device, and computer readable storage medium
WO2021227847A9 (en) Method and apparatus for applying file
CN115700431A (en) Desktop display method and electronic equipment
CN115344168A (en) Message display method, terminal and computer readable storage medium
CN113672563A (en) File application method and device
CN117075786A (en) Page display method and electronic equipment
CN117667229A (en) Display method, electronic device and storage medium
CN115878232A (en) Display method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination