CN116643685A - Image processing method, device, equipment and storage medium


Info

Publication number: CN116643685A
Authority: CN (China)
Prior art keywords: layer, processing, target layer, target, frame
Legal status: Pending
Application number: CN202310788039.3A
Other languages: Chinese (zh)
Inventors: 胡勇 (Hu Yong), 刘为超 (Liu Weichao)
Original and current assignee: Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd; priority to CN202310788039.3A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on GUIs for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0487: Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on GUIs using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques based on GUIs using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The embodiments of the present application disclose an image processing method, apparatus, device, and storage medium. The method includes: determining a target layer among a plurality of layers to be displayed, where the target layer is associated with a touch event; performing layer display processing on the target layer based on a first processing mode; and performing layer display processing on at least one non-target layer of the plurality of layers based on a second processing mode. The layer display processing performed in the first processing mode occupies a first time period, the layer display processing performed in the second processing mode occupies a second time period, and the first time period is less than the second time period.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present application relates to, but is not limited to, the field of image processing technology, and in particular to an image processing method, apparatus, device, and storage medium.
Background
Currently, users view various types of content on the display screen of an electronic device. Displaying an interface on the screen of a terminal device typically involves drawing, rendering, composition, and other processing. When a user inputs commands or draws on the touch screen of an electronic device with a stylus and the writing trace is displayed, the display subsystem suffers from a poor latency experience.
Disclosure of Invention
In view of this, embodiments of the present application at least provide an image processing method, apparatus, device and storage medium.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including:
determining a target layer among a plurality of layers to be displayed, where the target layer is associated with a touch event; performing layer display processing on the target layer based on a first processing mode; and performing layer display processing on at least one non-target layer of the plurality of layers based on a second processing mode; the layer display processing performed in the first processing mode occupies a first time period, the layer display processing performed in the second processing mode occupies a second time period, and the first time period is less than the second time period.
In some embodiments, the determining of a target layer among the plurality of layers to be displayed includes: when the touch event is received and the layer displayed in the current window is detected to belong to a target application, taking the layer corresponding to the target application as the target layer.
In some embodiments, the method further includes: obtaining a time average of the layer display processing over a preset number of frames of the target layer; determining that the time average exceeds one frame period; and adjusting the operating frequency of a target processor from a first frequency to a second frequency, where the first frequency is greater than the second frequency.
In some embodiments, the layer display processing includes rendering processing and composition processing. The layer display processing performed in the first processing mode occupies a first time period and the layer display processing performed in the second processing mode occupies a second time period in the following sense: in the first processing mode, the rendering processing and composition processing for the target layer occupy the first time period, which is less than or equal to one frame period; in the second processing mode, the rendering processing and composition processing for the non-target layer occupy the second time period, which is greater than one frame period.
In some embodiments, the composition processing includes a layer commit sub-stage, a control frame transfer sub-stage, a layer buffering sub-stage, and a composition preparation sub-stage, and the first processing mode includes at least one of the following, such that the first time period is less than or equal to one frame period: in the layer commit sub-stage, committing the target layer directly, without waiting for the delay signal of the layer preceding the target layer, where the delay signal indicates that the preceding layer has been processed; in the control frame transfer sub-stage, transferring the target layer directly; in the layer buffering sub-stage, setting the state of the target layer in a buffer queue to a ready state, where the buffer queue stores layers to be processed over a preset number of frame periods; in the composition preparation sub-stage, setting the state of the target layer based only on a first identification signal of the current frame, where the first identification signal indicates whether the buffer of the current frame is accessible.
In some embodiments, the second processing mode includes at least one of the following: in the layer commit sub-stage, committing each non-target layer to the display subsystem after the delay signal is received; in the control frame transfer sub-stage, controlling the speed at which each non-target layer is transferred to the display subsystem; in the layer buffering sub-stage, setting the state of each non-target layer in the buffer queue to a waiting state; in the composition preparation sub-stage, setting the state of each non-target layer based on the first identification signal of the current frame and the first identification signal of the previous frame, where the first identification signal indicates whether the buffer of the corresponding frame is accessible.
In some embodiments, the target layer carries a preset identifier, and the method further includes: obtaining attribute information of each of the plurality of layers through the hardware composer abstraction layer, where the attribute information includes whether the preset identifier is set; in the second processing mode, for the at least one non-target layer, superimposing and displaying the at least one non-target layer in response to receiving a second identification signal, where the second identification signal indicates that display of the previous frame of the at least one non-target layer has completed; and in the first processing mode, for the target layer, skipping the wait for the second identification signal and superimposing it directly with the at least one non-target layer for display.
In some embodiments, the method further includes: during composition of the plurality of layers to be displayed, deleting the preset identifier of the target layer in response to detecting that the target layer has been destroyed.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the layer determining module is used for determining a target layer in a plurality of layers to be displayed; wherein the target layer is associated with a touch event;
the first processing module is used for carrying out layer display processing on the target layer based on a first processing mode;
the second processing module is used for carrying out layer display processing on at least one non-target layer in the plurality of layers based on a second processing mode; the layer display processing performed in the first processing mode occupies a first time period, and the layer display processing performed in the second processing mode occupies a second time period; the first time period is less than the second time period.
In a third aspect, embodiments of the present application provide a computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, the processor implementing some or all of the steps of the above method when the program is executed.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which when executed by a processor performs some or all of the steps of the above method.
In the embodiments of the present application, the touch event is used to determine the target layer, that is, the layer of the target application interface to be displayed; the target layer and the non-target layers among the plurality of layers to be displayed are then processed for display using the first processing mode and the second processing mode, respectively. The display scheduling time of the target layer is thereby adjusted flexibly, yielding a better optimization effect and a better latency experience for the display subsystem.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the aspects of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is an optional flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a processor optimization strategy according to an embodiment of the present application;
Fig. 3 is a schematic flow chart of a cloud update list policy according to an embodiment of the present application;
FIG. 4 is a logic flow diagram of a single-frame performance optimization process provided by an embodiment of the present application;
fig. 5 is a schematic diagram of a composition structure of an image processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic diagram of a hardware entity of a computer device according to an embodiment of the present application.
Detailed Description
The technical solution of the present application will be further elaborated with reference to the accompanying drawings and embodiments, which should not be construed as limiting the application; all other embodiments obtained by a person skilled in the art without inventive effort fall within the scope of protection of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
The term "first/second/third" is merely to distinguish similar objects and does not represent a particular ordering of objects, it being understood that the "first/second/third" may be interchanged with a particular order or precedence, as allowed, to enable embodiments of the application described herein to be implemented in other than those illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing the application only and is not intended to be limiting of the application.
Before describing embodiments of the present application in further detail, the terms and terminology involved in the embodiments of the present application will be described, and the terms and terminology involved in the embodiments of the present application are suitable for the following explanation.
The process by which a computer converts the shapes stored in memory into what is actually drawn on the screen is referred to as rendering.
After one frame is drawn and before the next frame is ready to be drawn, the display sends a vertical synchronization signal (Vertical Synchronization, VSync for short). The display typically refreshes at a fixed frequency, which is the frequency at which the vertical synchronization signal is generated.
The Android system employs a UI architecture based on layers (Surfaces) to provide user interfaces for applications. In an Android application, each Activity component is associated with one or more windows, and each window corresponds to a layer. On this layer, the application can draw the user interface (UI) of the window. Finally, the drawn layers are submitted together to the display subsystem (SurfaceFlinger) for composition and are ultimately displayed on the screen. Both the application and the display subsystem can use hardware such as the GPU for UI rendering to obtain a smoother human-computer interface.
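By way of illustration, the following is a minimal sketch of this architecture using the NDK's ANativeWindow API (the C-level view of a Surface). It is a sketch under assumptions, not part of this application: the ANativeWindow* is assumed to come from a Java Surface via ANativeWindow_fromSurface, and the buffer is assumed to be in a 32-bit RGBA format; each unlockAndPost queues the drawn buffer to SurfaceFlinger for composition.

```cpp
// Minimal sketch (illustrative only): drawing one frame into a Surface.
// Assumes the buffer format is 32-bit RGBA; real code must check buffer.format.
#include <android/native_window.h>
#include <cstdint>

void drawOneFrame(ANativeWindow* window) {
    ANativeWindow_Buffer buffer;
    if (ANativeWindow_lock(window, &buffer, nullptr) != 0) return; // dequeue a buffer
    auto* pixels = static_cast<uint32_t*>(buffer.bits);
    for (int y = 0; y < buffer.height; ++y)
        for (int x = 0; x < buffer.width; ++x)
            pixels[y * buffer.stride + x] = 0xFFFFFFFF;            // fill white
    // Queue the buffer back: SurfaceFlinger picks it up for composition.
    ANativeWindow_unlockAndPost(window);
}
```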
The embodiment of the application provides an image processing method which can be executed by a processor of computer equipment. The computer device may be a device with image processing capability, such as a server, a notebook computer, a tablet computer, a desktop computer, a smart television, a set-top box, a mobile device (e.g., a mobile phone, a portable video player, a personal digital assistant, a dedicated messaging device, and a portable game device). Fig. 1 is an optional flowchart of an image processing method according to an embodiment of the present application, as shown in fig. 1, the method includes steps S110 to S140 as follows:
step S110, determining a target layer of the plurality of layers to be displayed.
Here, the target layer is associated with a touch event. The touch event may be a user's touch on the touch panel, or a pen-writing event triggered by an electronic pen or stylus; the embodiments of the present application do not limit this.
The target layer is the layer that displays the interface of the target application, within which the touch event is triggered; the target application is the application corresponding to the interface currently shown on the display screen of the electronic device, usually the top-level or sub-top-level application.
Step S120, performing layer display processing on the target layer based on the first processing mode.
Here, the target layer among the plurality of layers may carry a preset identifier; during layer display processing, the first processing mode is applied to the target layer.
Step S130, performing layer display processing on at least one non-target layer of the plurality of layers based on the second processing manner.
Here, the layer display processing performed in the first processing mode occupies a first time period, the layer display processing performed in the second processing mode occupies a second time period, and the first time period is less than the second time period. That is, the display scheduling time is shortened for the target layer among the plurality of layers. Concretely, corresponding jump logic can be applied in at least one transaction that invokes the target layer so as to accelerate its layer display processing; for example, the target layer is scheduled immediately at any scheduling time without waiting for the identification signal indicating that the preceding layer has been processed, or the transaction state of the target layer in the buffer queue is treated as ready by default.
It can be understood that the layer display processing of the electronic device proceeds as follows: after the UI thread finishes drawing one or more layers, the render thread renders them; the composition thread then composites the layers, and the result is displayed on the screen of the electronic device. Measurement, layout, drawing, and rendering of each frame's layers are typically performed in response to a first vertical synchronization signal, layer composition is performed in response to a second vertical synchronization signal, and the display refresh is performed in response to a third vertical synchronization signal. In other words, drawing, rendering, and compositing a layer typically takes two synchronization periods. This causes an excessive response delay of the electronic device to touch events and affects the smoothness of interaction.
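The VSync-driven cadence described above can be sketched with the NDK's AChoreographer (available since API 24), which invokes a callback on each vertical synchronization signal; the callback must be registered on a thread with a looper. The comments mirror the three synchronization signals above; this is an illustrative sketch rather than the mechanism claimed by this application:

```cpp
// Sketch of VSync-driven frame scheduling via AChoreographer (API 24+).
#include <android/choreographer.h>

static void onVsync(long frameTimeNanos, void* data) {
    // VSync N:   UI thread measures/lays out/draws; render thread renders.
    // VSync N+1: SurfaceFlinger composites the queued layers.
    // VSync N+2: the display refreshes with the composited frame.
    AChoreographer_postFrameCallback(AChoreographer_getInstance(), onVsync, data);
}

void startFrameLoop() {
    // Must be called on a thread with an ALooper (e.g., the main thread).
    AChoreographer_postFrameCallback(AChoreographer_getInstance(), onVsync, nullptr);
}
```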
In the embodiments of the present application, the touch event is used to determine the target layer, that is, the layer of the target application interface to be displayed; the target layer and the non-target layers among the plurality of layers to be displayed are then processed for display using the first processing mode and the second processing mode, respectively. The display scheduling time of the target layer is thereby adjusted flexibly, yielding a better optimization effect and a better latency experience for the display subsystem.
In some embodiments, step S110, "determining a target layer among the plurality of layers to be displayed," is further implemented as follows: when the touch event is received and the layer displayed in the current window is detected to belong to a target application, the layer corresponding to the target application is taken as the target layer.
In implementation, after event information is written by the driver, the input subsystem reports the touch event; once the touch-event report is detected, it is determined whether the top-level or sub-top-level application of the current window stack is the target application, the target layer is determined, and the target-layer function is enabled. In this way, the dual conditions of the touch event and the target application accurately identify the target layer to be optimized, with no additional function-enabling restriction.
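A minimal sketch of this dual-condition check follows. The types and names (Layer, isTargetLayer, the whitelist set) are hypothetical stand-ins introduced here for illustration, not framework APIs:

```cpp
#include <set>
#include <string>

struct Layer { std::string ownerPackage; };  // hypothetical stand-in

bool isTargetLayer(const Layer& layer,
                   bool touchEventReported,
                   const std::string& topAppPackage,
                   const std::set<std::string>& targetAppWhitelist) {
    // Condition 1: a touch / pen-writing event has been reported.
    // Condition 2: the top (or sub-top) window belongs to a target application.
    return touchEventReported &&
           targetAppWhitelist.count(topAppPackage) > 0 &&
           layer.ownerPackage == topAppPackage;
}
```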
In some embodiments, the method further comprises the following steps S140 to S160:
step S140, obtaining a time average value of the preset multi-frame image of the target layer for layer display processing.
Here, the layer display processing includes at least a rendering processing and a synthesizing processing, and the time average value of the layer display processing includes a rendering time-consuming average value and a synthesizing time-consuming average value.
The preset multi-frame image is the multi-frame image to be displayed and processed in a set statistical period, and the statistical period can be any time length of 1 second, 3 seconds, 5 seconds and the like. For example, the average rendering time consumption of the near 30 frames of the target layer application and the average synthesizing time consumption of the near 30 frames of the display subsystem are detected respectively, and then summed to determine the time average value of the layer display processing of the preset multi-frame image of the target layer.
Step S150, determining that the time average value exceeds one frame period.
It should be noted that the UI framework periodically draws and renders layers based on the first vertical synchronization signal; the hardware composer (HWC) periodically composites layers based on the second vertical synchronization signal; and the display screen periodically refreshes image frames based on the third vertical synchronization signal. One frame period is, for example, the interval between the second vertical synchronization signal and the third vertical synchronization signal.
If the sum of the average rendering time and the average composition time exceeds one frame period, too much time is being spent rendering or compositing layers, and the operating frequency of the target processor needs to be raised.
Step S160, adjusting the operating frequency of the target processor from the first frequency to the second frequency.
Here, the target processor may be a GPU or a CPU, and the first frequency is greater than the second frequency. In some embodiments, the operating frequency of the target processor is the frequency of each CPU core; the second frequency is determined from the maximum level of the frequency tables of the CPU's big, medium, and little cores, and the minimum frequency of the corresponding CPU core is set to the second frequency according to a preset percentage. In some embodiments, if the first frequency is the minimum frequency of the current gear in a preset frequency table and the current gear is not the lowest gear, the current gear is reduced to the next gear.
It should be noted that the application's (App's) main thread first computes the display content on the CPU, such as view creation, layout calculation, image decoding, and text drawing. The CPU then submits the computed bitmaps to the GPU, which transforms, composites, and renders them. The GPU then submits the rendering result to the frame buffer and waits for the next vertical synchronization signal to display it on the screen. Therefore, by monitoring the system and the application and changing the frequency appropriately, the embodiments of the present application obtain a better latency experience with little impact on power consumption.
In this way, raising the operating frequency of the target processor of the electronic device speeds up the UI thread and the render thread, reducing the time spent drawing and rendering layers and improving the smoothness of layer display.
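The frequency decision of steps S140 to S160 can be sketched as follows; the 30-frame window matches the example above, while raiseCpuMinFrequencyFloor is a hypothetical hook into the platform's frequency policy, stubbed out here:

```cpp
#include <deque>
#include <numeric>

static double average(const std::deque<double>& q) {
    return q.empty() ? 0.0 : std::accumulate(q.begin(), q.end(), 0.0) / q.size();
}

void raiseCpuMinFrequencyFloor() {
    // Hypothetical: e.g., write a higher floor to each core's cpufreq
    // scaling_min_freq node (a privileged operation).
}

void onFrameTimings(std::deque<double>& renderMs, std::deque<double>& composeMs,
                    double newRenderMs, double newComposeMs, double framePeriodMs) {
    auto push = [](std::deque<double>& q, double v) {
        q.push_back(v);
        if (q.size() > 30) q.pop_front();      // keep the last ~30 frames
    };
    push(renderMs, newRenderMs);
    push(composeMs, newComposeMs);
    if (average(renderMs) + average(composeMs) > framePeriodMs)
        raiseCpuMinFrequencyFloor();           // deadline missed on average
    // Otherwise: keep the current frequency; no boost, no extra power cost.
}
```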
In some implementations, the layer display processing includes rendering processing and composition processing. The layer display processing performed in the first processing mode occupies a first time period and the layer display processing performed in the second processing mode occupies a second time period in the following sense: in the first processing mode, the rendering processing and composition processing for the target layer occupy the first time period, which is less than or equal to one frame period; in the second processing mode, the rendering processing and composition processing for the non-target layer occupy the second time period, which is greater than one frame period.
In this way, rendering and composition are completed within one frame period for the target layer, while non-target layers are rendered and composited over more than one frame period following the conventional flow; this reduces latency and improves the touch experience of the electronic device.
In some embodiments, the composition processing includes a layer commit sub-stage, a control frame transfer sub-stage, a layer buffering sub-stage, and a composition preparation sub-stage, and the first processing mode includes at least one of the following, such that the first time period is less than or equal to one frame period: in the layer commit sub-stage, committing the target layer directly, without waiting for the delay signal of the layer preceding the target layer, where the delay signal indicates that the preceding layer has been processed; in the control frame transfer sub-stage, transferring the target layer directly; in the layer buffering sub-stage, setting the state of the target layer in a buffer queue to a ready state, where the buffer queue stores layers to be processed over a preset number of frame periods; in the composition preparation sub-stage, setting the state of the target layer based only on a first identification signal of the current frame, where the first identification signal indicates whether the buffer of the current frame is accessible.
Here, the layer commit sub-stage is the stage in which a layer is submitted to the display composition subsystem; in the conventional flow, a delay signal is awaited to cover the case where the present fence of the preceding layer is about to signal for immediate display.
The control frame transfer sub-stage throttles layer production so that, when the upstream production rate exceeds the downstream consumption rate as the data stream flows from the upstream producer to the downstream consumer, the downstream buffer does not overflow.
The layer buffering sub-stage corresponds to the buffer queue in the scheduling framework. In the first processing mode of the embodiments of the present application, the state of the target layer can default to the ready state to speed up scheduling; that is, the target layer's transaction is released for processing, reducing the delay.
The composition preparation sub-stage reads the first identification signals before compositing the layers; in the conventional flow, the identification information of the previous two frames must be read, e.g., the present fences of frame 0 and frame 1.
In this way, where the conventional flow waits during scheduling for the first identification signal released by the previous frame, or applies backpressure or state preparation, the target layer is invoked directly through jump logic or default settings, accelerating the composition process and reducing the latency of the display subsystem.
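A sketch of the commit sub-stage jump logic follows; Fence, PendingLayer, and submitToCompositor are simplified stand-ins introduced for illustration, not actual SurfaceFlinger types:

```cpp
#include <condition_variable>
#include <mutex>

struct Fence {                       // signaled once the previous layer is processed
    std::mutex m;
    std::condition_variable cv;
    bool signaled = false;
    void wait() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return signaled; });
    }
};

struct PendingLayer { bool isTarget = false; };

void submitToCompositor(PendingLayer&) { /* hand the layer over (stub) */ }

void commitLayer(PendingLayer& layer, Fence& previousLayerFence) {
    if (!layer.isTarget)
        previousLayerFence.wait();   // conventional flow: wait for the delay signal
    submitToCompositor(layer);       // a target layer is committed immediately
}
```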
In some embodiments, the second processing mode includes at least one of the following: in the layer commit sub-stage, committing each non-target layer to the display subsystem after the delay signal is received; in the control frame transfer sub-stage, controlling the speed at which each non-target layer is transferred to the display subsystem; in the layer buffering sub-stage, setting the state of each non-target layer in the buffer queue to a waiting state; in the composition preparation sub-stage, setting the state of each non-target layer based on the first identification signal of the current frame and the first identification signal of the previous frame, where the first identification signal indicates whether the buffer of the corresponding frame is accessible.
Here, the display subsystem (SurfaceFlinger) is the core process of the display system. It is mainly responsible for compositing all layers into the framebuffer; the screen then reads the framebuffer and displays it to the user.
It can be understood that the identification signal is the present fence released after a layer has been processed, indicating that the current processing step of the corresponding layer has completed. Before the current layer is processed, it must be checked whether the present fence of the preceding layer is available; if so, the current processing step can use the current layer, and if not, it must wait until the present fence of the preceding layer becomes available.
In some embodiments, the target layer carries a preset identifier, and the method further includes: obtaining attribute information of each of the plurality of layers through the hardware composer abstraction layer, where the attribute information includes whether the preset identifier is set; in the second processing mode, for the at least one non-target layer, superimposing and displaying the at least one non-target layer in response to receiving a second identification signal, where the second identification signal indicates that display of the previous frame of the at least one non-target layer has completed; and in the first processing mode, for the target layer, skipping the wait for the second identification signal and superimposing it directly with the at least one non-target layer for display.
In this way, after the preset identifier of the target layer is obtained in the hardware composer abstraction layer, the corresponding settings are applied, so that the target layer and the non-target layers are superimposed and displayed using the first processing mode and the second processing mode, respectively, optimizing performance in the display stage.
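A sketch of this display-stage decision follows; HwcLayer, ReleaseFence, and overlayEngineCompose are hypothetical stand-ins for the objects of the hardware composer abstraction layer:

```cpp
struct HwcLayer { bool hasPresetIdentifier = false; };
struct ReleaseFence { void wait() { /* block until the previous frame is shown (stub) */ } };

void overlayEngineCompose(const HwcLayer&) { /* superimpose onto the output (stub) */ }

void overlayLayer(const HwcLayer& layer, ReleaseFence& prevFrameDisplayed) {
    if (!layer.hasPresetIdentifier)
        prevFrameDisplayed.wait();   // second identification signal: previous frame done
    overlayEngineCompose(layer);     // a target layer is superimposed immediately
}
```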
In some embodiments, the method further includes: during composition of the plurality of layers to be displayed, deleting the preset identifier of the target layer in response to detecting that the target layer has been destroyed.
In this way, when the target layer has been destroyed, the plurality of layers are composited following the conventional flow, reducing operational complexity and the likelihood of program exceptions.
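A minimal sketch of this cleanup, with hypothetical fields standing in for the layer's actual state:

```cpp
struct ComposingLayer { bool presetIdentifier = false; bool destroyed = false; };

// During composition: once the target layer is detected as destroyed, its
// preset identifier is deleted so the remaining layers are composited along
// the conventional flow.
void checkDestroyed(ComposingLayer& layer) {
    if (layer.destroyed && layer.presetIdentifier)
        layer.presetIdentifier = false;   // fall back to the conventional flow
}
```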
The above image processing method is described below with reference to a specific embodiment. It should be noted, however, that the specific embodiment is intended only to better illustrate the present application and does not unduly limit it.
To address the poor latency experience of the display subsystem in the main scenario of drawing with a stylus on an Android tablet, the top-level application and the pen-writing event are obtained to determine whether a layer is the target layer; the target layer then enters the single-frame function (single frame), which is equivalent to performing layer display processing on the target layer in the first processing mode described above.
Fig. 2 is a schematic flowchart of the processor optimization strategy provided by an embodiment of the present application. As shown in Fig. 2, after the program starts, step S21, single-frame boost, is performed first: once the single-frame function is enabled, the CPU frequency range is adjusted in a full-frequency downshift manner according to the render/composition data of the previous frames, so as to achieve the best effect at the lowest boost frequency range. Step S22 checks the state, i.e., whether the actual rendering and composition can be completed within one synchronization period. If not, step S23 performs a timed boost: render/composition data of multiple frames is collected over a fixed period and used to boost the CPU frequency range. If so, step S24 keeps running at the current frequency.
Here, the current frequency is the frequency of each CPU core; the full-frequency downshift adjusts the lowest-frequency setting by a percentage of the maximum value in the CPU's big/medium/little core frequency table, after which the minimum frequency of the CPU core is fixed at the set value.
In some implementations, the average rendering time of the application's last 30 or so frames is checked, as is the average composition time of the last 30 or so frames of the display subsystem (SurfaceFlinger, the display system service that uniformly manages layers); the CPU frequency is then adjusted to the gear corresponding to the measured times. If rendering and composition can be completed within one frame period, no CPU boost is applied.
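The gear-based adjustment can be sketched as follows; the gear table values are invented for illustration, and the policy simply settles on the lowest boost gear that still meets the one-frame deadline:

```cpp
#include <vector>

struct Gear { int minFreqKhz; };                       // minimum-frequency floor
const std::vector<Gear> kGears = { {2200000}, {1800000}, {1400000}, {1000000} };

// Start at gear 0 (full boost) and downshift while render + composition
// still fit in one frame period.
int nextGear(int currentGear, bool completesWithinOneFrame) {
    if (completesWithinOneFrame && currentGear + 1 < static_cast<int>(kGears.size()))
        return currentGear + 1;                        // downshift: save power
    if (!completesWithinOneFrame && currentGear > 0)
        return currentGear - 1;                        // upshift: deadline missed
    return currentGear;                                // stay at the current gear
}
```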
Fig. 3 is a schematic flowchart of the cloud-updated whitelist strategy according to an embodiment of the present application. As shown in Fig. 3, the strategy proceeds as follows: step S31, create the layer; step S32, set the transaction state; step S33, determine whether the current layer belongs to a whitelisted application; if not, execute the conventional flow; if so, execute step S34 and further determine whether a pen-writing event has been received; if a pen-writing event is received, continue to step S35 and enable the single-frame flag; otherwise, execute the conventional flow.
It should be noted that after the UI thread finishes measurement, layout, and drawing, the render thread creates the layer in step S31; the surface composer client then sets the transaction state in step S32 and sends it to the display subsystem for the subsequent composition process. The display subsystem checks the pen-writing event and the top layer of the current window; when both function-enabling conditions are met, the single-frame flag enable switch is set to turn on the single-frame function. The whitelist is maintained through cloud updates: after the electronic device boots, a sub-thread synchronizes the list to the display subsystem on the first network connection. This update method is simpler and requires no OTA system upgrade.
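A sketch of the cloud whitelist synchronization follows; the connectivity wait and the cloud endpoint are hypothetical, as the text does not specify them:

```cpp
#include <set>
#include <string>
#include <thread>
#include <utility>

void waitForFirstNetworkConnection() { /* stub: block until connectivity (hypothetical) */ }
std::set<std::string> fetchWhitelistFromCloud() { return {"com.example.notes"}; } // hypothetical

struct DisplaySubsystem {
    std::set<std::string> targetAppWhitelist;
    void setTargetAppWhitelist(std::set<std::string> list) { targetAppWhitelist = std::move(list); }
};

// Called once after boot: a sub-thread syncs the list to the display
// subsystem on the first network connection; no OTA upgrade is needed.
void syncWhitelistOnce(DisplaySubsystem& flinger) {
    std::thread([&flinger] {
        waitForFirstNetworkConnection();
        flinger.setTargetAppWhitelist(fetchWhitelistFromCloud());
    }).detach();
}
```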
In some embodiments, the composition processing flow includes a layer commit sub-stage, a control frame transfer sub-stage, a layer buffering sub-stage, and a composition preparation sub-stage. Fig. 4 is a logic flowchart of the single-frame performance optimization process according to an embodiment of the present application; as shown in Fig. 4, it includes the following steps S41 to S48:
step S41, it is determined whether the single frame flag is enabled.
Here, if the single frame function is not enabled, i.e., the current layer is a non-target layer, a normal flow is performed; if the single frame function is enabled, i.e. the current layer is the target layer, the scheduler is executed in step S42.
It should be noted that when the single frame function is enabled, the fixed working time of the display subsystem is not set, so that the functions of the display subsystem become more flexible.
Step S42, running the scheduler.
Step S43, the scheduler immediately schedules frame processing.
Step S44, in the layer commit sub-stage, committing the target layer directly, without waiting for the delay signal of the layer preceding the target layer.
Here, in the Android native mechanism, a 1 ms delay signal is awaited at commit to cover the case where the present fence is about to signal for immediate display; for the target layer, no such wait is performed, i.e., the target layer skips waiting for the present fence during commit by default.
Notably, when a layer latches its buffer (latch buffer), the native logic also normally waits for the layer's present fence information; for the target layer, this wait is skipped as well.
Step S45, in the control frame transfer sub-stage, transferring the target layer directly.
Here, for non-target layers the conventional flow applies backpressure; the backpressure function lets the GPU throttle frames at the production end. A flow meant to reach the screen quickly does not suit frame-throttling scenarios, so if the layer is the target layer this flow is skipped; the embodiments of the present application always display and deliver the target layer, shortening the latency.
Step S46, in the layer buffering sub-stage, setting the state of the target layer in the buffer queue to the ready state.
The buffer queue stores the layers to be processed over a preset number of frame periods. In the embodiments of the present application, the transaction state of the target layer is set to the ready state by default so it can be invoked directly; that is, the target layer's transaction is released and fast-tracked through processing, reducing the delay.
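A sketch of this default-ready behavior follows, with hypothetical transaction types standing in for the buffer queue's entries:

```cpp
#include <deque>

enum class TxState { Waiting, Ready };
struct Transaction { bool isTargetLayer = false; TxState state = TxState::Waiting; };

void enqueueTransaction(std::deque<Transaction>& bufferQueue, Transaction tx) {
    // A target layer's transaction enters the queue already released (Ready),
    // so the scheduler can dispatch it without waiting; others follow the
    // conventional Waiting -> Ready path.
    tx.state = tx.isTargetLayer ? TxState::Ready : TxState::Waiting;
    bufferQueue.push_back(tx);
}
```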
Step S47, in the composition preparation sub-stage, setting the state of the target layer based only on the first identification signal of the current frame.
Here, the first identification signal can be represented as a present fence. In the Android native mechanism, the present fence of the current frame 0 and the present fence of the previous frame 1 are read according to conditions; in a single-frame scenario, only the present fence of the current frame 0 is taken to set the state of the target layer.
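A sketch of the present-fence selection follows; PresentFence and FrameLayer are simplified stand-ins introduced for illustration:

```cpp
struct PresentFence {
    bool signaled = false;
    bool isSignaled() const { return signaled; }
};
struct FrameLayer { bool singleFrameFlag = false; };

// A single-frame (target) layer consults only frame 0's present fence; the
// conventional flow also checks the previous frame's fence (frame 1).
bool readyForComposition(const FrameLayer& layer,
                         const PresentFence& frame0,   // current frame
                         const PresentFence& frame1) { // previous frame
    if (layer.singleFrameFlag)
        return frame0.isSignaled();                    // target layer: frame 0 only
    return frame0.isSignaled() && frame1.isSignaled(); // conventional flow
}
```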
Step S48, synthesizing the layers.
It should be noted that if the target layer is destroyed during layer composition, the single-frame flag (corresponding to the preset identifier) is removed and the layer is processed as a non-target layer following the conventional flow.
In some embodiments, after composition, the overlay module of the hardware composer abstraction layer skips waiting on the present fence for the target layer in the fence-wait flow, thereby accelerating the on-screen display of the single-frame layer.
Here, the overlay module superimposes the GPU-composited layers and the layers composited by the hardware abstraction layer through the overlay engine. When layers are submitted to DRM for display, the overlay module normally waits for the previous frame of each layer to be displayed and its present fence to be released before starting to map the next frame; for the target layer, the present fence is not awaited and the layer is mapped directly.
The image processing method provided by the embodiments of the present application has at least the following beneficial effects: 1) Applying corresponding jump logic to the target layer within the conventional Android flow accelerates processing and optimizes the latency figures, so a better optimization effect and a superior latency experience can be obtained. 2) The intelligent frequency-scaling function monitors the system and the application and changes the frequency appropriately, so the single-frame function is realized with a smaller power-consumption impact. 3) The dual-condition check of the pen-writing event and the top application imposes no additional function-enabling restriction, so scene recognition is more accurate and the scope of optimization is wider. 4) The embodiments of the present application update the whitelist from the cloud, requiring no modification of configuration files; the update method is simpler and works without an OTA system upgrade.
Based on the foregoing embodiments, an embodiment of the present application provides an image processing apparatus. The apparatus includes the modules described below, and the sub-modules and units of each module can be implemented by a processor in a computer device or, of course, by specific logic circuits. In implementation, the processor may be a central processing unit (CPU), a microprocessor unit (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or the like.
Fig. 5 is a schematic diagram of a composition structure of an image processing apparatus according to an embodiment of the present application, as shown in fig. 5, the apparatus 500 includes: a layer determination module 510, a first processing module 520, and a second processing module 530, wherein:
the layer determining module 510 is configured to determine a target layer among a plurality of layers to be displayed; wherein the target layer is associated with a touch event;
the first processing module 520 is configured to perform layer display processing on the target layer based on a first processing manner;
the second processing module 530 is configured to perform layer display processing on at least one non-target layer of the plurality of layers based on a second processing manner; the layer display processing performed in the first processing mode occupies a first time period, and the layer display processing performed in the second processing mode occupies a second time period; the first time period is less than the second time period.
In some embodiments, the layer determining module 510 is further configured to, when the touch event is received and it is detected that the layer displayed in the current window is a target application, take a layer corresponding to the target application as the target layer.
In some embodiments, the apparatus further comprises: the time acquisition module is used for acquiring a time average value of the preset multi-frame image of the target layer for layer display processing; a time comparison module for determining that the time average exceeds one frame period; the frequency adjustment module is used for adjusting the working frequency of the target processor to be changed from a first frequency to a second frequency, wherein the first frequency is larger than the second frequency.
In some implementations, the layer display processing includes rendering processing and composition processing. The layer display processing performed in the first processing mode occupies a first time period and the layer display processing performed in the second processing mode occupies a second time period in the following sense: in the first processing mode, the rendering processing and composition processing for the target layer occupy the first time period, which is less than or equal to one frame period; in the second processing mode, the rendering processing and composition processing for the non-target layer occupy the second time period, which is greater than one frame period.
In some embodiments, the composition processing includes a layer commit sub-stage, a control frame transfer sub-stage, a layer buffering sub-stage, and a composition preparation sub-stage, and the first processing mode includes at least one of the following, such that the first time period is less than or equal to one frame period: in the layer commit sub-stage, committing the target layer directly, without waiting for the delay signal of the layer preceding the target layer, where the delay signal indicates that the preceding layer has been processed; in the control frame transfer sub-stage, transferring the target layer directly; in the layer buffering sub-stage, setting the state of the target layer in a buffer queue to a ready state, where the buffer queue stores layers to be processed over a preset number of frame periods; in the composition preparation sub-stage, setting the state of the target layer based only on a first identification signal of the current frame, where the first identification signal indicates whether the buffer of the current frame is accessible.
In some embodiments, the second processing mode includes at least one of the following: in the layer commit sub-stage, committing each non-target layer to the display subsystem after the delay signal is received; in the control frame transfer sub-stage, controlling the speed at which each non-target layer is transferred to the display subsystem; in the layer buffering sub-stage, setting the state of each non-target layer in the buffer queue to a waiting state; in the composition preparation sub-stage, setting the state of each non-target layer based on the first identification signal of the current frame and the first identification signal of the previous frame, where the first identification signal indicates whether the buffer of the corresponding frame is accessible.
In some embodiments, the target layer carries a preset identifier, and the apparatus further includes: an attribute obtaining module configured to obtain attribute information of each of the plurality of layers through the hardware composer abstraction layer, where the attribute information includes whether the preset identifier is set; a first display module configured to, in the second processing mode and for the at least one non-target layer, superimpose and display the at least one non-target layer in response to receiving a second identification signal, where the second identification signal indicates that display of the previous frame of the at least one non-target layer has completed; and a second display module configured to, in the first processing mode and for the target layer, skip the wait for the second identification signal and display the target layer after superimposing it directly with the at least one non-target layer.
In some embodiments, the apparatus further comprises: and the de-identification module is used for deleting the preset identification of the target layer in response to the fact that the target layer is destroyed in the process of synthesizing the layers to be displayed.
The description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. In some embodiments, the functions or modules included in the apparatus provided by the embodiments of the present disclosure may be used to perform the methods described in the embodiments of the methods, and for technical details that are not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the description of the embodiments of the methods of the present disclosure for understanding.
If the technical solution of the present application involves personal information, a product applying the technical solution of the present application clearly informs the user of the personal-information processing rules and obtains the individual's voluntary consent before processing the personal information. If the technical solution of the present application involves sensitive personal information, a product applying the technical solution of the present application obtains the individual's consent before processing the sensitive personal information and additionally satisfies the requirement of "explicit consent". For example, a clear and prominent sign is placed at a personal-information collection device such as a camera to announce that one is entering the collection range and that personal information will be collected; if an individual voluntarily enters the collection range, consent to collection is deemed given. Alternatively, on the device that processes the personal information, and with the personal-information processing rules announced through conspicuous signs or messages, personal authorization is obtained via pop-up messages or by asking the individual to upload the personal information. The personal-information processing rules may include information such as the personal-information processor, the purpose of processing, the processing method, and the types of personal information processed.
It should be noted that, in the embodiments of the present application, if the above image processing method is implemented in the form of software functional modules and sold or used as an independent product, it can also be stored in a computer-readable storage medium. With this understanding, the technical solution of the embodiments of the present application, or the part of it that contributes to the related art, can essentially be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, an optical disc, or other media capable of storing program code. Thus, the embodiments of the application are not limited to any specific combination of hardware, software, and firmware.
The embodiment of the application provides a computer device, which comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor realizes part or all of the steps in the method when executing the program.
Embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs some or all of the steps of the above-described method. The computer readable storage medium may be transitory or non-transitory.
Embodiments of the present application provide a computer program comprising computer readable code which, when run in a computer device, causes a processor in the computer device to perform some or all of the steps for carrying out the above method.
Embodiments of the present application provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program which, when read and executed by a computer, performs some or all of the steps of the above-described method. The computer program product may be realized in particular by means of hardware, software or a combination thereof. In some embodiments, the computer program product is embodied as a computer storage medium, in other embodiments the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It should be noted here that: the above description of the various embodiments tends to emphasize the differences between them; for what is the same or similar, the embodiments may refer to one another. The above description of the apparatus, storage-medium, computer-program, and computer-program-product embodiments is similar to that of the method embodiments, with similar advantageous effects. For technical details not disclosed in the embodiments of the apparatus, storage medium, computer program, and computer program product of the present application, refer to the description of the method embodiments of the present application.
It should be noted that fig. 6 is a schematic diagram of a hardware entity of a computer device according to an embodiment of the present application, and as shown in fig. 6, the hardware entity of the computer device 600 includes: a processor 601, a communication interface 602, and a memory 603, wherein:
the processor 601 generally controls the overall operation of the computer device 600.
The communication interface 602 may enable a computer device to communicate with other terminals or servers over a network.
The memory 603 is configured to store instructions and applications executable by the processor 601, and may also cache data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or processed by various modules in the processor 601 and the computer device 600, which may be implemented by a FLASH memory (FLASH) or a random access memory (Random Access Memory, RAM). Data transfer may be performed between the processor 601, the communication interface 602, and the memory 603 via the bus 604.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the steps/processes described above do not imply an order of execution; the execution order of the steps/processes should be determined by their functions and inherent logic, and the sequence numbers should not constitute any limitation on the implementation of the embodiments of the present application. The foregoing embodiment numbers of the present application are for description only and do not represent that any embodiment is better or worse than another.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is only a division by logical function, and there may be other divisions in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated in one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by program instructions running on relevant hardware. The foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disc.
Alternatively, if the above-described integrated units of the present application are implemented in the form of software functional modules and sold or used as standalone products, they may also be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the related art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc.
The foregoing describes merely embodiments of the present application, but the scope of protection of the present application is not limited thereto. Any person skilled in the art could readily conceive of changes or substitutions within the technical scope disclosed by the present application, and such changes and substitutions are intended to fall within the scope of protection of the present application.

Claims (10)

1. An image processing method, the method comprising:
determining a target layer in a plurality of layers to be displayed; wherein the target layer is associated with a touch event;
performing layer display processing on the target layer based on a first processing mode;
performing layer display processing on at least one non-target layer in the plurality of layers based on a second processing mode; wherein the layer display processing performed in the first processing mode occupies a first time period, the layer display processing performed in the second processing mode occupies a second time period, and the first time period is less than the second time period.
2. The method of claim 1, wherein the determining a target layer in a plurality of layers to be displayed comprises:
in a case where the touch event is received and the layer displayed in the current window is detected to belong to a target application, taking the layer corresponding to the target application as the target layer.
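As a loose illustration of the dispatch in claims 1 and 2, the following Python sketch shows one way the two processing modes could be selected per layer. All names here (Layer, determine_target_layer, process_fast, process_normal) are illustrative assumptions made for this example, not the claimed implementation.

    # A minimal sketch of claims 1-2 (hypothetical names throughout).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Layer:
        app_id: str
        is_current_window: bool = False

    def process_fast(layer: Layer) -> None:
        # First processing mode: budgeted to finish within one frame period.
        print(f"first mode (fast): {layer.app_id}")

    def process_normal(layer: Layer) -> None:
        # Second processing mode: may take longer than one frame period.
        print(f"second mode (normal): {layer.app_id}")

    def determine_target_layer(layers, touched_app_id):
        # Claim 2: on a touch event, if the layer shown in the current window
        # belongs to the target application, that layer is the target layer.
        for layer in layers:
            if layer.is_current_window and layer.app_id == touched_app_id:
                return layer
        return None

    def display_layers(layers, touched_app_id):
        # Claim 1: the touch-associated target layer goes through the first
        # (faster) mode; every other layer goes through the second mode.
        target = determine_target_layer(layers, touched_app_id)
        for layer in layers:
            (process_fast if layer is target else process_normal)(layer)

    display_layers([Layer("gallery", True), Layer("status_bar")], "gallery")

The point of the split is latency: only the layer the user is touching pays for the strict one-frame budget, while background layers are allowed to lag.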
3. The method of claim 1, further comprising:
acquiring an average time taken by the layer display processing of the target layer over a preset number of frames;
determining that the average time exceeds one frame period; and
adjusting an operating frequency of a target processor from a first frequency to a second frequency, wherein the first frequency is greater than the second frequency.
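Claim 3's frequency adjustment can be pictured with the sketch below, which averages the per-frame processing times over a preset window and, exactly as the claim recites, switches the target processor from the first to the second frequency when the average exceeds one frame period. The 60 Hz refresh rate, the two frequency values, and the set_frequency callback are assumptions made for the example.

    # Illustrative sketch of claim 3; all constants and hooks are assumed.
    FRAME_PERIOD_S = 1 / 60            # one frame period at an assumed 60 Hz
    FIRST_FREQ_HZ = 2_400_000_000      # assumed first (higher) frequency
    SECOND_FREQ_HZ = 1_800_000_000     # assumed second (lower) frequency

    def maybe_adjust_frequency(frame_times_s, set_frequency):
        # Average the layer display time over the preset number of frames.
        average = sum(frame_times_s) / len(frame_times_s)
        # If the average exceeds one frame period, move from the first
        # frequency to the second (lower) frequency, as the claim recites.
        if average > FRAME_PERIOD_S:
            set_frequency(SECOND_FREQ_HZ)

    maybe_adjust_frequency([0.018, 0.020, 0.019], lambda hz: print(f"{hz} Hz"))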
4. A method according to any one of claims 1 to 3, wherein:
the layer display processing comprises rendering processing and composition processing; and the layer display processing performed in the first processing mode occupying a first time period, and the layer display processing performed in the second processing mode occupying a second time period, comprises:
in the first processing mode, the rendering processing and the composition processing for the target layer occupy the first time period, the first time period being less than or equal to one frame period; and in the second processing mode, the rendering processing and the composition processing for the non-target layer occupy the second time period, the second time period being greater than one frame period.
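In effect, claim 4 states a per-frame time budget: at an assumed 60 Hz refresh rate, one frame period is about 16.67 ms, and the first mode requires rendering plus composition of the target layer to fit inside it. A one-function sketch under that assumption:

    # Budget check implied by claim 4 (the refresh rate is an assumed example).
    FRAME_PERIOD_MS = 1000 / 60  # ~16.67 ms per frame at 60 Hz

    def fits_one_frame(render_ms: float, compose_ms: float) -> bool:
        # First mode requirement: rendering + composition <= one frame period.
        return render_ms + compose_ms <= FRAME_PERIOD_MS

    print(fits_one_frame(9.0, 6.5))   # True: fits within one frame period
    print(fits_one_frame(12.0, 8.0))  # False: spills past one frame period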
5. The method of claim 4, wherein the composition processing comprises a layer submission sub-stage, a control frame transfer sub-stage, a layer buffering sub-stage, and a composition preparation sub-stage, and the first processing mode comprises at least one of the following manners such that the first time period is less than or equal to one frame period:
in the layer submission sub-stage, directly submitting the target layer without waiting for a delay signal of a previous layer of the target layer, wherein the delay signal is used to indicate that the previous layer has been processed;
in the control frame transfer sub-stage, directly transferring the target layer;
in the layer buffering sub-stage, setting a state of the target layer in a cache queue to a ready state, wherein the cache queue is used to store to-be-processed layers and has a length of a preset number of frame periods; and
in the composition preparation sub-stage, performing state setting for the target layer based solely on a first identification signal of a current frame, wherein the first identification signal is used to indicate whether a buffer of the current frame is accessible.
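One way to read claim 5's fast path is as four shortcut points in the composition pipeline. The sketch below is a loose illustration under assumed names: submit, transfer, prepare_composition, Frame, and the cache-queue dictionary are all hypothetical, not the claimed code.

    # Illustrative sketch of the first processing mode of claim 5.
    from dataclasses import dataclass
    from enum import Enum

    class BufferState(Enum):
        READY = "ready"
        WAITING = "waiting"

    @dataclass
    class Frame:
        buffer_accessible: bool  # the "first identification signal"

    def submit(layer): print("submit:", layer)
    def transfer(layer): print("transfer:", layer)
    def prepare_composition(layer): print("prepare composition:", layer)

    def first_mode(target_layer, cache_queue, current_frame):
        # Layer submission sub-stage: submit directly, without waiting for
        # the delay signal that marks the previous layer as processed.
        submit(target_layer)
        # Control frame transfer sub-stage: transfer the target layer directly.
        transfer(target_layer)
        # Layer buffering sub-stage: mark the target layer ready at once.
        cache_queue[target_layer] = BufferState.READY
        # Composition preparation sub-stage: gate only on the current frame's
        # first identification signal (is the current frame's buffer accessible?).
        if current_frame.buffer_accessible:
            prepare_composition(target_layer)

    first_mode("target", {}, Frame(buffer_accessible=True))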
6. The method of claim 5, wherein the second processing mode comprises at least one of the following manners:
in the layer submission sub-stage, submitting each non-target layer to a display subsystem only after the delay signal is received;
in the control frame transfer sub-stage, controlling a speed at which each non-target layer is transferred to the display subsystem;
in the layer buffering sub-stage, setting a state of each non-target layer in the cache queue to a waiting state; and
in the composition preparation sub-stage, performing state setting for each non-target layer based on both a first identification signal of the current frame and a first identification signal of a previous frame, wherein the first identification signal is used to indicate whether a buffer of the corresponding frame is accessible.
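The second mode of claim 6 is the mirror image: each shortcut above becomes a wait or a pacing step. A companion sketch reusing the hypothetical helpers (BufferState, Frame, submit, transfer, prepare_composition) from the previous sketch:

    # Illustrative sketch of the second processing mode of claim 6.
    def second_mode(layer, cache_queue, current_frame, previous_frame,
                    delay_signal_received):
        # Layer submission sub-stage: submit to the display subsystem only
        # after the delay signal confirms the previous layer is processed.
        if delay_signal_received:
            submit(layer)
        # Control frame transfer sub-stage: the transfer to the display
        # subsystem is paced rather than immediate (pacing policy omitted).
        transfer(layer)
        # Layer buffering sub-stage: the layer waits in the cache queue.
        cache_queue[layer] = BufferState.WAITING
        # Composition preparation sub-stage: gate on the first identification
        # signals of both the current frame and the previous frame.
        if current_frame.buffer_accessible and previous_frame.buffer_accessible:
            prepare_composition(layer)

    second_mode("status_bar", {}, Frame(True), Frame(True),
                delay_signal_received=True)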
7. A method according to any one of claims 1 to 3, wherein the target layer is provided with a preset identifier, and the method further comprises:
acquiring attribute information of each of the plurality of layers through a hardware composition abstraction layer, wherein the attribute information includes whether the preset identifier is set;
in the second processing mode, for the at least one non-target layer, overlaying and displaying the at least one non-target layer in response to receiving a second identification signal, wherein the second identification signal is used to indicate that display of a previous frame of the at least one non-target layer has been completed; and
in the first processing mode, for the target layer, skipping waiting for the second identification signal and directly overlaying the target layer with the at least one non-target layer for display.
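A compact way to picture claim 7: each layer's attribute information, read through the hardware composition abstraction layer, says whether the preset identifier is set; flagged (target) layers are overlaid for display immediately, while the remaining layers first wait for the second identification signal. The attribute dictionary, the threading.Event used as the signal, and the overlay_and_display helper are assumptions of this sketch:

    # Illustrative sketch of claim 7 (hypothetical names throughout).
    import threading
    from dataclasses import dataclass, field

    @dataclass
    class AttributedLayer:
        name: str
        attributes: dict = field(default_factory=dict)  # read via the abstraction layer

    def overlay_and_display(layer): print("overlay + display:", layer.name)

    def compose_for_display(layers, second_identification_signal):
        for layer in layers:
            if layer.attributes.get("preset_identifier"):
                # First mode: skip waiting for the second identification signal.
                overlay_and_display(layer)
            else:
                # Second mode: wait until the previous frame of this layer has
                # finished displaying, then overlay and display it.
                second_identification_signal.wait()
                overlay_and_display(layer)

    signal = threading.Event()
    signal.set()  # pretend the previous frame has already been displayed
    compose_for_display(
        [AttributedLayer("target", {"preset_identifier": True}),
         AttributedLayer("wallpaper")],
        signal,
    )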
8. An image processing apparatus, comprising:
a layer determination module, configured to determine a target layer in a plurality of layers to be displayed, wherein the target layer is associated with a touch event;
a first processing module, configured to perform layer display processing on the target layer based on a first processing mode; and
a second processing module, configured to perform layer display processing on at least one non-target layer in the plurality of layers based on a second processing mode, wherein the layer display processing performed in the first processing mode occupies a first time period, the layer display processing performed in the second processing mode occupies a second time period, and the first time period is less than the second time period.
9. An electronic device, comprising a memory and at least one processor, wherein the memory is configured to store a computer program, and the processor is configured to call and run the computer program from the memory to perform the method of any one of claims 1 to 7.
10. A storage medium storing a computer program for implementing the method of any one of claims 1 to 7.
CN202310788039.3A 2023-06-29 2023-06-29 Image processing method, device, equipment and storage medium Pending CN116643685A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310788039.3A CN116643685A (en) 2023-06-29 2023-06-29 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116643685A true CN116643685A (en) 2023-08-25

Family

ID=87619037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310788039.3A Pending CN116643685A (en) 2023-06-29 2023-06-29 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116643685A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination