WO2024078121A1 - Image processing method and electronic device

Image processing method and electronic device

Info

Publication number
WO2024078121A1
Authority
WO
WIPO (PCT)
Prior art keywords
image frame
thread
cache
application
synthesis
Prior art date
Application number
PCT/CN2023/113151
Other languages
English (en)
French (fr)
Inventor
蔡立峰
杜鸿雁
黄通焕
李时进
Original Assignee
荣耀终端有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司
Publication of WO2024078121A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • the present application relates to the field of terminal technology, and in particular to an image processing method and electronic device.
  • An animation effect is a dynamic display effect formed by the continuous display of multiple image frames.
  • To display a picture on the display screen, an electronic device usually goes through the processes of drawing, rendering and synthesis.
  • the application process of the electronic device is responsible for drawing and rendering each image frame in the display screen
  • the synthesis thread of the electronic device is responsible for synthesizing and displaying each image frame after drawing and rendering.
  • In some cases, the application process cannot draw image frames normally, resulting in frame loss, which in turn causes visible jitter in the image frames sent for display by the synthesis thread.
  • the embodiments of the present application provide an image processing method and an electronic device.
  • This avoids the excessive displacement jump caused by a large displacement interval between two image frames and ensures the coherent display of each image frame of the animation, making the display smoother.
  • an image processing method comprising:
  • the electronic device receives a first operation performed by a user on a touch screen of the electronic device; the electronic device starts a first application in response to the first operation; the electronic device receives a second operation performed by the user on the touch screen of the electronic device; the electronic device exits the first application in response to the second operation.
  • the electronic device performs drawing, rendering and synthesis operations on the first image frame and the second image frame of the first application.
  • The electronic device performs drawing, rendering and synthesis operations on the first image frame and the second image frame of the first application, including:
  • the application process draws and renders the first image frame within the drawing and rendering cycle of the first image frame, and stores the obtained first image frame in a free cache object in the cache queue; when the synthesis thread does not perform the synthesis operation within the synthesis cycle of the first image frame, the synthesis thread sends a first adjustment request to the application process; based on the first adjustment request, the application process increases the number of free cache objects in the cache queue, so that the application process draws and renders the second image frame within the drawing and rendering cycle of the second image frame, and stores the obtained second image frame in a free cache object in the cache queue.
  • the drawing and rendering cycle of the second image frame is located after the drawing and rendering cycle of the first image frame, and the drawing start time of the second image frame differs from the drawing start time of the first image frame by N cycles, where N is a positive integer.
  • the free cache objects in the cache queue can be dynamically increased.
  • In this way, the application process can continue to obtain free cache objects to store rendered image frames. This avoids the prior-art problem in which the application process, unable to obtain a free cache object to store a rendered image frame, does not perform the drawing and rendering of the next image frame, causing that frame to be lost.
  • In other words, there is always at least one free cache object in the cache queue available to the application process. This avoids the frame loss caused by the application process skipping image frame drawing and rendering for lack of free cache objects, thereby solving the visual stuttering that frame loss produces after image frames are sent for display.
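The grow-on-stall behaviour described above can be sketched as a small producer/consumer model. The Python below is purely illustrative: the class and method names are invented for this sketch, and the patent itself names no concrete API.

```python
from collections import deque

class CacheQueue:
    """Minimal model of the cache queue described above (names are illustrative)."""

    def __init__(self, num_buffers=2, max_buffers=5):
        self.free = deque(range(num_buffers))  # ids of free cache objects
        self.filled = deque()                  # rendered frames awaiting synthesis
        self.total = num_buffers
        self.max_buffers = max_buffers

    def dequeue_free(self):
        """Application process: take a free cache object, or None if none is left."""
        return self.free.popleft() if self.free else None

    def queue_filled(self, buf):
        """Application process: store a rendered image frame."""
        self.filled.append(buf)

    def grow(self, n=1):
        """First adjustment request: add n free cache objects, capped at the
        maximum number of cache objects."""
        n = min(n, self.max_buffers - self.total)
        for _ in range(n):
            self.free.append(self.total)
            self.total += 1
        return n

q = CacheQueue(num_buffers=2)
# Two frames are drawn and rendered, but the synthesis thread stalls, so no
# cache object is released back to the free list.
q.queue_filled(q.dequeue_free())
q.queue_filled(q.dequeue_free())
# Without intervention, the next frame would be dropped for lack of a buffer:
assert q.dequeue_free() is None
# Instead, the stalled synthesis thread issues the first adjustment request:
q.grow(1)
buf = q.dequeue_free()  # the next image frame can now be drawn and stored
assert buf is not None
```

With the extra free cache object, the drawing and rendering cycle of the next frame proceeds on schedule instead of being skipped.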
  • the first image frame and the second image frame are image frames in a process of starting the first application.
  • In this way, the application process can store the rendered image frame in a free cache object of the cache queue during each drawing and rendering cycle, avoiding frame loss during drawing and rendering, solving the display stuttering of image frames sent for display that frame loss causes, and improving the display smoothness of the startup animation during application startup.
  • the first image frame and the second image frame are image frames when the electronic device switches from a first refresh rate to a second refresh rate during the startup of a first application; the first refresh rate is less than the second refresh rate.
  • the first refresh rate may be 60 Hz, that is, 60 frames of images are refreshed in 1 second, and one frame of image is refreshed every 16.6 milliseconds.
  • the second refresh rate may be 90 Hz, that is, 90 frames of images are refreshed in 1 second, and one frame of image is refreshed every 11.1 milliseconds.
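The frame intervals quoted for each refresh rate follow directly from the rate, as a quick check shows:

```python
def frame_interval_ms(refresh_rate_hz):
    """Time between consecutive refreshes at the given refresh rate."""
    return 1000.0 / refresh_rate_hz

print(round(frame_interval_ms(60), 1))  # ~16.7 ms at 60 Hz (often quoted as 16.6 ms)
print(round(frame_interval_ms(90), 1))  # ~11.1 ms at 90 Hz
```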
  • The image processing method provided in this embodiment can effectively solve the following problem: during application startup, because of the refresh rate switch, the synthesis thread falls out of step with the application process's processing cycle, determines that there is a backlog of image frames to be synthesized, and does not perform the synthesis operation, so no cache object is released back to the cache queue.
  • the first image frame and the second image frame are image frames after the first application is started.
  • the image processing method provided in this embodiment can still dynamically increase the number of free cache objects in the cache queue when the synthesis thread does not perform synthesis operations. This solves the problem of display stuttering of image frames due to frame loss during the internal image display process of the first application after the first application is started, thereby improving the display smoothness of images displayed within the application.
  • the first image frame and the second image frame are image frames in a process of exiting the first application.
  • In this way, the application process can store the rendered image frames in free cache objects of the cache queue during each drawing and rendering cycle, avoiding frame loss during drawing and rendering, solving the display stuttering of image frames that frame loss causes, and improving the display smoothness of the exit animation during application exit.
  • the first image frame and the second image frame are image frames when the electronic device switches from a first refresh rate to a second refresh rate during an exit process of a first application; the first refresh rate is less than the second refresh rate.
  • the first refresh rate may be 60 Hz, that is, 60 frames of images are refreshed in 1 second, and one frame of image is refreshed every 16.6 milliseconds.
  • the second refresh rate may be 90 Hz, that is, 90 frames of images are refreshed in 1 second, and one frame of image is refreshed every 11.1 milliseconds.
  • The image processing method provided in this embodiment can effectively solve the following problem: during application exit, because of the refresh rate switch, the synthesis thread falls out of step with the application process's processing cycle, determines that there is a backlog of image frames to be synthesized, and does not perform the synthesis operation, so no cache object is released back to the cache queue.
  • a drawing rendering cycle of the second image frame is a next cycle of a drawing rendering cycle of the first image frame; and a drawing start time of the second image frame differs from a drawing start time of the first image frame by one cycle.
  • In this design, the drawing start time of the second image frame differs from that of the first image frame by 1 cycle. If the synthesis thread does not perform the synthesis operation in the synthesis cycle of the first image frame, the drawing and rendering of the second image frame is affected. Because the two frames are adjacent, losing the second image frame produces obvious display stuttering. The benefit of the image processing method provided by the present application is therefore most visible when the second image frame and the first image frame are adjacent: when the synthesis thread does not perform the synthesis operation, the number of free cache objects in the cache queue is dynamically increased, so that there is always at least one free cache object available to the application process.
  • The application process can then store the rendered image frame in a free cache object of the cache queue, avoiding frame loss during drawing and rendering, solving the display stuttering of image frames sent for display that frame loss causes, and improving the display smoothness of the image frames.
  • the first adjustment request includes a first indication value; the first indication value is used to indicate an increased number of cache objects, and increasing the number of free cache objects in the cache queue includes:
  • The application process adds to the cache queue a number of free cache objects equal to the first indication value.
  • the application process can increase the number of free cache objects in the cache queue according to the first indication value.
  • The first indication value can be 1, 2, 3, and so on, and can be adjusted for different situations, so that the cache queue is dynamically adjusted more effectively.
  • The application process adds a number of free cache objects equal to the first indication value to the cache queue, including:
  • The application process appends the addresses of those free cache objects to the cache queue in enqueue order.
  • each cache object in the cache queue has an arrangement order.
  • Appending the addresses of the newly added free cache objects to the cache queue in enqueue order ensures that the arrangement order of the cache objects already in the queue is not disturbed.
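A sketch of the tail insertion this describes, assuming (as an illustration only) that the cache queue holds buffer addresses in a double-ended queue:

```python
from collections import deque

def add_free_buffers(cache_queue, new_addresses):
    """Append newly added free cache objects in enqueue order; the relative
    order of the entries already in the queue is untouched."""
    for addr in new_addresses:
        cache_queue.append(addr)  # tail insertion = enqueue order

q = deque(["buf0", "buf1"])
add_free_buffers(q, ["buf2", "buf3"])  # first indication value = 2
print(list(q))  # ['buf0', 'buf1', 'buf2', 'buf3']
```

Because insertion happens only at the tail, the head of the queue (the next buffer the application process will dequeue) is unchanged by a grow request.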
  • the method further includes:
  • The synthesis thread queries the number of all cache objects in the cache queue; if that number reaches the maximum number of cache objects, the synthesis thread stops sending the first adjustment request for the cache object to the application process.
  • The cache queue has a maximum number of cache objects.
  • When the synthesis thread determines that the number of all cache objects in the cache queue has reached the maximum, it stops sending the application process the first adjustment request for adding free cache objects, thereby ensuring the normal operation of the electronic device and avoiding the abnormality that would be caused by attempting to add cache objects to the cache queue beyond the maximum.
  • the method further includes:
  • the synthesis thread obtains and records the storage time when the application process stores the first image frame into the target cache object.
  • the synthesis thread can record the storage time when the application process stores the image frame into the target cache object.
  • the synthesis thread records each storage time, and according to the time difference between each storage time, it can be determined whether the application process has completed the drawing and rendering of the image frame.
  • the method further includes:
  • The synthesis thread determines the time difference between the current system time and the most recently recorded storage time of an image frame stored into the target cache object; if the time difference is greater than or equal to a preset time threshold, the synthesis thread sends a second adjustment request for the cache object to the application process; the application process reduces the number of free cache objects in the cache queue according to the second adjustment request.
  • the synthesis thread may also determine the frame interval according to the current refresh rate.
  • The preset time threshold may be M frame intervals, where M is a positive integer (1, 2, 3, and so on).
  • the synthesis thread can record the storage time when the application process stores the image frame into the target cache object.
  • The synthesis thread can obtain the time difference between the current system time and the last storage time. If the time difference is greater than or equal to the preset time threshold, it determines that frames have been lost on the rendering side.
  • Frame loss here means that the production speed of the application process is slower than the consumption speed of the synthesis thread, so there are enough free cache objects in the cache queue for the application process to use; alternatively, the application process may simply have finished drawing the image frames of the current scene.
  • In either case, the synthesis thread can generate a second adjustment request to shrink the cache queue, reducing the number of free cache objects in it. Releasing free cache objects in time reduces the occupation of storage resources.
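The stall check behind the second adjustment request can be sketched as follows; the threshold and interval values are illustrative, not values the patent fixes.

```python
FRAME_INTERVAL_MS = 1000.0 / 60  # frame interval at the current refresh rate
M = 3                            # preset threshold expressed in frame intervals

def should_shrink(now_ms, last_store_ms, m=M, interval=FRAME_INTERVAL_MS):
    """Synthesis-thread check: if no frame has been stored for at least
    m frame intervals, send the second adjustment request."""
    return (now_ms - last_store_ms) >= m * interval

# Frames arriving at the normal cadence: no shrink request.
assert not should_shrink(now_ms=116.0, last_store_ms=100.0)
# Producer idle for 67 ms (>= 3 frame intervals): release surplus free buffers.
assert should_shrink(now_ms=167.0, last_store_ms=100.0)
```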
  • the second adjustment request includes a second indication value
  • the second indication value is used to indicate a reduced number of cache objects
  • reducing the number of free cache objects in the cache queue includes: the application process removes from the cache queue a number of free cache objects equal to the second indication value.
  • the application process can reduce the number of free cache objects in the cache queue according to the second indication value in the second adjustment request.
  • The second indication value can be 1, 2, 3, and so on, and can be adjusted for different situations, so that the cache queue is dynamically reduced more effectively.
  • The application process removes a number of free cache objects equal to the second indication value from the cache queue, including: the application process removes the addresses of those free cache objects from the cache queue in dequeue order.
  • each cache object in the cache queue has an arrangement order.
  • Removing the addresses of the free cache objects in dequeue order ensures that the arrangement order of the cache objects remaining in the queue is not disturbed.
  • the method further includes:
  • The synthesis thread queries the number of all cache objects in the cache queue; if that number has been reduced to the minimum number of cache objects, the synthesis thread stops sending the second adjustment request for the cache object to the application process.
  • The cache queue has a minimum number of cache objects.
  • When the number of all cache objects has been reduced to the minimum, the synthesis thread stops sending the application process the second adjustment request for removing free cache objects, thereby ensuring the normal operation of the electronic device and avoiding the abnormality that would be caused by attempting to reduce the cache objects in the cache queue below the minimum.
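Putting the two caps together, the synthesis thread's decision logic can be sketched as follows; the bounds and condition names are illustrative assumptions, not values taken from the patent.

```python
def adjustment_request(total, stalled, idle, min_buffers=2, max_buffers=5):
    """Decide which adjustment request, if any, the synthesis thread sends.
    'stalled' = synthesis skipped a cycle; 'idle' = producer stopped storing."""
    if stalled and total < max_buffers:
        return "grow"    # first adjustment request: add free cache objects
    if idle and total > min_buffers:
        return "shrink"  # second adjustment request: remove free cache objects
    return None          # at a bound: stop sending adjustment requests

assert adjustment_request(total=3, stalled=True, idle=False) == "grow"
assert adjustment_request(total=5, stalled=True, idle=False) is None   # at max
assert adjustment_request(total=3, stalled=False, idle=True) == "shrink"
assert adjustment_request(total=2, stalled=False, idle=True) is None   # at min
```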
  • the method further includes:
  • the synthesis thread obtains the target cache object from the cache queue; the target cache object stores the first image frame after drawing and rendering; the synthesis thread performs synthesis operation on the first image frame after drawing and rendering.
  • When the synthesis thread normally executes the synthesis operation of the first image frame, it obtains from the cache queue the cache object storing the rendered first image frame, synthesizes the rendered first image frame, displays the synthesized frame, and releases the cache object's space, so that the freed cache object becomes available in the cache queue for the application process in time.
  • an electronic device which includes a memory, a display screen and one or more processors; the memory and the display screen are coupled to the processor; the memory stores computer program code, and the computer program code includes computer instructions, and when the computer instructions are executed by the processor, the electronic device executes a method as described in any one of the above-mentioned first aspects.
  • a computer-readable storage medium, wherein instructions are stored in the computer-readable storage medium, and when the instructions are executed on an electronic device, the electronic device can execute any one of the methods according to the first aspect.
  • a computer program product comprising instructions, which, when executed on an electronic device, enables the electronic device to execute any one of the methods according to the first aspect.
  • an embodiment of the present application provides a chip, the chip including a processor, the processor being used to call a computer program in a memory to execute the method of the first aspect.
  • For the beneficial effects that can be achieved by the electronic device of the second aspect, the computer-readable storage medium of the third aspect, the computer program product of the fourth aspect, and the chip of the fifth aspect, refer to the beneficial effects of the first aspect and any of its possible designs; they are not repeated here.
  • FIG. 1 is a schematic diagram of the normal display of an application startup animation on a mobile phone interface provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of the normal display of an off-screen sliding animation on a mobile phone interface provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of an abnormal display of an application startup animation on a mobile phone interface provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of the software structure of an electronic device provided by an embodiment of the present application;
  • FIG. 6 is a timing diagram of normal drawing, rendering, synthesis and display of multiple image frames provided by an embodiment of the present application;
  • FIG. 7 is a diagram showing state changes of the caches in a cache queue provided by an embodiment of the present application;
  • FIG. 8 is a timing diagram of abnormal drawing, rendering, synthesis and display of multiple image frames provided by an embodiment of the present application;
  • FIG. 9 is a timing diagram of multi-agent interaction during an abnormal image frame drawing, rendering, synthesis and display process provided by an embodiment of the present application;
  • FIG. 10 is a timing diagram of changes in the number of caches in a cache queue during abnormal image frame drawing, rendering, synthesis and display provided by an embodiment of the present application;
  • FIG. 11 is a flow chart of an image processing method in an application startup animation scenario provided by an embodiment of the present application;
  • FIG. 12 is a schematic diagram of the normal display of an application exit animation on a mobile phone interface provided by an embodiment of the present application;
  • FIG. 13 is a flow chart of an image processing method in an application exit animation scenario provided by an embodiment of the present application;
  • FIG. 14 is a timing diagram of multi-agent interaction for dynamically adjusting the number of caches in a cache queue during image frame drawing, rendering, synthesis and display provided by an embodiment of the present application;
  • FIG. 15 is a timing diagram of dynamically adjusting the number of caches in a cache queue during image frame drawing, rendering, synthesis and display provided by an embodiment of the present application;
  • FIG. 16 is a schematic diagram of the structure of a chip system provided by an embodiment of the present application.
  • A and/or B can represent: A exists alone, A and B exist at the same time, or B exists alone, where A and B can be singular or plural.
  • the character "/” generally indicates that the associated objects before and after are a kind of "or" relationship.
  • References to "one embodiment" or "some embodiments" described in this specification mean that one or more embodiments of the present application include a specific feature, structure or characteristic described in conjunction with the embodiment. Therefore, the statements "in one embodiment", "in some embodiments", "in some other embodiments", etc., appearing in different places in this specification do not necessarily refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized.
  • the terms “including”, “comprising”, “having” and their variations all mean “including but not limited to”, unless otherwise specifically emphasized in other ways.
  • connection includes direct connection and indirect connection, unless otherwise specified. "First” and “second” are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • words such as “exemplary” or “for example” are used to indicate examples, illustrations or descriptions. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present application should not be interpreted as being more preferred or more advantageous than other embodiments or designs. Specifically, the use of words such as “exemplary” or “for example” is intended to present related concepts in a specific way.
  • the electronic device can display dynamic display effects (animation effects) based on the screen triggered by different operations through the display screen.
  • There is a type of animation effect in which the display position of different image frames changes; this type of animation effect is referred to as the first type of animation effect in this application.
  • the animation effect in which the display position of the image frame does not change is referred to as the second type of animation effect.
  • In the first category of animations, there are animations in which the display position of the image frame is related to the system time, such as the application startup animation, the application exit animation, and the off-screen sliding animation.
  • the application startup animation refers to the animation displayed when the application is started
  • the application exit animation refers to the animation displayed when the application exits
  • The off-screen sliding animation refers to the effect in which the user slides the screen with a finger and the operated object continues to move after the finger leaves the screen.
  • the electronic device needs to calculate the display position of the current image frame based on the system time of drawing the current image frame.
  • the second category of animations includes game scene animations, application internal scene animations, and hand tracking animations in other scenes.
  • the image processing method provided in the present application can be applied to all animation scenes.
  • the image frames in the process of the startup animation of the application are processed based on the image processing method.
  • the image frames in the process of the exit animation of the application are processed based on the image processing method.
  • the image frames in the process of the off-screen sliding animation are processed based on the image processing method.
  • the image frames in the process of the game scene animation are processed based on the image processing method.
  • the image frames in the process of the internal scene animation of the application are processed based on the image processing method.
  • electronic devices can process image frames in hand-following effects based on image processing methods.
  • different image frames have different display positions, which means that the distances from a specified vertex of the image frame (such as the upper left vertex of the interface) to the origin of the image frame are different between different image frames.
  • FIG1 takes the electronic device as a mobile phone as an example, and shows a schematic diagram of displaying an application startup animation.
  • the user clicks the icon of application 5 on the mobile phone desktop.
  • application 5 is started and the startup animation of application 5 is displayed.
  • The display of the startup animation of application 5 progresses gradually from FIG. 1 (b) to FIG. 1 (f), and the image frames in the startup animation of application 5 include the 5 image frames displayed from FIG. 1 (b) to FIG. 1 (f).
  • The distances between a specified vertex of these 5 image frames (such as the upper left vertex of the interface) and the origin of the image frame differ from frame to frame.
  • The distance between the specified vertex of the image frame (such as the upper left vertex of the interface) and the origin of the image frame gradually increases until the image frame fills the screen.
  • different image frames have different display positions, which means that the distances from a specified vertex of the image frame (such as the upper left vertex of the interface) to the origin of the screen are different between different image frames.
  • FIG2 takes a mobile phone as an example, and shows a schematic diagram of displaying an off-screen sliding effect.
  • The current interface in FIG. 2 (a) is page 0 of the desktop, and the current interface includes application 1, application 2, application 3, application 4 and application 5.
  • the user slides left on the current page interface of the mobile phone desktop.
  • the mobile phone displays the next page interface of the current interface with a left slide animation, wherein the next page interface of the current interface is the first page of the desktop, and the first page interface includes Application 6, Application 7 and Application 8.
  • The display of the sliding animation triggered by the mobile phone in response to the left-slide operation changes gradually from FIG. 2 (b) to FIG. 2 (f), and the image frames of the left-slide animation include the 5 image frames displayed from FIG. 2 (b) to FIG. 2 (f).
  • The distance between the specified vertex of these 5 image frames (such as the point where the hollow circle is located in the figure) and the screen origin (such as the point where the solid circle is located in the figure) differs from frame to frame.
  • The distance between the specified vertex of the image frame and the screen origin gradually decreases until the distance is 0 and the next page interface is completely displayed on the screen.
  • The minimum distance between the specified vertex of the image frame and the screen origin can be determined according to actual conditions.
  • different image frames have different display positions, which may also mean that the distances from the image frame specified vertex to the image frame origin and the distances from the image frame specified vertex to the screen origin are different between different image frames.
  • the mobile phone display application startup dynamic effect is used as an example for description. It can be understood that the image processing method provided in the embodiments of the present application is also applicable to other types of dynamic effects.
  • the distance from a specified vertex of an image frame (such as the upper left vertex of the interface) to the origin of the image frame is called displacement.
  • the displacement change between adjacent image frames is called displacement interval.
  • The displacement of the current image frame in the application startup animation is: (system time at which the current image frame is drawn / total animation time) * total displacement, where the total animation time refers to the total time for displaying all image frames of the animation under normal circumstances, and the total displacement refers to the displacement of the last frame of the animation.
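Worked through with illustrative numbers (the 300 ms duration and 600 px total displacement are assumptions for the example, not values from the patent), the formula gives:

```python
def displacement(draw_time_ms, total_time_ms, total_displacement_px):
    """Displacement of the frame drawn at draw_time_ms, per the formula above:
    displacement = draw time / total animation time * total displacement."""
    return draw_time_ms / total_time_ms * total_displacement_px

# A 300 ms startup animation whose last frame sits 600 px from the origin:
print(displacement(100, 300, 600))  # 200.0 px, one third of the way through
print(displacement(300, 300, 600))  # 600.0 px, the final frame
```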
  • the application process of the application is used to draw and render each image frame in the dynamic effect, wherein the application process includes the application main thread and the rendering thread, the application main thread draws the image frame, and the rendering thread renders the image frame.
  • Generally, the synthesis thread is used to synthesize the rendered image frames. Specifically, for each image frame of the animation, the application main thread calculates the displacement of the current image frame based on the system time at which it is drawn, and draws the current image frame based on the calculated displacement. After the drawn image frame is obtained, the rendering thread renders it. After the rendered image frame is obtained, the synthesis thread synthesizes the multiple layers in it.
  • the synthesis thread sends the synthesized image frame to the display driver for display.
  • the application main thread, the rendering thread, the synthesis thread and the display driver need to perform corresponding operations based on their respective corresponding trigger signals, so as to realize the drawing, rendering, synthesis and display operations of multiple image frames in the dynamic effect, and finally realize the coherent display of multiple image frames of the dynamic effect.
  • the displacement interval between adjacent image frames is fixed. Exemplarily, as shown in FIG. 1 , in the five image frames from FIG. 1 (b) to FIG. 1 (f), the displacement intervals between all adjacent image frames remain unchanged, and in the process of displaying the startup animation of application 5, the displayed image is continuous and smooth.
  • the trigger signal may include a drawing signal for triggering the application main thread to perform a drawing operation, a synthesis signal for triggering the synthesis thread to perform a synthesis operation, etc.
  • the main thread of the application receives a drawing signal, calculates the displacement of the current image frame based on the system time of drawing the current image frame, and draws the current image frame based on the calculated displacement; after obtaining the drawn image frame, the main thread of the application wakes up the rendering thread to render the drawn image frame.
  • the synthesis thread receives a synthesis signal, and performs the synthesis of multiple layers in the rendered image frame; after obtaining the synthesized image frame, the synthesis thread sends the synthesized image frame to the display driver for display. If any of the main thread of the application, the rendering thread, the synthesis thread, and the display driver does not perform the corresponding operation according to the corresponding trigger signal or trigger condition, the calculated displacement of some image frames in the dynamic effect may deviate from the displacement of the image frame under normal circumstances. For example, the rendering thread may not perform the rendering of the drawn image frame.
  • Since the application main thread and the rendering thread run serially, when the rendering thread does not perform the rendering operation on the drawn image frame, the stall propagates back to the application main thread, preventing it from drawing a new image frame.
  • the final result is that the main thread of the application cannot perform the drawing operation of the current image frame after executing the drawing operation of the previous image frame.
  • the time interval between the application main thread executing the drawing operation of the current image frame and the drawing operation of the previous image frame is too long.
  • the displacement interval between the displacement calculated based on the system time of drawing the current image frame and the displacement calculated for the previous image frame is too large.
  • FIG. 3 shows a schematic diagram of the process of abnormally displaying the startup animation of application 5 in an electronic device such as a mobile phone.
  • the startup animation includes 3 image frames.
  • the application main thread of the electronic device shown in FIG. 3 does not draw and render the image frame shown in FIG. 1(d) or the image frame shown in FIG. 1(e).
  • the application main thread calculates the drawing displacement of the image frame shown in FIG. 3(f) based on the system time; therefore, the displacement interval between the image frame shown in FIG. 3(f) and the image frame shown in FIG. 3(c) is too large.
  • the startup animation of application 5 is displayed progressively from FIG. 3(b) to FIG. 3(f).
  • the displacement interval between adjacent image frames of the startup animation of application 5 changes between FIG. 3(c) and FIG. 3(f).
  • from FIG. 3(c) to FIG. 3(f) the displayed picture jumps abruptly, so the display process is incoherent and appears to the user as a visual freeze.
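The abnormal case above can be illustrated with a small simulation (hypothetical values matching the 60 Hz example later in the document): because displacement is computed from system time, skipping two drawing cycles triples the displacement interval of the next drawn frame.

```python
FRAME_MS, TOTAL_MS, TOTAL_DISP = 16.6, 99.6, 96.0

def displacement(elapsed_ms: float) -> float:
    # Displacement derived from system time, as in the formula described earlier.
    return min(elapsed_ms / TOTAL_MS, 1.0) * TOTAL_DISP

# Normal case: one frame per Vsync period -> displacement interval ~16 per frame.
normal = [displacement(i * FRAME_MS) for i in range(4)]

# Abnormal case: the frames at 2 and 3 Vsync periods are skipped; the next
# drawn frame lands at 4 periods, so its displacement interval is ~48, not ~16.
abnormal = [displacement(0), displacement(FRAME_MS), displacement(4 * FRAME_MS)]
jump = abnormal[2] - abnormal[1]
```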
  • the above scenarios are problems that exist in the process of drawing, rendering, synthesizing, and displaying image frames for the first type of animation.
  • For the second type of animation, when the application process is drawing and rendering image frames, there may be no free cache objects in the cache queue (for example, because the synthesis thread has not performed synthesis). The application process then cannot perform the drawing and rendering operations, and frames are dropped. Once the application process drops frames while drawing and rendering, the display of the image frames will stutter.
  • the present application provides an image processing method that can be applied to an electronic device with a display screen, covering image frame drawing, rendering, synthesis, and sending for display, and is suitable for image frame processing scenarios in all types of dynamic effects.
  • the application process for image frame drawing and rendering can obtain a free cache object in each drawing cycle to store the rendered image frame to perform image frame drawing and rendering in the next drawing cycle.
  • the image processing method provided by this embodiment can avoid the problem that the application process does not perform drawing and rendering operations due to the lack of free cache objects in the cache queue, thereby avoiding the problem that the displacement interval between two adjacent image frames is too large, resulting in visual freezes after being sent to the display.
  • the image processing method provided by this embodiment likewise avoids the problem that the application process skips drawing and rendering operations because there are no free cache objects in the cache queue, thereby solving the frame-loss problem of the application process and avoiding freezes during image frame display.
  • the electronic device in the embodiments of the present application can be a portable computer (such as a mobile phone), a tablet computer, a laptop computer, a personal computer (PC), a wearable electronic device (such as a smart watch), an augmented reality (AR) device, a virtual reality (VR) device, a car computer, a smart TV, and other devices including a display screen.
  • FIG. 4 shows a block diagram of an electronic device (such as an electronic device 100) provided in an embodiment of the present application.
  • the electronic device 100 may include a processor 310, an external memory interface 320, an internal memory 321, a universal serial bus (USB) interface 330, a charging management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a radio frequency module 350, a communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an earphone interface 370D, a sensor module 380, a button 390, a camera 391, and a display screen 392.
  • the sensor module 380 may include a pressure sensor 380A, a touch sensor 380B, and the like.
  • the structure shown in the embodiment of the present invention does not constitute a limitation on the electronic device 100. It may include more or fewer components than shown in the figure, or combine some components, or split some components, or arrange the components differently.
  • the components shown in the figure may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 310 may include one or more processing units.
  • the processor 310 may include an application processor (AP), a modem processor, a graphics processor (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU).
  • Different processing units may be independent devices or integrated in one or more processors.
  • the controller can be a decision maker that directs the various components of the electronic device 100 to work in coordination according to the instructions. It is the nerve center and command center of the electronic device 100.
  • the controller generates an operation control signal according to the instruction operation code and timing signal to complete the control of fetching and executing instructions.
  • the processor 310 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 310 is a high-speed cache memory, which can store instructions or data that the processor 310 has just used or cyclically used. If the processor 310 needs to use the instruction or data again, it can be directly called from the memory. This avoids repeated access, reduces the waiting time of the processor 310, and thus improves the efficiency of the system.
  • the processor 310 may include an interface.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM interface, and/or a USB interface, etc.
  • the interface connection relationship between the modules shown in the embodiment of the present invention is only for illustrative purposes and does not constitute a structural limitation on the electronic device 100.
  • the electronic device 100 may adopt different interface connection methods in the embodiment of the present invention, or a combination of multiple interface connection methods.
  • the wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the radio frequency module 350, the communication module 360, the modem and the baseband processor.
  • the electronic device 100 implements the display function through a GPU, a display screen 392, and an application processor.
  • the GPU is a microprocessor for image processing, connecting the display screen 392 and the application processor AP.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 310 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 392 is used to display images, videos, etc.
  • the display screen 392 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 392, where N is a positive integer greater than 1.
  • the display screen can be any type of display screen, which can be a touch screen or a non-touch screen.
  • the display screen 392 can display operation-triggered animations, such as triggering the display of application startup animations by clicking on an application icon in the display screen; for example, triggering the display of application exit animations by clicking on an exit application control; for example, the display screen displays hand-following animations, game scene animations, etc.
  • the electronic device 100 can realize the shooting function through ISP, camera 391, video codec, GPU, display screen and application processor.
  • the external memory interface 320 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 310 through the external memory interface 320 to implement a data storage function. For example, files such as music and videos can be stored in the external memory card.
  • the internal memory 321 can be used to store computer executable program codes, which include instructions.
  • the processor 310 executes various functional applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 321.
  • the internal memory 321 may include a program storage area and a data storage area.
  • the program storage area may store an operating system, an application required for at least one function (such as a sound playback function, an image playback function, etc.), etc.
  • the data storage area may store data created during the use of the electronic device 100 (such as audio data, a phone book, etc.), etc.
  • the internal memory 321 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, another non-volatile solid-state storage device, or a universal flash storage (UFS), etc.
  • the electronic device 100 can implement audio functions such as music playing and recording through the audio module 370, the speaker 370A, the receiver 370B, the microphone 370C, the headphone jack 370D, and the application processor.
  • the pressure sensor 380A is used to sense pressure signals and convert them into electrical signals.
  • the pressure sensor 380A can be disposed on the display screen 392.
  • the pressure sensor 380A may be, for example, a capacitive pressure sensor, which includes at least two parallel plates with conductive material; when a force acts on the pressure sensor, the capacitance between the electrodes changes.
  • the electronic device 100 determines the intensity of the pressure based on the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation based on the pressure sensor 380A.
  • the electronic device 100 can also calculate the position of the touch based on the detection signal of the pressure sensor 380A.
  • the touch sensor 380B also called a “touch panel”, may be disposed on the display screen 392 to detect a touch operation applied thereto or thereabout. The detected touch operation may be transmitted to the application processor to determine the type of touch event and provide a corresponding visual output through the display screen 392.
  • the key 390 includes a power key, a volume key, etc.
  • the key 390 may be a mechanical key or a touch key.
  • the electronic device 100 receives the key 390 input and generates a key signal input related to the user settings and function control of the electronic device 100.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture.
  • the Android system of the layered architecture is taken as an example to exemplify the software structure of the electronic device 100.
  • FIG. 5 is a software structure diagram of the electronic device 100 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, each of which has a clear role and division of labor.
  • the layers communicate with each other through software interfaces.
  • the Android system is divided into five layers, namely, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, the hardware abstraction layer, and the kernel layer.
  • the application layer may include a series of application packages. As shown in FIG. 5, the application packages may include applications such as camera, gallery, calendar, phone, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • Each application includes an application main thread and a rendering thread.
  • the application main thread is used to draw the corresponding image frame when a drawing signal arrives.
  • the rendering thread is used to render the drawn image frame.
  • the application framework layer provides application programming interface (API) and programming framework for the applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a desktop launcher, a window manager, a content provider, an image synthesis system, a view system, an input manager, an activity manager, and a resource manager, etc.
  • the desktop launcher is used to receive a first operation of a user on the touch screen of the electronic device, and start a first application in response to the first operation; and is also used to receive a second operation of a user on the touch screen of the electronic device, and exit the first application in response to the second operation.
  • the first application can be any application included in the application layer.
  • the window manager is used to manage window programs.
  • the window manager can obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • This data can include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the image synthesis system is used to control image synthesis and generate a vertical synchronization (Vsync) signal.
  • the image synthesis system can be a synthesizer (surface flinger).
  • the image synthesis system includes: a synthesis thread and a Vsync thread.
  • the synthesis thread is used to trigger the synthesis operation of multiple layers in the rendered image frame when the Vsync signal arrives.
  • the Vsync thread is used to generate the next Vsync signal according to the Vsync signal request and send the Vsync signal to the corresponding other threads.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying images, etc.
  • the view system can be used to build applications.
  • a display interface can be composed of one or more views.
  • a display interface including a text notification icon can include a view for displaying text and a view for displaying images.
  • the input manager is used to manage input device programs.
  • the input manager can identify input operations such as mouse click operations, keyboard input operations, and touch slide operations.
  • the activity manager is used to manage the life cycle of each application and the navigation back function. It is responsible for creating the Android main thread and maintaining the life cycle of each application.
  • the resource manager provides various resources for applications, such as localized strings, icons, images, layout files, video files, and so on.
  • Android runtime includes core libraries and virtual machines. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: function libraries that the Java language needs to call, and the core libraries of Android.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules, such as image rendering library, image synthesis library, input library, surface manager, media library, 3D graphics processing library (such as openGL ES), 2D graphics engine (such as SGL), etc.
  • Image rendering library used for rendering two-dimensional or three-dimensional images.
  • Image synthesis library used for synthesis of two-dimensional or three-dimensional images.
  • the application renders the image through an image rendering library, and then sends the rendered image to the cache queue of the application, so that the image synthesis system sequentially obtains a frame of image to be synthesized from the cache queue, and then performs image synthesis through the image synthesis library.
  • the input library is a library for processing input devices, which can implement mouse, keyboard and touch input processing, etc.
  • the surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of multiple commonly used audio and video formats, as well as static image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing, etc.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the hardware abstraction layer can contain multiple library modules, such as hardware synthesizer (hwcomposer, HWC), camera library module, etc.
  • the Android system can load the corresponding library module for the device hardware, thereby enabling the application framework layer to access the device hardware.
  • Device hardware can include displays, cameras, etc. in electronic devices.
  • HWC is the HAL layer module for window synthesis and display in Android.
  • the image synthesis system provides a complete list of all windows to HWC, allowing HWC to decide how to handle these windows based on its hardware capabilities.
  • HWC will mark the synthesis method for each window, for example, whether it is synthesized by GPU or HWC.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes at least a touch panel (TP) driver, a display driver, a camera driver, and an audio driver.
  • Hardware can be audio devices, Bluetooth devices, camera devices, sensor devices, etc.
  • an electronic device displays a startup animation of an application as an example to illustrate, which involves the interaction between the application of the electronic device, the image synthesis system and the display driver.
  • the image synthesis system can be a synthesizer.
  • the application process of each application includes an application main thread and a rendering thread; the synthesizer includes a synthesis thread and a Vsync thread.
  • the Vsync thread generates a Vsync signal and sends the Vsync signal to the corresponding other threads to wake up the other threads to perform corresponding operations.
  • For example, a user performs a touch operation on an electronic device to start an application; the display driver of the electronic device sends the input event corresponding to the touch operation to the input thread of the system service, and the input thread sends the input event to the application main thread.
  • the application main thread requests the Vsync signal from the synthesis thread for drawing the image frame.
  • the application main thread performs operations such as drawing the current image frame of the startup animation of the application to obtain a drawn image frame.
  • the rendering thread performs a rendering operation on the drawn image frame to obtain a rendered image frame.
  • When the Vsync signal arrives, the synthesis thread performs the synthesis operation on the multiple layers of the rendered image frame to obtain a synthesized image frame. The synthesis thread is also responsible for sending the synthesized image frame to the HWC, and the HWC displays it through the display driver.
  • the Vsync signals generated by the Vsync thread include Vsync_APP signal, Vsync_SF signal, and HW_Vsync signal.
  • the Vsync thread generates a Vsync_APP signal and sends the Vsync_APP signal to the application main thread.
  • the application main thread performs the drawing operation of the current image frame.
  • the Vsync thread generates a Vsync_SF signal and sends the Vsync_SF signal to the synthesis thread.
  • the synthesis thread obtains the rendered image frame and performs the synthesis operation of the image frame.
  • the Vsync thread generates a HW_Vsync signal and sends the HW_Vsync signal to the display driver of the electronic device.
  • the display driver refreshes the display image frame.
  • the period in which the Vsync thread generates a Vsync signal is related to the frame rate of the electronic device.
  • the frame rate refers to the number of frames that refresh the picture in 1 second, which can also be understood as the number of times the graphics processor in the electronic device refreshes the screen per second.
  • a high frame rate can produce smoother and more realistic animations. The more frames per second, the smoother the displayed action will be.
  • a frame rate of 60Hz means refreshing 60 frames of pictures in 1 second, that is, refreshing one frame of pictures every 16.6 milliseconds, and accordingly, the period in which the Vsync thread generates a Vsync signal is 16.6 milliseconds.
  • a frame rate of 90Hz means refreshing 90 frames of pictures in 1 second, that is, refreshing one frame of pictures every 11.1 milliseconds, and accordingly, the period in which the Vsync thread generates a Vsync signal is 11.1 milliseconds.
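The relationship between frame rate and Vsync period stated above is simply the reciprocal, expressed in milliseconds (a trivial sketch; the function name is illustrative):

```python
def vsync_period_ms(frame_rate_hz: float) -> float:
    """Period of the Vsync signal: one frame every 1000 / frame_rate milliseconds."""
    return 1000.0 / frame_rate_hz

# 60 Hz -> ~16.6 ms per frame; 90 Hz -> ~11.1 ms per frame.
```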
  • Figure 6 shows a timing diagram of each thread processing job in an electronic device when the frame rate is 60Hz.
  • the total number of frames included in the animation is 6
  • the total distance of the animation is 96
  • the total time of the animation is 99.6ms.
  • the Vsync thread generates a VSYNC_APP signal according to a cycle of 16.6ms and sends it to the main thread of the application to wake up the main thread of the application and the rendering thread to perform drawing and rendering operations.
  • the time interval and displacement interval for the main thread of the application to draw each image frame in the animation remain unchanged.
  • the drawing time interval is 16.6ms
  • the drawing displacement interval is 16.
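The example values in FIG. 6 are mutually consistent, as a quick arithmetic check shows (the variable names are illustrative):

```python
frame_interval_ms = 16.6        # one Vsync period at 60 Hz
displacement_interval = 16      # displacement advance per frame
total_frames = 6

total_time_ms = total_frames * frame_interval_ms           # 6 * 16.6 = 99.6 ms
total_displacement = total_frames * displacement_interval  # 6 * 16  = 96
```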
  • in FIG. 6, the frame interval corresponding to a frame rate of 60 Hz is 16.6 ms.
  • the timestamp is used to record the time when the main thread of the application draws each image frame.
  • the displacement interval of the image drawing corresponds to the frame interval.
  • the displacement interval is 16.
  • VSYNC_APP ID is the cycle sequence number of the VSYNC_APP signal received by the main thread of the application.
  • Drawing and rendering refers to the schematic diagram of the main thread of the application and the rendering thread performing drawing and rendering operations.
  • the cache queue in FIG6 is used to store rendered image frames.
  • the rendering thread can store the rendered image frames in the empty cache of the cache queue; the synthesis thread can obtain the rendered image frames from the cache queue for synthesis.
  • the rendering thread is the producer of the cache queue
  • the synthesis thread is the consumer of the cache queue.
  • the cache queue has a maximum number of caches. In the example of FIG6, the maximum number of caches in the cache queue is 4.
  • the synthesis thread row in the figure is a schematic of the synthesis thread performing the synthesis operation, and the display row is a schematic of the display driver displaying the image frames. FIG. 6 also shows the displacement of each displayed image frame and the time interval between adjacent displayed image frames.
  • An electronic device may create a buffer queue (buffer queue), the producer of which is a rendering thread and the consumer is a synthesis thread.
  • the buffer queue may include multiple buffers (buffers), and in the initial state of the buffer queue, each buffer is an empty buffer (free buffer), and an empty buffer is a buffer that is not occupied by a rendering thread or a synthesis thread.
  • the maximum number of buffers (MaxBufferCount) of the buffer queue is determined by the frame rate of the electronic device. Exemplarily, when the frame rate of the electronic device is 60Hz, the MaxBufferCount of the buffer queue may be 10.
  • the MaxBufferCount of the buffer queue is an empirical value.
  • the rendering thread is the producer of the cache queue
  • the synthesis thread is the consumer of the cache queue. In the process of drawing, rendering, synthesis and displaying the image frame, it includes:
  • the rendering thread performs the rendering operation on the drawn image frame, and if the cache queue contains a free buffer, it dequeues one free buffer from the cache queue to store the rendered image frame. The status of that buffer is then updated to dequeued, indicating that the buffer has been obtained by the rendering thread for the corresponding operation.
  • the process of dequeueing a buffer includes: the rendering thread sends a request to dequeue a free buffer to the main thread of the application, and the main thread of the application determines whether the number of buffers in the cache queue with a status of dequeued has reached the maximum number of buffers that can be dequeued. If the number of buffers in the dequeued state is less than the maximum number of buffers that can be dequeued, it means that there are still free buffers in the current cache queue.
  • the main thread of the application searches for a free buffer in the order of free buffers and marks the state of the buffer as dequeued. After marking the state of the buffer, the cache information of the buffer is returned to the rendering thread, and the rendering thread performs a storage operation of the rendered image frame based on the cache information.
  • the cache information includes the cache address, cache status identifier, etc.
  • After completing the storage operation of the rendered image frame, the rendering thread queues the buffer holding the rendered image frame back into the cache queue. The status of the buffer is then updated to queued, indicating that it is waiting to be synthesized.
  • the process of queue buffer includes: the rendering thread sends a request for queue buffer to the application main thread, and the request carries the cache information of the buffer.
  • the application main thread updates the state of the cache to queued according to the cache information.
  • the synthesis thread requests (acquires) a cache storing a rendered image frame to perform the synthesis operation of the image frame layer.
  • the state of the cache is then updated to acquired, indicating that the cache is in a state of being acquired by the synthesis thread for synthesis.
  • the acquire buffer process includes: the composition thread sends an acquire buffer request to the application main thread, and the application main thread determines whether the number of acquired buffers in the cache queue is greater than or equal to the maximum number that can be synthesized. If the number of acquired buffers is less than that maximum, the application main thread sends the cache information of the first queued buffer to the composition thread, in the order in which the buffers were queued, and marks that cache as acquired.
  • the composition thread performs a composition operation on the rendered image in the cache based on the cache information.
  • the synthesis thread can send the synthesized image frame to the HWC and the display driver for display.
  • the display driver releases the cache. At this time, the state of the cache is updated to free.
  • the process of releasing the buffer includes: in the current display cycle, the display driver releases the buffer occupied by the composite image frame displayed in the previous display cycle and returns the cache information of the buffer to the synthesis thread; the synthesis thread then returns the cache information of the buffer to the application main thread.
  • the application main thread updates the state of the buffer to free according to the cache information.
  • the application main thread may also notify the rendering thread that there is an empty buffer in the buffer queue, so that the rendering thread obtains the empty buffer in the current drawing rendering cycle or the next drawing rendering cycle to store the rendered image frame.
  • the cache queues involved in this embodiment are all cache queues stored in the application.
  • the application main thread is responsible for obtaining information and updating the status of each cache in the cache queue.
  • the rendering thread needs to interact with the application main thread to implement the dequeue buffer and queue buffer operations; the synthesis thread needs to interact with the application main thread to implement the acquire buffer and release buffer operations.
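The four interactions summarized above (dequeue buffer, queue buffer, acquire buffer, release buffer) amount to a small state machine over the buffers of the cache queue. The sketch below is a simplified, single-threaded model for illustration only; it is not the actual implementation, and the class name, the dict-based buffer records, and the `MAX_DEQUEUED`/`MAX_ACQUIRED` limits are assumptions:

```python
# Toy model of the application-side cache queue described above.
# Buffer states cycle: free -> dequeued -> queued -> acquired -> free.

MAX_DEQUEUED = 1  # assumed limit on simultaneously dequeued buffers
MAX_ACQUIRED = 1  # assumed limit on simultaneously acquired buffers

class BufferQueue:
    def __init__(self, max_buffer_count=3):
        self.buffers = [{"id": i, "state": "free"} for i in range(max_buffer_count)]

    def _count(self, state):
        return sum(1 for b in self.buffers if b["state"] == state)

    def dequeue_buffer(self):
        """Rendering thread asks the application main thread for a free buffer."""
        if self._count("dequeued") >= MAX_DEQUEUED:
            return None  # maximum dequeued count reached: caller waits
        for b in self.buffers:
            if b["state"] == "free":
                b["state"] = "dequeued"
                return b  # cache information returned to the rendering thread
        return None  # no free buffer: rendering thread waits

    def queue_buffer(self, buf):
        """Rendering thread enqueues the filled buffer for synthesis."""
        buf["state"] = "queued"

    def acquire_buffer(self):
        """Synthesis thread takes a queued buffer for composition."""
        if self._count("acquired") >= MAX_ACQUIRED:
            return None  # maximum synthesizable count reached
        for b in self.buffers:  # simplified: list order stands in for queue order
            if b["state"] == "queued":
                b["state"] = "acquired"
                return b
        return None

    def release_buffer(self, buf):
        """Display driver has shown the frame; the buffer becomes free again."""
        buf["state"] = "free"
```

A full cycle then reads: `dequeue_buffer`, render into the buffer, `queue_buffer`, `acquire_buffer`, compose and display, `release_buffer`.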
  • in the cycle with Vsync-APP ID 1, when the Vsync_APP signal arrives, the main thread of the application draws image frame 4; the displacement of the initial image frame 4 is 0 by default.
  • the rendering thread renders the drawn image frame 4, and through interaction with the main thread of the application, obtains a free buffer from the cache queue to store the rendered image frame 4, and the main thread of the application updates the status of the cache stored in the rendered image frame 4 to dequeued.
  • the synthesis thread determines to perform the synthesis operation.
  • the synthesis thread interacts with the application main thread to obtain the rendered image frame 1 from the cache queue for synthesis.
  • the application main thread updates the status of the cache occupied by the image frame 1 to acquired.
  • the cache queue also includes the cache for storing the rendered image frame 2 and the cache for storing the rendered image frame 3, and their corresponding statuses are both queued. At this time, there is no free buffer in the cache queue.
  • the synthesis thread completes the synthesis operation of the rendered image frame 1, and sends its synthesized image frame 1 to the HWC for display.
  • the HWC displays the synthesized image frame 1 through the display driver, and releases the cache occupied by the synthesized image frame 1 before the end of the current cycle, and returns the cache information of the cache to the synthesis thread.
  • the synthesis thread returns the cache information of the cache to the main application thread, and the main application thread updates the status of the cache according to the cache information. At this time, there is a free buffer in the cache queue.
  • the main thread of the application draws image frame 5. After calculation, the displacement of image frame 5 is 16, and the displacement interval between image frame 5 and image frame 4 is 16. After obtaining the drawn image frame 5, the rendering thread renders it and, through interaction with the main thread of the application, obtains a free buffer from the cache queue to store the rendered image frame 5. The main thread of the application updates the status of the cache storing the rendered image frame 5 to dequeued.
  • the composition thread determines to perform the composition operation.
  • the composition thread interacts with the application main thread to obtain the rendered image frame 2 from the cache queue for composition operation, and the application main thread updates the status of the cache occupied by the image frame 2 to acquired.
  • the cache queue also includes the cache for storing the rendered image frame 3 and the cache for storing the rendered image frame 4, and their corresponding statuses are both queued.
  • the cache queue also includes the buffer being used to store the rendered image frame 5, and its corresponding status is dequeued. At this time, there is no free buffer in the cache queue.
  • the synthesis thread completes the synthesis operation of the rendered image frame 2, and sends the synthesized image frame 2 to the HWC for display.
  • the HWC displays the synthesized image frame 2 through the display driver, and releases the cache occupied by the synthesized image frame 2 before the end of the current cycle, and returns the cache information of the cache to the synthesis thread.
  • the synthesis thread returns the cache information of the cache to the main application thread, and the main application thread updates the status of the cache according to the cache information. At this time, there is a free buffer in the cache queue.
  • the main thread of the application draws image frame 6. After calculation, the displacement of image frame 6 is 32, and the displacement interval between image frame 6 and image frame 5 is 16. After obtaining the drawn image frame 6, the rendering thread renders the drawn image frame 6, and through interaction with the main thread of the application, obtains a free buffer from the cache queue to store the rendered image frame 6. The main thread of the application updates the status of the cache stored in the rendered image frame 6 to dequeued.
  • the composition thread determines to perform the composition operation.
  • the composition thread interacts with the application main thread to obtain the rendered image frame 3 from the cache queue for composition operation, and the application main thread updates the status of the cache occupied by the image frame 3 to acquired.
  • the cache queue also includes the buffer storing the rendered image frame 4 and the buffer storing the rendered image frame 5, and their corresponding status are both queued.
  • the cache queue also includes the buffer being used to store the rendered image frame 6, and its corresponding status is dequeued. At this time, there is no free buffer in the cache queue.
  • the operations performed by the rendering thread and the synthesis thread are similar to those in the first three cycles.
  • the synthesis thread performs synthesis operations normally in each cycle, and sends the synthesized image frame to the display in the next cycle of the synthesized image frame.
  • the display driver releases the buffer of the synthesized image frame before the end of the current display cycle, and returns the cache information of the released buffer to the synthesis thread.
  • the synthesis thread sends the cache information to the application main thread, and the application main thread updates the status of the cache according to the cache information.
  • the application main thread notifies the rendering thread that there is a free buffer in the cache queue, so that the rendering thread can obtain the last free buffer in the cache queue in each cycle to store the next drawn and rendered image frame. Since the application main thread draws an image frame in every cycle, the calculated displacement interval between each image frame and the previous one remains constant at 16. Accordingly, the displacement interval between adjacent synthesized image frames that are rendered, synthesized, and sent to the display also remains constant at 16, so that consecutive frames are displayed coherently.
  • if the rendering thread cannot obtain a free buffer, it cannot store the rendered image.
  • because the rendering thread and the application main thread run serially, the rendering thread's inability to render delays the application main thread's drawing of the next image frame. This makes the time interval between drawing the previous image frame and drawing the current image frame too large, resulting in a large displacement interval between the two frames. After adjacent image frames with a large displacement interval are rendered, synthesized, and sent to the display, visual freezes occur during display due to the large displacement interval.
  • Figure 8 shows an example where the rendering thread cannot obtain the free buffer from the cache queue to store the rendered image frame because the compositing thread fails to perform the compositing operation of the image frame in the corresponding cycle in time, which in turn affects the calculation and drawing of the next image frame by the application main thread, resulting in an excessively large displacement interval between adjacent frames.
  • the MaxBufferCount of the cache queue is 3.
  • the main thread of the application calculates the displacement of the drawn image frame 1 according to the timestamp and the total displacement distance. At 16.6ms, the displacement of the drawn image frame 1 is 0.
  • the main thread of the application wakes up the rendering thread to render the image frame 1.
  • the rendering thread interacts with the main thread of the application to obtain a free buffer from the cache queue to store the rendered image frame 1.
  • the main thread of the application updates the status of the cache storing the rendered image frame 1 to dequeued.
  • the synthesis thread determines not to perform the synthesis operation.
  • the cache queue includes one dequeued buffer and two free buffers.
  • the rendering thread completes the operation of storing the rendered image frame 1 in the cache.
  • the rendering thread enqueues the cache holding the rendered image frame into the cache queue, and the main thread of the application updates the status of the cache to queued.
  • the cache queue includes one queued buffer and two free buffers.
  • the main thread of the application calculates the displacement of the drawn image frame 2 according to the timestamp and the total displacement distance.
  • the displacement interval is 16 and the time interval is 16.6ms; at 33.2ms, the calculated displacement of image frame 2 is 16.
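The displacement values in this walkthrough follow directly from the Vsync timestamp. A minimal sketch under the example's assumptions (a linear motion-effect curve advancing 16 units per 16.6 ms period, starting at the 16.6 ms timestamp; the constant and function names are illustrative):

```python
FRAME_PERIOD_MS = 16.6   # Vsync period at 60 Hz
UNITS_PER_FRAME = 16     # assumed linear motion-effect curve
START_MS = 16.6          # timestamp at which image frame 1 is drawn

def displacement_at(timestamp_ms):
    """Displacement of the frame drawn at the given Vsync timestamp."""
    elapsed_frames = round((timestamp_ms - START_MS) / FRAME_PERIOD_MS)
    return elapsed_frames * UNITS_PER_FRAME
```

This reproduces the example: displacement 0 at 16.6 ms, 16 at 33.2 ms, and 32 at 49.8 ms.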
  • the main thread of the application wakes up the rendering thread to render the image frame 2.
  • the rendering thread interacts with the main thread of the application to obtain a free buffer from the cache queue to store the rendered image frame 2.
  • the main thread of the application updates the status of the cache storing the rendered image frame 2 to dequeued.
  • the composition thread determines not to perform the composition operation.
  • the cache queue includes a queued buffer, a dequeued buffer, and a free buffer.
  • the rendering thread completes the operation of storing the rendered image frame 2 in the cache.
  • the rendering thread enqueues the cache holding the rendered image frame into the cache queue, and the main thread of the application updates the status of the cache to queued.
  • the cache queue includes two queued buffers and one free buffer.
  • the main thread of the application calculates the displacement of the drawn image frame 3 according to the timestamp and the total displacement distance.
  • the displacement interval is 16 and the time interval is 16.6ms; at 49.8ms, the calculated displacement of image frame 3 is 32.
  • the main thread of the application wakes up the rendering thread to render the image frame 3.
  • the rendering thread interacts with the main thread of the application to obtain a free buffer from the cache queue to store the rendered image frame 3.
  • the main thread of the application updates the status of the cache storing the rendered image frame 3 to dequeued.
  • the synthesis thread determines to perform the synthesis operation.
  • the synthesis thread interacts with the application main thread, which returns the cache information corresponding to the rendered image frame 1 to the synthesis thread according to the cache order; the synthesis thread then performs the synthesis operation of the rendered image frame 1 according to the cache information.
  • the cache queue includes an acquired buffer, a queued buffer, and a dequeued buffer.
  • the composition thread determines not to perform composition operations.
  • the display driver displays the received synthesized image frame 1 .
  • the synthesis thread determines to perform the synthesis operation. By interacting with the main thread of the application, it obtains the rendered image frame 2 from the cache queue for synthesis operation. There is no new image to be sent to the display in the current cycle, and the display driver still displays the synthesized image frame 1.
  • the display driver releases the cache occupied by the synthesized image frame 1 and returns the cache information of the cache to the synthesis thread.
  • the synthesis thread returns the cache information to the application main thread.
  • the application main thread updates the status of the cache to free according to the cache information.
  • the cache queue includes an acquired buffer, a queued buffer, and a free buffer.
  • the main thread of the application wakes up the rendering thread to render image frame 4.
  • the rendering thread interacts with the main thread of the application to obtain the last free buffer from the cache queue to store the rendered image frame 4.
  • the main thread of the application updates the status of the cache storing the rendered image frame 4 to dequeued.
  • the cache queue includes an acquired buffer, a queued buffer, and a dequeued buffer.
  • the composition thread determines to perform the composition operation, and obtains the rendered image frame 3 from the cache queue for composition by interacting with the application main thread.
  • the synthesis thread sends the synthesized image frame 2 for display, and the display driver displays the received synthesized image frame 2.
  • the display driver releases the cache occupied by the synthesized image frame 2 and returns the cache information of the cache to the synthesis thread.
  • the synthesis thread returns the cache information to the application main thread, and the application main thread updates the status of the cache according to the cache information.
  • the cache queue includes a free buffer, an acquired buffer, and a queued buffer.
  • the synthesis thread determines to perform the synthesis operation.
  • image frame 4 is obtained from the cache queue for synthesis.
  • the synthesized image frame 3 is sent for display, and the display driver displays the received synthesized image frame 3.
  • the display driver releases the cache occupied by the synthesized image frame 3 and returns the cache information of the cache to the synthesis thread.
  • the synthesis thread returns the cache information to the main thread of the application, and the main thread of the application updates the status of the cache according to the cache information. Since the timestamp of the current cycle has exceeded the total duration of the animation, the main thread of the application and the rendering thread do not perform drawing and rendering operations.
  • the synthesized image frame 4 is sent for display, and the display driver displays the received synthesized image frame 4.
  • the display driver releases the cache occupied by the synthesized image frame 4 and returns the cache information of the cache to the synthesis thread.
  • the synthesis thread returns the cache information to the application main thread, and the application main thread updates the status of the cache according to the cache information. Since the timestamp of the current cycle has exceeded the total duration of the animation, the application main thread and the rendering thread do not perform drawing and rendering operations.
  • the buffer of image frame 1 is released before the end of the cycle with Vsync-APP ID 7, and the application main thread updates the status of the released buffer in that cycle, thereby waking up the rendering thread to obtain a free buffer for image frame 4.
  • the calculated displacement of image frame 4 is 96
  • the displacement interval between image frame 4 and image frame 3 is 64, which is different from the displacement interval of 16 under normal drawing.
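Because the displacement is computed from elapsed time rather than from a frame counter, waiting several cycles for a free buffer multiplies the displacement interval. A hedged sketch of that arithmetic (linear motion curve and 16.6 ms period assumed, as in the example; the function name is illustrative):

```python
def displacement_interval(prev_ts_ms, cur_ts_ms, period_ms=16.6, units_per_frame=16):
    """Displacement interval between two consecutively drawn frames,
    derived from the elapsed time between their draw timestamps."""
    elapsed_periods = round((cur_ts_ms - prev_ts_ms) / period_ms)
    return elapsed_periods * units_per_frame
```

Adjacent cycles give the normal interval of 16; if image frame 4 is drawn four periods after image frame 3, the interval jumps to 64, matching the figure.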
  • Phase 1 triggering the application drawing and rendering phase:
  • the Vsync thread of the synthesizer sends a Vsync_APP signal to the application main thread of the application.
  • the Vsync thread of the synthesizer generates a Vsync_APP signal and sends the Vsync_APP signal to the main thread of the application. After the Vsync_APP signal arrives, the application main thread starts to execute operations such as drawing and rendering of the current image frame.
  • S102 The main thread of the application starts measuring, laying out, and drawing.
  • the main thread of the application can obtain the system time for drawing the current image frame, and based on the motion effect curve and the system time, measure, calculate, lay out and draw the displacement of the current frame image, thereby obtaining the drawn image frame.
  • the image frame here can be image frame 1.
  • the application main thread wakes up the rendering thread to perform the rendering operation of the drawn image frame 1.
  • S104 The rendering thread dequeues an empty cache from the cache queue through the application main thread.
  • After completing the rendering operation of image frame 1, the rendering thread requests to dequeue an empty buffer from the buffer queue through the application main thread to store the rendered image frame 1.
  • the rendering thread obtains the last empty buffer and stores the rendered image frame 1 into the buffer.
  • the rendering thread stores the rendered image frames into a cache, and updates the state of the cache through the application main thread.
  • the rendering thread enqueues the cache of the rendered image frame 1 into the cache queue through the application main thread, and the application main thread updates the state of the cache, so that the synthesis thread can obtain the rendered image frame from the cache queue for synthesis operation during the synthesis cycle.
  • Phase 2 The synthesis thread does not execute the synthesis phase:
  • the Vsync thread of the synthesizer sends a Vsync_SF signal to the synthesis thread.
  • the Vsync thread generates a Vsync_SF signal and sends the Vsync_SF signal to the synthesis thread. After the Vsync_SF signal arrives, the synthesis thread determines whether to perform a synthesis operation of the image frame.
  • the synthesis thread determines not to perform the synthesis operation.
  • the situations in which the synthesis thread determines not to perform the synthesis operation include: abnormal performance of the synthesizer itself, which causes the synthesis thread to run too long, miss the display signal, and lose a frame; or, due to frame cutting, the interval between two adjacent image frames is too large, the synthesis thread cannot wait for the display signal, and synthesis is skipped based on the back pressure mechanism.
  • the consequence of the synthesis thread not performing synthesis operations is that the caches in the application's cache queue that store rendered image frames are not consumed by the synthesis thread. If the synthesis thread does not synthesize the image frames in the cache queue, the subsequent display and release process does not occur, and the number of empty caches in the cache queue keeps decreasing until there are none left, at which point the rendering thread cannot obtain an empty cache to store rendered image frames.
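The exhaustion described here can be reproduced with a minimal producer/consumer count. This sketch illustrates only the mechanism; one dequeue-and-queue per cycle and one compose-and-release per cycle are simplifying assumptions:

```python
def simulate(max_buffer_count=3, cycles=5, compose=True):
    """Return how many frames the rendering thread manages to store."""
    free, queued = max_buffer_count, 0
    stored = 0
    for _ in range(cycles):
        if free > 0:
            # rendering thread dequeues a free buffer, fills it, queues it
            free -= 1
            queued += 1
            stored += 1
        # else: no empty cache; rendering thread waits, drawing stalls
        if compose and queued > 0:
            # synthesis thread consumes one buffer; display then releases it
            queued -= 1
            free += 1
    return stored
```

With composition every cycle, a frame is stored in all 5 cycles; with the synthesis thread idle, storage stops once the 3 buffers are used up.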
  • Phase 3 Application triggers the drawing and rendering phase:
  • the Vsync thread of the synthesizer sends a Vsync_APP signal to the application main thread.
  • the Vsync thread generates a Vsync_APP signal and sends the Vsync_APP signal to the application main thread. After the Vsync_APP signal arrives, the application main thread starts to execute the current image frame drawing and rendering operation.
  • the main thread of the application can obtain the system time for drawing the current image frame, and measure, calculate, lay out and draw the displacement of the current frame image based on the motion effect curve and the system time, so as to obtain the drawn image frame.
  • the image frame here can be image frame 2.
  • S303 The main application thread wakes up the rendering thread to perform rendering operations.
  • the application main thread wakes up the rendering thread to perform the rendering operation of the drawn image frame 2.
  • S304 The rendering thread requests an empty cache from the cache queue through the application main thread.
  • After completing the rendering operation of image frame 2, the rendering thread requests to dequeue an empty buffer from the buffer queue through the application main thread to store the rendered image frame 2.
  • because the synthesis thread did not perform the synthesis operation in S202, the buffers in the buffer queue storing rendered image frames have not been consumed, and no buffer has been released. Therefore, after the last empty buffer in the buffer queue was used in S105, there is no empty buffer left; the rendering thread cannot obtain an empty buffer and enters a waiting state.
  • Phase 4 The synthesis thread performs synthesis and HWC/application interaction:
  • the Vsync thread sends a Vsync_SF signal to the synthesis thread.
  • the Vsync thread generates a Vsync_SF signal and sends the Vsync_SF signal to the synthesis thread. After the Vsync_SF signal arrives, the synthesis thread determines whether to perform a synthesis operation.
  • the synthesis thread starts synthesis, and after synthesis is completed, the synthesized image frame is sent to the HWC for display.
  • the synthesis thread determines to perform the synthesis operation, obtains the rendered image frame 1 from the cache queue through the application main thread for the synthesis operation, and sends the synthesized image frame 1 to the HWC for display.
  • the HWC returns the cache information of the cache whose image frame has finished display to the synthesis thread.
  • After displaying the synthesized image frame 1, the HWC releases the cache of the synthesized image frame 1 in the next display cycle and returns the cache information of the cache to the synthesis thread.
  • the synthesis thread returns the cache information of the cache to the main thread of the application.
  • After obtaining the cache information of the cache, the synthesis thread returns the cache information to the main thread of the application.
  • S405 The application main thread updates the status of the cache in the cache queue according to the cache information, increases the number of empty caches in the cache queue by 1, and wakes up the rendering thread for rendering.
  • After obtaining the cache information, the main thread of the application increases the number of empty caches in the cache queue by one according to the cache information, and wakes up the rendering thread waiting for an empty cache to perform rendering operations.
  • the rendering thread dequeues an empty cache from the cache queue through the application main thread to perform a storage operation on the rendered image frame, and updates the state of the cache through the application main thread.
  • After receiving the wake-up message from the application main thread, the rendering thread dequeues an empty cache from the cache queue through the application main thread to store the rendered image frame 2, and updates the state of the cache through the application main thread.
  • the rendering thread of the application is unable to obtain an empty cache from the cache queue in time, resulting in the rendering thread being in a waiting state (S305) all the time.
  • the rendering thread does not perform rendering operations, which affects the drawing operations of the main thread of the application, thereby causing the main thread of the application to draw adjacent image frames with a displacement interval that is too large.
  • Figure 10 shows a timing diagram of changes in the MaxBufferCount of a cache queue during image frame drawing, rendering, synthesis and display.
  • the cycle in which frame 11 is located is the first cycle.
  • the MaxBufferCount in the cache queue is 4.
  • the number of queued buffers in the cache queue is 2, and there are 2 free buffers and 2 queued buffers in the cache queue.
  • the main application thread draws image frame 11, and the displacement interval of image frame 11 calculated by the main application thread is 16.
  • the rendering thread is awakened to perform the rendering of image frame 11.
  • After the rendering thread completes the rendering of image frame 11, it dequeues a free buffer from the cache queue through the main application thread to store the rendered image frame 11.
  • the cache queue has 2 free buffers and 2 queued buffers.
  • the number of queued buffers increases by 1 and the number of free buffers decreases by 1.
  • the synthesis thread interacts with the application main thread and requests a queued buffer from the cache queue for synthesis operations. According to the order of image frames, the synthesis thread obtains image frame 10 to perform synthesis operations.
  • the number of queued buffers in the cache queue is 3, and there is 1 free buffer and 3 queued buffers in the cache queue.
  • the main application thread draws image frame 12, and the displacement interval of image frame 12 calculated by the main application thread is 16. After the main application thread completes the drawing of image frame 12, the rendering thread is awakened to perform the rendering of image frame 12. After the rendering thread completes the rendering of image frame 12, the main application thread dequeues the last free buffer from the cache queue to store the rendered image frame 12. At this time, the number of queued buffers in the cache queue increases by 1, and the number of free buffers decreases by 1.
  • the synthesis thread does not synthesize during this cycle.
  • the synthesis thread does not synthesize during this cycle.
  • the synthesis thread does not synthesize.
  • the display driver displays the received synthesized image frame 9, and the displacement interval of the image frame 9 is 16.
  • the synthesis thread requests a queued buffer from the cache queue for synthesis operation by interacting with the application main thread.
  • the synthesis thread obtains the rendered image frame 11 to perform the synthesis operation.
  • the number of queued buffers in the cache queue is reduced by one.
  • the display driver displays the received synthesized image frame 10, and the displacement interval of the image frame 10 is 16.
  • the buffer of the synthesized image frame 9 displayed in the previous cycle is released, and the cache information of the buffer is returned to the synthesis thread.
  • the synthesis thread returns the cache information of the buffer to the application main thread, and updates the status of the buffer in the cache queue through the application main thread. At this time, there is a free buffer in the cache queue.
  • the application main thread wakes up the rendering thread to execute the operation of storing the rendered image frame.
  • the rendering thread interacts with the application main thread to dequeue a free buffer from the cache queue to store the rendered image frame 13.
  • the main application thread draws the image frame 14 , and the calculated displacement interval of the image frame 14 is 48.
  • the rendering thread is awakened to perform the rendering operation of the image frame 14 .
  • the synthesis thread requests a queued buffer from the cache queue for synthesis operation through interaction with the main thread of the application. According to the order of the image frames, the synthesis thread obtains the rendered image frame 12 for synthesis operation.
  • the display driver displays the received synthesized image frame 11, and the displacement interval of the image frame 11 is 16.
  • the displacement interval of the image frame 14 drawn by the main thread in this cycle is different from the displacement interval calculated when drawing the image frame 13.
  • the displacement interval of the image frame 13 is 16, and the displacement interval of the image frame 14 is 48 calculated based on the time and motion effect curve of the interval of two cycles.
  • the displacement interval between the two adjacent image frames is too large, which causes visual freezes when the screen displays image frames 13 and 14.
  • the rendering thread deposits the rendered image frames into the free buffers in the cache queue.
  • the number of free buffers in the cache queue gradually decreases, while the number of queued buffers gradually increases.
  • the synthesis thread does not consume the queued buffer in time, that is, the synthesizer does not request the queued buffer in time to perform synthesis operations on the rendered image frames, resulting in no buffer being sent to the display and released to the free state.
  • if the synthesis thread does not perform synthesis operations, the number of free buffers in the cache queue keeps shrinking until there are no free buffers left in the cache queue.
  • the rendering thread can no longer obtain free buffers from the cache queue to deposit the rendered image frames, causing the rendering thread to be in a waiting state.
  • the rendering thread cannot continue to perform rendering operations, affecting the application main thread's drawing of the next image frame.
  • the waiting time of the application main thread causes the displacement interval between the two adjacent image frames it draws to be too large.
  • the reason why the synthesis thread does not perform the synthesis operation may be that the performance of the synthesizer where the synthesis thread is located is abnormal, so that the synthesis thread runs too long, misses the display signal, and a frame is lost without being synthesized; or frame cutting makes the interval between two adjacent image frames too large, so the synthesis thread cannot wait for the display signal and, based on the back pressure mechanism, does not perform the synthesis operation.
  • the back pressure mechanism refers to the fact that the synthesis thread believes that there is a task backlog in the image frames to be synthesized (rendered image frames), causing the synthesis thread to misjudge that it does not need to perform the synthesis operation at present, thereby causing the synthesis task of the synthesis thread to lag.
  • the mechanism of the synthesis thread is that when there is GPU synthesis, it will directly send the current image frame to HWC without waiting for the previous image frame to be sent to the display.
  • HWC maintains an asynchronous cache queue, and HWC serially synthesizes the image frames to be synthesized sent by the synthesis thread. Since the asynchronous cache queue allows accumulation, the synthesis thread will not execute the synthesis task when it determines that there is a task accumulation of the image frame to be synthesized (the rendered image frame).
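The free/queued buffer life cycle described above can be sketched as a toy producer-consumer model. This is a simplification in Python; the class and method names are illustrative and are not the actual buffer-queue API.

```python
from collections import deque

class CacheQueueSketch:
    """Buffers cycle through free -> dequeued -> queued -> free."""
    def __init__(self, max_buffer_count=3):
        self.free = deque(range(max_buffer_count))   # free buffers
        self.queued = deque()                        # rendered, awaiting synthesis

    def dequeue_free(self):
        # Rendering thread requests a free buffer; None means it must wait.
        return self.free.popleft() if self.free else None

    def queue(self, buf):
        # Rendering thread queues a rendered frame for the synthesis thread.
        self.queued.append(buf)

    def consume_one(self):
        # Synthesis thread consumes a queued buffer and releases it as free.
        if self.queued:
            self.free.append(self.queued.popleft())

q = CacheQueueSketch(max_buffer_count=3)
# The synthesis thread never consumes: every rendered frame shrinks `free`.
for _ in range(3):
    q.queue(q.dequeue_free())
assert q.dequeue_free() is None      # rendering thread now blocks waiting
q.consume_one()                      # one synthesis releases one buffer
assert q.dequeue_free() is not None  # rendering can proceed again
```

The final two assertions show exactly the failure mode in the text: with no consumer the producer stalls, and a single consumption unblocks it.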
  • the frame rate of the electronic device is 60Hz
  • the application process, synthesis thread, and display driver all perform corresponding drawing rendering, synthesis, and display sending operations according to the cycle corresponding to the frame rate of 60Hz.
  • the frame rate of the electronic device is switched to 90Hz, and the application process, synthesis thread, and display driver all perform corresponding drawing rendering, synthesis, and display sending operations according to the cycle corresponding to the frame rate of 90Hz.
  • the cycle at a frame rate of 90Hz is shorter than the cycle at 60Hz; that is, at 90Hz the application process, synthesis thread, and display driver process each cycle's image frames faster, while at 60Hz they process them more slowly.
  • the display driver still displays the synthesized image frames at a frame rate of 60Hz with a period of 16.6 milliseconds
  • while the application process has already started drawing and rendering image frames at a frame rate of 90Hz with a period of 11.1 milliseconds. The display driver therefore sends image frames to the display more slowly than the application process draws and renders them and the synthesis thread synthesizes them, so the synthesized image frames pile up, and the synthesis thread misjudges that it does not need to perform the synthesis operation at the moment.
  • because the synthesis thread does not synthesize, the cache queue has no consumer, so the rendering thread cannot dequeue a free buffer from the cache queue; this blocks the normal operation of the rendering thread and the application main thread and produces the problem described above.
  • the present embodiment provides an image processing method that effectively avoids the following situation: the synthesis thread does not perform synthesis operations and no free buffer remains in the cache queue, so the rendering thread cannot dequeue a free buffer to store the rendered image frame; this affects the application main thread's drawing of the next image frame, causes frame loss when the application main thread draws image frames, and makes the image frames sent for display appear stuck.
  • FIG. 11 shows an example in which a user performs a first operation based on the display screen of an electronic device, and the electronic device starts a first application in response to the first operation.
  • the application process of the electronic device interacts with the synthesis thread to implement an example of the image processing method.
  • it includes:
  • S501 The electronic device receives a first operation of a user on a touch screen of the electronic device.
  • the execution subject can be a desktop application of an electronic device, for example, a desktop launcher of an electronic device, and the launcher is used to receive a first operation of a user on the touch screen of the electronic device.
  • the first operation can be a single-click operation, a double-click operation, etc. of the user on the touch screen.
  • the first operation is a selection operation of the user for the desktop application of the electronic device.
  • the first operation is a single-click operation of the user on the touch screen for the first application on the desktop of the electronic device.
  • the first operation can be a single-click operation of the user on the touch screen of the mobile phone for application 5, which is used to start application 5.
  • S502 The electronic device starts a first application in response to a first operation.
  • In response to the first operation, the launcher starts the desktop application corresponding to the first operation.
  • the user performs a single-click operation on the touch screen of the mobile phone for application 5 , and the launcher starts application 5 in response to the single-click operation.
  • the launcher displays all the image frames of the launch animation of application 5 on the mobile phone desktop.
  • the image frames of the launch animation of application 5 include 5 image frames
  • the display process of the launch animation of application 5 can refer to Figure 1 (b) to Figure 1 (f). All the image frames in the launch animation have a time sequence.
  • S503 The application process draws and renders the first image frame within the drawing and rendering cycle of the first image frame, and stores the obtained first image frame in an idle cache object in the cache queue.
  • Before displaying the five image frames of the startup animation of application 5, the electronic device needs to draw, render, and synthesize these image frames, and then send the synthesized image frames for display to present the final display effects of Figure 1 (b) to Figure 1 (f).
  • the image frames are drawn and rendered by the application process. Specifically, the image frames are drawn by the application main thread in the application process, and the drawn image frames are rendered by the rendering thread in the application process. The rendered image frames are synthesized by the synthesis thread.
  • the first image frame is an image frame in the startup animation during the startup process of application 5.
  • within the drawing and rendering cycle of the first image frame, the application main thread of the application process draws the first image frame, and the rendering thread of the application process renders the drawn first image frame to obtain a rendered first image frame. If there is an idle cache object in the cache queue, the rendering thread stores the rendered first image frame in an idle cache object in the cache queue. Accordingly, after the rendered first image frame is stored, the number of idle cache objects in the cache queue is reduced by 1.
  • the synthesis cycle of the first image frame is after the drawing and rendering cycle of the first image frame.
  • the synthesis thread does not perform a synthesis operation within the synthesis cycle of the first image frame, that is, the synthesis thread does not perform a synthesis operation on the rendered first image frame
  • the cache queue has no consumer, and there may be no free cache objects in the cache queue.
  • the synthesis thread sends a first adjustment request to the application process.
  • the application process increases the number of free cache objects in the cache queue based on the first adjustment request, so that the application process draws and renders the second image frame within the drawing and rendering cycle of the second image frame and stores the obtained second image frame in a free cache object in the cache queue.
  • the first adjustment request may carry a first indication value, which is used to indicate an increase in the number of cache objects and to increase the number of free cache objects in the cache queue.
  • the free cache object is a free buffer in the cache queue.
  • the application process increases the number of free cache objects in the cache queue based on the first indication value in the first adjustment request. For example, if the first indication value is 1, the application process increases the number of free cache objects in the cache queue by 1; if the first indication value is 2, the application process increases the number of free cache objects in the cache queue by 2.
  • after the number of free cache objects in the cache queue is increased, it can be ensured that there is always at least one free cache object in the cache queue available to the application process. That is, there is always at least one free cache object that the rendering thread of the application process can use to store the image frame obtained in the next drawing and rendering cycle, for example, the second image frame obtained in the drawing and rendering cycle of the second image frame.
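A minimal sketch of how the application process might handle the first adjustment request by adding free cache objects; the function name, buffer ids, and indication-value handling are assumptions for illustration, not the embodiment's actual interface.

```python
from collections import deque

def handle_first_adjustment(free_buffers, next_id, first_indication_value):
    """Application process: add `first_indication_value` new free cache
    objects to the cache queue; return the next unused buffer id."""
    for _ in range(first_indication_value):
        free_buffers.append(next_id)
        next_id += 1
    return next_id

free = deque()          # no free cache objects left in the cache queue
nid = handle_first_adjustment(free, next_id=3, first_indication_value=1)
assert len(free) == 1   # at least one free cache object is available again
nid = handle_first_adjustment(free, next_id=nid, first_indication_value=2)
assert len(free) == 3
```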
  • the second image frame is an image frame in the startup animation during the startup process of application 5.
  • the drawing and rendering cycle of the second image frame is located after the drawing and rendering cycle of the first image frame, and the drawing start time of the second image frame differs from the drawing start time of the first image frame by N cycles, where N is a positive integer.
  • the image frames in the startup animation have a time sequence, and the drawing and rendering cycle of the second image frame is located after the drawing and rendering cycle of the first image frame.
  • the drawing and rendering cycle of the second image frame can be the next cycle of the drawing and rendering cycle of the first image frame; or, the drawing and rendering cycle of the second image frame can be the next N cycles of the drawing and rendering cycle of the first image frame, for example, the drawing and rendering cycle of the second image frame can be the second cycle after the drawing and rendering cycle of the first image frame.
  • the first image frame and the second image frame may also be image frames after the electronic device has completed launching the first application, for example, multiple image frames in an internal display screen of the first application.
  • the first image frame and the second image frame may also be image frames in a process in which the electronic device switches from a first refresh rate to a second refresh rate during the startup of the first application, wherein the first refresh rate is lower than the second refresh rate.
  • the refresh rate is the frame rate of the electronic device.
  • the first refresh rate can be 60Hz, that is, 60 frames of images are refreshed in 1 second, and one frame of image is refreshed every 16.6 milliseconds.
  • the second refresh rate can be 90Hz, that is, 90 frames of images are refreshed in 1 second, and one frame of image is refreshed every 11.1 milliseconds.
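The two periods quoted here follow directly from the refresh rates; the arithmetic is shown for completeness.

```python
def vsync_period_ms(refresh_rate_hz):
    # One frame interval: 1000 ms divided by the refresh rate.
    return 1000.0 / refresh_rate_hz

assert abs(vsync_period_ms(60) - 16.6) < 0.1   # ~16.6 ms per frame at 60 Hz
assert abs(vsync_period_ms(90) - 11.1) < 0.1   # ~11.1 ms per frame at 90 Hz
```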
  • the image processing method provided in this embodiment can effectively solve the following problem: during the application startup process, refresh rate switching puts the synthesis thread out of step with the processing cycle of the application process, the synthesis thread believes there is a task backlog of image frames to be synthesized and does not perform the synthesis operation, and as a result no cache object is released in the cache queue.
  • image processing is performed on the image frames of the startup animation during the startup process.
  • when the synthesis thread does not perform the synthesis operation, the number of free cache objects in the cache queue is dynamically increased, so that there is always at least one free cache object in the cache queue for the application process to use.
  • the application process can therefore store the rendered image frame in a free cache object of the cache queue, avoiding the frame loss that may occur while the application process draws and renders image frames, solving the display jamming of image frames sent for display caused by frame loss, and improving the display smoothness of the startup animation during the application startup process.
  • FIG13 shows an example of an image processing method implemented by the interaction between the application process of the electronic device and the synthesis thread in the process of exiting the first application.
  • the method includes:
  • S601 The electronic device receives a second operation of a user on a touch screen of the electronic device.
  • the second operation may be a swiping operation.
  • the execution subject here can be a desktop application of an electronic device, for example, a desktop launcher of an electronic device, and the launcher is used to receive a second operation of the user on the touch screen of the electronic device.
  • the second operation can be a sliding operation of the user on the touch screen, etc.
  • the second operation is an exit operation of the user for an application of the electronic device.
  • the second operation is an upward swiping exit operation of the user on the touch screen for the first application on the desktop.
  • the second operation can be an upward swiping operation of the user on the touch screen of the mobile phone for application 5, which is used to exit application 5 and return to the desktop of the mobile phone.
  • S602 The electronic device exits the first application in response to the second operation.
  • In response to the second operation, the launcher exits the current interface of the application corresponding to the second operation and returns to the desktop of the mobile phone.
  • the user swipes up on the touch screen of the mobile phone for application 5, and the launcher exits application 5 in response to the swipe-up operation and returns to the desktop display interface of the mobile phone.
  • the launcher displays all image frames of the exit animation of application 5 on the mobile phone desktop.
  • the image frames of the exit animation of application 5 include 5 image frames
  • the display process of the exit animation of application 5 can refer to Figure 12 (b) to Figure 12 (f). All image frames in the exit animation have a time sequence.
  • the electronic device performs the following steps:
  • S503 The application process draws and renders the first image frame within the drawing and rendering cycle of the first image frame, and stores the obtained first image frame in an idle cache object in the cache queue.
  • Before displaying the five image frames of the exit animation of application 5, the electronic device needs to draw, render, and synthesize these image frames, and then send the synthesized image frames for display to present the final display effects of Figure 12 (b) to Figure 12 (f).
  • the first image frame is an image frame in the exit animation during the exit process of application 5.
  • the application main thread of the application process draws the first image frame within the drawing and rendering cycle of the first image frame, and the rendering thread of the application process renders the drawn first image frame to obtain the rendered first image frame.
  • the rendered first image frame is stored in an idle cache object in the cache queue. Accordingly, after the rendered first image frame is stored in an idle cache object in the cache queue, the number of idle cache objects in the cache queue is reduced by one.
  • the synthesis cycle of the first image frame is after the drawing and rendering cycle of the first image frame.
  • the synthesis thread does not perform a synthesis operation within the synthesis cycle of the first image frame, that is, the synthesis thread does not perform a synthesis operation on the rendered first image frame
  • the cache queue has no consumer, and there may be no free cache objects in the cache queue.
  • the synthesis thread sends a first adjustment request to the application process.
  • the application process increases the number of free cache objects in the cache queue based on the first adjustment request, so that the application process draws and renders the second image frame within the drawing and rendering cycle of the second image frame and stores the obtained second image frame in a free cache object in the cache queue.
  • the first adjustment request may carry a first indication value, and the first indication value is used to indicate an increased number of cache objects, thereby increasing the number of free cache objects in the cache queue.
  • the application process increases the number of free cache objects in the cache queue based on the first indication value in the first adjustment request. For example, if the first indication value is 1, the application process increases the number of free cache objects in the cache queue by 1; if the first indication value is 2, the application process increases the number of free cache objects in the cache queue by 2.
  • after the number of free cache objects in the cache queue is increased, it can be ensured that there is always at least one free cache object in the cache queue available to the application process. That is, there is always at least one free cache object that the rendering thread of the application process can use to store the image frame obtained in the next drawing and rendering cycle, for example, the second image frame obtained in the drawing and rendering cycle of the second image frame.
  • the second image frame is an image frame in the exit animation during the exit process of application 5.
  • the drawing and rendering cycle of the second image frame is located after the drawing and rendering cycle of the first image frame, and the drawing start time of the second image frame differs from the drawing start time of the first image frame by N cycles, where N is a positive integer.
  • the image frames in the exit animation have a time sequence, and the drawing and rendering cycle of the second image frame is located after the drawing and rendering cycle of the first image frame.
  • the drawing and rendering cycle of the second image frame can be the next cycle of the drawing and rendering cycle of the first image frame; or, the drawing and rendering cycle of the second image frame can be the next N cycles of the drawing and rendering cycle of the first image frame, for example, the drawing and rendering cycle of the second image frame can be the second cycle after the drawing and rendering cycle of the first image frame.
  • the first image frame and the second image frame may also be image frames in a process in which the electronic device switches from a first refresh rate to a second refresh rate during the exit process of the first application, wherein the first refresh rate is lower than the second refresh rate.
  • the refresh rate is the frame rate of the electronic device.
  • the first refresh rate can be 60Hz, that is, 60 frames of images are refreshed in 1 second, and one frame of images is refreshed every 16.6 milliseconds.
  • the second refresh rate can be 90Hz, that is, 90 frames of images are refreshed in 1 second, and one frame of images is refreshed every 11.1 milliseconds.
  • the image processing method provided in this embodiment can effectively solve the following problem: during the application exit process, refresh rate switching puts the synthesis thread out of step with the processing cycle of the application process, the synthesis thread believes there is a task backlog of image frames to be synthesized and does not perform the synthesis operation, and as a result no cache object is released in the cache queue.
  • image processing is performed on the image frames of the exit animation during the exit process.
  • when the synthesis thread does not perform the synthesis operation, the number of free cache objects in the cache queue is dynamically increased, so that there is always at least one free cache object in the cache queue for the application process to use.
  • the application process can therefore store the rendered image frame in a free cache object of the cache queue, avoiding the frame loss that may occur while the application process draws and renders image frames, solving the display jamming of image frames sent for display caused by frame loss, and improving the display smoothness of the exit animation during the application exit process.
  • the image processing method of steps S503-S505 performed by the electronic device can also be applied to the whole process from starting the first application to exiting it.
  • the image processing method of steps S503-S505 performed by the electronic device can also be applied in other scenarios.
  • for example, the image processing scenario of image frames of dynamic effects inside an application of the electronic device, the image processing scenario of image frames of game scene effects of the electronic device, the image processing scenario of image frames of off-screen sliding effects of the electronic device, or the image processing scenario of image frames of other hand-following effects of the electronic device.
  • the problem of frame loss caused by the synthesis thread of the electronic device not performing the synthesis operation, resulting in jamming of the image frames being sent for display can be solved, thereby optimizing the display smoothness of the image frames.
  • since the synthesis thread is more likely to skip synthesis operations in the scenarios of application startup (refer to FIG1), application exit (refer to FIG12), and off-screen sliding (refer to FIG2) of the electronic device, the effect of the image processing method provided in this embodiment is more obvious in these scenarios, and the optimized motion effect display is smoother.
  • an image processing method in which the application main thread, the rendering thread, the synthesizer's synthesis thread, the synthesizer's Vsync thread, and the HWC interact with each other during the drawing, rendering, synthesis, and display of an animated image frame.
  • the method includes the following stages:
  • Phase 1: Application drawing and rendering phase:
  • S1101 The Vsync thread of the synthesizer sends a Vsync_APP signal to the application main thread of the application.
  • the synthesizer includes a Vsync thread and a synthesis thread.
  • the Vsync thread is used to generate a Vsync signal.
  • the Vsync signal includes a Vsync_APP signal and a Vsync_SF signal.
  • the Vsync_APP signal is used to trigger the application main thread to perform a drawing operation of an image frame.
  • the Vsync_SF signal is used to trigger the synthesis thread to perform a synthesis operation of an image frame.
  • the Vsync thread determines the signal period according to the frame rate of the electronic device. For example, if the frame rate of the electronic device is 60 and the image frame interval is 16.6ms, the Vsync thread generates a Vsync_APP signal every 16.6ms and sends the Vsync_APP signal to the application main thread. The Vsync thread generates a Vsync_SF signal every 16.6ms and sends the Vsync_SF signal to the synthesis thread.
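The Vsync thread's behaviour can be sketched as generating paired signals on a fixed period derived from the frame rate. This is an illustrative model, not the synthesizer's real implementation; the function and signal-tuple layout are assumptions.

```python
def vsync_schedule(frame_rate_hz, n_periods):
    """Emit one Vsync_APP (drives application drawing) and one Vsync_SF
    (drives the synthesis thread) per period."""
    period_ms = 1000.0 / frame_rate_hz
    events = []
    for i in range(n_periods):
        t = i * period_ms
        events.append(("Vsync_APP", t))   # to the application main thread
        events.append(("Vsync_SF", t))    # to the synthesis thread
    return events

ev = vsync_schedule(60, 2)
assert ev[0][0] == "Vsync_APP" and ev[1][0] == "Vsync_SF"
assert abs(ev[2][1] - ev[0][1] - 16.6) < 0.1   # ~16.6 ms apart at 60 Hz
```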
  • S1102 The main thread of the application starts measuring, laying out, and drawing.
  • the main thread of the application can obtain the system time for drawing the current image frame, and measure, calculate, lay out and draw the displacement of the current frame image based on the motion curve and the system time, so as to obtain the drawn image frame.
  • the image frame here can be image frame 2.
  • the main thread of the application has completed the drawing of image frame 1
  • the rendering thread has completed the rendering of image frame 1.
  • the main thread of the application executes the current image frame drawing operation in the current cycle.
  • the drawn image frame is image frame 2.
  • This embodiment is aimed at drawing image frames of the first type of dynamic effects whose displacement is related to system time.
  • t is the current time
  • t_total is the total display time of the animation
  • y_total is the displacement distance between the first image frame and the last image frame of the animation.
  • y(0) is the displacement of the first image frame of the animation
  • t is the calculation time
  • n is the preset displacement interval.
  • t = t_c - (t_c - t_0) % q
  • t_c is the current time
  • t_0 is the drawing time of the first image frame of the animation
  • q is the frame rate of the electronic device.
  • the main application thread measures, lays out, and draws the image frame 2 according to a preset calculation method for the image frame displacement.
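A sketch of the time alignment t = t_c - (t_c - t_0) % q combined with a motion curve. The linear curve is a placeholder (the embodiment leaves the curve shape open), and q is treated here as the frame interval in milliseconds corresponding to the frame rate.

```python
def aligned_draw_time(t_c, t_0, q):
    # t = t_c - (t_c - t_0) % q: snap the current time back to the start
    # of its frame interval, so displacement is evaluated at a whole
    # number of intervals after the first frame.
    return t_c - (t_c - t_0) % q

def displacement(t, t_total, y_total, y0=0.0):
    # Placeholder linear motion-effect curve.
    return y0 + y_total * min(t, t_total) / t_total

q_ms = 16.6                                   # frame interval at 60 Hz
t = aligned_draw_time(t_c=37.0, t_0=0.0, q=q_ms)
assert abs(t - 2 * q_ms) < 1e-9               # snapped back to ~33.2 ms
y = displacement(t, t_total=1000.0, y_total=500.0)
assert 0.0 < y < 500.0
```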
  • S1103 The application main thread wakes up the rendering thread of the application to perform a rendering operation.
  • After the main thread of the application completes the measurement, layout, and drawing of image frame 2, it wakes up the rendering thread of the application to perform the rendering operation of the drawn image frame 2.
  • S1104 The rendering thread dequeues a free cache from the cache queue through the application main thread.
  • after completing the rendering operation of image frame 2, the rendering thread interacts with the application main thread to dequeue a free buffer from the cache queue to store the rendered image frame 2.
  • the rendering thread can interact with the application main thread to dequeue the free buffer and store the rendered image frame 1.
  • the rendering thread can dequeue a free buffer from the cache queue in a first-in-first-out (FIFO) manner; or, the rendering thread can dequeue a free buffer from the cache queue in other agreed ways.
  • the application main thread updates the status of the cache to dequeued.
  • the application main thread updates the state of the cache and sends a response to the synthesis thread.
  • the rendering thread may queue the cache of the rendered image frame 2 into the cache queue in accordance with the FIFO acquisition method; or, the rendering thread may queue the cache of the rendered image frame 2 into the cache queue in accordance with other agreed methods.
  • the rendering thread stores the rendered image frame 2 into the cache
  • the cache storing the rendered image frame 2 is queued into the cache queue
  • through interaction with the rendering thread, the application main thread updates the state of the cache to queued.
  • the application main thread sends a response to the synthesis thread so that the synthesis thread requests the queued buffer for synthesis operation.
  • Phase 2: The synthesis thread does not perform the synthesis operation:
  • S1201 The Vsync thread of the synthesizer sends a Vsync_SF signal to the synthesis thread.
  • the Vsync thread of the synthesizer generates a Vsync_SF signal according to a frame interval and sends the Vsync_SF signal to the synthesis thread.
  • S1202 The synthesis thread determines whether to perform a synthesis operation of the image frame.
  • the synthesis thread determines that no synthesis operation will be performed in the current synthesis cycle.
  • the synthesis thread does not synthesize the image frames in the cache queue, which means that the queued buffer in the cache queue is not consumed and no acquired buffer is released.
  • the current synthesis cycle of the synthesis thread is the next cycle of the drawing cycle of the image frame 2 of the application main thread.
  • S1203 The synthesis thread determines to increase the maximum cache quantity of the cache queue by 1.
  • if the synthesis thread does not perform the synthesis operation, the queued buffers in the cache queue will inevitably not be consumed. In this case, if the synthesis thread determines that it will not perform the synthesis operation in the current cycle, the maximum buffer count (MaxBufferCount) of the cache queue is increased by 1.
  • the synthesis thread can use a timer to determine whether a synthesis operation is executed within a period of time. If no synthesis operation is performed, it is determined to increase the maximum cache quantity of the cache queue by 1.
  • the period of time can also be the length of N cycles corresponding to the current frame rate; here N is 1, 2, 3...k (integer).
  • N should not be too large. It should be noted that when N is 2, it means that the synthesis thread has not performed a synthesis operation for two consecutive cycles.
  • the synthesis thread can send a request to the application main thread to increase MaxBufferCount by 1, so that the application main thread performs corresponding operations based on the request. For example, after receiving the synthesis thread's request to increase MaxBufferCount by 1, the application main thread increases the maximum buffer count of the cache queue.
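One way to realize the timer check described above is to track the last completed synthesis and compare the elapsed time against N periods. This is a sketch; the class name and thresholds are assumptions, not the embodiment's actual design.

```python
class SynthesisMonitor:
    """Decide whether the synthesis thread has been idle long enough
    (N whole periods with no synthesis) to grow the cache queue by 1."""
    def __init__(self, period_ms, n_periods=1):
        self.period_ms = period_ms
        self.n_periods = n_periods
        self.last_synthesis_ms = 0.0

    def on_synthesis(self, now_ms):
        # Called each time the synthesis thread actually composites a frame.
        self.last_synthesis_ms = now_ms

    def should_grow_queue(self, now_ms):
        # True when N periods have elapsed with no synthesis operation.
        return now_ms - self.last_synthesis_ms >= self.n_periods * self.period_ms

m = SynthesisMonitor(period_ms=16.6, n_periods=2)
m.on_synthesis(0.0)
assert not m.should_grow_queue(16.6)    # only one period has elapsed
assert m.should_grow_queue(33.3)        # two periods with no synthesis
```

When should_grow_queue() returns True, the synthesis thread would send the request that makes the application main thread increase MaxBufferCount by 1.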
  • the synthesizer can first query the MaxBufferCount of the cache queue, and then determine the increase based on the MaxBufferCount.
  • the synthesizer can call a preset query function to query the application main thread for the MaxBufferCount of the cache queue.
  • the preset query function can be IGraphicBufferConsumer, and a getMaxBufferCount interface is added to IGraphicBufferConsumer to dynamically query the maximum value.
  • the synthesizer, acting as the consumer, calls the IGraphicBufferConsumer::getMaxBufferCount() function to query the application main thread through Binder.
  • the application main thread may obtain a cache from other caches not used by the synthesis thread and the rendering thread, and set it as a cache that can be used by the synthesis thread and the rendering thread, thereby increasing the size of the cache queue.
  • S1204 The application main thread adds an available cache to the cache queue, so that the maximum cache quantity of the cache queue increases by 1.
  • the caches in the electronic device are occupied by various threads to perform corresponding operations.
  • some caches can be used by the synthesis thread and the rendering thread to implement the drawing, rendering, and synthesis of image frames; these caches form the cache queue of this embodiment. Other caches are not allowed to be used by the synthesis thread and the rendering thread; this embodiment calls them unavailable caches.
  • the unavailable caches include empty caches and occupied caches. The application main thread can obtain an empty cache from the unavailable caches and add it to the cache queue of this embodiment, thereby increasing the MaxBufferCount of the cache queue by one.
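The S1203-S1204 growth step can be sketched as moving one empty cache from the pool that the rendering and synthesis threads may not normally use into the cache queue. The class and field names are illustrative.

```python
class GrowableCacheQueue:
    """Cache queue whose MaxBufferCount can be raised by one when the
    application main thread donates an empty, unoccupied cache."""
    def __init__(self, max_buffer_count, free):
        self.max_buffer_count = max_buffer_count
        self.free = list(free)

    def grow(self, unavailable_pool):
        # Take one empty cache from the unavailable pool, if any exists.
        if unavailable_pool:
            self.free.append(unavailable_pool.pop())
            self.max_buffer_count += 1
            return True
        return False

q = GrowableCacheQueue(max_buffer_count=3, free=[])   # all 3 buffers occupied
pool = ["spare_buffer"]                               # empty unavailable cache
assert q.grow(pool)
assert q.max_buffer_count == 4 and len(q.free) == 1   # one free buffer again
```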
  • S1205 The Vsync thread of the synthesizer sends a Vsync_APP signal to the application main thread.
  • the Vsync thread generates a Vsync_APP signal according to the frame interval and sends it to the application main thread; when the Vsync_APP signal arrives, the application main thread starts to execute the drawing and rendering operations of the image frame.
  • the main application thread can obtain the system time for drawing the current image frame, and measure, calculate, lay out and draw the displacement of the current frame image based on the dynamic effect curve and the system time, so as to obtain the drawn image frame.
  • the image frame here can be image frame 3.
  • the period of the application main thread drawing image frame 3 is the next period of the period of drawing image frame 2, and the period of the application main thread drawing image frame 3 can be considered as a normal period.
  • the displacement interval of image frame 3 is the same as the displacement interval of image frame 2. Alternatively, the displacement interval of image frame 3 is equal to the preset displacement interval threshold.
  • after the application main thread completes the measurement, layout, and drawing of image frame 3, it wakes up the application rendering thread to perform the rendering operation of the drawn image frame 3.
  • the rendering thread dequeues a free buffer from the cache queue to store the rendered image frame 3.
  • the free buffer is a buffer added by the application main thread. Before the application main thread adds a buffer to the cache queue, all buffers in the cache queue have been occupied.
  • the application rendering thread obtains the last free buffer in the cache queue to store the rendered image frame 3.
  • the application main thread can update the status of the buffer in the buffer queue, and update the status of the buffer from free to dequeued.
  • the rendering thread stores the rendered image frame into a cache, and the application main thread sends a response to the synthesis thread.
  • the rendering thread, through interaction with the application main thread, queues the buffer containing the rendered image frame 3 into the cache queue, and the application main thread updates the state of the cache from dequeued to queued.
  • the application main thread sends a response to the compositing thread, so that the compositing thread can request the queued buffer from the cache queue to perform compositing operations on the rendered image frame.
  • the composition thread receives the response sent by the application main thread and records the time when the application main thread executes the queue buffer.
  • the time of the queue buffer can also represent the time when the rendering thread executes the rendering operation.
  • the synthesis thread may also obtain the queue buffer time from the application main thread, thereby recording the obtained queue buffer time.
  • the synthesis thread records the time of each queue-buffer operation in the cache queue, and by comparing the times of two adjacent queue-buffer operations, it can determine whether the rendering thread performs the rendering operation normally according to the cycle corresponding to the frame rate. If the time difference between two adjacent queue-buffer operations matches the frame interval, the rendering thread is considered to be performing the rendering operation normally according to the cycle, indicating that the consumption and production of buffers in the application's cache queue are balanced, that is, the rendering thread can dequeue free buffers from the cache queue for use. If the time difference is greater than the frame interval, the rendering thread is considered abnormal, or the rendering thread has completed the rendering operation of all image frames of the current animation; in this case, the production of buffers in the cache queue is less than the consumption.
  • the synthesis thread can dynamically adjust the number of buffers in the cache queue, for example, reducing the MaxBufferCount of the cache queue.
  • Phase 4 The synthesizer performs synthesis and interacts with HWC and applications:
  • the Vsync thread of the synthesizer sends a Vsync_SF signal to the synthesis thread.
  • after a frame interval, the Vsync thread generates a Vsync_SF signal and sends it to the synthesis thread. After receiving the Vsync_SF signal, the synthesis thread determines whether to perform a synthesis operation of the image frame.
  • the synthesis thread starts synthesis, and after synthesis is completed, the synthesized image frame is sent to the HWC for display.
  • the synthesis thread determines to perform the synthesis operation.
  • the synthesis thread obtains the rendered image frame 1 from the cache queue through interaction with the application main thread, performs a synthesis operation on the rendered image frame 1, and sends the synthesized image frame 1 to the HWC for display.
  • the composition thread can acquire a rendered image frame (queued buffer) from the cache queue in a FIFO manner; or, the composition thread can acquire a queued buffer from the cache queue in other agreed ways.
  • the synthesis thread acquires a queued buffer through the application main thread, and the application main thread can update the state of the cache from queued to acquired.
  • the HWC returns the cache information of the released cache to the composition thread.
  • the HWC displays the composite image frame 1 sent by the composition thread, and releases the cache of the previous image frame before the end of displaying the composite image frame 1. After releasing the cache occupied by the previous image frame, the HWC returns the cache information of the cache to the composition thread.
  • the composition thread returns the cache information to the application main thread through a callback function, and the application main thread performs an update operation on the cache status in the cache queue based on the cache information.
  • Phase 5 The synthesis thread determines whether to adjust the number of cache queues:
  • the Vsync thread sends a Vsync_SF message to the synthesis thread.
  • after a frame interval, the Vsync thread generates a Vsync_SF signal and sends it to the synthesis thread.
  • the synthesis thread determines whether to perform a synthesis operation of the image frame.
  • S1502 The synthesis thread starts synthesis, and after synthesis is completed, the synthesized image frame is sent to the HWC.
  • the composition thread determines to perform the composition operation.
  • the composition thread interacts with the application main thread, obtains the rendered image frame 2 from the cache queue, performs the composition operation on the rendered image frame 2, and sends the composite image frame 2 to the HWC for display.
  • the composition thread may acquire a queued buffer from the cache queue in a FIFO acquisition manner; or, the composition thread may acquire a queued buffer from the cache queue in other agreed manners.
  • the compositor stops the operation of increasing the MaxBufferCount of the cache queue.
  • the application main thread can update the state of the cache from queued to acquired.
  • the synthesis thread obtains the time of the last queue-buffer operation and calculates the difference between the current system time and that time. When the difference is greater than or equal to the preset threshold, it decides to dynamically reduce the maximum cache quantity of the cache queue.
  • the synthesis thread determines whether it is necessary to adjust the MaxBufferCount of the cache queue according to the time of each queue buffer in the cache queue recorded in S1307 above. Adjusting the MaxBufferCount is actually adjusting the number of free buffers in the cache queue.
  • the synthesis thread can obtain the current system time and the time of the last queue buffer, calculate the time difference between the current system time and the time of the last queue buffer, and if the time difference is greater than the interval of two frames, it is determined that the rendering thread has lost frames and has lost two image frames. In this case, the synthesis thread can generate a request to reduce the MaxBufferCount of the cache queue and send the request to the main thread of the application to reduce the MaxBufferCount.
  • the composition thread may also determine whether the number of queued buffers in the cache queue increases to determine whether to dynamically reduce the number of free buffers in the cache queue. For example, if the composition thread determines that the number of queued buffers in the cache queue is no longer increasing, it determines that the rendering thread has not performed a rendering operation. In this case, the composition thread may also generate a request to reduce the MaxBufferCount of the cache queue and send the request to the application main thread to reduce the MaxBufferCount.
  • the composition thread can call a preset query function to query the status of each buffer in the cache queue from the application main thread, so as to determine whether the number of queued buffers has increased.
  • a getQueuedBufferCount interface can be added to IGraphicBufferConsumer to dynamically query the number of queued buffers.
  • the composition thread acts as a consumer and calls the IGraphicBufferConsumer::getQueuedBufferCount() function to query the application main thread through Binder.
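The two decrease criteria described above (a queue-buffer gap of at least two frame intervals, or a queued-buffer count that has stopped growing) can be sketched as two predicates. These function names are hypothetical; the real decision runs inside the compositor and queries the count over Binder.

```cpp
#include <cassert>
#include <cstdint>

// Criterion 1: now - lastQueueTime >= 2 frame intervals
// -> the rendering thread has dropped at least two image frames.
bool decreaseByTimeout(int64_t nowMs, int64_t lastQueueTimeMs,
                       int64_t frameIntervalMs) {
    return (nowMs - lastQueueTimeMs) >= 2 * frameIntervalMs;
}

// Criterion 2: the queued-buffer count is no longer increasing
// -> the rendering thread performed no rendering operation this cycle.
bool decreaseByQueuedCount(int currentQueued, int previousQueued) {
    return currentQueued <= previousQueued;
}
```

When either predicate holds, the composition thread generates a request to reduce the MaxBufferCount of the cache queue and sends it to the application main thread.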
  • Stage 6 The stage where the synthesizer dynamically adjusts the number of cache queues:
  • the Vsync thread sends a Vsync_SF message to the synthesis thread.
  • after a frame interval, the Vsync thread generates a Vsync_SF signal and sends it to the synthesis thread. After the Vsync_SF signal arrives, the synthesis thread determines whether to perform the synthesis operation of the image frame.
  • S1602 The synthesis thread starts synthesis, and after synthesis is completed, the synthesized image frame is sent to the HWC.
  • the synthesis thread determines to perform the synthesis operation.
  • the synthesis thread interacts with the application main thread, obtains the rendered image frame 3 from the cache queue, performs the synthesis operation on the rendered image frame 3, and sends the synthesized image frame 3 to the HWC for display.
  • the composition thread may acquire a buffer (queued buffer) of a rendered image frame from the cache queue in a FIFO acquisition manner; or, the composition thread may acquire a queued buffer from the cache queue in other agreed manners.
  • the compositor stops increasing MaxBufferCount.
  • the application main thread can update the state of the cache from queued to acquired.
  • the HWC returns the cache information of the released cache to the composition thread.
  • the HWC displays the composite image frame 2 sent by the composition thread, and releases the buffer of the image frame 1 before the end of displaying the composite image frame 2. After releasing the buffer occupied by the image frame 1, the HWC returns the buffer information of the buffer to the composition thread.
  • S1604 The synthesis thread returns the cache information to the application main thread through the callback function, and sends a request to reduce the maximum cache quantity of the cache queue by 1.
  • the composition thread returns the cache information of the cache to the application main thread through the callback function, and the application main thread performs a status update operation of the cache in the cache queue according to the cache information of the cache.
  • the application main thread receives a request sent by the composition thread to reduce the MaxBufferCount of the cache queue by one, and the application main thread reduces the MaxBufferCount of the cache queue by 1.
  • S1605 The application main thread removes the empty cache in the cache queue, and updates the removed cache to an unavailable cache.
  • the application main thread can remove one free buffer from the cache queue: it destroys the free buffer, releases the graphic buffer, and updates the removed buffer to an unavailable cache. Becoming an unavailable cache means that the buffer is no longer allowed to be used by the rendering thread and the synthesis thread, so the MaxBufferCount of the cache queue in this embodiment is reduced by 1.
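The removal step in S1605 can be sketched as follows. This is an illustrative model under assumed names (`AdjustableCacheQueue`, `removeFreeBuffer`); the key invariant it demonstrates is that only a free buffer may be destroyed, and each removal lowers MaxBufferCount by exactly 1.

```cpp
#include <cassert>

// Minimal model of S1605: destroy one free buffer, marking it unavailable
// to the rendering and synthesis threads; buffers in use are never removed.
class AdjustableCacheQueue {
public:
    AdjustableCacheQueue(int maxCount, int freeCount)
        : maxBufferCount_(maxCount), free_(freeCount) {}

    int maxBufferCount() const { return maxBufferCount_; }
    int freeCount() const { return free_; }

    // Returns false (and changes nothing) when no free buffer exists.
    bool removeFreeBuffer() {
        if (free_ == 0) return false;  // only free buffers may be destroyed
        --free_;
        --maxBufferCount_;
        return true;
    }

private:
    int maxBufferCount_;
    int free_;
};
```

A queued, dequeued, or acquired buffer still holds image data in flight, which is why the sketch refuses to shrink the queue when the free count is zero.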
  • when the composition thread interacts with the application main thread without performing composition operations, it increases the MaxBufferCount of the cache queue, that is, increases the number of free buffers in the cache queue. This ensures that even when the composition thread does not composite (that is, the consumer of the cache queue does not consume), the rendering thread can still obtain free buffers to store rendered image frames (the producer of the cache queue still has free buffers to use), thereby achieving a balance between the production and consumption of the cache queue. The rendering thread will not be unable to obtain free buffers for production because of non-consumption, which would otherwise affect the rendering thread's rendering operation and the application main thread's drawing operation.
  • the application main thread can normally draw each image frame according to the frame interval, and the rendering thread can normally render each image frame according to the frame interval.
  • the main thread of the application can normally draw each image frame of the animation. Therefore, the problem of too long a time interval between two adjacent image frames drawn by the main thread of the application in the prior art, which causes visual stuttering, will not occur. This improves the smoothness of the animation display and avoids the problems of stuttering and frame loss.
  • the synthesis thread determines whether the rendering thread works normally according to the frame interval. If it is determined that the rendering thread does not perform the rendering operation for at least one cycle, it generates a request to reduce the MaxBufferCount of the cache queue, and interacts with the application main thread to reduce the MaxBufferCount of the cache queue. That is, the number of free buffers in the cache queue is reduced, so that the redundant buffers in the cache queue can be released in time. The released buffers can be used for other operations to improve the utilization of the buffer.
  • FIG15 shows a timing diagram of the change of MaxBufferCount in the cache queue during the image frame drawing, rendering and synthesis process in combination with the image processing method provided in the embodiment of the present application.
  • the dynamic adjustment process of MaxBufferCount of the cache queue in the present embodiment is further explained. According to each segmented area, the cycle in which frame 11 is located is considered to be the first cycle. In this example, the MaxBufferCount of the cache queue is 4.
  • the number of queued buffers in the cache queue is 2, that is, there are 2 free buffers and 2 queued buffers in the cache queue.
  • the application main thread draws image frame 11, and the calculated displacement interval of image frame 11 is 16.
  • the rendering thread is awakened to perform the rendering of image frame 11.
  • after the rendering thread completes the rendering of image frame 11, it interacts with the application main thread to dequeue a free buffer from the cache queue and stores the rendered image frame 11 in it.
  • the number of queued buffers in the cache queue increases by 1, and the number of free buffers decreases by 1.
  • the synthesizer obtains the rendered image frame 10 according to the order of the image frames and performs a synthesis operation on the image frame 10.
  • the display driver displays the synthesized image frame 9 on the screen.
  • the number of queued buffers in the cache queue is 3, that is, there are 1 free buffer and 3 queued buffers in the cache queue.
  • the application main thread draws image frame 12, and the calculated displacement interval of image frame 12 is 16. After the application main thread completes the drawing of image frame 12, it wakes up the rendering thread to perform the rendering of image frame 12. After the rendering thread completes the rendering of image frame 12, it interacts with the application main thread to obtain the last free buffer from the cache queue and stores the rendered image frame 12 in it. The number of queued buffers in the cache queue increases by 1, and the number of free buffers decreases by 1. At this time, the number of queued buffers in the cache queue has reached MaxBufferCount.
  • the synthesis thread does not synthesize.
  • the display driver displays the synthesized image frame 9 on the screen.
  • the number of queued buffers in the cache queue has reached MaxBufferCount, which is 4; that is, all buffers in the cache queue are occupied.
  • the main thread of the application draws image frame 13, and the calculated displacement interval of image frame 13 is 16.
  • MaxBufferCount of the cache queue is dynamically increased by 1.
  • MaxBufferCount is 5.
  • the rendering thread can obtain the newly added free buffer from the cache queue and store the rendered image frame 13 in it.
  • the rendering thread and the main thread of the application are both in normal state.
  • the compositor does not synthesize.
  • the number of queued buffers in the cache queue has reached MaxBufferCount, which is 5. All buffers in the cache queue are occupied.
  • the main thread of the application draws image frame 14, and the calculated displacement interval of image frame 14 is 16.
  • the number of buffers in the cache queue is dynamically increased, and the number of MaxBufferCount is increased by 1. At this time, MaxBufferCount is 6.
  • the rendering thread can obtain the newly added free buffer from the cache queue and store the rendered image frame 14 in it. The rendering thread and the application main thread are both in the normal state.
  • the display driver displays the synthesized image frame 9 on the screen.
  • the synthesis thread obtains a queued buffer from the cache queue to perform synthesis operations in this cycle, for example, obtains image frame 11 to perform synthesis operations.
  • the number of queued buffers in the cache queue decreases by one, to 5.
  • the display driver displays the received synthesized image frame 10, and the displacement interval of the image frame 10 is 16.
  • the buffer of the synthesized image frame 9 is released before the display of the synthesized image frame 10 ends, and the cache information of the buffer is returned to the synthesis thread.
  • the synthesis thread returns the cache information to the application main thread, and the application main thread updates the cache status of the cache queue according to the cache information. At this time, there is a free buffer in the cache queue.
  • the application main thread draws image frame 15, and the calculated displacement interval of image frame 15 is 16.
  • the rendering thread interacts with the application main thread to dequeue a free buffer and store the rendered image frame 15 in it.
  • the composition thread obtains a queued buffer from the cache queue for composition operation. For example, it obtains image frame 12 for composition operation. During this cycle, the number of queued buffers in the cache queue decreases by one to 5.
  • the display driver displays the composite image frame 11, and the displacement interval of the image frame 11 is 16.
  • the buffer of the composite image frame 10 is released, and the cache information of the buffer is returned to the composition thread.
  • the composition thread returns the cache information to the application main thread, and the application main thread updates the cache status of the cache queue according to the cache information. At this time, there is a free buffer in the cache queue.
  • the application main thread does not draw image frames, and the rendering thread does not perform rendering operations.
  • the synthesis thread interacts with the main application thread to reduce the MaxBufferCount of the cache queue, reducing the number of MaxBufferCount by 1. At this time, MaxBufferCount is 5.
  • the composition thread obtains a queued buffer from the cache queue for composition operation, for example, obtains image frame 13 for composition operation. During this cycle, the number of queued buffers in the cache queue decreases by one to 4.
  • the display driver displays the composite image frame 12, and the displacement interval of the image frame 12 is 16.
  • the buffer of the composite image frame 11 is released, and the cache information of the buffer is returned to the composition thread.
  • the composition thread returns the cache information to the application main thread, and the application main thread updates the cache status of the cache queue according to the cache information. At this time, there is a free buffer in the cache queue.
  • the application main thread does not draw image frames, and the rendering thread does not render image frames.
  • the synthesis thread interacts with the main application thread to reduce the MaxBufferCount of the cache queue, reducing the number of MaxBufferCount by 1. At this time, MaxBufferCount is 4.
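The MaxBufferCount trajectory in the FIG. 15 walkthrough above (4 → 5 → 6 while the compositor is idle and the renderer keeps producing, then 6 → 5 → 4 while the renderer is idle) can be replayed with a single adjustment rule. This is a hedged simplification for illustration; the function name and boolean flags are assumptions, not the patent's interface.

```cpp
#include <cassert>

// One adjustment decision per cycle:
//  +1 when the renderer must queue a frame into an already-full queue,
//  -1 when the renderer stays idle while the compositor consumes,
//  unchanged otherwise.
int adjustMaxBufferCount(int maxCount, bool queueIsFull, bool rendererActive) {
    if (rendererActive && queueIsFull) return maxCount + 1;  // e.g. frames 13, 14
    if (!rendererActive) return maxCount - 1;                // idle cycles
    return maxCount;
}
```

Applying the rule cycle by cycle reproduces the 4, 5, 6, 5, 4 sequence described in the walkthrough.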
  • the MaxBufferCount of the cache queue is dynamically increased, so that the rendering thread can continuously and stably obtain the free buffer to store the rendered image frames, and also ensure that the application main thread can calculate the displacement of the image frames of each cycle according to the normal frame interval.
  • the normal operation of the application main thread and the rendering thread ensures that the displacement interval of adjacent image frames drawn by the application main thread remains unchanged, so that the motion effects formed by multiple adjacent image frames after synthesis and display are coherent and smooth, avoiding the problem of visual freeze.
  • the MaxBufferCount of the cache queue can be dynamically reduced to release redundant buffers in time and improve buffer utilization.
  • the free cache objects in the cache queue can be dynamically increased, and the application process can continue to obtain free cache objects normally to store rendered image frames. This avoids the problem in the prior art that the application process cannot obtain a free cache object to store a rendered image frame, does not perform the rendering operation of the next image frame, and consequently drops a frame when drawing the next image frame.
  • with the image processing method provided by the present application, there is always at least one free cache object in the cache queue that can be used by the application process, avoiding the frame loss caused by the application process not performing the drawing and rendering operation of an image frame because there is no free cache object in the cache queue, thereby solving the problem of visual stuttering caused by frame loss after the image frame is sent for display.
  • Some embodiments of the present application provide an electronic device, which may include: a memory, a display screen, and one or more processors.
  • the display screen, the memory, and the processor are coupled.
  • the memory is used to store computer program code, and the computer program code includes computer instructions.
  • the processor executes the computer instructions, the electronic device may perform the various functions or steps performed by the electronic device in the above method embodiment.
  • the structure of the electronic device can refer to the structure of the electronic device 100 shown in Figure 4.
  • the present application also provides a chip system (e.g., a system on a chip (SoC)).
  • the chip system includes at least one processor 701 and at least one interface circuit 702.
  • the processor 701 and the interface circuit 702 can be interconnected through a line.
  • the interface circuit 702 can be used to receive data from other devices (e.g., a memory of an electronic device). For example, the interface circuit 702 may be used to receive a signal from the memory of the electronic device.
  • the interface circuit 702 may be used to send a signal to another device (such as a processor 701 or a camera of an electronic device).
  • the interface circuit 702 may read an instruction stored in the memory and send the instruction to the processor 701.
  • the electronic device may perform the various steps in the above embodiments.
  • the chip system may also include other discrete devices, which are not specifically limited in the embodiments of the present application.
  • An embodiment of the present application also provides a computer-readable storage medium, which includes computer instructions.
  • the computer instructions When the computer instructions are executed on the above-mentioned electronic device, the electronic device executes each function or step executed by the electronic device 100 in the above-mentioned method embodiment.
  • the embodiment of the present application further provides a computer program product, which, when executed on a computer, enables the computer to execute the functions or steps executed by the electronic device 100 in the above method embodiment.
  • the computer may be the above electronic device 100.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the modules or units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another device, or some features can be ignored or not executed.
  • Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may be one physical unit or multiple physical units, that is, they may be located in one place or distributed in multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the present embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • the technical solution of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.


Abstract

The present application discloses an image processing method and an electronic device. The method includes: during the period from the launch to the exit of a first application on the electronic device, the electronic device performs drawing, rendering, and composition operations on a first image frame and a second image frame of the first application. The application process draws and renders the first image frame within the drawing-and-rendering cycle of the first image frame, and stores the resulting first image frame in a free cache object of the cache queue. When the composition thread does not perform a composition operation within the composition cycle of the first image frame, the composition thread sends a first adjustment request to the application process, so that the application process increases the number of free cache objects in the cache queue; the application process can then store the drawn and rendered second image frame in a free cache object of the cache queue. This method prevents the application process from dropping frames while drawing and rendering image frames, thereby solving the problem of stuttering in image frame display.

Description

Image processing method and electronic device
This application claims priority to Chinese patent application No. 202211253545.4, entitled "Image processing method and electronic device", filed with the China National Intellectual Property Administration on October 13, 2022, the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the field of terminal technology, and in particular to an image processing method and an electronic device.
Background
With the development of terminal technology, the interaction performance between various terminals (such as mobile phones) and users is increasingly optimized. For example, a terminal can improve the user experience by providing animations corresponding to operations. An animation is a dynamic display effect formed by the continuous display of multiple image frames. Displaying a picture on the screen of an electronic device usually involves drawing, rendering, and composition.
The application process of the electronic device is responsible for drawing and rendering each image frame of the displayed picture, and the composition thread of the electronic device is responsible for compositing the drawn and rendered image frames and sending them for display.
However, in some cases, the application process cannot draw image frames normally and frames are dropped, so that the image frames sent for display by the compositor exhibit display stuttering.
Summary
Embodiments of the present application provide an image processing method and an electronic device. In the process of drawing the image frames of an animation, the method avoids the problem of an excessively large displacement caused by an excessively large displacement interval between two drawn image frames, ensures the coherent display of the image frames of the animation, and makes the display effect smoother and more fluent.
To achieve the above purpose, the embodiments of the present application adopt the following technical solutions:
In a first aspect, an image processing method is provided, the method including:
The electronic device receives a first operation of a user on the touch screen of the electronic device; the electronic device starts a first application in response to the first operation; the electronic device receives a second operation of the user on the touch screen of the electronic device; the electronic device exits the first application in response to the second operation.
During the period from the launch of the first application to the exit of the first application, the electronic device performs drawing, rendering, and composition operations on a first image frame and a second image frame of the first application.
The drawing, rendering, and composition operations performed by the electronic device on the first image frame and the second image frame of the first application include:
The application process draws and renders the first image frame within the drawing-and-rendering cycle of the first image frame, and stores the resulting first image frame in a free cache object of the cache queue; when the composition thread does not perform a composition operation within the composition cycle of the first image frame, the composition thread sends a first adjustment request to the application process; based on the first adjustment request, the application process increases the number of free cache objects in the cache queue, so that after the application process draws and renders the second image frame within the drawing-and-rendering cycle of the second image frame, it stores the resulting second image frame in a free cache object of the cache queue.
The drawing-and-rendering cycle of the second image frame is later than that of the first image frame, and the drawing start time of the second image frame differs from that of the first image frame by N cycles, where N is a positive integer.
In the present application, when the compositor does not composite, the free cache objects of the cache queue can be dynamically increased, and the application process can continue to obtain free cache objects normally to store rendered image frames. This avoids the problem in the prior art that the application process cannot obtain a free cache object to store a rendered image frame, does not perform the rendering operation of the next image frame, and consequently drops a frame when drawing the next image frame. With the image processing method provided by the present application, there is always at least one free cache object in the cache queue available to the application process, which avoids frame loss caused by the application process not performing the drawing and rendering operation of an image frame because there is no free cache object in the cache queue, thereby solving the problem of visual stuttering after the image frame is sent for display due to frame loss. By ensuring that the application process has enough free cache objects to store rendered image frames, the fluency of the animation display of the image frames sent for display is guaranteed.
In a possible implementation of the first aspect, the first image frame and the second image frame are image frames during the launch process of the first application.
In the present application, during the launch of an application by the electronic device, image processing is performed on the image frames of the launch animation. When the composition thread does not perform composition operations, the number of free cache objects in the cache queue is dynamically increased, so that there is always at least one free cache object in the cache queue available to the application process. In the drawing-and-rendering cycle of each image frame, the application process can store the rendered image frame in a free cache object of the cache queue, which avoids possible frame loss during the drawing and rendering of image frames by the application process, solves the problem of display stuttering of the image frames sent for display caused by frame loss, and improves the display fluency of the launch animation during application launch.
In a possible implementation of the first aspect, the first image frame and the second image frame are image frames during the process in which the electronic device switches from a first refresh rate to a second refresh rate during the launch of the first application; the first refresh rate is lower than the second refresh rate.
The first refresh rate may be 60 Hz, that is, 60 frames are refreshed per second, one frame every 16.6 milliseconds. The second refresh rate may be 90 Hz, that is, 90 frames are refreshed per second, one frame every 11.1 milliseconds.
In the refresh-rate switching scenario, the image processing method provided by this embodiment can effectively solve the problem that, during application launch, the switching of the refresh rate causes the processing cycles of the composition thread and the application process to become asynchronous, the composition thread considers that the image frames to be composited have accumulated and does not perform composition operations, and as a result no cache object in the cache queue is released.
In a possible implementation of the first aspect, the first image frame and the second image frame are image frames after the launch of the first application is completed.
In the present application, for different scenarios, after the launch of the first application is completed, the image processing method provided by this embodiment can still dynamically increase the number of free cache objects in the cache queue when the composition thread does not perform composition operations. This solves the problem of display stuttering of the image frames sent for display caused by frame loss after the launch of the first application is completed, for example, during the display of images inside the first application, and improves the display fluency of the images displayed inside the application.
In a possible implementation of the first aspect, the first image frame and the second image frame are image frames during the exit process of the first application.
In the present application, during the exit of an application by the electronic device, image processing is performed on the image frames of the exit animation. When the composition thread does not perform composition operations, the number of free cache objects in the cache queue is dynamically increased, so that there is always at least one free cache object in the cache queue available to the application process. In the drawing-and-rendering cycle of each image frame, the application process can store the rendered image frame in a free cache object of the cache queue, which avoids possible frame loss during the drawing and rendering of image frames by the application process, solves the problem of display stuttering of the image frames sent for display caused by frame loss, and improves the display fluency of the exit animation during application exit.
In a possible implementation of the first aspect, the first image frame and the second image frame are image frames during the process in which the electronic device switches from a first refresh rate to a second refresh rate during the exit of the first application; the first refresh rate is lower than the second refresh rate.
The first refresh rate may be 60 Hz, that is, 60 frames are refreshed per second, one frame every 16.6 milliseconds. The second refresh rate may be 90 Hz, that is, 90 frames are refreshed per second, one frame every 11.1 milliseconds.
In the refresh-rate switching scenario, the image processing method provided by this embodiment can effectively solve the problem that, during application exit, the switching of the refresh rate causes the processing cycles of the composition thread and the application process to become asynchronous, the composition thread considers that the image frames to be composited have accumulated and does not perform composition operations, and as a result no cache object in the cache queue is released.
In a possible implementation of the first aspect, the drawing-and-rendering cycle of the second image frame is the cycle immediately following that of the first image frame; the drawing start time of the second image frame differs from that of the first image frame by one cycle.
In the present application, the drawing start time of the second image frame differs from that of the first image frame by one cycle. If the composition thread does not perform a composition operation in the composition cycle of the first image frame, the drawing and rendering of the second image frame will be affected. Since the second image frame and the first image frame are adjacent image frames, dropping the second image frame would cause obvious stuttering in the image frame display. Therefore, when the second image frame and the first image frame are adjacent image frames, the effect of the image processing method provided by the present application is more pronounced. When the composition thread does not perform composition operations, the number of free cache objects in the cache queue is dynamically increased, so that there is always at least one free cache object in the cache queue available to the application process. In the drawing-and-rendering cycle of the second image frame, the application process can store the rendered image frame in a free cache object of the cache queue, which avoids possible frame loss during the drawing and rendering of image frames, solves the problem of display stuttering caused by frame loss, and improves the display fluency of the image frames.
In a possible implementation of the first aspect, the first adjustment request includes a first indication value; the first indication value is used to indicate the number of cache objects to be added. Increasing the number of free cache objects in the cache queue includes:
The application process adds the number of free cache objects indicated by the first indication value to the cache queue.
In the present application, the application process can increase the number of free cache objects in the cache queue according to the first indication value. In practical applications, the first indication value may be 1, 2, 3, and so on, and the indication value can be adjusted according to different situations, which makes the dynamic adjustment of the cache queue more effective.
In a possible implementation of the first aspect, the application process adding the number of free cache objects indicated by the first indication value to the cache queue includes:
The application process adds the addresses of the free cache objects indicated by the first indication value to the cache queue in enqueue order.
In the present application, the cache objects in the cache queue have an arrangement order. By adding the addresses of the free cache objects indicated by the first indication value to the cache queue in enqueue order, the application process can ensure that the arrangement order of the existing cache objects in the cache queue is not disturbed.
In a possible implementation of the first aspect, the method further includes:
The composition thread queries the number of all cache objects in the cache queue; if the number of all cache objects reaches the maximum number of cache objects, the composition thread stops sending the first adjustment request for cache objects to the application process.
In the present application, due to the hardware performance requirements of the electronic device, the number of cache objects in the cache queue has a maximum. When the composition thread determines that the number of all cache objects in the cache queue has reached the maximum number of cache objects, it stops sending to the application process the first adjustment request for increasing free cache objects. This ensures the normal operation of the electronic device and avoids abnormalities of the electronic device caused by the inability to further increase the cache objects of the cache queue.
In a possible implementation of the first aspect, after the first image frame is stored in a free cache object of the cache queue, the method further includes:
The composition thread obtains and records the storage time at which the application process stored the first image frame in the target cache object.
In the present application, after the composition thread completes a composition operation of an image frame, the composition thread can record the storage time at which the application process stored the image frame in the target cache object. The composition thread records every storage time, and based on the time difference between successive storage times, it can determine whether the application process has completed the drawing and rendering of an image frame.
In a possible implementation of the first aspect, the method further includes:
When the composition thread performs a composition operation in the composition cycle of the first image frame, the composition thread determines the time difference between the current system time and the last recorded storage time at which an image frame was stored in the target cache object; if the time difference is greater than or equal to a preset time threshold, the composition thread sends a second adjustment request for cache objects to the application process; the application process reduces the number of free cache objects in the cache queue according to the second adjustment request.
Optionally, the composition thread may also determine the frame interval according to the current refresh rate. The preset time threshold may be M frame intervals, where M is 1, 2, 3, and so on.
In the present application, after the composition thread completes a composition operation of an image frame, the composition thread can record the storage time at which the application process stored the image frame in the target cache object. The composition thread can obtain the time difference between the current system time and the last storage time. If the time difference is greater than the preset time threshold, it is determined that the rendering thread has dropped frames. Dropped frames mean that the production speed of the application process is slower than the consumption speed of the composition thread, and there are enough free cache objects in the cache queue for it to use. Alternatively, the application process may have finished drawing the image frames of the current scenario. In these cases, the composition thread can generate a second adjustment request to shrink the cache queue, so as to reduce the number of free cache objects in the cache queue; releasing the free cache objects of the cache queue in time can reduce the occupation of storage resources.
In a possible implementation of the first aspect, the second adjustment request includes a second indication value, and the second indication value is used to indicate the number of cache objects to be removed; reducing the number of free cache objects in the cache queue includes: the application process removes the number of free cache objects indicated by the second indication value from the cache queue.
In the present application, the application process can reduce the number of free cache objects in the cache queue according to the second indication value in the second adjustment request. In practical applications, the second indication value may be 1, 2, 3, and so on, and the indication value can be adjusted according to different situations, which makes the dynamic reduction of the cache queue more effective.
In a possible implementation of the first aspect, the application process removing the number of free cache objects indicated by the second indication value from the cache queue includes: the application process removes the addresses of the free cache objects indicated by the second indication value from the cache queue in dequeue order.
In the present application, the cache objects in the cache queue have an arrangement order. By removing the addresses of the free cache objects indicated by the second indication value from the cache queue in dequeue order, the application process can ensure that the arrangement order of the existing cache objects in the cache queue is not disturbed.
In a possible implementation of the first aspect, the method further includes:
The composition thread queries the number of all cache objects in the cache queue; if the number of all cache objects has been reduced to the minimum number of cache objects, the composition thread stops sending the second adjustment request for cache objects to the application process.
In the present application, due to the rendering performance requirements of the electronic device, the number of cache objects in the cache queue has a minimum. When the composition thread determines that the number of all cache objects in the cache queue has been reduced to the minimum number of cache objects, it stops sending to the application process the second adjustment request for reducing free cache objects. This ensures the normal operation of the electronic device and avoids abnormalities of the electronic device caused by the inability to further reduce the cache objects of the cache queue.
In a possible implementation of the first aspect, the method further includes:
If the composition thread performs a composition operation in the composition cycle of the first image frame, the composition thread obtains the target cache object from the cache queue; the target cache object stores the drawn and rendered first image frame; the composition thread performs a composition operation on the drawn and rendered first image frame.
In the present application, when the composition thread performs the composition operation of the first image frame normally, it normally obtains from the cache queue the cache object in which the rendered first image frame is stored, composites the rendered first image frame, sends the composited first image frame for display, and releases the space of the cache object, so that the cache queue can obtain the released free cache object in time for use by the application process.
第二方面,提供了一种电子设备,该电子设备包括存储器、显示屏和一个或多个处理器;所述存储器、所述显示屏与所述处理器耦合;所述存储器中存储有计算机程序代码,所述计算机程序代码包括计算机指令,当所述计算机指令被所述处理器执行时,使得所述电子设备执行如上述第一方面中任一项所述的方法。
第三方面,提供了一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在电子设备上运行时,使得电子设备可以执行上述第一方面中任一项所述的方法。
第四方面,提供了一种包含指令的计算机程序产品,当其在电子设备上运行时,使得电子设备可以执行上述第一方面中任一项所述的方法。
第五方面,本申请实施例提供了一种芯片,芯片包括处理器,处理器用于调用存储器中的计算机程序,以执行如第一方面的方法。
可以理解地,上述提供的第二方面所述的电子设备,第三方面所述的计算机可读存储介质,第四方面所述的计算机程序产品,第五方面所述的芯片所能达到的有益效果,可参考第一方面及其任一种可能的设计方式中的有益效果,此处不再赘述。
附图说明
图1为本申请实施例提供的一种包括应用的启动动效在手机界面正常显示的示意图;
图2为本申请实施例提供的一种包括离屏滑动动效在手机界面正常显示的示意图;
图3为本申请实施例提供的一种包括应用的启动动效在手机界面异常显示的示意图;
图4为本申请实施例提供的一种电子设备的硬件结构示意图;
图5为本申请实施例提供的一种电子设备的软件结构示意图;
图6为本申请实施例提供的一种多个图像帧正常绘制、渲染、合成、显示的时序图;
图7为本申请实施例提供的一种缓存队列中缓存的状态变化图;
图8为本申请实施例提供的一种多个图像帧异常绘制、渲染、合成、显示的时序图;
图9为本申请实施例提供的一种图像帧绘制、渲染、合成、显示异常过程中多主体交互的时序示意图;
图10为本申请实施例提供的一种图像帧绘制、渲染、合成、显示异常过程中缓存队列中缓存数量变化的时序图;
图11为本申请实施例提供的一种在应用的启动动效场景下图像处理方法流程图;
图12为本申请实施例提供的一种包括应用的退出动效在手机界面正常显示的示意图;
图13为本申请实施例提供的一种在应用的退出动效场景下图像处理方法流程图;
图14为本申请实施例提供的一种图像帧绘制、渲染、合成、显示过程中动态调整缓存队列中缓存数量的多主体交互的时序示意图;
图15为本申请实施例提供的一种图像帧绘制、渲染、合成、显示过程中动态调整缓存队列中缓存数量的时序图;
图16为本申请实施例提供的一种芯片系统的结构示意图。
具体实施方式
在本申请实施例的描述中,以下实施例中所使用的术语只是为了描述特定实施例的目的,而并非旨在作为对本申请的限制。如在本申请的说明书和所附权利要求书中所使用的那样,单数表达形式“一种”、“所述”、“上述”、“该”和“这一”旨在也包括例如“一个或多个”这种表达形式,除非其上下文中明确地有相反指示。还应当理解,在本申请以下各实施例中,“至少一个”、“一个或多个”是指一个或两个以上(包含两个)。术语“和/或”,用于描述关联对象的关联关系,表示可以存在三种关系;例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A、B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。术语“连接”包括直接连接和间接连接,除非另外说明。“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。
在本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
电子设备可以通过显示屏显示基于不同操作所触发的画面的动态显示效果(动效)。有一类动效的不同图像帧的显示位置存在变化,本申请中将这一类动效称为第一类动效。将图像帧的显示位置不存在变化的动效称为第二类动效。
在第一类动效中又存在图像帧的显示位置与系统时间相关的动效，例如应用的启动动效，应用的退出动效，离屏滑动动效等。其中，应用的启动动效指的是应用启动时显示的动效；应用的退出动效指的是应用退出时显示的动效；离屏滑动动效指的是用户使用手指滑动屏幕，手指离开屏幕后操作对象继续移动的动效。在绘制这类动效中的每一个图像帧时，电子设备需要基于绘制当前图像帧的系统时间来计算当前图像帧的显示位置。第二类动效包括游戏场景动效、应用内部场景动效以及其他场景下的跟手动效等。
本申请所提供的图像处理方法可应用于所有的动效场景中。比如,在电子设备显示应用的启动动效的过程中,基于图像处理方法对应用的启动动效的过程中的图像帧进行处理。比如,在电子设备显示应用的退出动效的过程中,基于图像处理方法对应用的退出动效的过程中的图像帧进行处理。比如,在电子设备显示离屏滑动动效的过程中,基于图像处理方法对离屏滑动动效的过程中的图像帧进行处理。比如,在电子设备显示游戏场景动效的过程中,基于图像处理方法对游戏场景动效的过程中的图像帧进行处理。比如,在电子设备显示应用内部场景动效的过程中,基于图像处理方法对应用内部场景动效的过程中的图像帧进行处理。比如,在其他跟手动效的场景下,电子设备均可基于图像处理方法对跟手动效中的图像帧进行处理。
基于本申请提供的图像处理方法对图像帧进行处理,可以有效避免显示动效的图像帧绘制过程中出现图像帧丢帧的情况,进一步有效解决显示动效的图像帧所产生的显示卡顿的问题。
在一些示例中,不同图像帧的显示位置不同,指的是不同的图像帧之间,图像帧指定顶点(比如界面左上顶点)至图像帧原点的距离不同。
示例性的,图1以电子设备是手机为例,示出了一种显示应用启动动效的示意图。参考图1的(a),用户点击手机桌面上应用5的图标,响应于该点击操作,应用5启动,显示应用5的启动动效。应用5的启动动效的显示画面由图1的(b)逐渐显示为图1的(f),应用5的启动动效中的图像帧包括图1的(b)到图1的(f)显示的这5帧图像帧。可以看出,这5帧图像帧的指定顶点(比如界面左上顶点)与图像帧原点之间的距离不同,从图1的(b)至图1的(f),图像帧的指定顶点(比如界面左上顶点)与图像帧原点之间距离逐渐变大,直至图像帧铺满屏幕。
在一些示例中,不同图像帧的显示位置不同,指的是不同的图像帧之间,图像帧指定顶点(比如界面左上顶点)至屏幕原点的距离不同。
示例性的，图2以电子设备是手机为例，示出了一种显示离屏滑动动效的示意图。参考图2的(a)，当前界面为桌面的第0页，当前界面包括应用1、应用2、应用3、应用4和应用5。用户在手机桌面的当前页界面进行左滑操作，响应于该左滑操作，手机以向左滑动动效来显示当前界面的下一页界面，其中，当前界面的下一页界面为桌面的第1页，第1页界面中包括应用6、应用7和应用8。手机响应左滑操作触发的滑动动效的显示画面由图2的(b)逐渐显示为图2的(f)，左滑滑动动效的图像帧包括图2的(b)到图2的(f)显示的这5帧图像帧。可以看出，这5帧图像帧的指定顶点(比如图中空心圆所在顶点)与屏幕原点(比如图中实心圆所在顶点)的距离不同，从图2的(b)至图2的(f)，图像帧的指定顶点与屏幕原点之间的距离逐渐变小，直至距离为0，下一页界面完全显示于屏幕中。本实施例中图像帧指定顶点与屏幕原点的最小距离可以根据实际情况确定。
可以理解的,在一些示例中,不同图像帧的显示位置不同,也可以指的是不同的图像帧之间,图像帧指定顶点至图像帧原点以及图像帧指定顶点至屏幕原点的距离都不同。
本申请下述实施例中,以手机显示应用启动动效为例进行说明。可以理解的,本申请实施例提供的图像处理方法同样适用于其他类型的动效。
本申请实施例中,将图像帧指定顶点(比如界面左上顶点)至图像帧原点的距离称为位移。相邻图像帧之间的位移变化称为位移间隔。
示例性地,应用启动动效中当前图像帧的位移为绘制当前图像帧的系统时间/动效总时间*总位移,其中,动效总时间指的是正常情况下显示动效中所有图像帧的总时间;总位移指的是动效的最后一帧图像的位移。
在另一示例中，当前图像帧的位移y(t)的计算方法还可以为y(t)=y(0)+t*n；其中，y(0)为该动效第一帧图像帧的位移，n为预设的位移间隔。t的计算方式可以为t=currenttime-(currenttime-t0)%帧间隔；这里“%”为取余运算；currenttime为当前时间；t0为该动效第一帧图像帧的绘制时间；帧间隔由电子设备的帧率确定。
在显示应用的启动动效的技术实现过程中,应用的应用进程用于对动效中的每一个图像帧进行绘制渲染,其中,应用进程又包括应用主线程和渲染线程,应用主线程对图像帧进行绘制,渲染线程对图像帧进行渲染。通用的合成线程用于对已渲染的图像帧进行合成。具体地,针对动效的每一个图像帧,应用主线程基于绘制当前图像帧的系统时间,计算绘制当前图像帧的位移,并基于计算得到的位移对当前图像帧进行绘制。在得到已绘制的图像帧之后,渲染线程对已绘制的图像帧进行渲染。在得到已渲染的图像帧之后,合成线程对已渲染的图像帧中的多个图层进行合成。从而合成线程将合成后的图像帧送至显示驱动进行显示。整个过程中,需要应用主线程、渲染线程、合成线程以及显示驱动基于各自对应的触发信号执行相应的操作,从而实现动效中多个图像帧的绘制、渲染、合成以及显示的操作,最终实现动效的多个图像帧连贯性显示。正常情况下,相邻图像帧的位移间隔是固定的。示例性的,如图1所示,从图1的(b)到图1的(f)这5个图像帧中,所有相邻图像帧之间的位移间隔保持不变,在显示应用5的启动动效的过程中,显示画面连贯且流畅。
生成应用启动动效过程中,应用主线程、渲染线程、合成线程以及显示驱动均会在接收到各自对应的触发信号时执行对应的操作。触发信号可以包括用于触发应用主线程执行绘制操作的绘制信号、用于触发合成线程执行合成操作的合成信号等。示例 性地,应用主线程接收到绘制信号,执行基于绘制当前图像帧的系统时间计算绘制当前图像帧的位移,并基于计算得到的位移对当前图像帧进行绘制的操作;在得到已绘制的图像帧之后,应用主线程唤醒渲染线程进行已绘制的图像帧的渲染操作。合成线程接收到合成信号,执行对已渲染的图像帧中的多个图层进行合成的操作;在得到合成图像帧之后,合成线程将合成图像帧发送至显示驱动进行显示。若应用主线程、渲染线程、合成线程以及显示驱动中,任意一方没有按照各自对应的触发信号或触发条件执行相应的操作,可能导致动效中某些图像帧的位移的计算结果与正常情况下的该图像帧的位移存在偏差。例如,渲染线程可能出现不执行已绘制的图像帧的渲染的情况,由于应用主线程和渲染线程为串行线程,渲染线程不执行已绘制的图像帧的渲染操作,反作用在应用主线程中,便会导致应用主线程无法进行新的图像帧的绘制操作。最终结果就导致应用主线程在执行完上一图像帧的绘制操作之后,无法执行当前图像帧的绘制操作。从而应用主线程执行当前图像帧的绘制操作与上一图像帧的绘制操作之间的时间间隔过长,在应用主线程开始绘制当前图像帧时,基于绘制当前图像帧的系统时间计算得到的位移与上一图像帧计算得到的位移之间的位移间隔过大,实际绘制的当前图像帧的位移,与理论上上一图像帧的下一图像帧的位移产生偏差,相邻图像帧之间的位移间隔发生变化,从而导致在显示该启动动效时产生画面突然放大的效果,造成用户视觉上的卡顿。图3给出了一种电子设备为手机,异常显示应用5的启动动效的过程示意图。在一种示例中,在异常的情况下,由于应用主线程绘制当前图像帧时间间隔太久,基于绘制的系统时间计算得到的当前图像帧的位移间隔过大,导致应用主线程丢帧,因此启动动效中包括3个图像帧。参考图3,在与图1处于启动应用5的同样场景下,图3的所示的电子设备的应用主线程未进行图1的(d)所示的图像帧和图1的(e)所示的图像帧的绘制渲染,在经过一定的时间间隔之后,应用主线程基于系统时刻计算了图3的(f)所示的图像帧的绘制位移,因此,图3的(f)所示的图像帧的位移与图3的(c)所示的图像帧的位移之间的位移间隔过大。参考图3的(a),用户点击显示界面的应用5的图标,响应于用户的点击操作,应用5启动,显示启动动效,应用5的启动动效的显示画面由图3的(b)逐渐显示为图3的(f)。其中,应用5的启动动效的相邻图像帧在图3的(c)至图3的(f)之间的位移间隔发生变化,图3的(c)至图3的(f)显示画面有明显的突然放大的变化,导致整个显示过程不连贯,造成用户视觉上的卡顿。
以上场景为对第一类动效进行图像帧的绘制、渲染、合成、送显过程中存在的问题。针对于第二类动效,也可能存在应用进程在绘制渲染图像帧时,由于合成线程不合成或者其他原因,导致的缓存队列中没有空闲缓存对象,应用进程无法执行绘制渲染操作而出现丢帧的情况。一旦应用进程在绘制渲染图像帧过程中出现丢帧,意味着在图像帧的送显过程中,显示画面会出现卡顿等情况。
本申请实施例提供一种图像处理方法，可应用于具有显示屏的电子设备，包括图像帧的绘制、渲染、合成以及送显过程的图像处理，适应于对所有类型的动效中的图像帧的处理场景。通过本实施例提供的图像处理方法，用于图像帧绘制渲染的应用进程可以在每一个绘制周期获得空闲缓存对象，来存储已渲染的图像帧，以进行下一绘制周期的图像帧的绘制渲染。在针对第一类动效的图像处理场景下，通过本实施例提供的图像处理方法，应用进程可以避免由于缓存队列中没有空闲缓存对象导致的应用进程不执行绘制渲染操作的问题，从而避免两个相邻图像帧之间位移间隔过大，导致在送显之后出现视觉卡顿的问题。在针对第二类动效的图像处理场景下，通过本实施例提供的图像处理方法，应用进程依然可以避免由于缓存队列中没有空闲缓存对象导致的应用进程不执行绘制渲染操作的问题，从而在应用进程可能丢帧的情况下，避免图像帧显示过程中出现的卡顿问题。
本申请实施例中的电子设备可以为便携式计算机(如手机)、平板电脑、笔记本电脑、个人计算机(personal computer,PC)、可穿戴电子设备(如智能手表)、增强现实(augmented reality,AR)\虚拟现实(virtual reality,VR)设备、车载电脑、智能电视等包括显示屏的设备,以下实施例对该电子设备的具体形式不做特殊限制。
请参考图4,其示出本申请实施例提供一种电子设备(如电子设备100)的结构框图。其中,电子设备100可以包括处理器310,外部存储器接口320,内部存储器321,通用串行总线(universal serial bus,USB)接口330,充电管理模块340,电源管理模块341,电池342,天线1,天线2,射频模块350,通信模块360,音频模块370,扬声器370A,受话器370B,麦克风370C,耳机接口370D,传感器模块380,按键390,摄像头391以及显示屏392等。其中传感器模块380可以包括压力传感器380A,触摸传感器380B等。
本发明实施例示意的结构并不构成对电子设备100的限定。可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器310可以包括一个或多个处理单元。例如,处理器310可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
上述控制器可以是指挥电子设备100的各个部件按照指令协调工作的决策者。是电子设备100的神经中枢和指挥中心。控制器根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器310中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器310中的存储器为高速缓冲存储器,可以保存处理器310刚用过或循环使用的指令或数据。如果处理器310需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器310的等待时间,因而提高了系统的效率。
在一些实施例中，处理器310可以包括接口。接口可以包括集成电路(inter-integrated circuit，I2C)接口，集成电路内置音频(inter-integrated circuit sound，I2S)接口，脉冲编码调制(pulse code modulation，PCM)接口，通用异步收发传输器(universal asynchronous receiver/transmitter，UART)接口，移动产业处理器接口(mobile industry processor interface，MIPI)，通用输入输出(general-purpose input/output，GPIO)接口，SIM接口，和/或USB接口等。
本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。电子设备100可以采用本发明实施例中不同的接口连接方式,或多种接口连接方式的组合。
电子设备100的无线通信功能可以通过天线1,天线2,射频模块350,通信模块360,调制解调器以及基带处理器等实现。
电子设备100通过GPU,显示屏392,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏392和应用处理器AP。GPU用于执行数学和几何计算,用于图形渲染。处理器310可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏392用于显示图像、视频等。显示屏392包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏392,N为大于1的正整数。
在本实施例中,显示屏可以任意一种类型的显示屏,其可以为触摸屏也可以为非触摸的显示屏。在本实施例中,显示屏392可以显示操作触发的动效,比如,通过点击显示屏中的应用图标触发显示应用的启动动效;比如,通过点击退出应用控件触发显示应用的退出动效;比如,显示屏显示跟手动效、游戏场景动效等。
电子设备100可以通过ISP,摄像头391,视频编解码器,GPU,显示屏以及应用处理器等实现拍摄功能。
外部存储器接口320可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口320与处理器310通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器321可以用于存储计算机可执行程序代码，所述可执行程序代码包括指令。处理器310通过运行存储在内部存储器321的指令，从而执行电子设备100的各种功能应用以及数据处理。内部存储器321可以包括存储程序区和存储数据区。其中，存储程序区可存储操作系统，至少一个功能所需的应用程序(比如声音播放功能，图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据，电话本等)等。此外，内部存储器321可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件，闪存器件，其他易失性固态存储器件，通用闪存存储器(universal flash storage，UFS)等。
电子设备100可以通过音频模块370,扬声器370A,受话器370B,麦克风370C,耳机接口370D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
压力传感器380A用于感受压力信号，可以将压力信号转换成电信号。在一些实施例中，压力传感器380A可以设置于显示屏392。压力传感器380A的种类很多，如电阻式压力传感器，电感式压力传感器，电容式压力传感器等。电容式压力传感器可以包括至少两个具有导电材料的平行板。当有力作用于压力传感器，电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏392，电子设备100根据压力传感器380A检测所述触摸操作强度。电子设备100也可以根据压力传感器380A的检测信号计算触摸的位置。
触摸传感器380B,也称“触控面板”。可设置于显示屏392。用于检测作用于其上或附近的触摸操作。可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型,并通过显示屏392提供相应的视觉输出。
按键390包括开机键,音量键等。按键390可以是机械按键。也可以是触摸式按键。电子设备100接收按键390输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
图5是本发明实施例的电子设备100的软件结构框图。分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,硬件抽象层以及内核层。
应用程序层可以包括一系列应用程序包。如图5所示,应用程序包可以包括相机,图库,日历,电话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
每个应用均包括应用主线程和渲染线程。应用主线程用于在绘制信号到来时,对相应的图像帧进行绘制。渲染线程用于对已绘制的图像帧进行渲染。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图5所示,应用程序框架层可以包括桌面启动器(launcher)、窗口管理器,内容提供器,图像合成系统、视图系统,输入管理器,活动管理器和资源管理器等。
在本实施例中,桌面启动器用于接收用户在电子设备的触摸屏的第一操作,并响应于第一操作启动第一应用;还用于接收用户在电子设备的触摸屏的第二操作,并响应于第二操作退出所述第一应用。其中,第一应用可以为应用程序层包括的任意一个应用。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。这些数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
图像合成系统用于控制图像合成，以及产生垂直同步(vertical synchronization，Vsync)信号。图像合成系统可以为合成器(surface flinger)。
图像合成系统包括:合成线程和Vsync线程。合成线程用于在Vsync信号到来时,触发对已渲染的图像帧进行图像帧中多个图层的合成操作。Vsync线程用于根据Vsync信号请求生成下一个Vsync信号,并将Vsync信号发送至对应的其他线程。
视图系统包括可视控件，例如显示文字的控件，显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成。例如，包括短信通知图标的显示界面，可以包括显示文字的视图以及显示图片的视图。
输入管理器用于管理输入设备的程序。例如,输入系统可以确定鼠标点击操作、键盘输入操作和触摸滑动等输入操作。
活动管理器用于管理各个应用程序的生命周期以及导航回退功能，负责Android主线程的创建以及各个应用程序生命周期的维护。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
Android runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:图像渲染库,图像合成库,输入库,表面管理器(surface manager),媒体库(media libraries),三维图形处理库(例如:openGL ES),2D图形引擎(例如:SGL)等。
图像渲染库,用于二维或三维图像的渲染。
图像合成库,用于二维或三维图像的合成。
可能的实现方式中,应用通过图像渲染库对图像进行绘制渲染,然后应用将绘制渲染后的图像发送至应用的缓存队列中,以使图像合成系统从该缓存队列中按顺序获取待合成的一帧图像,然后通过图像合成库进行图像合成。
输入库用于处理输入设备的库,可以实现鼠标、键盘和触摸输入处理等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。
媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。2D图形引擎是2D绘图的绘图引擎。
硬件抽象层,可以包含多个库模块,库模块如可以为硬件合成器(hwcomposer,HWC)、摄像头库模块等。Android系统可以为设备硬件加载相应的库模块,进而实现应用程序框架层访问设备硬件的目的。设备硬件可以包括如电子设备中的显示屏、摄像头等。
HWC是Android中进行窗口合成和显示的HAL层模块。图像合成系统向HWC提供所有窗口的完整列表，让HWC根据其硬件能力，决定如何处理这些窗口。HWC会为每个窗口标注合成方式，比如，是通过GPU还是通过HWC合成。surface flinger负责先把所有注明GPU合成的窗口合成到一个输出buffer，然后把这个输出buffer和其他窗口一起交给HWC，让HWC完成剩余窗口的合成和显示。
内核层是硬件和软件之间的层。内核层至少包含触控(touch panel,TP)驱动、显示驱动,摄像头驱动,音频驱动和相机驱动等。
硬件可以是音频设备、蓝牙设备、相机设备、传感器设备等。
结合上述图5提供的电子设备的软件结构,来分析现有技术显示动效的过程中,由于动效中的图像帧丢帧或者动效中的相邻帧出现绘制时间间隔过大、位移间隔过大而导致的显示动效的相邻图像帧时产生视觉卡顿的原因。
示例性地,以电子设备显示某一应用的启动动效为例来说明,涉及到电子设备的应用、图像合成系统以及显示驱动之间的交互。其中,图像合成系统可以为合成器。各应用的应用进程包括应用主线程和渲染线程;合成器包括合成线程和Vsync线程。Vsync线程产生Vsync信号,并将Vsync信号发送至对应的其他线程,用于唤醒其他线程执行相应地操作。比如,用户在电子设备上产生启动应用的触摸操作,电子设备的显示驱动将触摸操作对应的输入事件发送至系统服务的输入线程中,输入线程将输入事件发送至应用主线程。应用主线程在接收到输入事件之后,向合成线程请求Vsync信号用于进行图像帧的绘制。应用主线程在Vsync信号到来时,进行应用的启动动效的当前图像帧的绘制等操作,得到已绘制的图像帧。渲染线程对已绘制的图像帧进行渲染操作,得到已渲染的图像帧。合成线程在Vsync信号到来时,进行已渲染的图像帧的多个图层的合成操作,得到已合成的图像帧。进一步地,合成线程还负责将已合成的图像帧发送至HWC,HWC通过显示驱动进行图像帧的显示。
Vsync线程产生的Vsync信号包括Vsync_APP信号、Vsync_SF信号、HW_Vsync信号。Vsync线程产生Vsync_APP信号,将Vsync_APP信号发送至应用主线程,应用主线程在Vsync_APP信号到来时,执行当前图像帧的绘制操作。Vsync线程产生Vsync_SF信号,将Vsync_SF信号发送至合成线程,合成线程在Vsync_SF信号到来时,获取已渲染的图像帧,进行图像帧的合成操作。Vsync线程产生HW_Vsync信号,将HW_Vsync信号发送给电子设备的显示驱动,显示驱动在HW_Vsync信号到来时,刷新显示图像帧。
Vsync线程产生Vsync信号的周期与电子设备的帧率相关。帧率是指在1秒钟时间里刷新图片的帧数,也可以理解为电子设备中图形处理器每秒钟刷新画面的次数。高的帧率可以得到更流畅和更逼真的动画。每秒钟帧数越多,所显示的动作就会越流畅。示例性地,帧率为60Hz意味着1秒钟时间里刷新60帧图片,也即,每16.6毫秒刷新一帧图片,相应地,Vsync线程产生Vsync信号的周期为16.6毫秒。示例性地,帧率为90Hz意味着1秒钟时间里刷新90帧图片,也即,每11.1毫秒刷新一帧图片,相应地,Vsync线程产生Vsync信号的周期为11.1毫秒。
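帧率与Vsync信号周期的对应关系可以用一个简单的换算来复核（示意性代码，数值与上文一致）：

```python
def vsync_period_ms(refresh_hz: float) -> float:
    """由帧率（每秒刷新的帧数）计算Vsync信号周期，单位为毫秒。"""
    return 1000.0 / refresh_hz
```

60Hz对应约16.6毫秒的信号周期，90Hz对应约11.1毫秒的信号周期，与上文所述一致。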
图6给出了一种帧率为60Hz时,电子设备中各线程处理作业的时序图。在帧率为60Hz的显示场景下,该动效包括的总帧数为6,动效的总距离为96,动效的总时间为99.6ms。Vsync线程按照16.6ms为一个周期产生VSYNC_APP信号发送至应用主线程,以唤醒应用主线程和渲染线程执行绘制和渲染操作。正常情况下,按照信号周期,应用主线程对动效中的每一图像帧进行绘制的时间间隔和位移间隔保持不变。在本实施例中,绘制时间间隔为16.6ms,绘制位移间隔为16。
图6中帧间隔与帧率的对应,在帧率为60Hz的情况下,帧间隔为16.6ms。时间戳用于记录应用主线程绘制每一图像帧的时间。绘制图像的位移间隔与帧间隔对应,这里,在帧间隔为16.6ms的情况下,位移间隔为16。VSYNC_APP ID为应用主线程接收到的VSYNC_APP信号的周期序号。绘制渲染指的是应用主线程与渲染线程执行绘制渲染操作的示意。图6中的缓存队列用于存放已渲染的图像帧。渲染线程可以将已渲染的图像帧存入至缓存队列的空缓存中;合成线程可从缓存队列中获取已渲染的图像帧进行合成。也即,渲染线程为缓存队列的生产者,合成线程为缓存队列的消费者。缓存队列具有最大缓存数量。在图6示例中,缓存队列的最大缓存数量为4。图中合成线程表示合成线程执行合成操作的示意。显示指的是显示驱动进行图像帧显示的示意。图6中还包括各显示图像帧的位移、显示的相邻图像帧之间的时间间隔。
电子设备可以创建一个缓存队列(buffer queue),该缓存队列的生产者为渲染线程,消费者为合成线程。缓存队列中可以包括多个缓存(buffer),在缓存队列的初始状态下,每一个缓存均为空缓存(free buffer),空缓存为未被渲染线程或合成线程占用的缓存。一般的,缓存队列的最大缓存数量(MaxBufferCount)由电子设备的帧率确定。示例性地,电子设备的帧率为60Hz的情况下,缓存队列的MaxBufferCount可以为10。缓存队列的MaxBufferCount为经验值。
结合缓存队列的生产者和消费者,对缓存队列中缓存的使用机制进行简单说明。可参考图7所示。渲染线程为缓存队列的生产者,合成线程为缓存队列的消费者,在对图像帧进行绘制、渲染、合成以及显示的过程中,包括:
(1)应用主线程在VSYNC_APP信号到达时执行当前图像帧的绘制操作,得到已绘制的图像帧。
渲染线程对已绘制的图像帧进行渲染操作,并在缓存队列存在空缓存(free buffer)的情况下,从缓存队列中出队(dequeue)一个空缓存(free buffer)来存入已渲染的图像帧,此时该缓存的状态更新为已出队(dequeued),表示该缓存处于被渲染线程获取进行相应操作的状态。
其中,dequeue buffer的过程包括:渲染线程向应用主线程发送dequeue free buffer的请求,应用主线程判断缓存队列中状态为dequeued的buffer的数量是否已达到最大可出队数量。若状态为dequeued的buffer的数量小于最大可出队数量,说明当前缓存队列中还存在free buffer。此时,应用主线程按照free buffer的顺序,查找一个free buffer,将该buffer的状态标记为dequeued状态。在对该buffer进行状态标记之后,将该buffer的缓存信息返回至渲染线程,渲染线程基于缓存信息进行已渲染图像帧的存入操作。缓存信息包括缓存地址、缓存状态标识等。
(2)在完成已渲染的图像帧的存入操作之后,渲染线程将存入已渲染图像帧的缓存入队(queue)至缓存队列中,此时该缓存的状态更新为已入队(queued),表示该缓存处于等待被合成的状态。
其中,queue buffer的过程包括:渲染线程向应用主线程发送queue buffer的请求,该请求中携带了该buffer的缓存信息。应用主线程根据缓存信息更新该缓存的状态为queued。
(3)合成线程在Vsync_SF信号到达时，从缓存队列中请求(acquire)一个存入已渲染图像帧的缓存执行图像帧图层的合成操作，此时，该缓存的状态更新为已被请求(acquired)，表示该缓存处于被合成线程获取进行合成的状态。
其中,acquire buffer的过程包括:合成线程向应用主线程发送acquire buffer请求,应用主线程判断缓存队列中状态为acquired buffer的数量是否大于等于最大可合成数量,如果当前缓存队列中,状态为acquired buffer的数量小于最大可合成数量,应用主线程按照queued buffer的顺序,将第一个queued buffer的缓存信息发送至合成线程,并将该缓存标记为acquired状态。合成线程基于缓存信息对该缓存中的已渲染的图像进行合成操作。
(4)在完成对图像帧的合成操作之后,合成线程可以将已合成的图像帧发送至HWC和显示驱动进行显示,在完成合成图像帧的送显之后,显示驱动释放该缓存。此时,该缓存的状态更新为空闲(free)。
其中,release buffer的过程包括:显示驱动在当前显示周期释放(release)上一显示周期显示的合成图像帧所占用的缓存,并将该缓存的缓存信息返回至合成线程,合成线程将该缓存的缓存信息返回至应用主线程。应用主线程根据缓存信息更新该buffer的状态为free。
可选地,应用主线程还可以通知渲染线程缓存队列中存在空缓存,以使渲染线程在当前绘制渲染周期或下一绘制渲染周期获取空缓存进行已渲染的图像帧的存入操作。
需要说明的是,本实施例中涉及到的缓存队列均为存储在应用的缓存队列。应用主线程负责对缓存队列中各缓存的信息获取以及状态更新。渲染线程需要通过与应用主线程交互来实现dequeue buffer以及queue buffer的操作;合成线程需要通过与应用主线程交互来实现acquire buffer以及release buffer的操作。
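上述(1)至(4)中free→dequeued→queued→acquired→free的状态流转，可以用如下极简的Python模拟来刻画（仅为说明缓存使用机制的示意性代码，并非Android BufferQueue的真实实现；类名与方法名均为本说明引入的假设）：

```python
class BufferQueue:
    """缓存队列的极简模拟：每个缓存只记录一个状态字符串。"""

    def __init__(self, max_buffer_count: int):
        # 初始状态下，每一个缓存均为空缓存(free)
        self.states = ["free"] * max_buffer_count

    def dequeue(self):
        # 渲染线程出队一个空缓存用于存入已渲染图像帧；
        # 无空缓存时返回None，对应渲染线程进入等待的情形
        for i, s in enumerate(self.states):
            if s == "free":
                self.states[i] = "dequeued"
                return i
        return None

    def queue(self, i: int):
        # 渲染线程完成存入操作后，缓存入队等待被合成
        assert self.states[i] == "dequeued"
        self.states[i] = "queued"

    def acquire(self):
        # 合成线程按入队顺序请求一个queued缓存进行合成
        for i, s in enumerate(self.states):
            if s == "queued":
                self.states[i] = "acquired"
                return i
        return None

    def release(self, i: int):
        # 送显完成后，显示驱动释放该缓存，状态回到free
        assert self.states[i] == "acquired"
        self.states[i] = "free"
```

例如，当缓存队列中所有缓存均处于dequeued/queued/acquired状态时，dequeue返回None，对应上文中渲染线程取不到free buffer而等待的情形；合成线程执行acquire并在送显后release，才会重新出现free buffer。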
基于上述缓存队列中缓存的使用机制，来对图6所示出的应用主线程、渲染线程、合成线程以及显示驱动处理作业的时序图进行说明。图6中的缓存队列的MaxBufferCount为4。
1、在Vsync-APP ID为1的周期内，在Vsync_APP信号达到时，应用主线程绘制图像帧4，默认第一帧(即图像帧4)的初始位移为0。在得到已绘制的图像帧4之后，渲染线程对已绘制的图像帧4进行渲染，通过与应用主线程交互，从缓存队列中获取一个free buffer存入已渲染的图像帧4，应用主线程将存入已渲染图像帧4的缓存的状态更新为dequeued。
在Vsync_SF信号到达时,合成线程确定执行合成操作。合成线程通过与应用主线程交互,从缓存队列中获取已渲染的图像帧1进行合成,应用主线程将该图像帧1所占用的缓存的状态更新为acquired。缓存队列中还包括存入已渲染图像帧2的缓存和存入已渲染图像帧3的缓存,其对应的状态均为queued。此时缓存队列中已经没有free buffer了。
2、在Vsync-APP ID为2的周期内，合成线程完成对已渲染的图像帧1的合成操作，并将其合成图像帧1发送至HWC进行送显。HWC将合成图像帧1通过显示驱动进行显示，并在当前周期结束之前释放合成图像帧1所占用的缓存，将该缓存的缓存信息返回至合成线程，合成线程将该缓存的缓存信息返回至应用主线程，应用主线程根据该缓存信息更新该缓存的状态。此时缓存队列中存在一个free buffer。
在Vsync_APP信号达到时,应用主线程绘制图像帧5,经过计算图像帧5的位移为16,图像帧5与图像帧4的位移间隔为16。在得到已绘制的图像帧5之后,渲染线程对已绘制的图像帧5进行渲染,通过与应用主线程交互,从缓存队列中获取一个free buffer存入已渲染的图像帧5,应用主线程将存入已渲染图像帧5的缓存的状态更新为dequeued。
在Vsync_SF信号到达时,合成线程确定执行合成操作。合成线程通过与应用主线程交互,从缓存队列中获取已渲染的图像帧2进行合成操作,应用主线程将该图像帧2所占用的缓存的状态更新为acquired。缓存队列中还包括存入已渲染图像帧3的缓存和存入已渲染图像帧4的缓存,其对应的状态均为queued。缓存队列中还包括正在用于存入已渲染图像帧5的buffer,其对应的状态为dequeued。此时缓存队列中已无free buffer。
3、在Vsync-APP ID为3的周期内，合成线程完成对已渲染图像帧2的合成操作，并将其合成图像帧2发送至HWC进行送显。HWC将合成图像帧2通过显示驱动进行显示，并在当前周期结束之前释放合成图像帧2所占用的缓存，将该缓存的缓存信息返回至合成线程，合成线程将缓存的缓存信息返回至应用主线程，应用主线程根据缓存信息更新该缓存的状态。此时缓存队列中存在一个free buffer。
在Vsync_APP信号达到时,应用主线程绘制图像帧6,经过计算图像帧6的位移为32,图像帧6与图像帧5的位移间隔为16。在得到已绘制的图像帧6之后,渲染线程对已绘制的图像帧6进行渲染,通过与应用主线程交互,从缓存队列中获取一个free buffer存入已渲染的图像帧6,应用主线程将存入已渲染图像帧6的缓存的状态更新为dequeued。
在Vsync_SF信号到达时,合成线程确定执行合成操作。合成线程通过与应用主线程交互,从缓存队列中获取已渲染的图像帧3进行合成操作,应用主线程将该图像帧3所占用的缓存的状态更新为acquired。缓存队列中还包括存入已渲染图像帧4的buffer和存入已渲染图像帧5的buffer,其对应的状态均为queued。缓存队列中还包括正在用于存入已渲染图像帧6的buffer,其对应的状态为dequeued。此时缓存队列中已无free buffer。
在之后的周期,渲染线程与合成线程执行的操作与前三个周期类似,合成线程在每一个周期内都正常执行合成操作,在合成图像帧的下一个周期将合成图像帧送显,显示驱动在当前显示周期结束之前释放该合成图像帧的buffer,并将释放后的buffer的缓存信息返回至合成线程,合成线程将该缓存信息发给应用主线程,应用主线程根据缓存信息更新该缓存的状态。同时,应用主线程通知渲染线程缓存队列中存在free buffer,使得渲染线程可以在每一个周期都获取到缓存队列中最后一个free buffer存入已绘制渲染的下一图像帧。由于应用主线程在每个周期绘制的图像帧其计算得到的与上一图像帧之间的位移间隔保持16不变。相应地,经过渲染、合成以及送显的相邻的合成图像帧之间也保持位移间隔16不变,从而在显示过程中连续多帧呈现连贯性显示。
但是在实际过程中，会因为不同的原因，导致渲染线程无法获取到free buffer进行已渲染图像的存储，渲染线程与应用主线程为串行线程，渲染线程不执行渲染操作，则会影响应用主线程进行下一图像帧的绘制。这就出现应用主线程绘制上一图像帧的时间与绘制当前图像帧的时间间隔过大，从而造成应用主线程计算得到的当前图像帧的位移与上一图像帧的位移间隔过大，在位移间隔过大的相邻图像帧经过渲染、合成以及送显操作之后，显示过程中，相邻图像帧会由于位移间隔过大出现视觉上的卡顿现象。
图8给出了一种由于合成线程没有及时在对应周期内执行图像帧的合成操作，导致渲染线程无法从缓存队列中获取free buffer存入已渲染的图像帧，进而影响了应用主线程对于下一图像帧的计算和绘制，从而导致相邻帧的位移间隔过大的示例。参考图8，缓存队列的MaxBufferCount为3。
1、在Vsync_APP ID为1的周期内,在Vsync_APP信号达到时,应用主线程根据时间戳和总位移距离计算绘制的图像帧1的位移,在16.6ms时,绘制的图像帧1的位移为0。应用主线程在得到已绘制的图像帧1之后,唤醒渲染线程对图像帧1进行渲染。渲染线程通过与应用主线程交互,从缓存队列中获取一个free buffer存入已渲染的图像帧1,应用主线程将正在存入已渲染的图像帧1的缓存的状态更新为dequeued。
在Vsync_SF信号到达时,合成线程确定不执行合成操作。此时缓存队列中包括一个dequeued buffer和两个free buffer。
2、在Vsync_APP ID为2的周期内，渲染线程完成已渲染图像帧1存入缓存的操作，通过与应用主线程交互，渲染线程将存入已渲染图像帧的缓存queue至缓存队列中，应用主线程将该缓存的状态更新为queued。此时缓存队列中包括一个queued buffer和两个free buffer。
在Vsync_APP信号达到时，应用主线程根据时间戳和总位移距离计算绘制的图像帧2的位移，位移间隔为16，时间间隔为16.6ms，在33.2ms时，计算得到的图像帧2的位移为16。应用主线程在得到已绘制的图像帧2之后，唤醒渲染线程对图像帧2进行渲染。渲染线程通过与应用主线程交互，从缓存队列中获取一个free buffer存入已渲染的图像帧2，应用主线程将正在存入已渲染的图像帧2的缓存的状态更新为dequeued。
在Vsync_SF信号到达时，合成线程确定不执行合成操作。此时缓存队列中包括一个queued buffer、一个dequeued buffer以及一个free buffer。
3、在Vsync_APP ID为3的周期内，渲染线程完成已渲染图像帧2存入缓存的操作，通过与应用主线程交互，渲染线程将存入已渲染图像帧的缓存queue至缓存队列中，应用主线程将该缓存的状态更新为queued。此时缓存队列中包括两个queued buffer以及一个free buffer。
在Vsync_APP信号达到时，应用主线程根据时间戳和总位移距离计算绘制的图像帧3的位移，位移间隔为16，时间间隔为16.6ms，在49.8ms时，计算得到的图像帧3的位移为32。应用主线程在得到已绘制的图像帧3之后，唤醒渲染线程对图像帧3进行渲染。渲染线程通过与应用主线程交互，从缓存队列中获取一个free buffer存入已渲染的图像帧3，应用主线程将正在存入已渲染的图像帧3的缓存的状态更新为dequeued。
在Vsync_SF信号到达时，合成线程确定执行合成操作。合成线程通过与应用主线程交互，从缓存队列中请求一个queued buffer进行合成操作，按照缓存顺序，应用主线程将已渲染图像帧1对应的缓存信息返回至合成线程，合成线程根据缓存信息进行已渲染图像帧1的合成操作。
此时,缓存队列中包括一个acquired buffer、一个queued buffer以及一个dequeued buffer。
4、在Vsync-APP ID为4的周期内,在Vsync_APP信号达到时,由于缓存队列中没有free buffer,渲染线程无法从缓存队列中获取free buffer来存入已渲染图像帧,渲染线程不执行渲染操作,同时应用主线程也不执行绘制操作。
在Vsync_SF信号达到时,合成线程确定不执行合成操作。
显示驱动显示接收到的已合成图像帧1。
5、在Vsync-APP ID为5的周期内,在Vsync_SF信号达到时,合成线程确定不执行合成操作。由于没有新的送显图像,显示驱动仍然显示已合成图像帧1。
缓存队列中依然没有free buffer,因此在该周期内应用主线程和渲染线程不执行绘制渲染操作。
6、在Vsync-APP ID为6的周期内,在Vsync_SF信号达到时,合成线程确定不执行合成操作。由于没有新的送显图像,显示驱动仍然显示已合成图像帧1。
缓存队列中依然没有free buffer,因此在该周期内应用主线程和渲染线程不执行绘制渲染操作。
7、在Vsync-APP ID为7的周期内,在Vsync_SF信号达到时,合成线程确定执行合成操作。通过与应用主线程交互,从缓存队列中获取已渲染图像帧2进行合成操作。当前周期没有新的送显图像,显示驱动仍然显示已合成图像帧1。
在本周期显示结束之前,显示驱动释放已合成图像帧1所占用的缓存,并向合成线程返回该缓存的缓存信息,合成线程将该缓存信息返回至应用主线程,应用主线程根据该缓存信息更新该缓存的状态,将该缓存状态更新为free。
此时,缓存队列中包括一个acquired buffer、一个queued buffer以及一个free buffer。
在Vsync_APP信号达到时,应用主线程根据时间戳和总位移距离计算绘制的图像帧4的位移,此时已经过了绘制渲染三个周期,当前周期为第四个周期,时间间隔为16.6ms*(1+3)=66.4ms,因此,计算得到的位移间隔应为16*(1+3)=64,那么最终绘制的图像位移即为32+16*4=96,图像帧4的位移为96。应用主线程在得到已绘制的图像帧4之后,唤醒渲染线程对图像帧4进行渲染。渲染线程通过与应用主线程交互,从缓存队列中获取最后一个free buffer存入已渲染的图像帧4,应用主线程将正在存入已渲染的图像帧4的缓存的状态更新为dequeued。
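上述图像帧4的位移计算可以按如下算式复核（示意性计算，数值取自图8示例）：

```python
# 图8示例：图像帧3在49.8ms绘制、位移为32；其后丢帧3个周期，图像帧4在第4个周期才绘制
frame_interval_ms = 16.6   # 60Hz下的帧间隔
disp_interval = 16         # 正常情况下相邻帧的位移间隔
elapsed_periods = 1 + 3    # 正常的1个周期加上丢帧的3个周期

time_gap_ms = frame_interval_ms * elapsed_periods   # 16.6ms * 4 = 66.4ms
disp_gap = disp_interval * elapsed_periods          # 16 * 4 = 64
disp_frame4 = 32 + disp_gap                         # 32 + 64 = 96
```

可见图像帧4与图像帧3的位移间隔为64，远大于正常情况下的16，这正是后文所述视觉卡顿的来源。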
此时,缓存队列中包括一个acquired buffer、一个queued buffer以及一个dequeued buffer。
8、在Vsync-APP ID为8的周期内,缓存队列中无free buffer,因此应用主线程和渲染线程不执行绘制渲染操作。
在Vsync_SF信号达到时,合成线程确定执行合成操作。通过与应用主线程交互,从缓存队列中获取已渲染图像帧3进行合成。
在本周期内,合成线程将合成后的图像帧2进行送显,显示驱动显示接收到的已合成图像帧2。
在本周期显示结束之前,显示驱动释放合成图像帧2所占用的缓存,并向合成线程返回该缓存的缓存信息,合成线程将缓存信息返回至应用主线程,应用主线程根据该缓存信息更新该缓存的状态。
此时,缓存队列中包括一个free buffer、一个acquired buffer以及一个queued buffer。
由于当前周期时间戳已经超过动效的总时间,认为动效已经绘制完成,因此应用主线程和渲染线程不执行绘制渲染操作。
9、在Vsync-APP ID为9的周期内，在Vsync_SF信号达到时，合成线程确定执行合成操作。通过与应用主线程交互，从缓存队列中获取图像帧4进行合成。合成后的图像帧3进行送显，显示驱动显示接收到的已合成图像帧3。在显示图像帧3的周期结束之前，显示驱动释放合成图像帧3所占用的缓存，并向合成线程返回该缓存的缓存信息，合成线程将缓存信息返回至应用主线程，应用主线程根据该缓存信息更新该缓存的状态。由于当前周期起始时间戳已经超过动效的总时间，因此应用主线程和渲染线程不执行绘制渲染操作。
10、在Vsync-APP ID为10的周期内，合成后的图像帧4进行送显，显示驱动显示接收到的已合成图像帧4。在显示图像帧4的周期结束之前，显示驱动释放合成图像帧4所占用的缓存，并向合成线程返回该缓存的缓存信息，合成线程将缓存信息返回至应用主线程，应用主线程根据该缓存信息更新该缓存的状态。由于当前周期起始时间戳已经超过动效的总时间，因此应用主线程和渲染线程不执行绘制渲染操作。
经过上述1-10个步骤，可见，在Vsync-APP ID为4、5、6的周期内由于合成线程均没有执行合成操作，没有对应的buffer被释放，导致渲染线程在Vsync-APP ID为4的周期内无法从缓存队列中获取free buffer来存入已渲染的图像帧4，从而导致应用主线程无法进行图像帧4的计算绘制。正常情况下，应用主线程在Vsync-APP ID为4的周期内绘制图像帧4，得到的图像帧4与图像帧3的位移间隔应为16。但是，在图8这种示例中，在Vsync-APP ID为7的周期结束前释放了图像帧1的buffer，应用主线程在Vsync-APP ID为7的周期才更新了被释放缓存的状态，从而唤醒渲染线程获取free buffer进行图像帧4的绘制，这时，计算得到的图像帧4的位移为96，图像帧4与图像帧3的位移间隔为64，与正常绘制下的位移间隔16不同。
在这种位移间隔发生变化,相邻帧的位移间隔较大的情况下,在显示驱动显示图像帧4以及图像帧3的过程中,由于图像帧4与图像帧3的位移间隔过大,便会产生相邻图像帧位移变化不连贯的问题,在显示该动效时便会出现视觉上的明显的卡顿。
为了更清晰地了解现有技术中的问题,基于动效中各图像帧的绘制渲染到图像帧的合成过程,结合应用、合成器与HWC的交互的角度来给出示例。可参考图9所示,包括以下几个阶段:
阶段一,触发应用绘制渲染阶段:
S101、合成器的Vsync线程向应用的应用主线程发送Vsync_APP信号。
合成器的Vsync线程产生Vsync_APP信号，向应用的应用主线程发送Vsync_APP信号，应用主线程在Vsync_APP信号到达之后，开始执行当前图像帧的绘制、渲染等操作。
S102、应用主线程开始进行测量、布局以及绘制。
应用主线程可以获取绘制当前图像帧的系统时间,基于动效曲线和系统时间,进行当前帧图像位移的测量计算、布局以及绘制,从而得到绘制后的图像帧。比如这里图像帧可以为图像帧1。
S103、应用主线程唤醒应用的渲染线程进行渲染操作。
应用主线程唤醒渲染线程进行已绘制图像帧1的渲染操作。
S104、渲染线程通过应用主线程从缓存队列中出队一个空缓存。
渲染线程在完成图像帧1的渲染操作之后,通过应用主线程从缓存队列中请求出队一个空缓存,用来存入已渲染的图像帧1。
S105、缓存队列的最后一个空缓存被渲染线程占用。
渲染线程获取最后一个空缓存,将已渲染的图像帧1存入至该缓存中。
S106、渲染线程将已渲染的图像帧存入缓存,通过应用主线程更新该缓存的状态。
渲染线程将存入已渲染图像帧1的缓存通过应用主线程入队至缓存队列中,应用主线程对该缓存的状态进行更新,从而合成线程在合成周期可以从缓存队列中获取已渲染图像帧进行合成操作。
阶段二,合成线程不执行合成阶段:
S201、合成器的Vsync线程向合成线程发送Vsync_SF信号。
Vsync线程产生Vsync_SF信号,向合成线程发送Vsync_SF信号,在Vsync_SF信号到达之后,合成线程确定是否要执行图像帧的合成操作。
S202、合成线程不执行合成操作。
合成线程确定不执行合成操作，示例性地，合成线程确定不执行合成操作的情况包括：合成线程本身性能出现异常导致合成线程运行时间过长，错过送显信号而导致丢帧；或者，由于切帧导致相邻两图像帧间隔太大，合成线程等不到送显信号，基于背压机制导致的不合成。
合成线程不执行合成操作导致的后果就是应用的缓存队列中的存入已渲染的图像帧的缓存等不到合成线程来消费。合成线程不对缓存队列中的图像帧进行合成,就不会执行后续送显并释放缓存的过程,那么缓存队列中的空缓存的数量就会一直减少,直到缓存队列中没有空缓存,则渲染线程无法获取空缓存存入已渲染的图像帧。
阶段三,应用触发绘制渲染阶段:
S301、合成器的Vsync线程向应用主线程发送Vsync_APP信号。
Vsync线程产生Vsync_APP信号,向应用主线程发送Vsync_APP信号,在Vsync_APP信号到达之后,应用主线程开始执行当前图像帧绘制渲染操作。
S302、应用主线程开始进行测量、布局以及绘制。
应用主线程可以获取绘制当前图像帧的系统时间,基于动效曲线和系统时间进行当前帧图像位移的测量计算、布局以及绘制,从而得到绘制后的图像帧。比如这里图像帧可以为图像帧2。
S303、应用主线程唤醒渲染线程进行渲染操作。
应用主线程唤醒渲染线程进行已绘制图像帧2的渲染操作。
S304、渲染线程通过应用主线程从缓存队列中请求一个空缓存。
渲染线程在完成图像帧2的渲染操作之后，通过应用主线程从缓存队列中请求出队一个空缓存，用来存入已渲染的图像帧2。
S305、缓存队列中无空缓存,渲染线程等待合成线程释放缓存。
由于上述S202中合成线程没有执行合成操作,缓存队列中的存入已渲染图像帧的缓存未被消费,且没有被释放的缓存。因此,缓存队列的最后一个空缓存在S105中被使用之后,缓存队列中没有空缓存。渲染线程获取不到空缓存,处于等待阶段。
阶段四,合成线程执行合成与HWC、应用交互阶段:
S401、Vsync线程向合成线程发送Vsync_SF信号。
Vsync线程产生Vsync_SF信号,向合成线程发送Vsync_SF信号,在Vsync-SF信号到达之后,合成线程确定是否要执行合成操作。
S402、合成线程开始合成,合成结束后将合成图像帧送给HWC进行送显。
合成线程确定执行合成操作,通过应用主线程从缓存队列中获取已渲染的图像帧1进行合成操作,并将合成图像帧1送至HWC进行显示。
S403、HWC向合成线程返回送显完的空缓存的缓存信息。
HWC在显示完合成图像帧1之后,在下一个显示周期将合成图像帧1的缓存释放并向合成线程返回该缓存的缓存信息。
S404、合成线程将该缓存的缓存信息返回至应用主线程。
合成线程在得到该缓存的缓存信息之后,将该缓存的缓存信息返回至应用主线程。
S405、应用主线程根据缓存信息,更新缓存队列中缓存的状态,将缓存队列的空缓存的数量+1,并唤醒渲染线程进行渲染。
应用主线程在得到缓存信息之后,根据缓存信息将该缓存队列的空缓存的数量加一,并唤醒等待空缓存的渲染线程进行渲染操作。
S406、渲染线程通过应用主线程从缓存队列中出队一个空缓存进行已渲染图像帧的存储操作,通过应用主线程更新该缓存的状态。
渲染线程在得到应用主线程的唤醒消息之后,通过应用主线程从缓存队列中出队一个空缓存存入已渲染的图像帧2,并将通过应用主线程更新该缓存的状态。
显然,上述过程中应用的渲染线程因为无法及时从缓存队列中获取空缓存,导致渲染线程一直处于等待状态(S305),渲染线程不执行渲染操作,影响了应用主线程的绘制操作,从而造成了应用主线程绘制相邻图像帧的位移间隔过大的问题。
可选地,图10给出了一种缓存队列的MaxBufferCount在图像帧绘制、渲染、合成以及显示过程中的变化时序图。参考图10,帧11所在的周期为第一个周期。在这个示例中,缓存队列中MaxBufferCount为4。
在第一个周期内，缓存队列中的queued buffer的数量为2，在缓存队列中存在2个free buffer和2个queued buffer。应用主线程进行图像帧11的绘制，其计算得到的图像帧11的位移间隔为16，在应用主线程完成图像帧11的绘制之后，唤醒渲染线程执行图像帧11的渲染。渲染线程在完成对图像帧11的渲染后，通过应用主线程，从缓存队列中出队一个free buffer用于存入已渲染的图像帧11，此时，缓存队列中的queued buffer的数量增加1，free buffer数量减1。
在本周期内,合成线程通过与应用主线程交互,从缓存队列中请求一个queued buffer进行合成操作,按照图像帧的顺序,合成线程获取图像帧10执行合成操作。
在第二个周期内,缓存队列中的queued buffer的数量为3,在缓存队列中存在1个free buffer和3个queued buffer。应用主线程进行图像帧12的绘制,其计算得到的图像帧12的位移间隔为16,在应用主线程完成图像帧12的绘制之后,唤醒渲染线程执行图像帧12的渲染。渲染线程在完成对图像帧12的渲染后,通过应用主线程,从缓存队列中出队最后一个free buffer用于存入已渲染的图像帧12,此时,缓存队列中的queued buffer的数量增加1,free buffer数量减1。
在本周期内,合成线程不合成。
在第三个周期内,缓存队列中的buffer全部被占用,queued buffer数量达到最大,与MaxBufferCount数量相同,均为4。缓存队列中没有free buffer可被渲染线程使用。应用主线程基于图像帧13的位移间隔16,完成图像帧13的绘制之后,渲染线程无法获取free buffer进行已渲染图像帧的存入,渲染线程处于等待状态。
在本周期内,合成线程不合成。
在第四周期内,缓存队列中的buffer全部被占用。渲染线程处于等待状态,不执行渲染操作;应用主线程也无法执行下一图像帧的绘制。
在本周期内,合成线程不合成。显示驱动显示接收到的合成图像帧9,该图像帧9的位移间隔为16。
在第五个周期内,合成线程通过与应用主线程交互,从缓存队列中请求一个queued buffer进行合成操作,按照图像帧的顺序,合成线程获取已渲染图像帧11执行合成操作。在本周期内,缓存队列中的queued buffer数量减少一个。显示驱动显示接收到的合成图像帧10,该图像帧10的位移间隔为16。在显示合成图像帧10结束之前释放上一周期显示的合成图像帧9的buffer,并将该buffer的缓存信息返回至合成线程。合成线程将该buffer的缓存信息返回至应用主线程,通过应用主线程更新缓存队列中该缓存的状态。此时,缓存队列中存在一个free buffer。
应用主线程唤醒渲染线程执行存入已渲染图像帧的操作,渲染线程通过与应用主线程的交互,从缓存队列中出队一个free buffer进行已渲染图像帧13的存入操作。
在第六个周期,应用主线程进行图像帧14的绘制,其计算得到的图像帧14的位移间隔为48,在应用主线程完成图像帧14的绘制之后,唤醒渲染线程进行图像帧14的渲染操作。
在本周期内,合成线程通过与应用主线程的交互,从缓存队列中请求一个queued buffer进行合成操作,按照图像帧的顺序,合成线程获取已渲染图像帧12进行合成操作。显示驱动显示接收到的合成图像帧11,该图像帧11的位移间隔为16。
显然，在本周期内应用主线程绘制的图像帧14的位移间隔与绘制图像帧13时计算得到的位移间隔不同：图像帧13的位移间隔为16，而图像帧14由于中间间隔了两个周期的时长，根据时间和动效曲线计算得到的位移间隔为48，相邻两个图像帧的位移间隔过大。这样就导致屏幕在显示图像帧13与图像帧14的过程中，发生视觉卡顿的现象。
结合上述说明的缓存队列中buffer使用机制，分析在与时间相关的动效的绘制渲染合成过程中，造成渲染线程不能及时从缓存队列中获取free buffer的原因。
渲染线程对缓存队列中的free buffer进行已渲染图像帧的存入操作,缓存队列中的free buffer数量逐渐减少,queued buffer的数量逐渐增多。而在这种情况下,合成线程并没有及时消费queued buffer,也即,合成器没有及时请求queued buffer进行已渲染图像帧的合成操作,导致没有buffer被送显并释放为free状态。缓存队列中的free buffer数量越来越少,合成线程一直不执行合成操作,直到缓存队列中没有free buffer时,渲染线程无法再从缓存队列中获取free buffer进行已渲染图像帧的存入操作,导致了渲染线程处于等待状态,渲染线程无法继续执行渲染操作,影响了应用主线程进行下一图像帧的绘制,应用主线程等待的时间间隔导致了其绘制相邻两个图像帧之间的位移间隔。
示例性地,合成线程不执行合成操作的原因,可能是合成线程所在的合成器的性能出现异常,导致运行时间过长而错过送显信号,出现丢帧而不执行合成操作的情况;也可能是由于切帧导致相邻两个图像帧间隔太大,合成线程等不到送显信号,基于背压机制导致的不执行合成操作。
其中,背压机制指的是合成线程认为待合成的图像帧(已渲染的图像帧)出现任务堆积,造成合成线程确定当前不需要执行合成操作的误判,从而导致了合成线程的合成任务滞后。
其中,合成线程的机制是在有GPU合成时,不等前一图像帧送显结束,直接将当前图像帧送给HWC,HWC中维护了一个异步缓存队列,HWC串行地合成合成线程发送的待合成的图像帧。由于异步缓存队列允许有堆积,则会造成合成线程在确定待合成的图像帧(已渲染的图像帧)出现任务堆积时,不执行合成任务的情况。
示例性地,在电子设备显示图像帧的过程中,可能存在帧率切换的情况,不同帧率切换也可能导致合成器不执行合成操作,尤其是从低帧率切换至高帧率的场景。例如,一开始电子设备的帧率60Hz,应用进程、合成线程以及显示驱动均按照帧率60Hz对应的周期执行相应的绘制渲染、合成以及送显的操作。在某一周期电子设备的帧率切换为90Hz,应用进程、合成线程以及显示驱动均按照帧率90Hz对应的周期执行相应的绘制渲染、合成以及送显的操作。帧率90Hz的周期比帧率60Hz的周期时长短,也就是说,在帧率为90Hz的情况下,应用进程、合成线程以及显示驱动执行每一周期的图像帧的处理速度更快,在帧率60Hz的情况下,应用进程、合成线程以及显示驱动执行每一周期的图像帧的处理速度更慢。当显示驱动以帧率60Hz、周期为16.6毫秒显示已合成的图像帧时,应用进程已经开始以帧率90Hz、周期为11.1毫秒绘制渲染图像帧,这就导致显示驱动显示图像帧的速度要慢于应用进程绘制渲染图像帧、合成线程合成图像帧的速度。从而造成合成图像帧的堆积,从而给合成线程造成一种当前不需要执行合成操作的误判。
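低帧率切换至高帧率时生产与消费节奏不匹配的程度，可以用如下换算直观说明（示意性计算，数值与上文一致）：

```python
period_60_ms = 1000 / 60   # 60Hz下的周期，约16.6ms，显示驱动仍按此节奏送显
period_90_ms = 1000 / 90   # 90Hz下的周期，约11.1ms，应用进程与合成线程已按此节奏工作

# 每个60Hz的显示周期内，按90Hz节奏工作的应用进程约可完成1.5帧的绘制渲染，
# 生产快于消费，待合成的图像帧逐渐堆积
frames_per_display_period = period_60_ms / period_90_ms
```

这种约1.5倍的速度差使合成图像帧堆积，进而给合成线程造成"当前不需要执行合成操作"的误判。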
合成线程的不合成导致缓存队列无消费者消费,进一步导致渲染线程从缓存队列中出队不到free buffer,从而阻塞渲染线程和应用主线程的正常作业,而出现上述问题。
除了上述实施例中说明的可能造成合成线程不执行合成操作的原因之外，在一些电子设备的正常运行场景中，还存在其他导致合成线程无法合成图像的情况。比如，由于合成线程进入死亡(Dead)状态造成的合成线程不执行合成操作的情况；或者，电子设备的输入输出(I/O)访问异常导致的合成线程不执行合成操作的情况；或者，电子设备的中央处理器(central processing unit，CPU)资源异常导致的合成线程不执行合成操作的情况。
本实施例提供了一种图像处理方法,可以有效地避免合成线程不执行合成操作的情况下,缓存队列中没有free buffer而导致的渲染线程无法从缓存队列中出队free buffer进行已渲染图像帧的存入操作,从而影响了应用主线程进行下一图像帧的绘制渲染,导致应用主线程绘制图像帧出现丢帧情况,从而造成送显的图像帧出现卡顿的问题。
基于图4和图5所示的电子设备的硬件结构与电子设备的软件架构,以电子设备100执行本公开实施例为例,提供一种图像处理方法。示例性地,图11给出了一种用户基于电子设备的显示屏执行第一操作,电子设备响应于该第一操作启动第一应用。在启动第一应用的过程中,电子设备的应用进程与合成线程交互实现图像处理方法的示例。其中,包括:
S501、电子设备接收用户在电子设备的触摸屏的第一操作。
这里执行主体可以为电子设备的桌面应用,例如,电子设备的桌面启动器(launcher),launcher用于接收用户在电子设备的触摸屏的第一操作。其中,第一操作可以为用户在触摸屏的单击操作、双击操作等。第一操作为用户针对电子设备的桌面应用的选择操作。比如,第一操作为用户在触摸屏上对电子设备的桌面的第一应用的单击操作。可参考图1的(a),第一操作可以为用户在手机的触摸屏上针对应用5的单击操作,用于启动应用5。
S502、电子设备响应于第一操作启动第一应用。
launcher响应于该第一操作,启动第一操作对应的桌面应用。参考图1,用户在手机的触摸屏上进行针对应用5的单击操作,launcher响应于该单击操作,启动应用5。
在启动应用5的过程中,launcher在手机桌面显示应用5的启动动效的所有图像帧。比如,应用5的启动动效的图像帧包括5个图像帧,启动应用5的启动动效的显示过程可参考图1的(b)至图1的(f)。启动动效中的所有图像帧具有时序。
在第一应用的启动的过程中,执行以下步骤:
S503、应用进程在第一图像帧的绘制渲染周期内,对第一图像帧进行绘制和渲染,并将得到的第一图像帧存储至缓存队列的一个空闲缓存对象中。
在显示启动应用5的启动动效的5个图像帧之前,电子设备需要绘制、渲染、合成这些图像帧,从而将合成后的图像帧进行送显,呈现最终图1的(b)至图1的(f)的显示效果。
一般的,由应用进程对图像帧进行绘制和渲染。具体地,由应用进程中的应用主线程对图像帧进行绘制,由应用进程中的渲染线程对已绘制的图像帧进行渲染。由合成线程对已渲染的图像帧进行合成。
其中,第一图像帧为应用5的启动过程中,启动动效中的一个图像帧。
在应用5的启动过程中，应用进程的应用主线程在第一图像帧的绘制渲染周期内对第一图像帧进行绘制，应用进程的渲染线程对已绘制的第一图像帧进行渲染，得到渲染后的第一图像帧。在缓存队列存在空闲缓存对象的情况下，渲染线程将渲染后的第一图像帧存入至缓存队列的一个空闲缓存对象中。相应地，在将渲染后的第一图像帧存入至缓存队列的一个空闲缓存对象中之后，缓存队列中的空闲缓存对象的数量减少1个。
S504、当合成线程在第一图像帧的合成周期内未执行合成操作的情况下,合成线程向应用进程发送第一调整请求。
第一图像帧的合成周期在第一图像帧的绘制渲染周期之后。当合成线程在第一图像帧的合成周期内未执行合成操作的情况下,也即,合成线程未对已渲染的第一图像帧进行合成操作,缓存队列无消费,缓存队列中可能存在无空闲缓存对象的情况。在这种情况下,合成线程向应用进程发送第一调整请求。
S505、应用进程基于第一调整请求,增加缓存队列中空闲缓存对象的数量,以使应用进程在第二图像帧的绘制渲染周期内,对第二图像帧进行绘制和渲染后,将得到的第二图像帧存储至缓存队列的一个空闲缓存对象中。
其中,第一调整请求中可以携带第一指示值,该第一指示值用于指示缓存对象的增加数量,增加缓存队列中空闲缓存对象的数量。这里空闲缓存对象即为缓存队列中的free buffer。
应用进程基于第一调整请求中的第一指示值,增加缓存队列中空闲缓存对象的数量。比如,第一指示值为1,应用进程则增加1个缓存队列中空闲缓存对象的数量。比如,第一指示值为2,应用进程则增加2个缓存队列中空闲缓存对象的数量。
在增加缓存队列中的空闲缓存对象的数量之后,可以保证缓存队列中始终有至少一个空闲缓存对象可被应用进程使用。也即,缓存队列中始终有至少一个空闲缓存对象,可被应用进程的渲染线程用来存入下一个图像帧绘制渲染周期得到的图像帧,比如,在第二图像帧的绘制渲染周期得到的第二图像帧。这里第二图像帧为应用5的启动过程中,启动动效中的一个图像帧。
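应用进程按第一指示值增加空闲缓存对象、且缓存对象总数不超过最大缓存对象数量的逻辑，可以概括为如下示意性片段（假设性代码，并非本申请的真实实现，函数名与参数均为本说明引入）：

```python
def apply_first_adjust_request(free_count: int, total_count: int,
                               indication: int, max_count: int):
    """按第一指示值(indication)增加空闲缓存对象数量，
    但缓存队列中所有缓存对象的总数不超过最大缓存对象数量(max_count)。
    返回调整后的(空闲缓存对象数量, 缓存对象总数)。"""
    add = max(0, min(indication, max_count - total_count))
    return free_count + add, total_count + add
```

例如，缓存队列共4个缓存对象且无空闲缓存对象时，第一指示值为1则增加1个空闲缓存对象；当缓存对象总数已达上限时不再增加，对应上文中合成线程停止发送第一调整请求的情形。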
其中,第二图像帧的绘制渲染周期位于第一图像帧的绘制渲染周期之后,第二图像帧的绘制起始时刻与第一图像帧的绘制起始时刻相差N个周期,N为正整数。
启动动效中的图像帧具有时序,第二图像帧的绘制渲染周期位于第一图像帧的绘制渲染周期之后。可选地,第二图像帧的绘制渲染周期可以为第一图像帧的绘制渲染周期的下一个周期;或者,第二图像帧的绘制渲染周期可以为第一图像帧的绘制渲染周期的下N个周期,比如,第二图像帧的绘制渲染周期可以为第一图像帧的绘制渲染周期之后的第2个周期。
可选地,上述第一图像帧和第二图像帧还可以为电子设备在第一应用的启动完成之后的图像帧。比如,在第一应用的内部显示画面中的多个图像帧。
可选地,上述第一图像帧和第二图像帧还可以为电子设备在第一应用的启动过程中,由第一刷新率切换至第二刷新率过程中的图像帧。其中,第一刷新率小于第二刷新率。
这里刷新率即为电子设备的帧率。第一刷新率可以为60Hz，也即，1秒钟时间里刷新60帧图片，每16.6毫秒刷新一帧图片。第二刷新率可以为90Hz，也即，1秒钟时间里刷新90帧图片，每11.1毫秒刷新一帧图片。
在刷新率切换的场景下,本实施例提供的图像处理方法可有效解决应用启动过程中,由于刷新率切换,造成的合成线程与应用进程处理周期异步、合成线程认为待合成的图像帧存在任务堆积而不执行合成操作,导致缓存队列中无缓存对象释放的问题。
在本实施例中,在电子设备执行应用的启动过程中,针对启动过程中启动动效的图像帧进行图像处理,在合成线程不执行合成操作的情况下,动态增加缓存队列中空闲缓存对象的数量,使得缓存队列中始终有至少一个空闲缓存对象被应用进程使用。应用进程在每一个图像帧的绘制渲染周期,均可将已渲染的图像帧存入至缓存队列的空闲缓存对象中,避免了应用进程绘制渲染图像帧过程中可能会出现的丢帧的情况,解决了由于丢帧导致的送显图像帧出现显示卡顿的问题,提高了应用启动过程中,启动动效的显示流畅性。
可选地,参考图12,用户还可以基于电子设备的显示屏执行第二操作,电子设备响应于第二操作退出第一应用。参考图13,图13给出了一种在退出第一应用的过程中,电子设备的应用进程与合成线程交互实现图像处理方法的示例。其中,包括:
S601、电子设备接收用户在电子设备的触摸屏的第二操作。
可选地,第二操作可以为上滑操作。
相应地，这里执行主体可以为电子设备的桌面应用，例如，电子设备的桌面启动器(launcher)，launcher用于接收用户在电子设备的触摸屏的第二操作。其中，第二操作可以为用户在触摸屏的滑动操作等。第二操作为用户针对电子设备的应用的退出操作。比如，第二操作为用户在触摸屏上对桌面的第一应用的上滑退出操作。可参考图12的(a)，第二操作可以为用户在手机的触摸屏上针对应用5的上滑操作，用于退出应用5，返回手机的桌面。
S602、电子设备响应于第二操作退出第一应用。
launcher响应于该第二操作,退出第二操作对应的应用的当前界面,返回至手机的桌面。参考图12,用户在手机的触摸屏上进行应用5的上滑操作,launcher响应于该上滑操作,退出应用5。返回手机的桌面的显示界面。
在退出应用5的过程中,launcher在手机桌面显示应用5的退出动效的所有图像帧。比如,应用5的退出动效的图像帧包括5个图像帧,退出应用5的退出动效的显示过程可参考图12的(b)至图12的(f)。退出动效中的所有图像帧具有时序。
在第一应用的退出的过程中,电子设备执行以下步骤:
S503、应用进程在第一图像帧的绘制渲染周期内,对第一图像帧进行绘制和渲染,并将得到的第一图像帧存储至缓存队列的一个空闲缓存对象中。
在显示退出应用5的退出动效的5个图像帧之前,电子设备需要绘制、渲染、合成这些图像帧,从而将合成后的图像帧进行送显,呈现最终图12的(b)至图12的(f)的显示效果。
其中,第一图像帧为应用5的退出过程中,退出动效中的一个图像帧。
在应用5的退出过程中，应用进程的应用主线程在第一图像帧的绘制渲染周期内对第一图像帧进行绘制，应用进程的渲染线程对已绘制的第一图像帧进行渲染，得到渲染后的第一图像帧。在缓存队列存在空闲缓存对象的情况下，渲染线程将渲染后的第一图像帧存入至缓存队列的一个空闲缓存对象中。相应地，在将渲染后的第一图像帧存入至缓存队列的一个空闲缓存对象中之后，缓存队列中的空闲缓存对象的数量减少1个。
S504、当合成线程在第一图像帧的合成周期内未执行合成操作的情况下,合成线程向应用进程发送第一调整请求。
第一图像帧的合成周期在第一图像帧的绘制渲染周期之后。当合成线程在第一图像帧的合成周期内未执行合成操作的情况下,也即,合成线程未对已渲染的第一图像帧进行合成操作,缓存队列无消费,缓存队列中可能存在无空闲缓存对象的情况。在这种情况下,合成线程向应用进程发送第一调整请求。
S505、应用进程基于第一调整请求,增加缓存队列中空闲缓存对象的数量,以使应用进程在第二图像帧的绘制渲染周期内,对第二图像帧进行绘制和渲染后,将得到的第二图像帧存储至缓存队列的一个空闲缓存对象中。
其中,第一调整请求中可以携带第一指示值,该第一指示值用于指示缓存对象的增加数量,增加缓存队列中空闲缓存对象的数量。
应用进程基于第一调整请求中的第一指示值,增加缓存队列中空闲缓存对象的数量。比如,第一指示值为1,应用进程则增加1个缓存队列中空闲缓存对象的数量。比如,第一指示值为2,应用进程则增加2个缓存队列中空闲缓存对象的数量。
在增加缓存队列中的空闲缓存对象的数量之后,可以保证缓存队列中始终有至少一个空闲缓存对象可被应用进程使用。也即,缓存队列中始终有至少一个空闲缓存对象,可被应用进程的渲染线程用来存入下一个图像帧绘制渲染周期得到的图像帧,比如,在第二图像帧的绘制渲染周期得到的第二图像帧。这里第二图像帧为应用5的退出过程中,退出动效中的一个图像帧。
其中,第二图像帧的绘制渲染周期位于第一图像帧的绘制渲染周期之后,第二图像帧的绘制起始时刻与第一图像帧的绘制起始时刻相差N个周期,N为正整数。
退出动效中的图像帧具有时序,第二图像帧的绘制渲染周期位于第一图像帧的绘制渲染周期之后。可选地,第二图像帧的绘制渲染周期可以为第一图像帧的绘制渲染周期的下一个周期;或者,第二图像帧的绘制渲染周期可以为第一图像帧的绘制渲染周期的下N个周期,比如,第二图像帧的绘制渲染周期可以为第一图像帧的绘制渲染周期之后的第2个周期。
可选地,上述第一图像帧和第二图像帧还可以为电子设备在第一应用的退出过程中,由第一刷新率切换至第二刷新率过程中的图像帧。其中,第一刷新率小于第二刷新率。
这里刷新率即为电子设备的帧率。第一刷新率可以为60Hz,也即,1秒钟时间里刷新60帧图片,每16.6毫秒刷新一帧图片。第二刷新率可以为90Hz,也即,1秒钟时间里刷新90帧图片,每11.1毫秒刷新一帧图片。
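刷新率与帧间隔的换算可以用如下Python代码示意(仅为示意):

```python
def frame_interval_ms(refresh_hz):
    """由刷新率(Hz)计算帧间隔(毫秒):1秒钟时间里刷新 refresh_hz 帧。"""
    return 1000.0 / refresh_hz
```

例如,60Hz对应约16.6毫秒刷新一帧,90Hz对应约11.1毫秒刷新一帧。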
在刷新率切换的场景下,本实施例提供的图像处理方法可有效解决应用退出过程中,由于刷新率切换,造成的合成线程与应用进程处理周期异步、合成线程认为待合成的图像帧存在任务堆积而不执行合成操作,导致缓存队列中无缓存对象释放的问题。
在本实施例中,在电子设备执行应用的退出过程中,针对退出过程中退出动效的图像帧进行图像处理,在合成线程不执行合成操作的情况下,动态增加缓存队列中空闲缓存对象的数量,使得缓存队列中始终有至少一个空闲缓存对象被应用进程使用。应用进程在每一个图像帧的绘制渲染周期,均可将已渲染的图像帧存入至缓存队列的空闲缓存对象中,避免了应用进程绘制渲染图像帧过程中可能会出现的丢帧的情况,解决了由于丢帧导致的送显图像帧出现显示卡顿的问题,提高了应用退出过程中,退出动效的显示流畅性。
可以理解的是,电子设备执行上述步骤S503-S505的图像处理方法还可以应用在第一应用的启动至退出的过程中。
可选的,电子设备所执行的上述步骤S503-S505的图像处理方法还可以应用在其他场景下。例如,电子设备的应用内部场景动效的图像帧的图像处理场景、电子设备的游戏场景动效的图像帧的图像处理场景、电子设备的离屏滑动动效的图像帧的图像处理场景、或者电子设备的其他跟手动效的图像帧的图像处理场景等。在这些场景中,均可解决由于电子设备的合成线程不执行合成操作而导致的应用主线程丢帧,造成送显图像帧出现卡顿的问题,优化图像帧的显示流畅性。
又由于,在电子设备的应用启动(参考图1)、电子设备的应用退出(参考图12)以及电子设备的离屏滑动(参考图2)的场景中,合成线程不执行合成操作的可能性比较大,因此在这些场景中本实施例提供的图像处理方法的效果更明显,优化后的动效显示效果更流畅。
在一种示例中,给出一种应用的应用主线程、渲染线程、合成器的合成线程、合成器的Vsync线程以及HWC之间交互,在动效图像帧的绘制、渲染、合成以及送显过程中的图像处理方法。参见图14给出的方法流程图,包括以下几个阶段:
阶段一,应用绘制渲染阶段:
S1101、合成器的Vsync线程向应用的应用主线程发送Vsync_APP信号。
在本实施例中,合成器包括Vsync线程和合成线程。其中,Vsync线程用于产生Vsync信号。其中,Vsync信号包括Vsync_APP信号和Vsync_SF信号。Vsync_APP信号用于触发应用主线程执行图像帧的绘制操作。Vsync_SF信号用于触发合成线程执行图像帧的合成操作。
Vsync线程根据电子设备的帧率确定信号周期。例如,电子设备的帧率为60,图像帧间隔为16.6ms,Vsync线程每隔16.6ms产生一个Vsync_APP信号,并向应用主线程发送该Vsync_APP信号。Vsync线程每隔16.6ms产生一个Vsync_SF信号,并向合成线程发送该Vsync_SF信号。
S1102、应用主线程开始进行测量、布局以及绘制。
应用主线程可以获取绘制当前图像帧的系统时间,基于动效曲线和系统时间进行当前帧图像位移的测量计算、布局以及绘制,从而得到绘制后的图像帧。比如这里图像帧可以为图像帧2。这里可以认为应用主线程已经完成图像帧1的绘制,且渲染线程已经完成图像帧1的渲染。
在Vsync_APP信号到达之后,应用主线程在当前周期执行当前图像帧绘制操作。绘制的图像帧为图像帧2。应用主线程绘制图像帧之前需要对图像帧进行测量布局,也即,需要计算图像帧的位移。
本实施例中针对的是绘制位移与系统时间相关的第一类动效的图像帧的绘制。示例性地,图像帧的位移的计算方式可以为:
y(t) = t / t_total * y_total
其中,t为当前时间,t_total为动效的显示总时间,y_total为动效第一图像帧与最后一图像帧之间的位移距离。
或者,图像帧的位移的计算方式还可以为:
y(t)=y(0)+t*n
其中,y(0)为该动效第一图像帧的位移;t为计算时间;n为预设的位移间隔。
可选地,t的计算方式可以表示为:
t = t_c - (t_c - t_0) % q
其中,t_c为当前时间;t_0为该动效第一图像帧的绘制时间;q为电子设备的帧率。
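上述两种位移计算方式以及时间t的计算可以用如下Python代码示意(仅为示意,变量命名与上文公式对应):

```python
def displacement_linear(t, t_total, y_total):
    """第一种计算方式:y(t) = t / t_total * y_total。"""
    return t / t_total * y_total


def displacement_step(y0, t, n):
    """第二种计算方式:y(t) = y(0) + t * n,n为预设的位移间隔。"""
    return y0 + t * n


def effective_time(tc, t0, q):
    """t = tc - (tc - t0) % q,tc为当前时间,t0为动效第一图像帧的绘制时间。"""
    return tc - (tc - t0) % q
```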
在Vsync_APP信号到达之后,应用主线程根据预设的图像帧位移的计算方式,进行图像帧2的测量、布局以及绘制。
S1103、应用主线程唤醒应用的渲染线程进行渲染操作。
在应用主线程完成图像帧2的测量、布局以及绘制之后,应用主线程唤醒应用的渲染线程进行已绘制图像帧2的渲染操作。
S1104、渲染线程通过应用主线程从缓存队列中出队一个空缓存。
在本实施例中,渲染线程在完成图像帧2的渲染操作之后,通过与应用主线程交互,从缓存队列中出队一个free buffer,用来存入已渲染的图像帧2。
S1105、缓存队列的最后一个空缓存被渲染线程占用。
若缓存队列中存在free buffer,则渲染线程可通过与应用主线程交互,出队该free buffer进行已渲染图像帧2的存入操作。
可选地,渲染线程可以按照先进先出(first in first out,FIFO)的获取方式来从缓存队列中出队(dequeue)一个free buffer;或者,渲染线程还可以按照其他约定的方式来从缓存队列中dequeue一个free buffer。
在渲染线程dequeue到free buffer之后,应用主线程将该缓存的状态更新为dequeued。
S1106、应用主线程更新该缓存的状态,并向合成线程发送响应。
渲染线程可以按照先进先出FIFO的获取方式将已渲染图像帧2的缓存入队(queue)至缓存队列中;或者,渲染线程还可以按照其他约定的方式将已渲染图像帧2的缓存入队(queue)至缓存队列中。在渲染线程将已渲染图像帧2存入至该缓存之后,将存入已渲染图像帧2的缓存入队至缓存队列,通过与应用主线程交互,应用主线程将该缓存的状态更新为queued。应用主线程在更新缓存的状态之后,向合成线程发送响应,以使合成线程请求queued buffer进行合成操作。
阶段二,合成线程不执行合成阶段:
S1201、合成器的Vsync线程向合成线程发送Vsync_SF信号。
在本实施例中,合成器的Vsync线程按照帧间隔产生Vsync_SF信号,向合成线程发送Vsync_SF信号。合成线程确定是否要执行图像帧的合成操作。
S1202、合成线程不执行合成操作。
在本实施例中,在Vsync_SF信号到达时,合成线程确定当前合成周期不执行合成操作。合成线程不对缓存队列中的图像帧进行合成,意味着缓存队列中的queued buffer未被消费,也没有acquired buffer被释放。其中,合成线程的当前合成周期为应用主线程图像帧2的绘制周期的下一周期。
S1203、合成线程确定将缓存队列的最大缓存数量加1。
在本实施例中,合成线程不执行合成操作,势必会导致缓存队列中queued buffer不被消费。在这种情况下,合成线程若确定当前周期不执行合成操作,则将缓存队列的最大缓存数量(MaxBufferCount)加1。
可选地,合成线程可以通过定时器来确定一段时间内是否执行了合成操作,若没有执行合成操作,则确定将缓存队列的最大缓存数量加1。其中,这段时间可以为当前帧率对应的N个周期的时长;这里N为正整数(1、2、3……k)。为了更有效地调整缓存队列的最大缓存数量,N不宜过大。需要说明的是,N为2时,说明合成线程连续两个周期未执行合成操作。
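合成线程"连续N个周期未执行合成操作则将最大缓存数量加1"的判断逻辑可以用如下Python代码示意(仅为示意,类名与接口名均为假设,并非本实施例的实际实现):

```python
class ComposeMonitor:
    """统计合成线程连续未执行合成操作的周期数。"""

    def __init__(self, n_periods=2):
        self.n_periods = n_periods  # N不宜过大,例如取2
        self.idle_periods = 0

    def on_vsync_sf(self, composed):
        """每个Vsync_SF周期调用一次;返回True表示应将MaxBufferCount加1。"""
        if composed:
            self.idle_periods = 0
            return False
        self.idle_periods += 1
        return self.idle_periods >= self.n_periods
```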
可选的,合成线程可以向应用主线程发送MaxBufferCount加1的请求,以使应用主线程基于该请求执行相应的操作。例如,应用主线程在接收到合成线程的MaxBufferCount加1的请求后,增加缓存队列的最大缓存数量。合成器可以先查询缓存队列的MaxBufferCount,再根据MaxBufferCount来确定增加的数量。可选地,合成器可以调用预设的查询函数来向应用主线程查询缓存队列的MaxBufferCount。示例性地,预设的查询函数可以为IGraphicBufferConsumer,在IGraphicBufferConsumer中增加getMaxBufferCount接口来动态查询最大值,查询时合成器作为消费者,调用IGraphicBufferConsumer::getMaxBufferCount()函数通过Binder调用到应用主线程内查询。
可选地,应用主线程可以从其他不被合成线程、渲染线程使用的缓存中获取一个缓存,将该缓存设定为可被合成线程、渲染线程使用的缓存,从而实现对缓存队列的数量增加的目的。
S1204、应用主线程向缓存队列中添加一个可用缓存,使得缓存队列的最大缓存数量加1。
在本实施例中,电子设备中的缓存被各种线程占用执行相应的操作。其中一些缓存可被合成线程、渲染线程使用,来实现图像帧的绘制、渲染和合成操作,这些缓存形成本实施例的缓存队列;另一些缓存不允许被合成线程、渲染线程使用,这些缓存在本实施例中称为不可用缓存。不可用缓存中包括空缓存和已占用缓存。应用主线程可从不可用缓存中获取一个空缓存,将其添加至本实施例的缓存队列中,从而增加缓存队列的MaxBufferCount,使得缓存队列中的MaxBufferCount加1。
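应用主线程从不可用缓存中取一个空缓存加入缓存队列、使MaxBufferCount加1的过程,可以用如下Python代码示意(仅为示意,类名与属性名均为假设):

```python
class BufferQueueSketch:
    """示意缓存队列与不可用缓存之间的缓存迁移。"""

    def __init__(self, free_buffers, unavailable_buffers):
        self.free = list(free_buffers)                # 缓存队列中的free buffer
        self.unavailable = list(unavailable_buffers)  # 不可用缓存中的空缓存
        self.max_buffer_count = len(free_buffers)

    def add_available_buffer(self):
        """从不可用缓存中取一个空缓存加入缓存队列,MaxBufferCount加1。"""
        buf = self.unavailable.pop(0)
        self.free.append(buf)
        self.max_buffer_count += 1
        return buf
```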
在应用主线程增加缓存队列的MaxBufferCount之后,缓存队列中存在至少一个free buffer可被渲染线程使用。
阶段三,应用绘制渲染阶段:
S1301、合成器的Vsync线程向应用主线程发送Vsync_APP信号。
在本实施例中,Vsync线程按照帧间隔产生Vsync_APP信号,向应用主线程发送Vsync_APP信号。在Vsync_APP信号到达时,应用主线程开始执行图像帧绘制渲染操作。
S1302、应用主线程开始进行测量、布局以及绘制。
与上述步骤1102类似的,应用主线程可以获取绘制当前图像帧的系统时间,基于动效曲线和系统时间进行当前帧图像位移的测量计算、布局以及绘制,从而得到绘制后的图像帧。比如这里图像帧可以为图像帧3。
应用主线程绘制图像帧3的周期为绘制图像帧2的周期的下一个周期,可以认为应用主线程绘制图像帧3的周期为正常周期。图像帧3的位移间隔与图像帧2的位移间隔相同。或者,图像帧3的位移间隔等于预设位移间隔阈值。
S1303、应用主线程唤醒渲染线程进行渲染操作。
在应用主线程完成图像帧3的测量、布局以及绘制之后,应用主线程唤醒应用的渲染线程进行已绘制图像帧3的渲染操作。
S1304、渲染线程通过应用主线程从缓存队列中出队一个空缓存。
由于在S1204中应用主线程增加了缓存队列中的MaxBufferCount,此时,缓存队列中存在至少一个free buffer可以被渲染线程使用。渲染线程从缓存队列中dequeue一个free buffer,用来存入已渲染的图像帧3。
S1305、获取到最后一个空缓存。
在本实施例中,缓存队列中还剩最后一个free buffer,该free buffer为应用主线程新增的buffer。在应用主线程增加缓存队列中的buffer之前,缓存队列中的所有buffer已被占用。
应用渲染线程获取到缓存队列中最后一个free buffer进行渲染图像帧3的存入操作。
在渲染线程dequeue到free buffer之后,应用主线程可更新缓存队列中该缓存的状态,将该缓存的状态由free更新为dequeued。
S1306、渲染线程将已渲染的图像帧存入缓存,应用主线程向合成线程发送响应。
渲染线程通过与应用主线程的交互,将存入已渲染图像帧3的buffer入队至缓存队列中,应用主线程将该缓存的状态由dequeued更新为queued。应用主线程向合成线程发送响应,以使合成线程可从缓存队列中请求queued buffer进行已渲染图像帧的合成操作。
S1307、合成线程记录入队缓存的时间。
在本实施例中,合成线程接收到应用主线程发送的响应,记录应用主线程执行queue buffer的时间。queue buffer的时间也可以表示渲染线程执行渲染操作的时间。
可选地,合成线程还可以向应用主线程获取queue buffer的时间,从而将获取到的queue buffer的时间进行记录。
在本实施例中,合成线程记录缓存队列每一次queue buffer的时间,可以通过比对相邻两次queue buffer的时间来确定渲染线程是否按照帧率对应的周期正常执行渲染操作。如果相邻两次queue buffer的时间之间的时间差与帧间隔一致,则认为渲染线程按照周期正常执行渲染操作,说明应用的buffer queue中的buffer的消费和生产处于平衡状态,也即,渲染线程可从缓存队列中出队free buffer使用。如果相邻两次queue buffer的时间之间的时间差大于帧间隔,则认为渲染线程出现异常,或者,渲染线程已完成当前动效所有图像帧的渲染操作,此时,缓存队列中的buffer的生产小于消费,则合成线程可以对缓存队列中的buffer数量进行动态调整,例如,减少缓存队列的MaxBufferCount。
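合成线程通过比对queue buffer的时间来判断渲染线程是否正常工作的逻辑,可以用如下Python代码示意(仅为示意,阈值取k个帧间隔,函数名为假设):

```python
def render_stalled(last_queue_time_ms, now_ms, frame_interval_ms, k=2):
    """若当前时间与最后一次queue buffer时间之差达到k个帧间隔,
    则认为渲染线程未按周期正常执行渲染操作,可据此减少MaxBufferCount。"""
    return now_ms - last_queue_time_ms >= k * frame_interval_ms
```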
阶段四,合成器执行合成与HWC、应用交互阶段:
S1401、合成器的Vsync线程向合成线程发送Vsync_SF信号。
按照电子设备的帧率,在经过一个帧间隔之后,Vsync线程产生Vsync_SF信号,向合成线程发送Vsync_SF信号,合成线程在接收到Vsync_SF信号之后,判断是否要执行图像帧的合成操作。
S1402、合成线程开始合成,合成结束后将合成后图像帧送给HWC进行送显。
在本实施例中,合成线程在接收到Vsync_SF信号之后,确定执行合成操作。合成线程通过与应用主线程的交互,从缓存队列中获取已渲染的图像帧1,对已渲染的图像帧1进行合成操作,并将合成图像帧1送至HWC进行显示。
可选地,合成线程可以按照FIFO的获取方式来从缓存队列中获取(acquire)一个已渲染的图像帧(queued buffer);或者,合成线程还可以按照其他约定的方式来从缓存队列中acquire一个queued buffer。
合成线程通过应用主线程acquire一个queued buffer,应用主线程可以将该缓存的状态由queued更新为acquired。
S1403、HWC向合成线程返回被释放的缓存的缓存信息。
在本实施例中,HWC显示合成线程发送的合成图像帧1,并在显示合成图像帧1结束之前,释放上一图像帧的缓存。在释放上一图像帧所占用的缓存之后,HWC将该缓存的缓存信息返回至合成线程。
S1404、合成线程将该缓存信息通过回调函数返回至应用主线程。
在本实施例中,合成线程通过回调函数将该缓存信息返回至应用主线程,应用主线程基于缓存信息进行缓存队列中缓存状态的更新操作。
阶段五:合成线程判断是否要调整缓存队列的数量的阶段:
S1501、Vsync线程向合成线程发送Vsync_SF消息。
在经过一个帧间隔之后,Vsync线程产生Vsync_SF信号,向合成线程发送Vsync_SF信号,在Vsync_SF信号到达时,合成线程判断是否要执行图像帧的合成操作。
S1502、合成线程开始合成,合成结束后将合成图像帧发送至HWC。
在本实施例中,合成线程确定执行合成操作。合成线程通过与应用主线程交互,从缓存队列中获取已渲染的图像帧2,对已渲染的图像帧2进行合成操作,并将合成图像帧2送至HWC进行显示。
可选地,合成线程可以按照FIFO的获取方式来从缓存队列中acquire一个queued buffer;或者,合成线程还可以按照其他约定的方式来从缓存队列中acquire一个queued buffer。在本实施例中,合成器在执行合成操作之后,停止增加缓存队列的MaxBufferCount的操作。
在合成器acquire一个queued buffer之后,应用主线程可将该缓存的状态由queued更新为acquired。
S1503、合成线程获取上一次入队缓存的时间,并计算当前系统时间与上一次入队缓存的时间之间的差值,在差值大于或等于预设阈值的情况下,决策开始动态减少缓存队列的最大缓存数量。
在合成线程执行合成操作时,合成线程根据上述S1307中记录的缓存队列中每一次queue buffer的时间,来确定是否需要调整缓存队列的MaxBufferCount。调整MaxBufferCount实际上是调整缓存队列中的free buffer的数量。示例性地,合成线程可以获取当前系统时间与最后一次queue buffer的时间,计算当前系统时间与最后一次queue buffer的时间的时间差,若时间差大于两个帧间隔,则确定渲染线程存在丢帧的情况,已丢失两个图像帧。在这种情况下,合成线程可以生成减少缓存队列的MaxBufferCount的请求,并向应用主线程发送该请求,来减少MaxBufferCount。
可选地,合成线程还可以判断缓存队列中的queued buffer的数量是否增加来确定是否要动态减少缓存队列中的free buffer的数量。例如,合成线程确定缓存队列中的queued buffer的数量不再增加,则确定渲染线程没执行渲染操作。这种情况下,合成线程也可以生成减少缓存队列的MaxBufferCount的请求,并向应用主线程发送该请求,来减少MaxBufferCount。
可选地,合成线程可以调用预设的查询函数来向应用主线程查询缓存队列中各个buffer的状态,从而确定queued buffer的数量是否增加。示例性地,可以在IGraphicBufferConsumer中增加getQueuedBufferCount接口来动态查询queued buffer的数量,查询时合成线程作为消费者,调用IGraphicBufferConsumer::getQueuedBufferCount()函数通过Binder调用到应用主线程内查询。
阶段六:合成器动态调整缓存队列的数量的阶段:
S1601、Vsync线程向合成线程发送Vsync_SF消息。
在经过一个帧间隔之后,Vsync线程产生Vsync_SF信号,向合成线程发送Vsync_SF信号。在Vsync_SF信号到达之后,合成线程判断是否执行图像帧的合成操作。
S1602、合成线程开始合成,合成结束后将合成图像帧送给HWC。
在本实施例中,合成线程确定执行合成操作。合成线程通过与应用主线程交互,从缓存队列中获取已渲染的图像帧3,对已渲染的图像帧3进行合成操作,并将合成图像帧3送至HWC进行显示。
可选地,合成线程可以按照FIFO的获取方式来从缓存队列中获取(acquire)一个已渲染的图像帧的buffer(queued buffer);或者,合成线程还可以按照其他约定的方式来从缓存队列中acquire一个queued buffer。在本实施例中,合成器在执行合成操作之后,停止增加MaxBufferCount的操作。
在合成器acquire一个queued buffer之后,应用主线程可将该缓存的状态由queued更新为acquired。
S1603、HWC向合成线程返回被释放的缓存的缓存信息。
在本实施例中,示例性地,HWC显示合成线程发送的合成图像帧2,并在显示合成图像帧2结束之前,释放图像帧1的buffer。在释放图像帧1所占用的buffer之后,HWC将该缓存的缓存信息返回至合成线程。
S1604、合成线程将缓存的缓存信息通过回调函数送回应用主线程,并向应用主线程发送将缓存队列的最大缓存数量减1的请求。
在本实施例中,合成线程通过回调函数将该缓存的缓存信息返回至应用主线程,应用主线程根据该缓存的缓存信息进行缓存队列中缓存的状态更新操作。可选地,应用主线程在接收到合成线程发送的将缓存队列的MaxBufferCount减1的请求后,对缓存队列的MaxBufferCount减1。
S1605、应用主线程将缓存队列中的空缓存移除,将移除的缓存更新为不可用缓存。
在本实施例中,应用主线程可以从缓存队列中移除1个free buffer:将该free buffer销毁,释放其graphic buffer,并将移除的缓存更新为不可用缓存。不可用缓存不允许被渲染线程和合成线程使用,从而本实施例的缓存队列的MaxBufferCount减1。
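应用主线程移除1个free buffer并将其更新为不可用缓存的过程,可以用如下Python代码示意(仅为示意,参数为列表并就地修改,函数名为假设):

```python
def remove_free_buffer(free_buffers, unavailable_buffers, max_buffer_count):
    """从缓存队列中移除1个free buffer并将其更新为不可用缓存,
    返回减1后的MaxBufferCount。"""
    buf = free_buffers.pop(0)        # 相当于销毁该free buffer并释放graphic buffer
    unavailable_buffers.append(buf)  # 移除的缓存更新为不可用缓存
    return max_buffer_count - 1
```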
在本实施例中,合成线程在不执行合成操作的情况下,与应用主线程交互,增加缓存队列的MaxBufferCount,也即,增加缓存队列中的free buffer的数量。这可以确保即使合成线程不执行合成(也即,缓存队列的消费者不消费),渲染线程仍然可以获取free buffer进行已渲染图像帧的存入操作(缓存队列的生产者仍有free buffer可使用),从而达到缓存队列的生产与消费平衡,不会出现由于不消费导致渲染线程无法获取free buffer进行生产,进而影响渲染线程执行渲染操作以及应用主线程执行绘制操作的问题。这样一来,应用主线程可以按照帧间隔正常地进行每一图像帧的绘制,渲染线程可以按照帧间隔正常地进行每一图像帧的渲染。在绘制渲染图像帧的位移与系统时间有关的动效的场景下,基于本实施例提供的方式,应用主线程均可以正常地绘制出动效的每一图像帧,因此,不会出现现有技术中应用主线程绘制相邻两个图像帧时间间隔过大而导致视觉卡顿的问题,提高了动效展示的流畅性,避免了卡顿、丢帧的问题。
此外,合成线程在执行合成操作之后,确定渲染线程是否按照帧间隔正常工作,若确定渲染线程存在至少一个周期不执行渲染操作的情况下,生成减少缓存队列的MaxBufferCount的请求,与应用主线程交互减少缓存队列的MaxBufferCount,也即,减少缓存队列中的free buffer的数量,可以及时释放缓存队列中的冗余buffer,被释放的buffer可被用作其他操作,提高buffer的利用率。
图15给出了结合本申请实施例提供的图像处理方法,缓存队列中MaxBufferCount在图像帧绘制、渲染以及合成过程中变化的时序图,参考图15,来进一步说明本实施例中,缓存队列的MaxBufferCount的动态调整过程。按照每个分割区域,认为帧11所在的周期为第一个周期。在这个示例中,缓存队列的MaxBufferCount为4。
在第一个周期内,缓存队列中的queued buffer的数量为2,也即,在缓存队列中存在2个free buffer和2个queued buffer。应用主线程进行图像帧11的绘制,其计算得到的图像帧11的位移间隔为16,在应用主线程完成图像帧11的绘制之后,唤醒渲染线程执行图像帧11的渲染。渲染线程在完成图像帧11的渲染后,通过与应用主线程交互,从缓存队列中dequeue一个free buffer存入已渲染的图像帧11。缓存队列中的queued buffer的数量增加1,free buffer数量减1。
在本周期内,合成器按照图像帧的顺序,获取已渲染的图像帧10,执行图像帧10的合成操作。显示驱动通过屏幕显示合成图像帧9。
在第二个周期内,缓存队列中的queued buffer的数量为3,也即,在缓存队列中存在1个free buffer和3个queued buffer。应用主线程进行图像帧12的绘制,其计算得到的图像帧12的位移间隔为16,在应用主线程完成图像帧12的绘制之后,唤醒渲染线程执行图像帧12的渲染。渲染线程在完成图像帧12的渲染后,通过与应用主线程交互,从缓存队列中获取最后一个free buffer存入已渲染的图像帧12。缓存队列中的queued buffer的数量增加1,free buffer数量减1。此时,缓存队列中的queued buffer的数量已经达到MaxBufferCount。
在本周期内,合成线程不合成。显示驱动通过屏幕显示合成图像帧9。
在第三个周期内,缓存队列中的queued buffer的数量已经达到MaxBufferCount,均为4,也即,缓存队列中的buffer全部被占用。应用主线程进行图像帧13的绘制,其计算得到的图像帧13的位移间隔为16。
在本周期动态增加缓存队列的MaxBufferCount,将MaxBufferCount的数量加1。此时,MaxBufferCount为5。渲染线程可从缓存队列中获取新增的最后一个free buffer存入已渲染的图像帧13。渲染线程和应用主线程均处于正常状态。在该周期内,合成器不合成。
在第四周期内,缓存队列中的queued buffer的数量已经达到MaxBufferCount,均为5,缓存队列中的buffer全部被占用。应用主线程进行图像帧14的绘制,其计算得到的图像帧14的位移间隔为16。
在本周期动态增加缓存队列中buffer的数量,将MaxBufferCount的数量加1。此时,MaxBufferCount为6。渲染线程可从缓存队列中获取新增的最后一个free buffer存入已渲染的图像帧14。渲染线程和应用主线程均处于正常状态。
在该周期内,合成线程不合成。显示驱动通过屏幕显示合成图像帧9。
在第五个周期内,合成线程在本周期内,从缓存队列中获取一个queued buffer进行合成操作,例如,获取图像帧11执行合成操作。在这个周期内,缓存队列中的queued buffer数量减少一个,为5。显示驱动显示接收到的合成图像帧10,该图像帧10的位移间隔为16。在显示合成图像帧10结束之前释放合成图像帧9的buffer,并将该缓存的缓存信息返回至合成线程。合成线程将该缓存信息返回至应用主线程,应用主线程根据缓存信息更新缓存队列的缓存状态。此时,缓存队列中存在一个free buffer。
应用主线程进行图像帧15的绘制,其计算得到的图像帧15的位移间隔为16。渲染线程通过与应用主线程交互,dequeue free buffer存入已渲染的图像帧15。
在第六个周期,合成线程从缓存队列中获取一个queued buffer进行合成操作。例如,获取图像帧12执行合成操作。在这个周期内,缓存队列中的queued buffer数量减少一个,为5。
在本周期内,显示驱动显示合成图像帧11,该图像帧11的位移间隔为16。在显示合成图像帧11结束之前释放合成图像帧10的buffer,并将该缓存的缓存信息返回至合成线程。合成线程将该缓存信息返回至应用主线程,应用主线程根据缓存信息更新缓存队列的缓存状态。此时,缓存队列中存在一个free buffer。
在本周期内,应用主线程不进行图像帧的绘制,渲染线程也不进行图像帧的渲染操作。
合成线程通过与应用主线程交互,减少缓存队列的MaxBufferCount,将MaxBufferCount的数量减1。此时,MaxBufferCount为5。
在第七个周期,合成线程从缓存队列中获取一个queued buffer进行合成操作,例如,获取图像帧13执行合成操作。在这个周期内,缓存队列中的queued buffer数量减少一个,为4。
在本周期内,显示驱动显示合成图像帧12,该图像帧12的位移间隔为16。在显示合成图像帧12结束之前释放合成图像帧11的buffer,并将该缓存的缓存信息返回至合成线程。合成线程将该缓存信息返回至应用主线程,应用主线程根据缓存信息更新缓存队列的缓存状态。此时,缓存队列中存在一个free buffer。
在本周期内,应用主线程不进行图像帧的绘制,渲染线程也不进行图像帧的渲染操作。
合成线程通过与应用主线程交互,减少缓存队列的MaxBufferCount,将MaxBufferCount的数量减1。此时,MaxBufferCount为4。
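结合图15的时序,MaxBufferCount在第1至第7个周期内的变化(第3、4周期各加1,第6、7周期各减1)可以用如下Python代码示意(仅为示意):

```python
def simulate_max_buffer_count(events, start=4):
    """按周期模拟MaxBufferCount的变化:'+'表示合成线程不合成而加1,
    '-'表示渲染停止后减1,'.'表示不调整;返回每个周期结束后的值序列。"""
    counts, cur = [], start
    for e in events:
        if e == "+":
            cur += 1
        elif e == "-":
            cur -= 1
        counts.append(cur)
    return counts
```

对应第1~7个周期的事件序列为 ".", ".", "+", "+", ".", "-", "-",得到的MaxBufferCount序列为 [4, 4, 5, 6, 6, 5, 4],与上文描述一致。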
显然,在本实施例中,在合成线程不执行合成的第2、3、4周期内,动态增加缓存队列的MaxBufferCount,可以使得渲染线程持续稳定地获取free buffer进行存入已渲染图像帧的操作,也保证了应用主线程可以按照正常的帧间隔计算每一周期的图像帧的位移。应用主线程和渲染线程的正常运行,保证了应用主线程绘制的相邻图像帧的位移间隔保持不变,从而经过合成、显示之后的多个相邻图像帧形成的动效连贯顺畅,避免了出现视觉卡顿的问题。
此外,在确定应用主线程和渲染线程不执行绘制渲染操作的情况下,动态减少缓存队列的MaxBufferCount,可以及时释放冗余buffer,提高buffer的利用率。
在本实施例中,在合成器不合成的情况下,可以动态增加缓存队列的空闲缓存对象,应用进程可以持续正常获取空闲缓存对象存入已渲染的图像帧,从而避免现有技术中应用进程由于无法获取空闲缓存对象来存入已渲染的图像帧,而不执行下一图像帧的渲染操作以及下一图像帧的绘制,出现丢帧的问题。通过本申请提供的图像处理方法,缓存队列中始终有至少一个空闲缓存对象可被应用进程使用,避免了应用进程由于缓存队列中没有空闲缓存对象导致的不执行图像帧绘制渲染操作而丢帧的问题,从而解决了由于丢帧造成的图像帧送显后出现的视觉卡顿的问题。通过保证应用进程有足够的空闲缓存对象进行已渲染的图像帧的存入操作,保证了送显图像帧的动效显示效果的流畅性。
本申请一些实施例提供了一种电子设备,该电子设备可以包括:存储器、显示屏和一个或多个处理器。该显示屏、存储器和处理器耦合。该存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令。当处理器执行计算机指令时,电子设备可执行上述方法实施例中电子设备执行的各个功能或者步骤。该电子设备的结构可以参考图4所示的电子设备100的结构。
本申请实施例还提供一种芯片系统(例如,片上系统(system on a chip,SoC)),如图16所示,该芯片系统包括至少一个处理器701和至少一个接口电路702。处理器701和接口电路702可通过线路互联。例如,接口电路702可用于从其它装置(例如,电子设备的存储器)接收信号。又例如,接口电路702可用于向其它装置(例如处理器701或者电子设备的摄像头)发送信号。示例性的,接口电路702可读取存储器中存储的指令,并将该指令发送给处理器701。当所述指令被处理器701执行时,可使得电子设备执行上述实施例中的各个步骤。当然,该芯片系统还可以包含其他分立器件,本申请实施例对此不作具体限定。
本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质包括计算机指令,当所述计算机指令在上述电子设备上运行时,使得该电子设备执行上述方法实施例中电子设备100执行的各个功能或者步骤。
本申请实施例还提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行上述方法实施例中电子设备100执行的各个功能或者步骤。例如,该计算机可以是上述电子设备100。
通过以上实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (18)

  1. 一种图像处理方法,其特征在于,包括:
    电子设备接收用户在电子设备的触摸屏的第一操作;
    所述电子设备响应于所述第一操作启动第一应用;
    所述电子设备接收用户在电子设备的触摸屏的第二操作;
    所述电子设备响应于所述第二操作退出所述第一应用;
    其中,在所述第一应用的启动到所述第一应用的退出的过程中,所述电子设备执行对所述第一应用的第一图像帧和第二图像帧的绘制渲染以及合成操作;
    所述电子设备对所述第一应用的第一图像帧和第二图像帧的绘制渲染以及合成操作,包括:
    应用进程在所述第一图像帧的绘制渲染周期内,对所述第一图像帧进行绘制和渲染,并将得到的第一图像帧存储至缓存队列的一个空闲缓存对象中;
    当合成线程在所述第一图像帧的合成周期内未执行合成操作的情况下,所述合成线程向所述应用进程发送第一调整请求;
    所述应用进程基于所述第一调整请求,增加所述缓存队列中空闲缓存对象的数量,以使所述应用进程在所述第二图像帧的绘制渲染周期内,对所述第二图像帧进行绘制和渲染后,将得到的第二图像帧存储至所述缓存队列的一个空闲缓存对象中,其中,所述第二图像帧的绘制渲染周期位于所述第一图像帧的绘制渲染周期之后,所述第二图像帧的绘制起始时刻与所述第一图像帧的绘制起始时刻相差N个周期,N为正整数。
  2. 根据权利要求1所述的方法,其特征在于,所述第一图像帧和所述第二图像帧是所述第一应用的启动过程中的图像帧。
  3. 根据权利要求2所述的方法,其特征在于,所述第一图像帧和所述第二图像帧为所述电子设备在所述第一应用的启动过程中,由第一刷新率切换至第二刷新率过程中的图像帧;所述第一刷新率小于所述第二刷新率。
  4. 根据权利要求1所述的方法,其特征在于,所述第一图像帧和所述第二图像帧是所述第一应用的启动完成之后的图像帧。
  5. 根据权利要求1所述的方法,其特征在于,所述第一图像帧和所述第二图像帧是所述第一应用的退出过程中的图像帧。
  6. 根据权利要求5所述的方法,其特征在于,所述第一图像帧和所述第二图像帧为所述电子设备在所述第一应用的退出过程中,由第一刷新率切换至第二刷新率过程中的图像帧;所述第一刷新率小于所述第二刷新率。
  7. 根据权利要求1-6中任一项所述的方法,其特征在于,所述第二图像帧的绘制渲染周期为所述第一图像帧的绘制渲染周期的下一周期;所述第二图像帧的绘制起始时刻与所述第一图像帧的绘制起始时刻相差1个周期。
  8. 根据权利要求1-6中任一项所述的方法,其特征在于,所述第一调整请求中包括第一指示值;所述第一指示值用于指示缓存对象的增加数量,所述增加所述缓存队列中空闲缓存对象的数量,包括:
    所述应用进程增加所述第一指示值的空闲缓存对象至所述缓存队列。
  9. 根据权利要求8所述的方法,其特征在于,所述应用进程增加所述第一指示值 的空闲缓存对象至所述缓存队列,包括:
    所述应用进程按照入队顺序,将所述第一指示值的空闲缓存对象的地址添加至所述缓存队列中。
  10. 根据权利要求1-6中任一项所述的方法,其特征在于,所述方法还包括:
    所述合成线程查询所述缓存队列中所有缓存对象的数量;
    若所述所有缓存对象的数量达到最大缓存对象数量,所述合成线程停止向所述应用进程发送缓存对象的第一调整请求。
  11. 根据权利要求1-6中任一项所述的方法,其特征在于,在所述将所述第一图像帧存储至缓存队列的一个空闲缓存对象中之后,所述方法还包括:
    所述合成线程获取并记录所述应用进程将所述第一图像帧存储至所述目标缓存对象中的存入时刻。
  12. 根据权利要求11所述的方法,其特征在于,所述方法还包括:
    在所述合成线程在所述第一图像帧的合成周期执行合成操作的情况下,所述合成线程确定当前的系统时刻与最后一次记录的图像帧存储至所述目标缓存对象中的存入时刻之间的时间差;
    若所述时间差大于或等于预设时间阈值,所述合成线程向所述应用进程发送缓存对象的第二调整请求;
    所述应用进程根据所述第二调整请求,减少所述缓存队列中空闲缓存对象的数量。
  13. 根据权利要求12所述的方法,其特征在于,所述第二调整请求中包括第二指示值,所述第二指示值用于指示缓存对象的减少数量;所述减少所述缓存队列中空闲缓存对象的数量,包括:
    所述应用进程从所述缓存队列中减少所述第二指示值的空闲缓存对象。
  14. 根据权利要求13所述的方法,其特征在于,所述应用进程从所述缓存队列中减少所述第二指示值的空闲缓存对象,包括:
    所述应用进程按照出队顺序,将所述第二指示值的空闲缓存对象的地址,从所述缓存队列中剔除。
  15. 根据权利要求1-6中任一项所述的方法,其特征在于,所述方法还包括:
    所述合成线程查询所述缓存队列中所有缓存对象的数量;
    若所述所有缓存对象的数量减少至最小缓存对象数量,所述合成线程停止向所述应用进程发送缓存对象的第二调整请求。
  16. 根据权利要求1-6中任一项所述的方法,其特征在于,所述方法还包括:
    如果所述合成线程在所述第一图像帧的合成周期执行合成操作,所述合成线程从所述缓存队列中获取目标缓存对象;所述目标缓存对象中存储了绘制和渲染后的第一图像帧;
    所述合成线程对所述绘制和渲染后的第一图像帧进行合成操作。
  17. 一种电子设备,其特征在于,所述电子设备包括存储器、显示屏和一个或多个处理器;所述存储器、所述显示屏与所述处理器耦合;所述存储器中存储有计算机程序代码,所述计算机程序代码包括计算机指令,当所述计算机指令被所述处理器执行时,使得所述电子设备执行如权利要求1-16中任一项所述的方法。
  18. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-16中任一项所述的方法。
PCT/CN2023/113151 2022-10-13 2023-08-15 图像处理方法和电子设备 WO2024078121A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211253545.4 2022-10-13
CN202211253545.4A CN117891422A (zh) 2022-10-13 2022-10-13 图像处理方法和电子设备

Publications (1)

Publication Number Publication Date
WO2024078121A1 true WO2024078121A1 (zh) 2024-04-18

Family

ID=90640108

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/113151 WO2024078121A1 (zh) 2022-10-13 2023-08-15 图像处理方法和电子设备

Country Status (2)

Country Link
CN (1) CN117891422A (zh)
WO (1) WO2024078121A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090002381A1 (en) * 2007-06-28 2009-01-01 Apple Inc. Media synchronization via image queue
US20150161756A1 (en) * 2013-12-05 2015-06-11 DeNA Co., Ltd. Image processing device, and non-transitory computer-readable storage medium storing image processing program
US20170365086A1 (en) * 2016-06-17 2017-12-21 The Boeing Company Multiple-pass rendering of a digital three-dimensional model of a structure
CN112422873A (zh) * 2020-11-30 2021-02-26 Oppo(重庆)智能科技有限公司 插帧方法、装置、电子设备及存储介质
CN114092595A (zh) * 2020-07-31 2022-02-25 荣耀终端有限公司 一种图像处理方法及电子设备
CN114579075A (zh) * 2022-01-30 2022-06-03 荣耀终端有限公司 数据处理方法和相关装置

Also Published As

Publication number Publication date
CN117891422A (zh) 2024-04-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23876326

Country of ref document: EP

Kind code of ref document: A1