CN117130774A - Thread acceleration processing method and device

Thread acceleration processing method and device

Info

Publication number
CN117130774A
Authority
CN
China
Prior art keywords
thread
rendering
main
layer
rendering thread
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310493154.8A
Other languages
Chinese (zh)
Other versions
CN117130774B (en)
Inventor
姜仕双
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202310493154.8A
Publication of CN117130774A
Application granted
Publication of CN117130774B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

The embodiment of the application provides a thread acceleration processing method and device, relates to the field of terminals, and can prevent an application program from stuttering, thereby improving user experience. The method includes the following steps: first, the electronic device receives a first operation of a user on an interface of a first application, where the first operation is used to trigger the electronic device to display target information; then, the electronic device acquires a thread identifier of a rendering thread, where the rendering thread is used to execute layer rendering to obtain a rendered layer, and the rendered layer corresponds to the target information; the electronic device determines, according to the thread identifier of the rendering thread, a main thread that has a wake-up relationship with the rendering thread; and the electronic device performs acceleration processing on the rendering thread and the main thread.

Description

Thread acceleration processing method and device
Technical Field
The present application relates to the field of terminals, and in particular, to a method and an apparatus for thread acceleration processing.
Background
With the development of electronic technology, electronic devices (such as mobile phones, tablet computers or smart watches) have more and more functions. For example, various application programs are installed in the electronic device, and the electronic device can realize various functions through the various application programs.
Currently, applications running on electronic devices may define their own critical threads, which may include a main thread and a rendering thread. When the resources allocated to these critical threads are insufficient, the application drops frames and stutters, and the user experience suffers. How to identify these critical threads and schedule resources for them in time, so as to keep the application from stuttering, is therefore an urgent problem to be solved.
Disclosure of Invention
The embodiment of the application provides a thread acceleration processing method and a thread acceleration processing apparatus, which can prevent an application program from stuttering and can improve user experience.
In a first aspect, an embodiment of the present application provides a thread acceleration processing method, applied to an electronic device, including: the electronic device receives a first operation of a user on an interface of a first application, where the first operation is used to trigger the electronic device to display target information; the electronic device acquires a thread identifier of a rendering thread, where the rendering thread is used to execute layer rendering to obtain a rendered layer, and the rendered layer corresponds to the target information; the electronic device determines, according to the thread identifier of the rendering thread, a main thread that has a wake-up relationship with the rendering thread; and the electronic device performs acceleration processing on the rendering thread and the main thread.
Based on the method provided by the embodiment of the application, after the electronic device receives the first operation for triggering the display of the target information, it can acquire the thread identifier of the rendering thread and determine, according to that identifier, the main thread that has a wake-up relationship with the rendering thread, and then accelerate both the rendering thread and the main thread. Frame loss and stuttering caused by slow processing of the rendering thread and the main thread can thus be avoided, improving the user experience.
In one possible implementation, the acceleration processing includes at least one of: adjusting the rendering thread and the main thread from a first-priority scheduling group to a second-priority scheduling group, where threads in the second-priority scheduling group have a higher scheduling priority than threads in the first-priority scheduling group; performing frequency-boosting processing for the rendering thread and the main thread; and raising the thread priority of the rendering thread and the main thread. In this way, the electronic device can preferentially schedule the rendering thread and the main thread to execute their tasks, avoiding frame loss and stuttering caused by slow processing of these threads and thereby improving the user experience.
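These measures map naturally onto standard Linux thread-tuning primitives. A minimal sketch follows; the VIP scheduling group and its cgroup path are vendor-specific assumptions (the patent does not specify an interface), and MoveToVipGroup/RaiseThreadPriority are illustrative names:

```cpp
// Hedged sketch of the acceleration measures, assuming a vendor kernel that
// exposes a VIP scheduling group as a cgroup; the path below is hypothetical,
// and frequency boosting is left as a comment (the cpufreq hook is
// platform-dependent).
#include <fcntl.h>
#include <sys/resource.h>
#include <unistd.h>
#include <cstdio>

// Move one thread into the higher-priority (VIP) scheduling group.
bool MoveToVipGroup(pid_t tid) {
    int fd = open("/dev/cpuset/vip/tasks", O_WRONLY);  // hypothetical path
    if (fd < 0) return false;
    char buf[16];
    int len = snprintf(buf, sizeof(buf), "%d", tid);
    bool ok = write(fd, buf, len) == len;
    close(fd);
    return ok;
}

// Raise the thread's priority within its current scheduling class.
bool RaiseThreadPriority(pid_t tid, int nice_value = -10) {
    // On Linux, PRIO_PROCESS with a tid adjusts that single thread.
    return setpriority(PRIO_PROCESS, tid, nice_value) == 0;
}

void AccelerateCriticalThreads(pid_t render_tid, pid_t ui_tid) {
    for (pid_t tid : {render_tid, ui_tid}) {
        MoveToVipGroup(tid);
        RaiseThreadPriority(tid);
        // Frequency boosting would additionally raise the frequency of the
        // CPU the thread runs on via the cpufreq governor (vendor hook).
    }
}
```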
In one possible implementation, before the electronic device acquires the thread identifier of the rendering thread, the method further includes: the electronic device inserts an instrumentation function at the insertion point of the enqueue function corresponding to the rendering thread. The electronic device acquiring the thread identifier of the rendering thread includes: when execution reaches the insertion point of the enqueue function, the electronic device acquires the instrumentation information, which includes the thread identifier of the rendering thread. In this way, the electronic device can obtain the thread identifier of the rendering thread through instrumentation, then determine the main thread that has a wake-up relationship with the rendering thread according to that identifier, and further accelerate the rendering thread and the main thread, so that frame loss and stuttering caused by slow processing of these threads can be avoided and user experience improved.
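As a rough illustration of the instrumentation idea, the sketch below hooks an enqueue (queueBuffer-like) function so that the id of whichever thread enqueues a rendered buffer is captured as the rendering thread id. HookedQueueBuffer, OriginalQueueBuffer, and ReportRenderTid are assumed names, not actual Android framework symbols:

```cpp
#include <sys/syscall.h>
#include <unistd.h>

using status_t = int;

// Assumed bridge that delivers the RTID to the kernel-side message
// processing module (e.g., via a dedicated system call or ioctl).
void ReportRenderTid(pid_t rtid);

// Stands in for the real enqueue function of the buffer queue.
status_t OriginalQueueBuffer();

// Instrumentation function placed at the insertion point of the enqueue
// function: whichever thread enqueues a rendered buffer is, by definition,
// the rendering thread of that layer, regardless of its thread name.
static void InstrumentationProbe() {
    pid_t rtid = static_cast<pid_t>(syscall(SYS_gettid));
    ReportRenderTid(rtid);  // instrumentation info: the RTID
}

status_t HookedQueueBuffer() {
    InstrumentationProbe();        // runs when execution reaches the insertion point
    return OriginalQueueBuffer();  // then continue with the original enqueue
}
```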
In one possible implementation, the electronic device determining, according to the thread identifier of the rendering thread, the main thread that has a wake-up relationship with the rendering thread includes: based on important event information recorded in the current drawing-frame period, the electronic device backtracks, starting from the rendering thread, at least one thread that has a wake-up relationship within the current drawing-frame period, where the important event information includes wake-up events between threads, and a wake-up relationship represents which thread wakes up which; the electronic device determines at least one target path according to the wake-up relationships among the at least one thread; and the electronic device determines the main thread according to the at least one target path, the main thread being the thread that most often serves as the end point of a path among the at least one target path. In this way the main thread can be determined and then accelerated, so that frame loss and stuttering caused by slow processing of the main thread can be avoided and user experience improved.
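A minimal sketch of this backtracking, under the assumption that the recorded wake-up events of the current drawing-frame period have been indexed as a "who woke me" map; the path enumeration and the endpoint count follow the description above, and the depth cap stands in for the stopping rule discussed in the next implementation:

```cpp
#include <map>
#include <unordered_map>
#include <vector>

// Waker threads recorded for each awakened thread in the current
// drawing-frame period (built from the recorded wake-up events).
using WakerIndex = std::unordered_map<int, std::vector<int>>;

// Enumerate paths starting at `tid` along "who woke me" edges; a thread
// with no recorded waker ends a path.
static void Walk(int tid, const WakerIndex& woken_by,
                 std::map<int, int>& endpoint_count, int depth) {
    auto it = woken_by.find(tid);
    if (depth == 0 || it == woken_by.end() || it->second.empty()) {
        ++endpoint_count[tid];  // this thread is the end point of one path
        return;
    }
    for (int waker : it->second)
        Walk(waker, woken_by, endpoint_count, depth - 1);
}

// The main thread is the thread that most often serves as a path end point.
int FindMainThread(int render_tid, const WakerIndex& woken_by) {
    std::map<int, int> endpoint_count;
    Walk(render_tid, woken_by, endpoint_count, /*depth=*/16);
    int best_tid = -1, best = 0;
    for (const auto& [tid, n] : endpoint_count)
        if (n > best) { best = n; best_tid = tid; }
    return best_tid;
}
```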
In one possible implementation, the important event information further includes at least one of: an identifier of the processor corresponding to an awakened thread, the scheduling group corresponding to the awakened thread, the running duration of the awakened thread on the processor, and the running frequency of the awakened thread on the processor, where the awakened threads include the rendering thread and the main thread. The electronic device accelerating the rendering thread and the main thread includes: the electronic device performs acceleration processing on the rendering thread and the main thread according to at least one of the identifier of the processor corresponding to the awakened thread, the scheduling group corresponding to the awakened thread, and the running frequency of the awakened thread on the processor. For example, if, before the acceleration processing, the scheduling group corresponding to the rendering thread and the main thread is determined from the important event information to be the first-priority scheduling group, the two threads may be adjusted from the first-priority scheduling group (for example, the CFS scheduling group) to the second-priority scheduling group (for example, the VIP scheduling group), whose threads have a higher scheduling priority; the electronic device can then preferentially allocate processing resources to threads in the second-priority scheduling group. As another example, assume the processor's available frequencies are 1.0 GHz, 1.5 GHz, and 2.0 GHz. If the important event information shows that, before the acceleration processing, the processor corresponding to the rendering thread and the main thread runs at 1.0 GHz, its frequency can be raised to 1.5 GHz or 2.0 GHz, improving the execution efficiency of the rendering thread and the main thread.
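The frequency example amounts to stepping up to the next available frequency. A small sketch of that selection follows; actually applying the chosen frequency would go through the platform's cpufreq interface, which is vendor-dependent:

```cpp
// Frequencies are in kHz, matching the units used by Linux cpufreq sysfs.
#include <algorithm>
#include <vector>

// Returns the next higher available frequency, or the current one if the
// processor is already at its maximum.
long NextHigherFreq(const std::vector<long>& available_khz, long current_khz) {
    std::vector<long> sorted = available_khz;
    std::sort(sorted.begin(), sorted.end());
    auto it = std::upper_bound(sorted.begin(), sorted.end(), current_khz);
    return it != sorted.end() ? *it : current_khz;
}

// NextHigherFreq({1000000, 1500000, 2000000}, 1000000) == 1500000,
// matching the 1.0 GHz -> 1.5 GHz boost described above.
```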
In one possible implementation, the method further includes: when the electronic device backtracks to a thread that was awakened by a kernel thread, a system interrupt, or an application thread, the backtracking is not continued.
In one possible implementation, the method further includes: the electronic device compares the thread identifier of the rendering thread with standard thread identifiers, and the acceleration processing is performed when the thread identifier of the rendering thread differs from the standard thread identifiers. It can be understood that if the RTID of the currently acquired rendering thread is the same as an RTID in a preset table, the thread is a standard rendering thread that already has a corresponding acceleration scheme, so the acceleration processing need not be executed again. In this way, repeated acceleration processing of standard rendering threads is avoided.
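A minimal sketch of this guard, assuming the preset table of standard RTIDs is kept as a set:

```cpp
#include <unordered_set>

bool ShouldAccelerate(int rtid, const std::unordered_set<int>& standard_rtids) {
    // An RTID already in the table belongs to a standard rendering thread,
    // which is accelerated through its own existing scheme.
    return standard_rtids.find(rtid) == standard_rtids.end();
}
```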
In one possible implementation, the thread identifier of the rendering thread is a thread identifier customized by the first application.
In one possible implementation, the rendering thread includes at least one of: a barrage rendering thread, a navigation rendering thread, a game rendering thread, and an applet rendering thread. Therefore, frame-loss problems in frame-drawing scenarios such as navigation, barrage, games, and applets can be avoided, improving the user experience.
In one possible implementation, when the rendering thread is a barrage rendering thread, the rendered layer is a barrage layer; when the rendering thread is a navigation rendering thread, the rendered layer is a navigation layer; when the rendering thread is a game rendering thread, the rendered layer is a game layer; and when the rendering thread is an applet rendering thread, the rendered layer is an applet layer. Therefore, frame-loss problems in frame-drawing scenarios such as navigation, barrage, games, and applets can be avoided, improving the user experience.
In one possible implementation, the rendering thread being used to perform rendering processing on the target information includes: the rendering thread is used to execute layer drawing and layer rendering to obtain the rendered layer. That is, the rendering thread may perform both layer drawing and layer rendering.
In one possible implementation, the electronic device includes a rendering thread identification module, a message processing module, a main thread identification module, and a critical thread acceleration module. The electronic device acquiring the thread identifier of the rendering thread includes: the rendering thread identification module inserts an instrumentation function at the insertion point of the enqueue function corresponding to the rendering thread in advance; when execution reaches the insertion point of the enqueue function, the rendering thread identification module acquires the instrumentation information, which includes the thread identifier of the rendering thread. The electronic device determining, according to the thread identifier of the rendering thread, the main thread that has a wake-up relationship with the rendering thread includes: the rendering thread identification module sends the thread identifier of the rendering thread to the message processing module; the message processing module sends it to the main thread identification module; and the main thread identification module determines, according to the thread identifier of the rendering thread, the main thread that has a wake-up relationship with the rendering thread. The electronic device accelerating the rendering thread and the main thread according to their thread identifiers includes: the main thread identification module sends critical thread group information, including the thread identifier of the rendering thread and the thread identifier of the main thread, to the critical thread acceleration module; and the critical thread acceleration module accelerates the critical thread group, which includes the rendering thread and the main thread. In this way, the electronic device can preferentially schedule the rendering thread and the main thread to execute their tasks, avoiding frame loss and stuttering caused by slow processing of these threads and thereby improving the user experience.
In one possible implementation, the electronic device further includes a layer processing module and an image composition system, and before the electronic device acquires the thread identifier of the rendering thread, the method further includes: in response to a user operation, the main thread sends a layer creation request to the layer processing module; the layer processing module sends a request for a buffer queue to the image composition system; the image composition system returns the corresponding buffer queue to the layer processing module; and the layer processing module returns the attribute information of the layer to the main thread.
In one possible implementation, the method further includes: the main thread requests a vertical synchronization signal from the image composition system; the image composition system returns the vertical synchronization signal to the main thread; the main thread wakes up the rendering thread; and the rendering thread draws and renders the image.
In one possible implementation, the method further includes: the rendering thread sends a buffer request command to the image composition system; the image composition system sends an instruction indicating buffer dequeue to the rendering thread; and the rendering thread stores the drawn and rendered image into the buffer queue.
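Taken together, the three implementations above describe a per-frame producer loop: the main thread waits for Vsync and wakes the rendering thread, which draws, renders, and enqueues the buffer. The sketch below mimics that flow with plain C++ threads; WaitForVsync, DrawAndRenderFrame, and QueueRenderedBuffer are assumed stand-ins for the real framework path:

```cpp
#include <condition_variable>
#include <mutex>

void WaitForVsync();         // blocks until the image composition system signals Vsync
void DrawAndRenderFrame();   // layer drawing + layer rendering
void QueueRenderedBuffer();  // enqueue into the buffer queue

std::mutex m;
std::condition_variable cv;
bool frame_requested = false;

void MainThreadLoop() {
    for (;;) {
        WaitForVsync();
        {
            std::lock_guard<std::mutex> lk(m);
            frame_requested = true;
        }
        cv.notify_one();  // this wake-up is exactly the recorded wake event
    }
}

void RenderThreadLoop() {
    for (;;) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return frame_requested; });
        frame_requested = false;
        lk.unlock();
        DrawAndRenderFrame();
        QueueRenderedBuffer();
    }
}
```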
In a second aspect, the present application provides a chip system including one or more interface circuits and one or more processors, interconnected by lines. The chip system may be applied to an electronic device including a communication module and a memory. The interface circuit is configured to receive signals from the memory of the electronic device and send the received signals to the processor, the signals including computer instructions stored in the memory. When the computer instructions are executed by the processor, the electronic device performs the method described in the first aspect and any one of its possible designs.
In a third aspect, the present application provides a computer-readable storage medium including computer instructions. When the computer instructions run on an electronic device (such as a mobile phone), the electronic device is caused to perform the method described in the first aspect and any one of its possible designs.
In a fourth aspect, the present application provides a computer program product which, when run on a computer, causes the computer to carry out the method according to the first aspect and any one of its possible designs.
In a fifth aspect, an embodiment of the present application provides a thread acceleration processing apparatus, including a processor, the processor being coupled to a memory, the memory storing program instructions that, when executed by the processor, cause the apparatus to implement the method according to the first aspect and any one of the possible designs thereof. The apparatus may be an electronic device or a server device; or may be an integral part of an electronic device or server device, such as a chip.
In a sixth aspect, an embodiment of the present application provides a thread acceleration processing apparatus, where the apparatus may be functionally divided into different logic units or modules, and each unit or module performs a different function, so that the apparatus performs the method described in the first aspect and any possible design manner thereof.
For the advantages achieved by the chip system of the second aspect, the computer-readable storage medium of the third aspect, the computer program product of the fourth aspect, and the apparatuses of the fifth and sixth aspects, reference may be made to the advantages of the first aspect and any of its possible designs; they are not repeated here.
Based on the method provided by the embodiment of the application, after the electronic device receives the first operation for triggering the display of the target information, it can acquire the thread identifier of the rendering thread and determine, according to that identifier, the main thread that has a wake-up relationship with the rendering thread, and then accelerate both the rendering thread and the main thread. Frame loss and stuttering caused by slow processing of the rendering thread and the main thread can thus be avoided, improving the user experience.
Drawings
FIG. 1 is a schematic diagram of an interface display processing flow in the prior art;
FIG. 2 is a schematic diagram of an interface display in the prior art;
FIG. 3 is a schematic diagram of an interface display processing flow in the prior art;
FIG. 4 is a schematic diagram of an interface display in the prior art;
FIG. 5 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a software architecture of an electronic device according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an applicable scenario provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of interactions between modules according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a layer according to an embodiment of the present application;
FIG. 10 is a schematic diagram of the relationship among a rendering thread, an enqueue function, and an instrumentation function according to an embodiment of the present application;
FIG. 11 is a schematic diagram of wake-up relationships between threads according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a wake-up relationship between threads according to an embodiment of the present application;
FIG. 13 is a schematic diagram of an acceleration process according to an embodiment of the present application;
FIG. 14 is a schematic diagram of yet another acceleration process according to an embodiment of the present application;
FIG. 15 is a schematic diagram of an interface display processing flow according to an embodiment of the present application;
FIG. 16 is a schematic diagram of an interface display according to an embodiment of the present application;
FIG. 17 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
For clarity and conciseness in the description of the embodiments below, a brief introduction to related concepts or technologies is first given:
Frame: a single picture, the smallest unit in interface display. A frame can be understood as a still picture; displaying many successive frames in rapid succession creates the illusion of motion. The frame rate is the number of frames refreshed in one second, which can also be understood as the number of times per second the graphics processor in the terminal device refreshes the picture. A high frame rate produces smoother, more lifelike animation: the more frames per second, the smoother the displayed motion.
It should be noted that before a frame is displayed on the interface, it usually goes through drawing, rendering, composition, and other processes.
Frame drawing: picture drawing of the display interface. The display interface may be composed of one or more views; each view may be drawn by a visual control of the view system, and each view is composed of sub-views, where a sub-view corresponds to a widget in the view. For example, one sub-view corresponds to a symbol in a picture view.
Frame rendering: performing coloring operations on the drawn view, adding 3D effects, and the like. For example, the 3D effect may be a lighting effect, a shadow effect, or a texture effect.
Frame composition: the process of combining one or more rendered views into a display interface.
Thread: the smallest unit of execution scheduling in an operating system. A thread is contained in a process and is the actual operating unit of the process. A thread is a single sequential control flow in a process; multiple threads may run concurrently in one process, each performing a different task. Threads are the basic units of independent scheduling and dispatch. A thread may be a kernel thread scheduled by the operating system kernel, a user thread scheduled by the user process itself, or a thread scheduled jointly by the kernel and the user process.
Rendering thread: a thread that performs frame drawing and/or frame rendering.
Main thread: a thread that creates or wakes up a rendering thread.
Currently, applications running on electronic devices may define their own critical threads, which may include a main thread and a rendering thread. When the resources allocated to these critical threads are insufficient, the application drops frames and stutters, and the user experience suffers. How to identify these critical threads and schedule resources for them in time, so as to keep the application from stuttering, is therefore an urgent problem to be solved.
Fig. 1 is a schematic diagram of the barrage display processing flow of an electronic device. The content displayed by the electronic device corresponds to frame 1, frame 2, and frame 3 in chronological order. The electronic device may display based on a vertical synchronization (Vsync) signal, which synchronizes the drawing, rendering, composition, and screen-refresh display of images.
Taking the display of frame 1 as an example: first, the main thread corresponding to the barrage layer of a video application draws frame 1, and the rendering thread corresponding to that layer (e.g., thread 205) renders frame 1. After frame 1 is rendered, the rendering thread sends the rendered frame 1 to the image composition system (e.g., SurfaceFlinger (SF)), which composes it. After frame 1 is composed, the electronic device can invoke the kernel layer to start the display driver and show the content of frame 1 on the screen (display). Frame 3 is drawn, rendered, composed, and displayed in a similar way and is not described again here. Note that while rendering frame 1 and frame 3, the rendering thread (thread 205) is in the Running state for most of the time, so frame 1 and frame 3 can each be rendered within one Vsync period and sent to the image composition system in time. When rendering frame 2, however, the rendering thread (thread 205) stays in the Runnable (waiting-to-run) state for a long time, so the rendering of frame 2 cannot finish within one Vsync period and spans several Vsync periods. Frame 2 therefore cannot be composed and displayed in time, and the barrage stutters.
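The symptom described for frame 2, a long stay in the Runnable state within one Vsync period, can be quantified from recorded scheduler state changes. A hedged sketch follows; the sample layout is an assumption, not a trace format specified by the patent:

```cpp
#include <algorithm>
#include <vector>

enum class ThreadState { kRunning, kRunnable, kSleeping };

struct SchedSample {
    long timestamp_ns;  // when the thread entered `state`
    ThreadState state;
};

long RunnableNsInPeriod(const std::vector<SchedSample>& samples,
                        long period_start_ns, long period_end_ns) {
    long runnable_ns = 0;
    for (size_t i = 0; i < samples.size(); ++i) {
        long begin = std::max(samples[i].timestamp_ns, period_start_ns);
        long end = (i + 1 < samples.size()) ? samples[i + 1].timestamp_ns
                                            : period_end_ns;
        end = std::min(end, period_end_ns);
        if (begin < end && samples[i].state == ThreadState::kRunnable)
            runnable_ns += end - begin;
    }
    return runnable_ns;  // a large share of the period indicates starvation
}
```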
For example, as shown in fig. 2, the electronic device may be a mobile phone running a video application. As shown in fig. 2 (a), the phone may display a video interface 1001, which may include a barrage 1002, a barrage 1004, a video character 1003, and other video content (e.g., cliffs). The barrages (1002, 1004) scroll as the video plays.
During video playback, if insufficient resources are allocated to the critical threads (e.g., the rendering thread) of the barrage layer, the barrage stutters. As shown in fig. 2 (b), as the playback progresses, the phone switches from video interface 1001 to video interface 1005 and the video content changes, for example the display position of the video character 1003 changes. However, barrages 1002 and 1004 are still displayed at their original positions, i.e., the barrage display has stuttered, which degrades the user experience.
For another example, fig. 3 is a schematic diagram of the display processing flow of a map application on an electronic device. The content displayed by the electronic device corresponds to frame 1, frame 2, and frame 3 in chronological order. The electronic device may display based on the Vsync signal, which synchronizes the drawing, rendering, composition, and screen-refresh display of images.
For the drawing, rendering, and composition of frames 1 and 2 by the main thread and the rendering thread (e.g., thread GL MAP) corresponding to the navigation layer of the map application, refer to the description of fig. 1. While rendering frames 1 and 2, the rendering thread (thread GL MAP) is in the Running state for most of the time, so frames 1 and 2 can be rendered smoothly and sent to the image composition system. When rendering frame 3, however, the rendering thread (thread GL MAP) stays in the Runnable state for a long time, i.e., its resource allocation is insufficient, so it cannot finish rendering frame 3 in time (the rendering cannot be completed within one Vsync period and spans several Vsync periods). Frame 3 therefore cannot be composed and displayed in time, and the display interface of the map application stutters.
For example, as shown in fig. 4, the electronic device may be a mobile phone running a map application. As shown in fig. 4 (a), the phone may display a route selection interface 4001, which may include a start-walking-navigation control 4002. In response to the user clicking the start-walking-navigation control 4002, if insufficient resources are allocated to the critical thread (e.g., the rendering thread) corresponding to the walking navigation layer, then, as shown in fig. 4 (b), the map application stutters, the stuck interface 4003 is displayed, and the walking navigation interface cannot be shown in time, which degrades the user experience.
In the related art, the standard main thread (UIThread) and rendering thread (RenderThread) have fixed thread names and can be recognized by name for acceleration processing. For example, the main thread of one application is named com.tencent.tmgp.sgame, and the main thread of another application is named tv. The main thread may be accelerated when the application process is created. The rendering thread name may be, for example, RenderThread. All threads under the application's process can be traversed, the rendering thread identified by its thread name, and acceleration processing performed.
However, main threads and rendering threads customized by an application cannot be recognized and accelerated in this way, because their thread names are not fixed. For example, the rendering threads Thread-205 and GL-Map, customized by the video application and the map application respectively, cannot be identified and accelerated because their names are not fixed.
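For contrast, a sketch of the conventional name-based recognition is given below: it walks /proc/&lt;pid&gt;/task and matches each thread's comm against a fixed name such as "RenderThread", which is exactly why custom names like Thread-205 or GL-Map slip through:

```cpp
#include <dirent.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <vector>

std::vector<int> FindThreadsByName(int pid, const char* wanted) {
    std::vector<int> tids;
    char dir_path[64];
    snprintf(dir_path, sizeof(dir_path), "/proc/%d/task", pid);
    DIR* dir = opendir(dir_path);
    if (!dir) return tids;
    while (dirent* entry = readdir(dir)) {
        int tid = atoi(entry->d_name);
        if (tid <= 0) continue;
        char comm_path[96], comm[32] = {};
        snprintf(comm_path, sizeof(comm_path), "/proc/%d/task/%d/comm", pid, tid);
        if (FILE* f = fopen(comm_path, "r")) {
            if (fgets(comm, sizeof(comm), f)) {
                comm[strcspn(comm, "\n")] = '\0';  // strip trailing newline
                if (strcmp(comm, wanted) == 0) tids.push_back(tid);
            }
            fclose(f);
        }
    }
    closedir(dir);
    return tids;
}

// e.g. FindThreadsByName(app_pid, "RenderThread") finds the standard
// rendering thread but misses app-defined ones like "Thread-205".
```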
The embodiment of the application provides a thread acceleration processing method and apparatus that, based on the actual running state of an application program, can accurately identify the application's custom critical threads for frame drawing, which may include a main thread and a rendering thread. Moreover, resource allocation for the identified critical threads can be accelerated as needed, reducing frame loss, preventing the application from stuttering, and improving the user experience.
The embodiment of the application can be applied to frame-drawing scenarios such as sliding, navigation, barrage, games, and applets (e.g., WebView applets), and can avoid frame loss in these scenarios, thereby improving the user experience.
Fig. 5 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application.
As shown in fig. 5, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, a user identification module (subscriber identification module, SIM) card interface 195, and the like.
The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in the present embodiment does not constitute a specific limitation on the electronic apparatus 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and command center of the electronic device 100. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments, the electronic device 100 may also employ different interfaces in the above embodiments, or a combination of interfaces.
The charge management module 140 is configured to receive a charge input from a charger. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may be a liquid crystal display (LCD), a light-emitting diode (LED), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like. The ISP is used to process data fed back by the camera 193. The camera 193 is used to capture still images or video. The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The electronic device may include 1 to N cameras 193; for example, 2 front cameras and 4 rear cameras. The NPU is a neural-network (NN) computing processor; by drawing on the structure of biological neural networks, for example the transfer pattern between human brain neurons, it can rapidly process input information and can also learn continuously. Applications such as intelligent cognition of the electronic device 100, for example image recognition, face recognition, speech recognition, and text understanding, may be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement data storage functions, for example storing files such as music and video on the external memory card. The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 runs the various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system and the applications required by at least one function (such as a sound playing function or an image playing function). The data storage area may store data created during use of the electronic device 100 (such as audio data and a phone book). In addition, the internal memory 121 may include high-speed random access memory and may also include nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. The speaker 170A, also called a "horn", is used to convert audio electrical signals into sound signals. The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals. The microphone 170C, also called a "mike" or "mic", is used to convert sound signals into electrical signals. The earphone interface 170D is used to connect wired earphones.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100. The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc. The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195 to enable contact and separation with the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, micro SIM cards, and the like.
The methods in the following embodiments may be implemented in the electronic device 100 having the above-described hardware structure.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate through interfaces. In some embodiments, the Android system may include an application layer, an application framework layer, a kernel layer, and a hardware layer. It should be noted that the embodiment of the present application is illustrated with the Android system; in other operating systems (such as HarmonyOS or iOS), the scheme of the present application can be implemented as long as the functions implemented by the respective functional modules are similar to those in the embodiment of the present application.
The application layer may include a series of application packages, among other things.
As shown in fig. 6, the application packages may include video, game, map, WLAN, music, SMS, gallery, phone, navigation, and the like. Of course, the application layer may also include other application packages, such as Bluetooth, calendar, camera, and settings applications, which is not limited by the application.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. For example, an activity manager, a window manager, a content provider, a view system, a resource manager, a notification manager, etc., to which embodiments of the application are not limited in any way.
In the embodiment of the application, the application program framework layer can also comprise a layer processing module, an image synthesis system, a rendering thread identification module and the like.
The layer processing module can execute layer creation, layer drawing and other processes.
The image composition system is used to control image composition and to generate vertical synchronization (Vsync) signals.
The rendering thread identification module is configured to identify a rendering thread in a current drawing frame period, and may include, for example, a rendering thread 1 and a rendering thread 2.
The kernel layer is a layer between hardware and software. It contains at least a display driver, a camera driver, an audio driver, and a sensor driver (not shown). The kernel layer may also include a real-time (RT) scheduler, a VIP (very important person) scheduler, a completely fair scheduler (CFS)/energy-aware scheduler (EAS), and the like.
In the embodiment of the application, the kernel layer can also comprise a message processing module, a main thread identification module and a key thread acceleration module.
The message processing module is used for receiving the notification message from the rendering thread identification module. The notification message may include a thread identifier of the rendering thread identified by the rendering thread identification module. The message processing module may send the thread identification of the rendering thread to the main thread identification module.
The main thread identification module can trace back the wake-up relation between the thread running in the current drawing frame period and the rendering thread based on the important event information recorded in the current drawing frame period. And determining a main thread in the current drawing frame period based on the wake-up relation between the thread running in the current drawing frame period and the rendering thread. The main thread identification module may send the thread identification of the rendering thread and the thread identification of the main thread to the critical thread acceleration module. The thread identification of the rendering thread may be RTID (render thread identity), for example, and the thread identification of the main thread may be UTID (user interface thread identity), for example.
The key thread acceleration module can allocate resources of the rendering thread and the main thread according to the RTID of the rendering thread and the UTID of the main thread, so that acceleration processing of the rendering thread and the main thread is realized.
The hardware layer includes a central processing unit (CPU), a GPU, double data rate synchronous dynamic random access memory (DDR SDRAM), and the like. Of course, the hardware layer may also include other hardware, such as a display and a camera.
The Android system may further include other layers (not shown in the figure), such as the Android Runtime and system libraries, a hardware abstraction layer (HAL), and the like, which are not limited by the present application.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio video encoding formats, such as: MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
OpenGL ES is used to implement three-dimensional graphics drawing, image rendering, compositing, and layer processing, among others.
SGL is the drawing engine for 2D drawing.
The Android Runtime includes a core library and a virtual machine, and is responsible for scheduling and management of the Android system. The core library consists of two parts: the functions that the Java language needs to call, and the core library of Android. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files, and performs functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The HAL is an encapsulation of the Linux kernel drivers: it provides interfaces upward and shields the implementation details of the underlying hardware.
The following describes the software modules and interactions between modules involved in the thread acceleration processing method provided by the embodiment of the present application.
As shown in fig. 6, the rendering thread identification module may identify the rendering threads in the current drawing-frame period, which may include, for example, rendering thread 1 and rendering thread 2. After identifying them, the rendering thread identification module can send a notification message to the message processing module of the kernel layer through a system call; the notification message may include the RTIDs of the identified rendering threads. The message processing module can send the RTID of a rendering thread to the main thread identification module, which, based on the important event information recorded in the current drawing-frame period, backtracks the wake-up relationships between the threads running in that period and the rendering thread, and determines the main thread of the current drawing-frame period from those relationships. The main thread identification module can then send the RTID of the rendering thread and the UTID of the main thread to the critical thread acceleration module, which allocates resources to the rendering thread and the main thread according to the RTID and the UTID, thereby accelerating both threads, avoiding the application stuttering caused by frame loss, and improving the user experience.
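The hand-offs in this pipeline can be pictured as two small messages. The layouts below are illustrative assumptions; the patent only specifies that the RTID reaches the kernel via a system call and that the {RTID, UTID} group reaches the acceleration module:

```cpp
#include <cstdint>

struct RenderTidMsg {       // rendering thread identification -> message processing
    uint32_t pid;           // owning application process
    uint32_t rtid;          // identified rendering thread
    uint64_t frame_seq;     // drawing-frame period in which the RTID was seen
};

struct KeyThreadGroupMsg {  // main thread identification -> critical thread acceleration
    uint32_t rtid;          // rendering thread id
    uint32_t utid;          // main (UI) thread id found by wake-up backtracking
};
```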
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
In the description of the present application, terms and English abbreviations, such as wake-up relation, target path, and drawing frame period, are given as examples for convenience of description and should not be construed as limiting the present application. The present application does not exclude the possibility that other terms performing the same or similar functions are defined in existing or future protocols.
In the description of the application, unless otherwise indicated, "at least one" means one or more, and "a plurality" means two or more. In addition, to facilitate a clear description of the technical solutions of the embodiments of the present application, the words "first", "second", etc. are used to distinguish between identical or similar items having substantially the same function and effect. Those skilled in the art will appreciate that the words "first", "second", etc. do not limit the quantity or order of execution, and that items qualified by "first" and "second" are not necessarily different.
The application scenario provided by the embodiment of the application is illustrated below with reference to the accompanying drawings.
For example, as shown in fig. 7 (a), the electronic device may receive an operation of a user starting playback (for example, an operation of clicking the play button 702) at the video playing interface 701 of a video application; in response to the user's operation, the electronic device performs processes such as frame drawing, rendering, and composition, and displays the video content and the bullet screen content.
As shown in fig. 7 (b), the electronic device may receive an operation of a user starting navigation (for example, an operation of clicking the start walking navigation button 704) at the route selection interface 703 of a map application; in response to the user's operation, the electronic device performs processes such as frame drawing, rendering, and composition, and displays the walking navigation interface.
As shown in fig. 7 (c), the electronic device may receive, at the WeChat display interface 705, an operation of a user opening an applet (for example, an operation of clicking the applet entry 706); in response to the user's operation, the electronic device performs processes such as frame drawing, rendering, and composition, and displays the interface of the applet.
As shown in fig. 7 (d), the electronic device may receive a sliding operation of a user (for example, an upward or downward sliding operation) on the display interface 707 of a gallery application; in response to the user's operation, the electronic device performs processes such as frame drawing, rendering, and composition, and displays the post-slide interface.
It should be understood that the application scenarios provided in the embodiment of the present application may further include other interfaces of other application programs, and the user's operations may further include other operations (for example, double click, triple click, etc.); the application scenarios and the user's operations are not specifically limited in the embodiment of the present application.
For easy understanding, the following describes a process of interaction between each module involved in the thread acceleration processing method provided in the embodiment of the present application with reference to fig. 8.
The modules related to the thread acceleration processing method provided by the embodiment of the present application may include: the main thread and the rendering thread of an application program, the layer processing module of the application framework layer, the image synthesis system (SurfaceFlinger), the rendering thread identification module, the message processing module of the kernel layer, the main thread identification module, and the key thread acceleration module.
801. The electronic device receives a first operation of a user on an interface of a first application, and in response to the first operation, a main thread of the first application sends a layer creation request to a layer processing module of an application framework layer.
The first application may include a video application, a navigation application, a social application, a search engine application, and the like, which is not limited by the present application.
The first operation is used for triggering the electronic equipment to display target information. The target information may include, for example, bullet screen information, navigation information, game information, applet information, and the like. The first operation may include clicking, sliding, etc., and the present application is not limited thereto.
Illustratively, taking the first application as a video application as an example, as shown in (a) of fig. 7, in response to an operation of the user clicking the play button 702 (an example of the first operation), the main thread of the video application may send a layer creation request to the layer processing module of the application framework layer. The layer creation request may include an ID of the layer object and of the buffer queue corresponding to the layer.
Illustratively, the layer to be created may include a bullet screen layer as shown in (a) of fig. 9, or a video layer as shown in (b) of fig. 9. Of course, the layers to be created may also include other layers, such as a program window layer, a station logo layer, and the like, which is not limited by the present application.
802. The layer processing module of the application framework layer sends an acquire-buffer-queue (BufferQueue) request to SurfaceFlinger.
The acquire-buffer-queue request is used to request SurfaceFlinger to allocate a BufferQueue for the layer. The request may carry the ID of the buffer queue corresponding to the layer.
803. SurfaceFlinger returns the BufferQueue corresponding to the layer to the layer processing module.
It is understood that different layers may correspond to different BufferQueues.
For example, the barrage layer may correspond to BufferQueue 1, where BufferQueue 1 may include one or more buffers. The video layer may correspond to BufferQueue 2, where BufferQueue 2 may include one or more buffers. BufferQueue 1 is different from BufferQueue 2.
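For concreteness, the per-layer association can be pictured as a small piece of bookkeeping. The following C++ sketch is purely illustrative: the types and names are stand-ins and not the real Android BufferQueue API.

    #include <cstdint>
    #include <memory>
    #include <unordered_map>
    #include <vector>

    // Illustrative stand-ins, not the real Android BufferQueue API.
    struct Buffer { std::vector<uint8_t> pixels; };
    struct LayerBufferQueue { std::vector<Buffer> buffers; };  // one or more buffers

    using LayerId = uint64_t;

    // Each layer (e.g. the barrage layer and the video layer) is associated
    // with its own buffer queue, so BufferQueue 1 and BufferQueue 2 are distinct.
    std::unordered_map<LayerId, std::shared_ptr<LayerBufferQueue>> gLayerQueues;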
804. The layer processing module returns attribute information of the layer to the main thread.
Among them, attribute information of the layers includes, but is not limited to, height, width, center coordinates, scaling attributes, rotation attributes, and the like.
805. The main thread requests a Vsync signal from SurfaceFlinger.
The Vsync signal may be Vsync-APP, which is used to trigger the rendering process.
806. SurfaceFlinger returns a Vsync signal to the main thread.
After the main thread receives the Vsync signal, it may calculate the frame interval according to the timestamp of the Vsync signal. For example, the main thread may calculate the difference between the timestamp of the currently received Vsync-APP signal and the timestamp of the previously received Vsync-APP signal; this difference is the frame interval corresponding to drawing the previous frame. The main thread may also calculate the displacement, which is the product of the frame interval and the speed. The main thread may determine the speed based on a pre-stored speed curve.
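For illustration, the frame-interval and displacement computation just described can be sketched as follows; the class name, the constant speed, and the values are assumptions for the sake of the example, not part of the patent.

    #include <cstdint>

    // Hedged sketch of the per-Vsync computation: frame interval from
    // consecutive Vsync-APP timestamps, displacement = interval x speed.
    class FrameClock {
    public:
        // Called with each Vsync-APP timestamp (ns); returns the displacement
        // to be sent to the rendering thread for the current frame.
        double onVsync(int64_t vsyncTimestampNs) {
            double displacement = 0.0;
            if (lastVsyncNs_ != 0) {
                // Difference between this Vsync-APP timestamp and the last one.
                double frameIntervalS = (vsyncTimestampNs - lastVsyncNs_) / 1e9;
                // Displacement is the product of the frame interval and the speed.
                displacement = frameIntervalS * speedFromCurve();
            }
            lastVsyncNs_ = vsyncTimestampNs;
            return displacement;
        }
    private:
        // Stand-in for the pre-stored speed curve; constant for illustration.
        double speedFromCurve() const { return 120.0; /* e.g. pixels/second */ }
        int64_t lastVsyncNs_ = 0;
    };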
807. The main thread wakes up the rendering thread.
The main thread may send the frame interval and the displacement of the current frame to the rendering thread to wake up the rendering thread. After being awakened, the rendering thread starts drawing and rendering the image.
It should be noted that the name of the rendering thread of the application (the first application) may be customized by the application. For example, a video application may name the rendering thread of the barrage layer thread 205. For another example, a navigation application may name the rendering thread of the navigation layer thread 205.
808. The rendering thread draws the rendered image.
The rendering thread is used for executing layer rendering to obtain a rendered layer, and the rendered layer corresponds to the target information.
For example, in the case where the rendering thread is a barrage rendering thread, the rendered layer is a barrage layer; in the case that the rendering thread is a navigation rendering thread, the rendered layer is a navigation layer; in the case that the rendering thread is a game rendering thread, the rendered layer is a game layer; in the case where the rendering thread is an applet rendering thread, the rendered layer is an applet layer.
809. The rendering thread sends a request-cache command to SurfaceFlinger.
The request-cache command is used to request a buffer from SurfaceFlinger in which to store the drawn and rendered image.
810. SurfaceFlinger sends an instruction indicating cache dequeuing to the rendering thread.
After receiving the request-cache command sent by the rendering thread, SurfaceFlinger can reserve a space for storing the drawn and rendered image, and send the instruction indicating cache dequeuing to the rendering thread.
811. The rendering thread stores the drawn and rendered image into the buffer queue.
After receiving the instruction indicating cache dequeuing, the rendering thread draws and renders the image according to the displacement, and stores the drawn and rendered image into the buffer queue (BufferQueue) by calling a queue buffer function. The buffer queue may also be referred to as a cache queue, and the present application is not limited thereto.
Further, the rendering thread notifies SurfaceFlinger to read the rendering data from the BufferQueue. SurfaceFlinger may synthesize the rendering data. After the synthesis is completed, the electronic device can start the display driver by calling the kernel layer, and display the content corresponding to the rendered and synthesized frame on the screen (display screen).
812. The rendering thread identification module obtains rendering thread information.
In the embodiment of the present application, the electronic device can acquire the thread identifier of the rendering thread through the rendering thread identification module. The rendering thread identification module inserts an instrumentation function at an insertion point of the queue buffer function in advance; when execution reaches that insertion point, the instrumentation function runs, so the thread identifier of the rendering thread can be obtained in real time. The queue buffer function is used to submit the rendered image, i.e., to fill the rendered image into the buffer queue (BufferQueue).
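A minimal sketch of such a probe is given below. The hook mechanism and every name in it are illustrative assumptions rather than the actual implementation; only syscall(SYS_gettid) is a standard Linux call.

    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <unistd.h>

    // Stand-in for forwarding the captured RTID to the message processing
    // module of the kernel layer (done via a system call in the design above).
    static void reportRenderThread(pid_t rtid) { (void)rtid; }

    // Instrumentation function executed at the insertion point of the queue
    // buffer function: the thread submitting the rendered image is, by this
    // method's logic, the rendering thread, so its identifier is captured here.
    static void onQueueBufferProbe() {
        pid_t rtid = static_cast<pid_t>(syscall(SYS_gettid));
        reportRenderThread(rtid);
    }

    // Sketch of the instrumented enqueue path; the probe runs each time a
    // drawn and rendered image is filled into the buffer queue.
    void queueBufferInstrumented(/* buffer, fence, ... */) {
        onQueueBufferProbe();  // instrumentation function inserted in advance
        // ... original queue buffer logic: enqueue the rendered image ...
    }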
For example, as shown in fig. 10, after the rendering thread is awakened by the main thread, it may perform a frame drawing and rendering operation to obtain rendering data. For example, the rendering thread may perform a drawing and rendering operation on frame 1 in the former Vsync period, obtain the rendering data, and store it into the buffer queue; it performs a drawing and rendering operation on frame 2 in the latter Vsync period, obtains the rendering data, and stores it into the buffer queue. When the rendering thread stores the rendering data into the buffer queue, the thread identifier of the rendering thread can be obtained in real time through the instrumentation function inserted at the insertion point of the queue buffer function.
In one possible design, the electronic device may filter the thread identifiers of the rendering threads obtained through the instrumentation. For example, the electronic device may compare the currently acquired RTID of the rendering thread with the RTIDs in a preset table, which stores the identifiers of standard threads. If the currently acquired RTID of the rendering thread is the same as an RTID in the preset table, the currently acquired rendering thread is filtered out, and tracing and acceleration processing of it are not required, i.e., the following steps 813-817 are not executed. If the currently acquired RTID of the rendering thread is different from the RTIDs in the preset table, the currently acquired rendering thread is traced and accelerated, i.e., the following steps 813-817 continue to be executed.
It can be appreciated that if the RTID of the currently acquired rendering thread is the same as the RTID in the preset table, it indicates that the currently acquired rendering thread is a standard rendering thread, and the following steps 813-817 are not needed to be executed because the standard rendering thread has a corresponding acceleration scheme. In this way, it is possible to avoid performing repeated acceleration processing on the standard rendering thread.
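This filtering step can be sketched as below, under the assumption that the preset table is keyed by thread identifiers (the text also mentions thread names; the key type is an assumption here).

    #include <sys/types.h>
    #include <unordered_set>

    // Hypothetical preset table of standard rendering threads (placeholder
    // contents); standard threads already have a corresponding acceleration
    // scheme and are filtered out.
    static const std::unordered_set<pid_t> kPresetStandardThreads = { /* ... */ };

    // Returns true if the rendering thread still needs tracing and
    // acceleration, i.e., steps 813-817 should be executed for it.
    bool needsTraceAndAcceleration(pid_t rtid) {
        return kPresetStandardThreads.count(rtid) == 0;
    }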
813. The rendering thread identification module sends rendering thread information to the message processing module of the kernel layer.
The rendering thread information may include an RTID of the rendering thread.
814. The message processing module sends rendering thread information to the main thread identification module.
After receiving the rendering thread information, the message processing module of the kernel layer can send the rendering thread information to the main thread identification module.
815. The main thread identification module determines a main thread with a wake-up relation with the rendering thread according to the thread identification of the rendering thread.
In one possible design, the main thread identification module may trace back a wake-up relationship between a thread running in a current drawing frame period and a rendering thread based on important event information recorded in the current drawing frame period, and determine the main thread based on the wake-up relationship.
The current drawing frame period may refer to a time between an execution time of the queue buffer function corresponding to a current frame (e.g., frame 1) and an execution time of the queue buffer function corresponding to a next frame (e.g., frame 2). Alternatively, the current drawing frame period may refer to a time between the Vsync signal corresponding to the current frame (e.g., frame 1) and the Vsync signal corresponding to the next frame (e.g., frame 2).
The electronic device may run multiple threads during each drawing frame period, and during each drawing frame period the electronic device may record important event information. The electronic device can record the important event information through the scheduling event acquisition module of the kernel layer, and the main thread identification module can acquire the important event information from the scheduling event acquisition module. The important event information may include wake-up events between threads. A wake-up event between threads may include a running thread waking a sleeping thread, or a running thread creating a thread and waking the created thread.
Illustratively, the main thread identification module of the electronic device may obtain, through a wake-up parameter/function (e.g., sched_wakeup), the event of a running thread waking a sleeping thread, and may obtain, through a new-thread parameter/function (e.g., sched_process_fork), the event of a running thread creating a thread and waking the created thread.
Optionally, the above important event information may further include information such as a process identifier (process id, PID) of the awakened thread, a scheduling group corresponding to the awakened thread, a timestamp corresponding to the awakened thread (time when the thread is awakened), an identifier of a processor corresponding to the awakened thread, an operation duration of the awakened thread on the processor, and an operation frequency of the awakened thread. Wherein the awakened threads include a rendering thread and a main thread.
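For illustration, one recorded event could be laid out as below; the field names and types are assumptions derived from the information listed above, not a format defined by the patent.

    #include <cstdint>
    #include <sys/types.h>

    // One recorded "important event": a wake-up between threads plus the
    // optional bookkeeping listed above.
    struct WakeEvent {
        pid_t    wakerTid;        // thread that performed the wake-up
        pid_t    wokenTid;        // thread that was woken (or newly created)
        pid_t    wokenPid;        // process identifier (PID) of the woken thread
        int      schedGroup;      // scheduling group of the woken thread
        uint64_t timestampNs;     // time at which the thread was woken
        int      cpuId;           // identifier of the processor it ran on
        uint64_t runtimeNs;       // running duration on that processor
        uint64_t cpuFreqKhz;      // running frequency of that processor
        bool     fromKernelOrIrq; // waker was a kernel thread or system interrupt
    };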
The main thread identification module of the electronic device can determine a plurality of threads with a wake-up relationship by adopting a traceback (backtracking) method with the rendering thread as a starting point. When tracing back to a thread awakened by a kernel thread or system interrupt, the electronic device no longer traces back. The electronic device obtains the execution order of the plurality of threads according to the wake-up relation among the plurality of threads, and determines a target path according to the execution order of the plurality of threads. Wherein the execution order of the plurality of threads may be arranged from back to front (from late to early) in wake-up time.
It should be noted that there may be at least one target path (one or more). The start of each target path may be the rendering thread, and the end of each target path may be a thread awakened by a kernel thread, a system interrupt, or an application thread.
Illustratively, fig. 11 shows a schematic diagram of one target path (a first target path). As shown in fig. 11, the thread that wakes up the rendering thread is thread B, and the rendering thread is woken up by thread B at time point t2 (i.e., thread B wakes up the rendering thread at time point t2); the thread that wakes up thread B is thread A, and thread B is woken up by thread A at time point t1. If thread A is awakened by a kernel thread or a system interrupt, the electronic device does not trace back any further. Time t2 is later than time t1. The first target path may include the rendering thread, thread B, and thread A. The starting point of the first target path is the rendering thread, and the end point is thread A.
In addition, in fig. 11, if thread B wakes up thread D in addition to the rendering thread, thread D is not included in the first target path, because thread D is not reachable by tracing back from the rendering thread as the starting point.
Illustratively, fig. 12 shows a schematic diagram of another target path (a second target path). As shown in fig. 12, the rendering thread may be awakened by thread C at time point t4, thread C may be awakened by the rendering thread at time point t5, and the rendering thread may be awakened by thread A at time point t6. It should be appreciated that the rendering thread may enter a sleep state after waking thread C at time t5, and then be woken up again by thread C at time t4. If thread A is awakened by a kernel thread or a system interrupt, the electronic device does not trace back any further. Time t4 is later than time t5, and time t5 is later than time t6. The second target path may include the rendering thread, thread C, the rendering thread, and thread A. The starting point of the second target path is the rendering thread, and the end point is thread A.
The electronic device may determine the main thread based on the at least one target path. The main thread is the thread that serves as the path end point the greatest number of times in the at least one target path.
For example, if the at least one target path includes a first target path as shown in fig. 11 and a second target path as shown in fig. 12, since the end points of the first target path and the second target path are both threads a, the threads a are main threads.
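Under these definitions, the traceback and main-thread selection can be sketched as follows. This assumes the WakeEvent record sketched earlier; handling of ties and the collection of multiple paths per frame are simplified, so this is an illustration of the idea rather than the patented implementation.

    #include <cstdint>
    #include <limits>
    #include <sys/types.h>
    #include <unordered_map>
    #include <vector>

    // Trace one target path backwards from the rendering thread: repeatedly
    // find the most recent wake-up of the current thread that happened before
    // the point we have traced back to, then step to the waker. The end point
    // is a thread woken by a kernel thread or a system interrupt (or one with
    // no recorded waker).
    pid_t traceTargetPathEndpoint(const std::vector<WakeEvent>& frameEvents,
                                  pid_t renderTid) {
        pid_t current = renderTid;
        uint64_t before = std::numeric_limits<uint64_t>::max();
        for (;;) {
            const WakeEvent* latest = nullptr;
            for (const auto& e : frameEvents) {
                if (e.wokenTid == current && e.timestampNs < before &&
                    (latest == nullptr || e.timestampNs > latest->timestampNs))
                    latest = &e;  // latest wake of `current` before `before`
            }
            if (latest == nullptr || latest->fromKernelOrIrq)
                return current;             // end of this target path
            before = latest->timestampNs;   // keep tracing earlier in time
            current = latest->wakerTid;     // step to the thread that woke us
        }
    }

    // With several target paths, the main thread is the thread that appears
    // most often as a path end point (tie-breaking unspecified).
    pid_t pickMainThread(const std::vector<pid_t>& endpoints) {
        std::unordered_map<pid_t, int> count;
        pid_t mainTid = endpoints.empty() ? -1 : endpoints.front();
        int best = 0;
        for (pid_t t : endpoints)
            if (++count[t] > best) { best = count[t]; mainTid = t; }
        return mainTid;
    }

Applied to the examples above, the first target path ends at thread A and the second target path also ends at thread A, so thread A is picked as the main thread.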
816. The main thread identification module sends the key thread group information to the key thread acceleration module.
The key thread group information is used for indicating a key thread group, and the key thread group can comprise rendering threads and main threads.
For example, the critical thread group information may include a thread identification of the rendering thread and a thread identification of the main thread.
817. The critical thread acceleration module accelerates the critical thread group.
The key thread acceleration module can determine, according to the important event information, information such as the scheduling group corresponding to the rendering thread and the main thread, the identifier of the processor, the running duration on the processor, and the running frequency, and perform acceleration processing on the rendering thread and the main thread according to this information.
The acceleration processing of the rendering thread and the main thread by the key thread acceleration module may include at least one of the following (a combined sketch follows the three items):
1) Raising the priority of the scheduling group corresponding to the rendering thread and the main thread. For example, if, before the acceleration processing, the scheduling group corresponding to the rendering thread and the main thread is determined from the important event information to be a first-priority scheduling group, the rendering thread and the main thread may be adjusted from the first-priority scheduling group (for example, a CFS scheduling group) to a second-priority scheduling group (for example, a VIP scheduling group), where the scheduling priority of threads in the second-priority scheduling group is higher than that of threads in the first-priority scheduling group. The electronic device can preferentially allocate processing resources to threads in the second-priority scheduling group. In this manner, the electronic device may preferentially schedule threads in the second-priority scheduling group to perform the corresponding tasks.
2) Performing frequency-boosting processing on the rendering thread and the main thread.
The key thread acceleration module can raise the frequency of the processor corresponding to the rendering thread and the main thread, thereby improving the execution efficiency of the rendering thread and the main thread. By way of example, assume that the frequencies of the processor may include 1.0 GHz, 1.5 GHz, and 2.0 GHz. If, before the acceleration processing, the frequency of the processor corresponding to the rendering thread and the main thread is determined from the important event information to be 1.0 GHz, the frequency of the processor corresponding to the rendering thread and the main thread can be raised to 1.5 GHz or 2.0 GHz.
Alternatively, a processor with higher computing power may be allocated to the rendering thread and the main thread, thereby improving their execution efficiency. For example, if it is determined from the important event information that, before the acceleration processing, the rendering thread and the main thread correspond to a first CPU with a frequency of 1.0 GHz, the rendering thread and the main thread may be reassigned to a second CPU whose frequency is 1.5 GHz or 2.0 GHz.
3) Raising the thread priority of the rendering thread and the main thread.
That is, the priority of the rendering thread and the main thread within the scheduling group may be raised. For example, if the thread priority of the rendering thread and the main thread is a third priority before the acceleration processing, the rendering thread and the main thread may be raised from the third priority to a fourth priority, where the fourth priority is higher than the third priority, and a thread of the fourth priority in the same scheduling group is executed before a thread of the third priority.
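A combined, heavily hedged sketch of the three actions follows. The VIP cgroup path, the big-core CPU numbers, and the nice value are platform-specific assumptions (VIP scheduling groups are vendor extensions); only sched_setaffinity() and setpriority() are standard Linux calls.

    #include <cstdio>
    #include <sched.h>
    #include <sys/resource.h>
    #include <sys/types.h>

    // 1) Move a thread into a higher-priority scheduling group (e.g. from a
    // CFS group to a VIP group). The cgroup path is an assumed placeholder;
    // vendor kernels expose such groups differently.
    bool moveToHighPriorityGroup(pid_t tid) {
        FILE* f = std::fopen("/dev/cpuctl/vip/tasks", "w");  // assumed path
        if (f == nullptr) return false;
        std::fprintf(f, "%d\n", tid);
        std::fclose(f);
        return true;
    }

    // 2) Reassign a thread to higher-capability CPUs by pinning it to an
    // assumed big-core set (CPUs 4-7 here); raising the CPU frequency itself
    // (e.g. via cpufreq governor hints) is platform-specific and omitted.
    bool pinToBigCores(pid_t tid) {
        cpu_set_t set;
        CPU_ZERO(&set);
        for (int cpu = 4; cpu <= 7; ++cpu) CPU_SET(cpu, &set);
        return sched_setaffinity(tid, sizeof(set), &set) == 0;
    }

    // 3) Raise the thread priority within its scheduling group; on Linux,
    // setpriority() with PRIO_PROCESS and a thread id adjusts that single
    // thread's nice value (-10 is an illustrative target).
    bool raiseThreadPriority(pid_t tid) {
        return setpriority(PRIO_PROCESS, static_cast<id_t>(tid), -10) == 0;
    }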
For example, as shown in fig. 13, the scheduling group of the rendering thread and/or the main thread may be set to the VIP scheduling group instead of the CFS scheduling group. It should be appreciated that the real-time (RT) scheduling group has higher processing performance than the VIP scheduling group, and the VIP scheduling group has higher processing performance than the completely fair scheduler (CFS) scheduling group. Further, the processor corresponding to the rendering thread and/or the main thread may be configured as a large-core processor. It will be appreciated that the processing speed of a large-core processor is higher than that of a medium-core processor, which in turn is higher than that of a small-core processor. Still further, the rendering thread and/or the main thread may be frequency-boosted. Still further, the thread priority of the rendering thread and/or the main thread may be raised.
Alternatively, as shown in fig. 14, the scheduling group of the rendering thread and/or the main thread may be set to the VIP scheduling group instead of the CFS scheduling group. Further, the rendering thread and/or the main thread may be frequency-boosted. Still further, the thread priority of the rendering thread and/or the main thread may be raised.
It should be appreciated that in a scene where the bullet screen is displayed, navigation is displayed, or the user slides continuously, the processes of frame drawing, rendering, composition, etc. are performed continuously, and steps 808-817 may be performed in a loop. In a scenario where the bullet screen display is closed, the navigation display is closed, or sliding of the interface stops, steps 808-817 may not be performed.
Based on the method provided by the embodiment of the present application, the rendering thread and the main thread are identified, so that resources are accurately allocated to them; frame loss and stutter can thereby be avoided, and the user experience can be improved.
Taking a video application as an example, experimental verification shows that after the rendering thread and the main thread corresponding to the bullet screen layer of the video application are identified and accelerated with the scheme provided by the embodiment of the present application, as shown in fig. 15, the time the rendering thread (for example, thread 205) spends in the runnable state can be significantly reduced. As shown in table 1, the frame rate can be significantly increased (by an average of 4 frames per second) at the cost of only a small power consumption increase (the current increase is small, so the power consumption increase is small), frame loss is reduced, and stutter is avoided. As shown in (a) and (b) of fig. 16, compared with fig. 2, the bullet screens 1002 and 1004 of the bullet screen layer can be displayed scrolling along with the video playback without stuttering, so that the user experience can be improved.
TABLE 1
Some embodiments of the application provide an electronic device that may include: a touch screen, a memory, and one or more processors. The touch screen, memory, and processor are coupled. The memory is for storing computer program code, the computer program code comprising computer instructions. When the processor executes the computer instructions, the electronic device may perform the various functions or steps performed by the electronic device in the method embodiments described above. The structure of the electronic device may refer to the structure of the electronic device 100 shown in fig. 5.
Embodiments of the present application also provide a system on a chip (SoC) including at least one processor 1701 and at least one interface circuit 1702 as shown in fig. 17. The processor 1701 and the interface circuit 1702 may be interconnected by wires. For example, the interface circuit 1702 may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, the interface circuit 1702 may be used to send signals to other devices (e.g., the processor 1701 or a touch screen of an electronic device). The interface circuit 1702 may, for example, read instructions stored in a memory and send the instructions to the processor 1701. The instructions, when executed by the processor 1701, may cause the electronic device to perform the various steps described in the embodiments above. Of course, the system-on-chip may also include other discrete devices, which are not particularly limited in accordance with embodiments of the present application.
Embodiments of the present application also provide a computer-readable storage medium including computer instructions that, when executed on an electronic device described above, cause the electronic device to perform the functions or steps performed by the electronic device (e.g., a mobile phone) in the method embodiments described above.
Embodiments of the present application also provide a computer program product which, when run on an electronic device, causes the electronic device to perform the functions or steps performed by the electronic device (e.g., a mobile phone) in the method embodiments described above.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the method described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
The foregoing is merely illustrative of specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

1. A thread acceleration processing method is applied to electronic equipment and is characterized by comprising the following steps:
the electronic equipment receives a first operation of a user on an interface of a first application, wherein the first operation is used for triggering the electronic equipment to display target information;
the electronic equipment acquires a thread identifier of a rendering thread, wherein the rendering thread is used for executing layer rendering to obtain a rendered layer, and the rendered layer corresponds to the target information;
the electronic equipment determines a main thread with a wake-up relation with the rendering thread according to the thread identification of the rendering thread;
the electronic device accelerates the rendering thread and the main thread.
2. The method of claim 1, wherein the acceleration process comprises at least one of:
adjusting the rendering thread and the main thread from a first priority scheduling group to a second priority scheduling group, wherein the scheduling priority of the threads in the second priority scheduling group is higher than that of the threads in the first priority scheduling group;
performing frequency-boosting processing on the rendering thread and the main thread;
and improving the thread priority of the rendering thread and the main thread.
3. The method of claim 1 or 2, wherein before the electronic device obtains the thread identification of the rendering thread, the method further comprises:
the electronic device inserts an instrumentation function at an insertion point of the enqueue function corresponding to the rendering thread;
the electronic device obtaining the thread identifier of the rendering thread comprises:
when the electronic device executes to the insertion point of the enqueue function, instrumentation information is obtained, wherein the instrumentation information comprises the thread identifier of the rendering thread.
4. A method according to any of claims 1-3, wherein the electronic device determining, from the thread identification of the rendering thread, a main thread having a wake-up relationship with the rendering thread comprises:
the electronic device traces back, based on important event information recorded in the current drawing frame period and with the rendering thread as a starting point, at least one thread having a wake-up relation in the current drawing frame period, wherein the important event information comprises wake-up events between threads, and the wake-up relation is used for representing which thread wakes up which thread;
the electronic device determines at least one target path according to the wake-up relation among the at least one thread;
and the electronic equipment determines the main thread according to the at least one target path, wherein the main thread is the thread with the largest number of times of serving as the path end point in the at least one target path.
5. The method of claim 4, wherein:
the important event information further comprises at least one of: an identifier of a processor corresponding to an awakened thread, a scheduling group corresponding to the awakened thread, a running duration of the awakened thread on the processor, and a running frequency of the awakened thread on the processor, wherein the awakened threads comprise the rendering thread and the main thread;
the electronic device accelerating the rendering thread and the main thread includes:
the electronic device performs acceleration processing on the rendering thread and the main thread according to at least one of: the identifier of the processor corresponding to the awakened thread, the scheduling group corresponding to the awakened thread, the running duration of the awakened thread on the processor, and the running frequency of the awakened thread on the processor.
6. The method according to claim 4 or 5, characterized in that the method further comprises:
when the electronic device traces back to a thread awakened by a kernel thread, a system interrupt, or an application thread, the traceback is not continued.
7. The method according to any one of claims 1-6, further comprising:
the electronic equipment compares the thread identification of the rendering thread with a standard thread identification;
wherein the thread identification of the rendering thread is different from the standard thread identification.
8. The method according to any one of claims 1 to 7, wherein,
the thread identification of the rendering thread is a thread identification customized by the first application.
9. The method according to any one of claims 1 to 8, wherein,
the rendering thread includes: at least one of barrage rendering thread, navigation rendering thread, game rendering thread and applet rendering thread.
10. The method of claim 9, wherein:
in the case that the rendering thread is a barrage rendering thread, the rendered layer is a barrage layer;
in the case that the rendering thread is a navigation rendering thread, the rendered layer is a navigation layer;
in the case that the rendering thread is a game rendering thread, the rendered layer is a game layer;
in the case that the rendering thread is an applet rendering thread, the rendered layer is an applet layer.
11. The method of any of claims 1-10, wherein the rendering thread performing rendering processing on the target information comprises:
the rendering thread executing layer drawing and layer rendering to obtain the rendered layer.
12. The method of any of claims 1-11, wherein the electronic device comprises a rendering thread identification module, a message processing module, a main thread identification module, and a critical thread acceleration module; the electronic device obtaining the thread identifier of the rendering thread comprises:
the rendering thread identification module inserts an instrumentation function in an insertion point of an enqueue function corresponding to the rendering thread in advance;
when the execution is performed to the insertion point of the enqueuing function, the rendering thread identification module acquires instrumentation information, wherein the instrumentation information comprises a thread identifier of the rendering thread;
the electronic device determining, according to the thread identifier of the rendering thread, a main thread having a wake-up relationship with the rendering thread includes:
The rendering thread identification module sends a thread identification of a rendering thread to the message processing module;
the message processing module sends the thread identification of the rendering thread to the main thread identification module;
the main thread identification module determines a main thread with a wake-up relation with the rendering thread according to the thread identification of the rendering thread;
the electronic device performing acceleration processing on the rendering thread and the main thread according to the thread identifier of the rendering thread and the thread identifier of the main thread comprises:
the main thread identification module sends key thread group information to the key thread acceleration module, wherein the key thread group information comprises a thread identifier of the rendering thread and a thread identifier of the main thread;
the critical thread acceleration module accelerates the critical thread group, wherein the critical thread group comprises the rendering thread and the main thread.
13. The method of claim 12, wherein the electronic device further comprises a layer processing module and an image composition system, the method further comprising, prior to the electronic device obtaining the thread identification of the rendering thread:
in response to an operation of a user, the main thread sends a layer creation request to the layer processing module;
the layer processing module sends a request for acquiring a buffer area queue to the image synthesis system;
the image synthesis system returns the buffer queue corresponding to the layer to the layer processing module;
and the layer processing module returns attribute information of the layer to the main thread.
14. The method of claim 13, wherein the method further comprises:
the main thread requesting a vertical synchronization signal from the image composition system;
the image synthesis system returns a vertical synchronization signal to the main thread;
the main thread wakes up the rendering thread;
the rendering thread draws a rendered image.
15. The method of claim 14, wherein the method further comprises:
the rendering thread sends a request cache command to the image composition system;
the image synthesis system sends an instruction for indicating cache dequeuing to the rendering thread;
and the rendering thread stores the drawn and rendered image into the buffer queue.
16. An electronic device comprising a processor for executing a computer program stored in a memory to cause the electronic device to implement the method of any one of claims 1-15.
17. A system on a chip, comprising a processor coupled to a memory, the processor executing a computer program stored in the memory to implement the method of any of claims 1-15.
18. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when run on a processor, implements the method according to any of claims 1-15.
CN202310493154.8A 2023-04-28 2023-04-28 Thread acceleration processing method and device Active CN117130774B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310493154.8A CN117130774B (en) 2023-04-28 2023-04-28 Thread acceleration processing method and device

Publications (2)

Publication Number Publication Date
CN117130774A true CN117130774A (en) 2023-11-28
CN117130774B CN117130774B (en) 2024-07-12

Family

ID=88855349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310493154.8A Active CN117130774B (en) 2023-04-28 2023-04-28 Thread acceleration processing method and device

Country Status (1)

Country Link
CN (1) CN117130774B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118092728A (en) * 2024-04-08 2024-05-28 荣耀终端有限公司 Rendering method and electronic equipment

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170161865A1 (en) * 2013-10-28 2017-06-08 Vmware, Inc. Method and System to Virtualize Graphic Processing Services
CN106843859A (en) * 2016-12-31 2017-06-13 歌尔科技有限公司 The method for drafting and device and a kind of virtual reality device of a kind of virtual reality scenario
US20180322605A1 (en) * 2017-05-04 2018-11-08 Facebook, Inc. Asynchronous ui framework
CN110347947A (en) * 2019-06-17 2019-10-18 阿里巴巴集团控股有限公司 A kind of page rendering method and device
CN111240926A (en) * 2019-12-31 2020-06-05 苏州极光无限信息技术有限公司 IOS stuck monitoring method and system
CN111739136A (en) * 2019-06-14 2020-10-02 腾讯科技(深圳)有限公司 Rendering method, computer device, and storage medium
CN111813520A (en) * 2020-07-01 2020-10-23 Oppo广东移动通信有限公司 Thread scheduling method and device, storage medium and electronic equipment
CN113051047A (en) * 2021-03-03 2021-06-29 惠州Tcl移动通信有限公司 Method and device for identifying drawing thread of android system, mobile terminal and storage medium
US20220100512A1 (en) * 2021-12-10 2022-03-31 Intel Corporation Deterministic replay of a multi-threaded trace on a multi-threaded processor
US20220180528A1 (en) * 2020-02-10 2022-06-09 Nvidia Corporation Disentanglement of image attributes using a neural network
CN115016706A (en) * 2021-12-31 2022-09-06 荣耀终端有限公司 Thread scheduling method and electronic equipment
CN115017002A (en) * 2021-12-22 2022-09-06 荣耀终端有限公司 Frequency prediction method and frequency prediction device
US20230058935A1 (en) * 2021-08-18 2023-02-23 Micron Technology, Inc. Managing return parameter allocation
CN115802092A (en) * 2022-11-18 2023-03-14 中船重工鹏力(南京)大气海洋信息系统有限公司 Multi-window display method based on interaction priority

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JOHN E. STONE et al.: "Immersive Molecular Visualization with Omnidirectional Stereoscopic Ray Tracing and Remote Rendering", 2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 4 August 2016 (2016-08-04), pages 1048-1057 *
FU Zhongliang et al.: "A fast three-dimensional skyline extraction and display algorithm" (in Chinese), Journal of Geomatics, vol. 47, no. 3, 5 January 2022 (2022-01-05), pages 96-99 *
WANG Li et al.: "Load balancing optimization method for model training in deep learning compilers" (in Chinese), Journal of Frontiers of Computer Science and Technology, 16 February 2023 (2023-02-16), pages 1-18 *

Also Published As

Publication number Publication date
CN117130774B (en) 2024-07-12

Similar Documents

Publication Publication Date Title
CN111813536B (en) Task processing method, device, terminal and computer readable storage medium
CN115631258B (en) Image processing method and electronic equipment
CN114579075B (en) Data processing method and related device
CN113254120B (en) Data processing method and related device
CN114338952B (en) Image processing method based on vertical synchronous signal and electronic equipment
CN115048012B (en) Data processing method and related device
CN114531519B (en) Control method based on vertical synchronous signal and electronic equipment
CN116991354A (en) Data processing method and related device
CN117130774B (en) Thread acceleration processing method and device
CN113986002A (en) Frame processing method, device and storage medium
CN116257235B (en) Drawing method and electronic equipment
CN115904184B (en) Data processing method and related device
CN116414337A (en) Frame rate switching method and device
CN115686403A (en) Display parameter adjusting method, electronic device, chip and readable storage medium
CN116700578B (en) Layer synthesis method, electronic device and storage medium
WO2023124227A1 (en) Frame rate switching method and device
WO2023124225A1 (en) Frame rate switching method and apparatus
CN117909071B (en) Image display method, electronic device, storage medium, and chip system
WO2024093431A1 (en) Image drawing method and electronic device
WO2024032430A1 (en) Memory management method and electronic device
CN116414336A (en) Frame rate switching method and device
CN118447141A (en) Image display method and related device
CN118426888A (en) Display control method and electronic equipment
CN117501233A (en) Screen projection image processing method and device
CN117407127A (en) Thread scheduling method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant