CN113204425B - Method, device, electronic device, and storage medium for a process to manage internal threads - Google Patents

Method, device, electronic device, and storage medium for a process to manage internal threads

Info

Publication number
CN113204425B
CN113204425B (application CN202110431231.8A)
Authority
CN
China
Prior art keywords
terminal
processor core
internal thread
processor
priority
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110431231.8A
Other languages
Chinese (zh)
Other versions
CN113204425A
Inventor
徐士立
陈晶晶
刘专
张其田
洪楷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tencent Network Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tencent Network Information Technology Co Ltd filed Critical Shenzhen Tencent Network Information Technology Co Ltd
Priority to CN202310421243.1A (published as CN116450353A)
Priority to CN202110431231.8A (published as CN113204425B)
Publication of CN113204425A
Application granted
Publication of CN113204425B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3058Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations
    • G06F11/3062Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations where the monitored property is the power consumption
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5021Priority
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/508Monitor
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a method, a device, an electronic device, and a storage medium for a process to manage its internal threads. The method comprises the following steps: monitoring the current running scene of the process; acquiring the time each internal thread spends rendering the process's image frames in that running scene; determining, based on that rendering time, a matching priority between each internal thread and the processor cores of a terminal, wherein the processor cores comprise at least one high-frequency large core and at least one low-frequency small core; and interacting with the terminal based on the matching priority, so that the terminal schedules the large and small cores that process each internal thread with the matching priority as a reference. The method and device can improve the rationality of thread management.

Description

Method and device for a process to manage internal threads, electronic device, and storage medium
Technical Field
The application relates to the field of intelligent terminals, and in particular to a method and a device for a process to manage internal threads, an electronic device, and a storage medium.
Background
Intelligent terminals today offer rich human-machine interaction, internet access, a variety of operating systems, and strong processing capability, and their chips generally carry multiple processor cores. Current mainstream chips carry at least one high-frequency large core and at least one low-frequency small core. A large core outperforms a small core, but its performance also brings higher power consumption. In the prior art, process management is not sufficiently rational, so the processor cores that handle a process's internal threads cannot, on the whole, balance performance against power consumption.
Disclosure of Invention
An object of the present application is to provide a method, an apparatus, an electronic device, and a storage medium for process management of internal threads, which can improve the rationality of thread management.
According to an aspect of an embodiment of the present application, a method for a process to manage internal threads is disclosed, the method comprising:
monitoring the current running scene of the process;
acquiring the time consumed by each internal thread for rendering the image frame of the process in the running scene;
determining the matching priority of each internal thread and a processor core of a terminal based on the time spent by each internal thread on the image frame rendering of the process in the running scene, wherein the processor core comprises at least one large processor core with high frequency and at least one small processor core with low frequency;
and interacting with the terminal based on the matching priority, so that the terminal schedules the large and small processor cores that process each internal thread with the matching priority as a reference.
According to an aspect of the embodiments of the present application, an apparatus for a process to manage internal threads is disclosed, the apparatus including:
the monitoring module is configured to monitor the current running scene of the process;
the acquisition module is configured to acquire the time consumed by each internal thread for rendering the image frame of the process in the running scene;
the determining module is configured to determine matching priorities of the internal threads and processor cores of a terminal based on rendering time of the internal threads for image frames of a process in the running scene, wherein the processor cores comprise at least one large processor core with high frequency and at least one small processor core with low frequency;
and the interaction module is configured to interact with the terminal based on the matching priority, so that the terminal schedules the large processor core and the small processor core for processing each internal thread by taking the matching priority as a reference.
In an exemplary embodiment of the present application, the apparatus is configured to:
deploying the running environment of the terminal in the virtual machine, and running a process in the running environment of the terminal;
and monitoring the running environment of the terminal in the virtual machine to acquire the time consumed by each internal thread in the running scene for rendering the image frame of the process.
In an exemplary embodiment of the present application, the apparatus is configured to:
dividing the internal threads into a first priority, a second priority, and a third priority in descending order of the time each thread spends rendering the process's image frames in the running scene, wherein a first-priority thread is preferentially matched with a large processor core, a third-priority thread is preferentially matched with a small processor core, and a second-priority thread is preferentially matched with a large processor core only after all first-priority threads have been matched with large cores.
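As a minimal sketch of the division above: sort the internal threads by their per-frame rendering time, give the heaviest ones first priority, near-idle ones third priority, and the remainder second priority. The cut points used here (the big-core count for tier 1, a 1 ms threshold for tier 3) are illustrative assumptions — the patent does not fix how the boundaries are chosen.

```python
def assign_priorities(render_times, big_core_count, little_threshold_ms=1.0):
    """Split internal threads into three priority tiers by rendering time.

    render_times: dict mapping thread name -> per-frame render time (ms)
    big_core_count: number of large cores reported by the terminal.
    The tier boundaries are illustrative, not specified by the patent.
    """
    # Descending order of image-frame rendering time, as the text requires.
    ordered = sorted(render_times, key=render_times.get, reverse=True)
    tier1 = ordered[:big_core_count]                 # heaviest renderers -> big cores
    rest = ordered[big_core_count:]
    tier3 = [t for t in rest if render_times[t] < little_threshold_ms]   # near-idle -> little cores
    tier2 = [t for t in rest if render_times[t] >= little_threshold_ms]  # big cores if any are left
    return {1: tier1, 2: tier2, 3: tier3}
```

In use, tier-1 threads would be named in the large-core scheduling request, tier-3 threads in the small-core request, and tier-2 threads offered to the terminal as large-core candidates once tier 1 is placed.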
In an exemplary embodiment of the present application, the apparatus is configured to:
establishing real-time communication with the terminal, and sending a scheduling request to the terminal based on the matching priority, so that the terminal schedules the large and small processor cores that process each internal thread with the matching priority as a reference.
In an exemplary embodiment of the present application, the apparatus is configured to:
marking the thread name of each internal thread based on the category of its matched processor core, so that the terminal schedules the large and small processor cores that process each internal thread with the matching priority as a reference.
In an exemplary embodiment of the present application, the apparatus is configured to:
acquiring scheduling state information returned by the terminal, wherein the scheduling state information is used for describing internal threads processed by the large processor core and the small processor core respectively;
adjusting the matching priority based on the scheduling status information.
In an exemplary embodiment of the present application, the apparatus is configured to:
uploading the scheduling state information to a process management terminal so that the process management terminal updates the matching priority in response to the scheduling state information;
and synchronizing the matching priority with the updated matching priority fed back by the process management end.
According to an aspect of an embodiment of the present application, an electronic device is disclosed, including: a memory storing computer readable instructions; a processor reading computer readable instructions stored by the memory to perform the method of any of the preceding claims.
According to an aspect of embodiments of the present application, a computer program medium is disclosed, having computer readable instructions stored thereon, which, when executed by a processor of a computer, cause the computer to perform the method of any of the preceding claims.
According to an aspect of embodiments herein, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the method provided in the above-mentioned various alternative implementation modes.
In the embodiments of the application, a process's management of its internal threads is tied to its current running scene. Specifically, through this method the process associates the processor core that the terminal schedules for each internal thread with the time that thread spends rendering the process's image frames in that scene. Managing the internal threads this way lets the rendering time meet real-time scene requirements, improving the rationality of thread management.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
FIG. 1 illustrates a system architecture diagram for a process to manage internal threads according to one embodiment of the present application.
Fig. 2 illustrates an interaction flow between a process client and a terminal according to an embodiment of the present application.
FIG. 3 illustrates a flow diagram of a method for a process to manage internal threads provided in accordance with an embodiment of the present application.
Fig. 4 shows a scheduling request generation flow diagram according to an embodiment of the application.
FIG. 5 is a block diagram of an apparatus for a process to manage internal threads according to an embodiment of the application.
Fig. 6 shows a hardware diagram of an electronic device according to an embodiment of the application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present application and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments of the present application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, steps, and so forth. In other instances, well-known structures, methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The application provides a method for a process to manage its internal threads; the executing entity of the method is the process itself. In the embodiments of the application, the process uses this method to interact with the terminal on which it runs, and on the basis of that interaction the terminal schedules the corresponding processor cores to handle the process's internal threads, thereby realizing management of the internal threads.
FIG. 1 illustrates a system architecture diagram for a process to manage internal threads according to one embodiment of the present application.
Referring to fig. 1, in this embodiment, the process client, the terminal 10 and the process management end 20 are mainly involved in the process of managing its internal threads.
The process client is an application installed in the terminal 10 to provide a corresponding service to the user, and the process is located in the process client. The process management terminal 20 is generally a server for performing control management on processes in the process client. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud functions, cloud storage, network service, cloud communication, middleware service, domain name service, security service, CDN, and a big data and artificial intelligence platform. The terminal 10 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal 10 and the server may be directly or indirectly connected through wired or wireless communication, and the present application is not limited thereto.
The process in the process client mainly comprises the following functional modules: the device comprises a thread monitoring module, a data reporting module, a thread matching module and an interaction control module.
The thread monitoring module is mainly used for monitoring the creation and destruction of threads in the process and maintaining a current running thread list.
The thread matching module is mainly used for carrying out processor core matching logic judgment on the internal threads in the current running thread list according to the matching priority and giving out a matching result between the internal threads and the processor cores.
The interaction control module in the process client is mainly used to establish communication with the terminal. While the process runs, it sends large-core and small-core matching requests to the terminal 10 according to the matching results given by the thread matching module, and obtains the scheduling state information fed back by the terminal 10 describing the actual scheduling of the processor cores.
The data reporting module is mainly configured to report the scheduling state information given by the interactive control module to the process management terminal 20, and receive the adjusted matching priority sent by the process management terminal 20.
The terminal 10 mainly includes the following functional modules: the device comprises a scheduling module and an interactive control module.
The interactive control module of the terminal 10 is mainly configured to receive a large core matching request and a small core matching request sent by a process client, and forward the received requests to the scheduling module.
The scheduling module is mainly used for scheduling the processor core according to the received request.
In one embodiment, the meanings of the fields of the matching result given by the thread matching module are shown in Table 1 below.
TABLE 1 Field meanings of the matching result
[Table 1 appears as an image in the original publication and is not reproduced here.]
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the application.
Fig. 2 shows an interaction flow between a process client and a terminal according to an embodiment of the present application.
Referring to fig. 2, in this embodiment, the process client first sends a query to the terminal asking whether the terminal supports a big/little-core scheduling policy for its processor cores.
The terminal returns the query result to the process client. If big/little-core scheduling is not supported, the flow ends. If it is supported, the terminal returns the numbers of its large and small processor cores to the process client.
Given those core counts, the process client determines the matching priority between each internal thread and the terminal's processor cores based on the time each thread spends rendering the process's image frames in the current running scene.
For internal threads preferentially matched with a large processor core, the process client sends a large-core scheduling request to the terminal, asking it to preferentially schedule large cores for those threads. After receiving the request, the terminal returns scheduling state information for the large cores, so the process client can determine which internal threads the terminal actually scheduled onto large cores.
Likewise, for internal threads preferentially matched with a small processor core, the process client sends a small-core scheduling request, asking the terminal to preferentially schedule small cores for those threads. After receiving the request, the terminal returns scheduling state information for the small cores, so the process client can determine which internal threads were actually scheduled onto small cores.
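The opening handshake of this flow can be sketched as follows. The `TerminalStub` class and its method names are hypothetical stand-ins for whatever interface the terminal actually exposes; the patent only describes the query, the support result, and the returned core counts.

```python
class TerminalStub:
    """Stand-in for the terminal's scheduling interface (hypothetical API)."""

    def supports_big_little(self):
        # Real terminals would answer based on their chip and OS policy.
        return True

    def core_counts(self):
        # (number of large cores, number of small cores)
        return (4, 4)


def negotiate(terminal):
    """Mirror the Fig. 2 handshake: query support, then fetch core counts."""
    if not terminal.supports_big_little():
        return None  # flow ends: no big/little-core scheduling policy
    big, little = terminal.core_counts()
    return {"big": big, "little": little}
```

The returned core counts then bound how many threads can usefully be requested onto large cores at once.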
Fig. 3 is a flowchart illustrating a method for a process to manage internal threads according to an embodiment of the present application, where the method includes:
s310, monitoring the current running scene of the process;
s320, acquiring the time consumed by each internal thread for rendering the image frame of the process in the running scene;
step S330, determining the matching priority of each internal thread and a processor core of a terminal based on the time consumed by each internal thread for rendering the image frame of the process in the operating scene, wherein the processor core comprises at least one high-frequency large processor core and at least one low-frequency small processor core;
and step S340, interacting with the terminal based on the matching priority, and scheduling the large processor core and the small processor core for processing each internal thread by the terminal by taking the matching priority as reference.
In the embodiments of the application, a high-frequency large processor core is a core with a relatively high operating frequency and strong processing capability, while a low-frequency small processor core is a core with a relatively low operating frequency and weaker processing capability.
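One way such a distinction can be made concrete, on Linux-based terminals, is by comparing per-core maximum frequencies (readable from `/sys/devices/system/cpu/cpu<N>/cpufreq/cpuinfo_max_freq`). The sketch below treats cores at the highest frequency present as large cores and everything else as small — a simplification, since some SoCs also have a mid tier.

```python
def classify_cores(max_freqs_khz):
    """Classify cores as 'big' or 'little' by maximum clock frequency.

    max_freqs_khz: list of per-core maximum frequencies in kHz, e.g. read
    on Linux from /sys/devices/system/cpu/cpu<N>/cpufreq/cpuinfo_max_freq.
    Cores running at the highest frequency present are treated as big
    cores; all others as little cores.
    """
    top = max(max_freqs_khz)
    return ["big" if f == top else "little" for f in max_freqs_khz]
```

On a typical 4+4 big.LITTLE layout this yields four "little" entries followed by four "big" entries.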
In the embodiments of the application, the running scene of the process is monitored in real time while the process runs. Running scenes are divided mainly by the running state of the process. For example, during a game process: the state of loading the game interface is divided into a first scene, the state of sharing a game interface with fewer than N other game characters into a second scene, and the state of sharing a game interface with N or more other game characters into a third scene, where N is a preset positive number.
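The example division above can be expressed directly in code. The choice of N (10 here) is the preset positive number left open by the text.

```python
def classify_scene(loading, visible_characters, n=10):
    """Map the process's running state to one of three scenes.

    Mirrors the example in the text: loading the game interface -> scene 1;
    sharing the interface with fewer than N other characters -> scene 2;
    N or more other characters -> scene 3. N is a preset positive number.
    """
    if loading:
        return 1
    return 2 if visible_characters < n else 3
```

A scene change detected by this classification is what triggers re-measuring rendering times and re-deriving the matching priorities.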
In the embodiments of the application, while the process runs, the time each internal thread spends rendering the process's image frames in the current running scene is obtained, and the matching priority between each internal thread and the terminal's processor cores is determined on that basis. The process then interacts with the terminal based on the matching priority; through the interaction the terminal learns the matching priority and schedules, with it as a reference, the large and small processor cores that process each internal thread.
Therefore, in the embodiments of the application, the process's management of its internal threads is tied to its current running scene. Specifically, the process associates the processor core that the terminal schedules for each internal thread with that thread's image-frame rendering time in the current scene. Managing internal threads this way lets rendering time meet real-time scene requirements and improves the rationality of thread management.
In one embodiment, the method provided by the present application is executed as a game process. In the running process of the game process, the current game scene is monitored, the time consumed by each internal thread for rendering the image frame of the game process under the current game scene is further acquired, and the matching priority of each internal thread and the processor core of the terminal is further determined on the basis. And interacting with the terminal based on the matching priority so that the terminal schedules the large processor core and the small processor core for processing each internal thread by taking the matching priority as a reference.
In one embodiment, the method provided by the present application is performed by a drawing process. In the operation process of the drawing process, the current drawing scene is monitored, the time consumed by each internal thread for rendering the image frame of the drawing process under the current drawing scene is further acquired, and the matching priority of each internal thread and the processor core of the terminal is further determined on the basis. And interacting with the terminal based on the matching priority so that the terminal schedules the large processor core and the small processor core for processing each internal thread by taking the matching priority as a reference.
In one embodiment, the terminal's running environment is deployed in a virtual machine, and the process runs inside that environment. In this way, the behavior of the process when running on the terminal is simulated; the simulated behavior includes the time each internal thread spends rendering the process's image frames when running on the terminal.
And then monitoring the running environment of the terminal in the virtual machine to acquire the time consumed by each internal thread for rendering the image frame of the process in the running scene.
The advantage of this embodiment is that, by simulating the process's runtime behavior on the terminal inside a virtual machine, the measured image-frame rendering time of each internal thread is tied to the terminal itself, which improves the accuracy of the rendering-time measurement.
In an embodiment, the internal threads are divided into a first priority, a second priority, and a third priority in descending order of the time each thread spends rendering the process's image frames in the running scene, wherein a first-priority thread is preferentially matched with a large processor core, a third-priority thread is preferentially matched with a small processor core, and a second-priority thread is preferentially matched with a large processor core only after all first-priority threads have been matched with large cores.
A first-priority thread being preferentially matched with a large processor core means that, where the terminal's large-core resources permit, the terminal is asked to schedule large cores to process first-priority threads as far as possible. A third-priority thread being preferentially matched with a small core means that, where small-core resources permit, the terminal is asked to schedule small cores to process third-priority threads as far as possible. A second-priority thread being matched with a large core only after all first-priority threads means that, once the terminal has placed every first-priority thread on a large core, any idle large cores are scheduled, as far as possible, to process second-priority threads.
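The terminal-side placement semantics just described can be sketched as a simple greedy assignment — tier 1 claims large cores first, tier 2 takes any large cores left over, and tier 3 (plus any overflow) lands on small cores. This is an illustrative reading of the priority rules, not the terminal's actual scheduler.

```python
def schedule(tiers, big_cores, little_cores):
    """Greedy placement honoring the matching priority.

    tiers: {1: [...], 2: [...], 3: [...]} thread names per priority tier.
    Returns a dict mapping thread name -> 'big' or 'little'.
    """
    placement, free_big = {}, big_cores
    # Tier 1 claims big cores first; tier 2 fills whatever big cores remain.
    for t in tiers.get(1, []) + tiers.get(2, []):
        if free_big > 0:
            placement[t] = "big"
            free_big -= 1
        else:
            placement[t] = "little"  # overflow falls back to little cores
    # Tier 3 is always directed to little cores.
    for t in tiers.get(3, []):
        placement[t] = "little"
    return placement
```

With only one large core available, for instance, the heaviest thread keeps it and everything else falls back to small cores.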
In one embodiment, the main thread and the rendering thread are fixedly assigned the first priority; that is, regardless of the process's running scene, they are preferentially matched with large processor cores. The data acquisition thread and the data reporting thread are fixedly assigned the third priority; that is, regardless of the running scene, they are preferentially matched with small processor cores.
The advantage of this embodiment is that the terminal always preferentially schedules large processor cores for threads closely tied to user experience, such as the main thread and the rendering thread, avoiding harm to user experience while the process runs; and always preferentially schedules small cores for threads weakly tied to user experience, such as the data acquisition and data reporting threads, so that those threads do not occupy large-core resources, further protecting user experience.
In one embodiment, the process interacts with the terminal by establishing real-time communication with the terminal.
Specifically, in this embodiment, the process establishes real-time communication with the terminal and sends a scheduling request based on the determined matching priority. After receiving the request, the terminal schedules the large and small processor cores that process each internal thread with the matching priority as a reference.
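What such a scheduling request might carry can be sketched as below. The wire format (JSON with a `pid` and per-tier thread lists) is an assumption for illustration; the patent only says that a request based on the matching priority is sent over the real-time channel.

```python
import json


def make_scheduling_request(pid, tiers):
    """Serialize a scheduling request for the real-time channel.

    pid: the process identifier of the requesting process.
    tiers: {1: [...], 2: [...], 3: [...]} thread names per priority tier.
    The JSON layout here is hypothetical, not taken from the patent.
    """
    return json.dumps(
        {
            "pid": pid,
            "big_core_threads": tiers.get(1, []),     # first priority
            "big_core_fallback": tiers.get(2, []),    # second priority
            "little_core_threads": tiers.get(3, []),  # third priority
        },
        sort_keys=True,
    )
```

The terminal's reply would be the scheduling state information describing which threads actually landed on large and small cores.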
The embodiment has the advantages that the thread management and the operation scene synchronization are ensured by establishing the real-time communication mode for interaction.
Fig. 4 shows a scheduling request generation flow chart according to an embodiment of the present application.
Referring to Fig. 4, in this embodiment the process monitors thread creation to identify the threads newly created between the last cycle and the current time. For new threads whose priority is dynamically associated with the running scene, the process determines the current running scene, then the image-frame rendering time of each new thread in that scene, then the matching priority of each new thread, and thus whether each new thread preferentially matches a large or a small processor core at the current time.
The process also traverses the old threads whose matching priorities were determined earlier, determines the current running scene, and thereby re-determines whether each old thread preferentially matches a large or a small processor core at the current time.
Based on the monitoring of new threads and the traversal of old threads, the process updates its scheduling list, generates a new scheduling list, and then sends the corresponding scheduling request to the terminal.
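The per-cycle update of the scheduling list can be sketched as follows. This is an illustrative outline of the flow in Fig. 4, not the patented implementation; `classify` stands in for the priority determination described earlier, and all names here are assumptions.

```python
# Hypothetical sketch of the per-cycle scheduling-list rebuild: new threads
# are classified from scratch, old threads are re-traversed because the
# running scene may have changed their preferred core type.
def update_schedule(known_threads, current_threads, classify, scene):
    """known_threads:   set of threads seen in the last cycle
    current_threads: set of threads alive now
    classify:        fn(thread, scene) -> 'big' | 'little'
    Returns the new scheduling list as a dict thread -> core type."""
    new_schedule = {}
    # Threads created since the last cycle.
    for t in current_threads - known_threads:
        new_schedule[t] = classify(t, scene)
    # Old threads whose priority may shift with the current scene.
    for t in current_threads & known_threads:
        new_schedule[t] = classify(t, scene)
    return new_schedule
```

The resulting dictionary plays the role of the new scheduling list from which the scheduling request to the terminal would be built.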
In one embodiment, the process interacts with the terminal by marking thread names.
Specifically, in this embodiment, after determining the matching priorities, the process marks the thread name of each internal thread according to the type of processor core it matches. When the terminal needs to schedule processor cores to process the internal threads, it determines each thread's preferred core type by reading the thread name, and then preferentially schedules the corresponding processor cores to process their matched threads.
The advantage of this embodiment is that interacting via marked thread names reduces communication overhead.
In one embodiment, for internal threads that preferentially match a large processor core, the process tags the thread name with the prefix field "Big"; for internal threads that preferentially match a small processor core, the process tags the thread name with the prefix field "Small". When the terminal needs to schedule processor cores to process the internal threads, it reads each thread name to obtain the prefix field, determines from the prefix whether the thread preferentially matches a large or a small processor core, and then preferentially schedules the corresponding processor cores to process their matched threads.
In one embodiment, the process interacts with the terminal based on the matching priority so that the terminal schedules the large and small processor cores that process each internal thread, taking the matching priority as the reference. However, because of terminal resource constraints (for example, the number of large processor cores is limited, or other processes on the terminal occupy them), or because of the terminal's scheduling policy (for example, a policy set by the terminal takes precedence over the matching priority), the scheduling the terminal actually performs may deviate from the scheduling indicated by the matching priority.
The terminal generates scheduling state information describing which internal threads each processor core actually processed, and returns it to the process.
After obtaining the scheduling state information, the process adjusts the matching priority based on it, and then interacts with the terminal based on the adjusted matching priority, so that the terminal schedules the large and small processor cores that process each internal thread, taking the adjusted matching priority as the reference.
In one embodiment, the unadjusted matching priority in the current running scene states: "thread 1, thread 2 and thread 3 each preferentially match a large processor core, while thread 4 and thread 5 each preferentially match a small processor core".
After interacting with the terminal on the basis of the unadjusted matching priority, the process determines, from the scheduling state information returned by the terminal, that the terminal actually scheduled large processor cores to process thread 2 and thread 3, and small processor cores to process thread 1, thread 4 and thread 5.
Because thread 1 and thread 2 affect user experience more than thread 3 does, the process adjusts the matching priority in the current running scene so that the adjusted matching priority states: "thread 1 and thread 2 each preferentially match a large processor core, while thread 3, thread 4 and thread 5 each preferentially match a small processor core". The process then interacts with the terminal based on the adjusted matching priority, so that the terminal preferentially allocates large-core resources to thread 1 and thread 2 and small-core resources to thread 3, thread 4 and thread 5.
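The adjustment in this example can be sketched as trimming the big-core request list to the number of big cores the terminal actually honoured, keeping the threads that matter most to user experience. This is an illustrative reading of the example, not the patent's algorithm; the names and the impact ranking are assumptions.

```python
# Hedged sketch: demote the least experience-critical big-core requests
# when the terminal's scheduling state shows fewer big cores were granted.
def adjust_matching(requested_big, granted_big_count, impact_rank):
    """requested_big:     threads the process asked to run on big cores
    granted_big_count: how many big cores the terminal actually scheduled
    impact_rank:       threads ordered by descending user-experience impact
    Returns (threads kept on big cores, threads demoted to small cores)."""
    keep = [t for t in impact_rank if t in requested_big][:granted_big_count]
    demoted = [t for t in requested_big if t not in keep]
    return keep, demoted
```

Applied to the example above, two big cores were granted, so thread 1 and thread 2 keep their big-core preference and thread 3 is demoted to the small-core pool.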
It should be noted that this embodiment is merely an exemplary illustration and should not limit the functions or scope of the application.
In one embodiment, the matching priority is adjusted and issued by a process management end that controls and manages the process.
After obtaining the scheduling state information returned by the terminal, the process uploads it to the process management end, so that the process management end updates the matching priority in response. The process management end then feeds the updated matching priority back to the process, and the process synchronizes its matching priority to the updated one.
The advantage of this embodiment is that synchronizing the matching priority with the process management end improves the flexibility of thread management.
FIG. 5 shows an apparatus for a process to manage internal threads according to an embodiment of the present application, the apparatus comprising:
a monitoring module 410 configured to monitor an operation scenario in which the process is currently located;
an obtaining module 420 configured to obtain time consumed by each internal thread for rendering an image frame of a process in the running scene;
the determining module 430 is configured to determine matching priorities of each internal thread and a processor core of a terminal based on the time spent by each internal thread on image frame rendering of a process in the running scene, wherein the processor cores comprise at least one large processor core with high frequency and at least one small processor core with low frequency;
an interaction module 440 configured to interact with the terminal based on the matching priority, so that the terminal schedules the processor big core and the processor small core for processing each internal thread with reference to the matching priority.
In an exemplary embodiment of the present application, the apparatus is configured to:
deploying the running environment of the terminal in a virtual machine, and running the process in the running environment of the terminal;
and monitoring the running environment of the terminal in the virtual machine to acquire the time consumed by each internal thread in the running scene for rendering the image frame of the process.
In an exemplary embodiment of the present application, the apparatus is configured to:
dividing each internal thread into a first priority, a second priority and a third priority according to the sequence from high to low of the time consumption of each internal thread for image frame rendering of a process in the running scene, wherein the internal thread of the first priority is preferentially matched with the processor big core, the internal thread of the third priority is preferentially matched with the processor small core, and the internal thread of the second priority is preferentially matched with the processor big core after the internal threads of the first priority are both matched with the processor big core.
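The division into three priorities by descending rendering time can be sketched as follows. The patent does not specify where the tier boundaries fall, so the equal-thirds split here is purely an assumption for illustration, as are the names.

```python
# Hypothetical sketch: rank threads by frame-rendering time in the current
# scene and split them into three tiers (boundaries assumed to be thirds).
def divide_priorities(render_cost):
    """render_cost: dict thread -> image-frame rendering time in the scene.
    Returns a dict thread -> priority tier 1 (highest cost) .. 3 (lowest)."""
    ordered = sorted(render_cost, key=render_cost.get, reverse=True)
    n = len(ordered)
    tiers = {}
    for i, t in enumerate(ordered):
        if i < n / 3:
            tiers[t] = 1
        elif i < 2 * n / 3:
            tiers[t] = 2
        else:
            tiers[t] = 3
    return tiers
```

The resulting tiers feed directly into the matching rule: tier 1 claims big cores first, tier 2 takes any big cores left idle, tier 3 prefers the small cores.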
In an exemplary embodiment of the present application, the apparatus is configured to:
and establishing real-time communication with the terminal, and sending a scheduling request to the terminal based on the matching priority, so that the terminal can schedule the large processor core and the small processor core for processing each internal thread by taking the matching priority as reference.
In an exemplary embodiment of the present application, the apparatus is configured to:
marking the thread name of each internal thread based on the category of the matched processor core, so that the terminal schedules the large processor core and the small processor core that process each internal thread, taking the matching priority as the reference.
In an exemplary embodiment of the present application, the apparatus is configured to:
acquiring scheduling state information returned by the terminal, wherein the scheduling state information is used for describing internal threads processed by the large processor core and the small processor core respectively;
adjusting the matching priority based on the scheduling status information.
In an exemplary embodiment of the present application, the apparatus is configured to:
uploading the scheduling state information to a process management terminal so that the process management terminal updates the matching priority in response to the scheduling state information;
and synchronizing the matching priority with the updated matching priority fed back by the process management end.
An electronic apparatus 50 according to an embodiment of the present application is described below with reference to fig. 6. The electronic device 50 shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, electronic device 50 is embodied in the form of a general purpose computing device. The components of the electronic device 50 may include, but are not limited to: the at least one processing unit 510, the at least one memory unit 520, and a bus 530 that couples various system components including the memory unit 520 and the processing unit 510.
Wherein the storage unit stores program code that is executable by the processing unit 510 to cause the processing unit 510 to perform steps according to various exemplary embodiments of the present invention as described in the description part of the above exemplary methods of the present specification. For example, the processing unit 510 may perform various steps as shown in fig. 3.
The memory unit 520 may include a readable medium in the form of a volatile memory unit, such as a random access memory unit (RAM) 5201 and/or a cache memory unit 5202, and may further include a read only memory unit (ROM) 5203.
Storage unit 520 may also include a program/utility 5204 having a set (at least one) of program modules 5205, such program modules 5205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may comprise an implementation of a network environment.
Bus 530 may be one or more of any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 50 may also communicate with one or more external devices 600 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 50, and/or with any device (e.g., router, modem, etc.) that enables the electronic device 50 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 550. An input/output (I/O) interface 550 is connected to the display unit 540. Also, the electronic device 50 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 560. As shown, the network adapter 560 communicates with the other modules of the electronic device 50 over the bus 530. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 50, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, and may also be implemented by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB flash disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiments of the present application.
In exemplary embodiments of the present application, there is also provided a computer readable storage medium having stored thereon computer readable instructions, which, when executed by a processor of a computer, cause the computer to perform the method described in the above method embodiment section.
According to an embodiment of the present application, there is also provided a program product for implementing the method in the above method embodiment, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this respect, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods herein are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB flash disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.

Claims (9)

1. A method for a game process to manage internal threads, the method comprising:
monitoring the current running scene of the game process;
acquiring the time consumed by each internal thread in the running scene for rendering the image frame of the game process;
determining the matching priority of each internal thread and a processor core of a terminal based on the time spent by each internal thread on rendering the image frame of the game process in the running scene, wherein the processor core comprises at least one high-frequency large processor core and at least one low-frequency small processor core;
interacting with the terminal based on the matching priority, so that the terminal schedules the large processor core and the small processor core for processing each internal thread by taking the matching priority as reference;
acquiring scheduling state information returned by the terminal, wherein the scheduling state information is used for describing internal threads processed by the large processor core and the small processor core respectively;
and adjusting the matching priority based on the scheduling state information, and interacting with the terminal based on the adjusted matching priority so that the terminal schedules the processor big core and the processor small core for processing each internal thread by taking the adjusted matching priority as reference.
2. The method of claim 1, wherein obtaining the rendering time of each internal thread for the image frame of the game process in the running scene comprises:
deploying the running environment of the terminal in a virtual machine, and running the game process in the running environment of the terminal;
and monitoring the running environment of the terminal in the virtual machine to acquire the time consumed by each internal thread in the running scene for rendering the image frame of the game process.
3. The method of claim 1, wherein determining the matching priority of each internal thread and a processor core of a terminal based on the rendering time of each internal thread for an image frame of a game progress in the running scene comprises:
and dividing the internal threads into a first priority, a second priority and a third priority according to the sequence of rendering time consumption of the internal threads on image frames of game processes from high to low in the running scene, wherein the internal threads of the first priority are preferentially matched with the large processor core, the internal threads of the third priority are preferentially matched with the small processor core, and the internal threads of the second priority are preferentially matched with the large processor core after the internal threads of the first priority are both matched with the large processor core.
4. The method of claim 1, wherein interacting with the terminal based on the matching priority for the terminal to schedule a large core and a small core of a processor for processing the internal threads with reference to the matching priority comprises:
and establishing real-time communication with the terminal, and sending a scheduling request to the terminal based on the matching priority, so that the terminal schedules the large processor core and the small processor core for processing each internal thread by taking the matching priority as reference.
5. The method of claim 1, wherein interacting with the terminal based on the matching priority for the terminal to schedule the large processor core and the small processor core for processing the internal threads with reference to the matching priority comprises:
and marking the thread name of each internal thread based on the category of the matched processor core, so that the terminal schedules a large processor core and a small processor core for processing each internal thread by taking the matching priority as reference.
6. The method of claim 1, further comprising:
uploading the scheduling state information to a process management terminal so that the process management terminal updates the matching priority in response to the scheduling state information;
and synchronizing the matching priority with the updated matching priority fed back by the process management end.
7. An apparatus for managing internal threads by a gaming process, the apparatus comprising:
the monitoring module is configured to monitor the current running scene of the game process;
the time consumption acquisition module is configured to acquire the time consumption of each internal thread for rendering the image frame of the game process in the running scene;
the determining module is configured to determine the matching priority of each internal thread and a processor core of a terminal based on the rendering time of each internal thread for the image frame of the game process in the running scene, wherein the processor core comprises at least one large processor core with high frequency and at least one small processor core with low frequency;
a first interaction module configured to interact with the terminal based on the matching priority, so that the terminal schedules the processor big core and the processor small core for processing each internal thread by using the matching priority as a reference;
the information acquisition module is configured to acquire scheduling state information returned by the terminal, wherein the scheduling state information is used for describing internal threads processed by the processor big core and the processor small core respectively;
and the second interaction module is configured to adjust the matching priority based on the scheduling state information and interact with the terminal based on the adjusted matching priority, so that the terminal schedules the processor big core and the processor small core which process each internal thread by taking the adjusted matching priority as a reference.
8. An electronic device, comprising:
a memory storing computer readable instructions;
a processor reading computer readable instructions stored by the memory to perform the method of any of claims 1-6.
9. A computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to perform the method of any of claims 1-6.
CN202110431231.8A 2021-04-21 2021-04-21 Method, device, electronic equipment and storage medium for process management internal thread Active CN113204425B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202310421243.1A CN116450353A (en) 2021-04-21 2021-04-21 Processor core matching method and device, electronic equipment and storage medium
CN202110431231.8A CN113204425B (en) 2021-04-21 2021-04-21 Method, device, electronic equipment and storage medium for process management internal thread

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110431231.8A CN113204425B (en) 2021-04-21 2021-04-21 Method, device, electronic equipment and storage medium for process management internal thread

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310421243.1A Division CN116450353A (en) 2021-04-21 2021-04-21 Processor core matching method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113204425A CN113204425A (en) 2021-08-03
CN113204425B true CN113204425B (en) 2023-04-07

Family

ID=77027700

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310421243.1A Pending CN116450353A (en) 2021-04-21 2021-04-21 Processor core matching method and device, electronic equipment and storage medium
CN202110431231.8A Active CN113204425B (en) 2021-04-21 2021-04-21 Method, device, electronic equipment and storage medium for process management internal thread

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202310421243.1A Pending CN116450353A (en) 2021-04-21 2021-04-21 Processor core matching method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (2) CN116450353A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116703691A (en) * 2022-11-17 2023-09-05 荣耀终端有限公司 Image processing method, electronic device, and computer storage medium
CN117130771A (en) * 2023-03-30 2023-11-28 荣耀终端有限公司 Resource scheduling method, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103631656A (en) * 2012-08-21 2014-03-12 联想(新加坡)私人有限公司 Task scheduling in big and little cores

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10639550B2 (en) * 2017-04-18 2020-05-05 Bullguard Ltd System and method for dynamically allocating resources to a game process
KR20200097579A (en) * 2019-02-08 2020-08-19 삼성전자주식회사 Electronic device, storage medium and method for process scheduling
CN110489228B (en) * 2019-07-16 2022-05-17 华为技术有限公司 Resource scheduling method and electronic equipment
CN111324454A (en) * 2020-02-03 2020-06-23 京东数字科技控股有限公司 Multi-core CPU allocation method and device, electronic equipment and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103631656A (en) * 2012-08-21 2014-03-12 联想(新加坡)私人有限公司 Task scheduling in big and little cores

Also Published As

Publication number Publication date
CN113204425A (en) 2021-08-03
CN116450353A (en) 2023-07-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40051679

Country of ref document: HK

GR01 Patent grant