CN113835871A - Thread management method, thread management device, computer storage medium and application software - Google Patents

Thread management method, thread management device, computer storage medium and application software Download PDF

Info

Publication number
CN113835871A
CN113835871A CN202010591463.5A CN202010591463A
Authority
CN
China
Prior art keywords
thread
task
map
rendering
main
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010591463.5A
Other languages
Chinese (zh)
Inventor
吴朝良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010591463.5A priority Critical patent/CN113835871A/en
Publication of CN113835871A publication Critical patent/CN113835871A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5044Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

Embodiments of the present invention provide a thread management method, a thread management apparatus, a computer storage medium, and application software. The thread management method includes the following steps: a main thread monitors the time consumed by a first thread, which executes a main task, to execute the main task; when the main thread detects that the consumed time exceeds a preset first consumed time threshold, a second thread satisfying a suspension condition among the second threads executing a secondary task is suspended, and the suspended second thread is resumed once the monitored consumed time falls below a preset second consumed time threshold, wherein the main task and the secondary task are tasks of the same application software, and the first consumed time threshold is greater than the second consumed time threshold. When the first thread needs more resources, part of the second threads are suspended so that the resource demand of the first thread is met; when the first thread needs fewer resources, concurrent execution of the first thread and the second threads is ensured.

Description

Thread management method, thread management device, computer storage medium and application software
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a thread management method, a thread management device, a computer storage medium and application software.
Background
Generally, when an application program runs on a terminal device, the application program schedules hardware resources such as a central processing unit and a graphics processor via an operating system. For example, embedded operating systems provide developers with development interfaces for scheduling hardware resources. Based on such an interface, a developer can configure multiple threads that execute in parallel for the application program, providing it with richer functions.
However, when the hardware resources are limited, concurrent execution of multiple threads may cause a problem in resource load balancing.
Disclosure of Invention
Embodiments of the present invention provide a thread management method, a thread management apparatus, a computer storage medium, and application software to solve or alleviate the above problems.
According to a first aspect of the embodiments of the present invention, there is provided a thread management method, including: a main thread monitors the time consumed by a first thread, which executes a main task, to execute the main task; when the main thread detects that the consumed time exceeds a preset first consumed time threshold, a second thread satisfying a suspension condition among the second threads executing a secondary task is suspended, and the suspended second thread is resumed once the monitored consumed time falls below a preset second consumed time threshold, wherein the main task and the secondary task are tasks of the same application software, and the first consumed time threshold is greater than the second consumed time threshold.
According to a second aspect of the embodiments of the present invention, there is provided a thread management apparatus, including: a thread monitoring module configured to monitor the time consumed by a first thread, which executes a main task, to execute the main task; and a thread control module configured to suspend, when the monitored consumed time exceeds a preset first consumed time threshold, a second thread satisfying a suspension condition among the second threads executing a secondary task, and to resume the suspended second thread once the monitored consumed time falls below a preset second consumed time threshold, wherein the main task and the secondary task are tasks of the same application software, and the first consumed time threshold is greater than the second consumed time threshold.
According to a third aspect of embodiments of the present invention, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, performs the method according to the first aspect.
According to a fourth aspect of embodiments of the present invention there is provided an application operable to perform a method as described in the first aspect.
In the solution of the embodiment of the present invention, when the consumed time exceeds the preset first consumed time threshold, the second threads satisfying the suspension condition among the second threads executing the secondary task are suspended, so that when the first thread needs more resources, part of the second threads are suspended and the resource demand of the first thread is met. In addition, since the first consumed time threshold is greater than the second consumed time threshold, falling below the second consumed time threshold indicates that the demand of the first thread is low; therefore, when the monitored consumed time is lower than the preset second consumed time threshold, the suspended second threads are resumed, and concurrent execution of the first thread and the second threads is ensured while the demand of the first thread is low.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that a person skilled in the art can obtain other drawings based on these drawings.
FIG. 1A is a schematic flow chart diagram of a method for thread load balancing according to one embodiment of the present invention;
FIG. 1B is a diagram illustrating a network architecture used by a map navigation service according to another embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram of a thread load balancing method according to another embodiment of the present invention;
FIG. 3A is a diagram illustrating a thread load balancing method according to another embodiment of the present invention;
FIG. 3B is a diagram illustrating a thread load balancing method according to another embodiment of the invention;
FIG. 4 is a schematic block diagram of a thread load balancing apparatus according to another embodiment of the present invention;
FIG. 5 is a hardware configuration diagram of an electronic device according to another embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the scope of the protection of the embodiments of the present invention.
The following further describes specific implementation of the embodiments of the present invention with reference to the drawings.
FIG. 1A is a schematic flow chart of a thread load balancing method according to an embodiment of the present invention. The thread load balancing method of FIG. 1A may be applied to any suitable electronic device having data processing capabilities, including but not limited to: servers, mobile terminals (e.g., cell phones, PADs, etc.), PCs, and the like, and in particular may include computer systems such as in-vehicle devices, desktop computers, notebook or laptop computers, netbooks, tablet computers, e-book readers, GPS devices, cameras, personal digital assistants (PDAs), handheld electronic devices, cellular telephones, smart phones, other suitable electronic devices, or any suitable combination thereof. By way of example, and not limitation, embodiments of the invention contemplate any suitable apparatus. The method of FIG. 1A includes:
1100: the main thread monitors the time it takes for the first thread executing the main task to execute the main task.
It should be appreciated that when an application starts, the process of the application is created by the operating system (OS), and the main thread runs accordingly. The primary task may be executed by a first thread and the secondary task by a second thread. The primary task may correspond to one or more first threads, and the secondary task may correspond to one or more second threads. The first thread and the second thread may be child threads of the main thread. The number of first threads and the number of second threads may be the same or different.
It is also understood that a first thread pool may be set for the first threads and/or a second thread pool may be set for the second threads. For example, a plurality of first threads may be managed in the first thread pool; when executing the main task, the plurality of first threads may execute a main task queue corresponding to the main task based on a first thread polling algorithm. Likewise, a plurality of second threads may be managed in the second thread pool; when executing the secondary task, the plurality of second threads may execute a secondary task queue corresponding to the secondary task based on a second thread polling algorithm. The first thread polling algorithm and the second thread polling algorithm may be the same or different.
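By way of illustration only, the sketch below shows one possible realization of such a thread pool, in which several worker threads repeatedly poll a shared first-in-first-out task queue; the class and method names (SimpleTaskPool and the like) are assumptions of this description and are not prescribed by the embodiments.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A minimal sketch, assuming the "polling algorithm" simply means that each worker
// thread repeatedly takes the next task from a shared FIFO queue.
class SimpleTaskPool {
    private final BlockingQueue<Runnable> taskQueue = new LinkedBlockingQueue<>();
    private final Thread[] workers;

    SimpleTaskPool(String name, int threadCount) {
        workers = new Thread[threadCount];
        for (int i = 0; i < threadCount; i++) {
            workers[i] = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        taskQueue.take().run();            // poll the queue for the next task
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();    // exit when the pool is shut down
                }
            }, name + "-" + i);
            workers[i].start();
        }
    }

    void submit(Runnable task) {
        taskQueue.offer(task);                             // tasks join at the tail of the queue
    }

    void shutdown() {
        for (Thread t : workers) t.interrupt();
    }
}

A first pool of this kind could be created for the main task queue (e.g., rendering) and a second pool for the secondary task queue (e.g., loading), with the same or different numbers of worker threads.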
It should also be understood that the time consumption herein may be the task execution time or the task execution speed. For example, in the example of map navigation application software, the elapsed time includes, but is not limited to, map frame rendering speed, map frame rendering elapsed time, map frame rendering frame rate, and the like. The time consumption may also include, but is not limited to, map frame loading speed, map frame loading time consumption, map frame loading frame rate, and the like. The time consumption may also include, but is not limited to, map frame expansion speed, map frame expansion time consumption, map frame expansion frame rate, and the like.
1200: and when the monitoring consumed time of the main thread exceeds a preset first consumed time threshold, suspending a second thread meeting the suspension condition in the second thread executing the secondary task, and resuming the suspended second thread until the monitored consumed time is lower than a preset second consumed time threshold, wherein the main task and the secondary task are tasks of the same application software, and the first consumed time threshold is larger than the second consumed time threshold.
It should be understood that the application software may be map navigation application software, and may also be another client application or program. In the case where there is a single second thread, the second thread satisfying the suspension condition may be that single second thread. In the case where there are a plurality of second threads, the second threads satisfying the suspension condition may be a part of the plurality of second threads. For example, in the second thread pool, if a random polling algorithm is adopted for the second threads, a part of the plurality of second threads may be suspended at random.
It should also be understood that in the first example, the primary task may be a map rendering task, the secondary task may be a map data loading task, the first thread may be a map rendering thread, and the second thread may be a map data loading thread. In a second example, the primary task may be a map rendering task, the secondary task may be a map expansion task, the first thread may be a map rendering thread, and the second thread may be a map expansion thread. In a third example, the primary task may be a map expansion task, the secondary task may be a map data loading task, the first thread may be a map expansion thread, and the second thread may be a map data loading thread.
In the solution of the embodiment of the present invention, when the consumed time exceeds the preset first consumed time threshold, the second threads satisfying the suspension condition among the second threads executing the secondary task are suspended, so that when the first thread needs more resources, part of the second threads are suspended and the resource demand of the first thread is met. In addition, since the first consumed time threshold is greater than the second consumed time threshold, falling below the second consumed time threshold indicates that the demand of the first thread is low; therefore, when the monitored consumed time is lower than the preset second consumed time threshold, the suspended second threads are resumed, and concurrent execution of the first thread and the second threads is ensured while the demand of the first thread is low.
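By way of example and not limitation, the following sketch illustrates how the two-threshold behaviour described above might be realized with a cooperative pause flag; the PausableWorker and MainThreadMonitor classes and the threshold values (32 ms and 16 ms) are assumptions for illustration, not the implementation of the embodiments.

import java.util.List;

// A minimal sketch, assuming each secondary worker checks a cooperative pause flag
// between secondary tasks; names and threshold values are illustrative only.
class PausableWorker {
    private volatile boolean paused = false;

    void pause()  { paused = true; }

    void resume() {
        synchronized (this) {
            paused = false;
            notifyAll();
        }
    }

    // Called by the worker thread between two secondary tasks.
    void awaitIfPaused() throws InterruptedException {
        synchronized (this) {
            while (paused) wait();
        }
    }
}

class MainThreadMonitor {
    private static final long FIRST_THRESHOLD_MS  = 32;  // assumed first consumed time threshold
    private static final long SECOND_THRESHOLD_MS = 16;  // assumed second threshold, smaller than the first

    private volatile long lastMainTaskElapsedMs;          // reported by the first thread after each main task

    void onMainTaskFinished(long elapsedMs) { lastMainTaskElapsedMs = elapsedMs; }

    // Invoked periodically on the main thread.
    void check(List<PausableWorker> secondaryWorkers) {
        long elapsed = lastMainTaskElapsedMs;
        if (elapsed > FIRST_THRESHOLD_MS) {
            // The main task is slow: suspend the second threads that satisfy the suspension
            // condition (here simply all of them; a real policy could select a subset).
            secondaryWorkers.forEach(PausableWorker::pause);
        } else if (elapsed < SECOND_THRESHOLD_MS) {
            // The main task is fast again: resume the suspended second threads.
            secondaryWorkers.forEach(PausableWorker::resume);
        }
        // Between the two thresholds nothing changes, which avoids rapid oscillation.
    }
}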
In one example, the application software may be map navigation application software. FIG. 1B is a schematic diagram of a network architecture 100 used by a map navigation service according to an embodiment of the present invention. As shown, the map navigation server 160 communicates with the map navigation client 120 over the network 110. A communication link 130 may be, for example, the connection of the map navigation server 160 to the network 110 or the connection of the map navigation client 120 to the network 110. The two communication links 130 in the figure may be the same or different. It should be appreciated that a communication link 130 may include one or more wireline links (e.g., Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless links (e.g., Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX)), or optical links (e.g., Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH)). In one particular implementation, one or more communication links 130 each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the Internet, a portion of the PSTN, a network based on cellular technology, a network based on satellite communication technology, another communication link 130, or a combination of two or more such links. The communication links 130 need not be the same throughout the map navigation network architecture 100.
Any suitable network 110 is contemplated by embodiments of the present invention. By way of example and not limitation, one or more portions of the network 110 may include an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the public switched telephone network (PSTN), a cellular telephone network, or a combination of two or more of these. The network 110 may include one or more networks 110. The network 110 may be accessed by users at the map navigation client 120, and the map navigation client 120 may enable its user to communicate with users at other client systems 120.
The system framework of the map navigation service network architecture of FIG. 1B may be implemented in a so-called browser/server (B/S) or client/server (C/S) mode. The map navigation server 160 includes a server 162 and a data store 164. Map navigation data, e.g., vector data for rendering, may be stored in the data store 164. The server 162 is used to read map data, such as vector data, from the data store 164 and send the map data to the map navigation client 120 via the network 110. The map navigation client 120 is installed with a network module 123 through which its users receive the map data. The map navigation client 120 is also equipped with hardware devices such as a memory and a processor 124. In one example, the map navigation client 120 may be installed with an operating system such as an embedded operating system or a real-time operating system. The operating system may have installed in it, for example, a navigation application providing a map navigation service, a life-style application, a travel application, a map navigation application, a browser application capable of navigating by browsing a web map, and the like. Although not shown, the map navigation client 120 may also be equipped with a graphics processing unit (GPU). An application program may be installed in the operating system through an application programming interface (API) of the operating system, and may schedule the processor 124 through the API to render map navigation data such as vector data. It should be understood that the application program loads the received map navigation data into a memory (e.g., main memory) through the network module 123. The loading is performed, for example, by a loading thread of the loading module 122 scheduling the processor 124. In addition, the rendering module 121 may perform rendering processing by its rendering thread scheduling the processor 124. The processor 124 may perform the rendering described above by controlling or scheduling the GPU (e.g., hardware rendering); alternatively, the processor 124 may perform the rendering processing by itself (e.g., software rendering). The following describes aspects of embodiments of the present invention in detail with reference to the other drawings.
It should be appreciated that a map frame loading queue may be used to store map frame loading tasks. For example, the map frame loading queue may be a queue of multiple map frame loading tasks, each used to load a specific map frame. Alternatively, the map frame loading queue may be a queue of multiple map frame element loading tasks, each used to load a specific map frame element. A map frame element may be a drawing element of a point, a line, or a surface used in rendering. For example, several map frame element loading tasks together constitute one map frame loading task. In addition, a map frame loading task herein may be one map frame loading task or a plurality of map frame loading tasks.
In one example, one or more completed map frame loading tasks in a map frame loading queue may be added to a map frame rendering queue. For example, multiple map frame element loading tasks may be added to the map frame rendering queue to obtain multiple map frame element rendering tasks, or, logically, multiple map frame rendering tasks composed of multiple map frame element rendering tasks may be obtained.
Alternatively, one or more completed map frame loading tasks in the map frame loading queue may be added to the map frame expansion queue. Accordingly, one or more map frames already expanded in the map frame expansion queue may be added to the map frame rendering queue. For example, a plurality of map frame element loading tasks may be added to the map frame expansion queue to obtain a plurality of map frame element expansion tasks, or, logically, a plurality of map frame expansion tasks composed of a plurality of map frame element expansion tasks may be obtained. In addition, a plurality of map frame element expansion tasks may be added to the map frame rendering queue to obtain a plurality of map frame element rendering tasks, or, logically, a plurality of map frame rendering tasks composed of a plurality of map frame element rendering tasks may be obtained.
In another example, in the map frame expansion queue, the map frame loading queue, and the map frame rendering queue, the corresponding tasks may be read from the first position of the queue in sequence. When a task joins a queue, it is appended at the tail of the queue in turn. In addition, tasks that have already been added to a queue may be repositioned within it.
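A minimal sketch of the three queues described above is given below, assuming each task is represented by a simple marker object; the type names are hypothetical. In a real implementation the queues would need to be thread-safe, since loading, expansion, and rendering run on different threads.

import java.util.ArrayDeque;
import java.util.Deque;

// A minimal sketch of the three FIFO queues; MapFrameTask is a hypothetical marker type.
class MapFrameTask {
    final String tileId;
    MapFrameTask(String tileId) { this.tileId = tileId; }
}

class MapFramePipeline {
    final Deque<MapFrameTask> loadQueue      = new ArrayDeque<>();
    final Deque<MapFrameTask> expansionQueue = new ArrayDeque<>();
    final Deque<MapFrameTask> renderQueue    = new ArrayDeque<>();

    // Called when a loading thread completes a map frame (or map frame element) loading task.
    void onLoadCompleted(MapFrameTask task) {
        expansionQueue.addLast(task);          // appended at the tail of the expansion queue
    }

    // Called when an expansion thread completes a map frame expansion task.
    void onExpansionCompleted(MapFrameTask task) {
        renderQueue.addLast(task);             // becomes a map frame rendering task
    }

    MapFrameTask nextToExpand() { return expansionQueue.pollFirst(); }  // read from the first position
    MapFrameTask nextToRender() { return renderQueue.pollFirst(); }
}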
In another example, a completed map frame loading task in the map frame loading queue may be added to the map frame rendering queue as a map frame rendering task by scheduling the processor.
It should also be understood that the rendering task may render the map frame data, which may be vector data acquired from a server or the like. In general, the geographic range displayed on the client screen may cover a plurality of map data blocks (mapfiles) composed of vector data. The vector data can be transmitted online or stored locally. The data in each map data block includes POI (point of interest) points, lines (roads), surfaces (green spaces, water, areas, etc.), buildings, and so on. The loading task described above may refer to loading a map data block. The expansion task described above refers to expanding POI points, lines (roads), surfaces (green spaces, water, areas), and the like into data that can be used by the rendering thread. The above flow may be handled by a thread pool or by a separate thread. The rendering thread processes the expanded data; through the rendering thread's processing, the data is assembled into instructions of a rendering interface such as OpenGL (a rendering development interface available on the Android operating system) so as to drive the GPU to render the display.
The vector data may be rendered in conjunction with a three-dimensional model. The vector data may include a road overlay depicting the location of the road in the geographic area. The vector data may also include various text annotations. Other vector data may be included in the representation of the geographic area. The vector data texture may be mapped to a smooth transparent wrinkled layer conforming to the terrain geometry. The portion of the smooth transparent corrugated layer that does not include vector data may be invisible or completely transparent.
In another implementation manner of the present invention, the monitoring, by the main thread, the time consumed for executing the main task by the first thread executing the main task includes: the main thread monitors the time consumed by rendering each frame of the map while the map rendering thread executes the map rendering task.
In one example, the main thread monitoring the elapsed time of rendering each map frame while the map rendering thread performs the map rendering task includes: monitoring the rendering threads periodically. For example, for each monitoring cycle, a first target rendering thread is set as the monitoring start point and a second target rendering thread is set as the monitoring end point. The monitoring end point of the current cycle may be used as the monitoring start point of the next cycle; correspondingly, the monitoring start point of the current cycle may be the monitoring end point of the previous cycle. In addition, the main thread monitoring the time consumed for rendering each map frame may include: determining a rendering frame rate for one or more map frame rendering tasks. The frame rate may be expressed as the number of map frames rendered per unit time, or as the rendering duration of each map frame, which may also be referred to as the frame duration of a map frame. The rendering frame rate of a plurality of map frame rendering tasks may be obtained by averaging the rendering durations of those tasks; alternatively, the median of the plurality of rendering durations may be used as the rendering frame rate. The set of rendering tasks to evaluate may be determined through a specific time window; or a target rendering task whose rendering duration changes significantly may be identified by analyzing the duration of each rendering task (e.g., by differentiation), and several rendering tasks following that target task are then taken as the tasks to evaluate. Alternatively, an interpolator in the application program may record the duration of each rendering task, a preset window containing a target number of rendering tasks may be moved continuously or by a preset step length, and the durations of the rendering tasks within the window may be averaged to obtain the rendering frame rate.
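By way of illustration, the sliding-window averaging described above might be realized as in the following sketch; the window length of 30 frames is an assumed parameter, and the median variant would sort the window contents instead of summing them.

import java.util.ArrayDeque;
import java.util.Deque;

// A minimal sketch, assuming a sliding window over the most recent per-frame rendering durations.
class FrameRateMonitor {
    private static final int WINDOW = 30;                  // assumed window length, in frames
    private final Deque<Long> durationsMs = new ArrayDeque<>();
    private long windowSumMs = 0;

    // Called each time the rendering thread completes one map frame.
    synchronized void onFrameRendered(long frameDurationMs) {
        durationsMs.addLast(frameDurationMs);
        windowSumMs += frameDurationMs;
        if (durationsMs.size() > WINDOW) {
            windowSumMs -= durationsMs.removeFirst();      // slide the window forward
        }
    }

    // Average frame duration over the window; its reciprocal gives frames per second.
    synchronized double averageFrameDurationMs() {
        return durationsMs.isEmpty() ? 0 : (double) windowSumMs / durationsMs.size();
    }

    synchronized double framesPerSecond() {
        double avg = averageFrameDurationMs();
        return avg == 0 ? 0 : 1000.0 / avg;
    }
}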
It should be understood that the current rendering frame rate may be determined based on the rendering duration of map frame element rendering tasks or the duration of map frame rendering tasks. The map frame element rendering task may be one or more rendering tasks, and the map frame rendering task may likewise be one or more rendering tasks. In one example, the at least one loading thread is a plurality of loading threads: the current judgment processing is executed, and if the current rendering frame rate of the map frame rendering task is less than the rendering frame rate threshold, the plurality of loading threads are stopped, and then the next judgment processing is executed. In another example, the at least one loading thread is a plurality of loading threads: the current judgment processing is executed, and if the current rendering frame rate of the map frame rendering task is less than the rendering frame rate threshold, a first loading thread of the plurality of loading threads is stopped; the next judgment processing is then executed, and if the current rendering frame rate is still less than the rendering frame rate threshold, a second loading thread of the plurality of loading threads is further stopped. The first loading thread and the second loading thread may each be one or more loading threads. Additionally, stopping a loading thread may include suspending the loading thread so that it temporarily stops executing its loading task, or deleting the loading task executed by the loading thread. For the above example, the first loading thread and the second loading thread may both be suspended, the loading tasks executed by them may both be deleted, or one of them may be suspended while the loading task executed by the other is deleted.
Because the rendering threads and the loading threads both occupy processor resources, and the rendering frame rate threshold is set to ensure rendering fluency, stopping at least one loading thread when the current rendering frame rate is less than the rendering frame rate threshold makes it possible to keep rendering fluent under the existing processor resources; in other words, stuttering during rendering is reduced.
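The stepwise judgment processing described above might look roughly like the following sketch, which reuses the PausableWorker and FrameRateMonitor sketches given earlier; the rendering frame rate threshold of 30 frames per second is an assumption.

import java.util.List;

// A minimal sketch of suspending loading threads one at a time and re-checking the
// rendering frame rate between judgment passes.
class LoadThreadThrottler {
    private static final double FRAME_RATE_THRESHOLD_FPS = 30.0;  // assumed threshold

    private final FrameRateMonitor monitor;
    private final List<PausableWorker> loadWorkers;
    private int suspendedCount = 0;

    LoadThreadThrottler(FrameRateMonitor monitor, List<PausableWorker> loadWorkers) {
        this.monitor = monitor;
        this.loadWorkers = loadWorkers;
    }

    // One judgment processing pass.
    void judgeOnce() {
        if (monitor.framesPerSecond() < FRAME_RATE_THRESHOLD_FPS) {
            // Still too slow: suspend one more loading thread (the first pass stops the
            // first loading thread, the next pass stops the second, and so on).
            if (suspendedCount < loadWorkers.size()) {
                loadWorkers.get(suspendedCount).pause();
                suspendedCount++;
            }
        } else if (suspendedCount > 0) {
            // Rendering is smooth again: resume the most recently suspended loading thread.
            suspendedCount--;
            loadWorkers.get(suspendedCount).resume();
        }
    }
}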
As an example, the method further comprises: and if the rendering frame rate of the map frame rendering task is not less than the rendering frame rate threshold, starting at least one loading thread for loading the uncompleted map frame loading task in the map frame loading queue by calling the processor.
In one example, a trend of change of the current rendering frame rate may be determined, e.g., a derivative indicative of the rate of change of the rendering frame rate is determined, and the number of loading threads to start is determined based on the derivative, such that the number of started loading threads is positively correlated with the derivative. In another example, the number of loading tasks to add is determined based on the derivative, such that the number of added loading tasks is positively correlated with the derivative.
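A possible, purely illustrative realization of this derivative-based sizing is sketched below; the finite-difference approximation of the derivative and the proportionality constant are assumptions of this description.

// A minimal sketch: estimate the rate of change of the rendering frame rate with a
// finite difference and derive a thread (or task) count that is positively correlated with it.
class DerivativeScaler {
    private static final double THREADS_PER_FPS_PER_SECOND = 0.5;  // assumed gain

    private double previousFps = 0;
    private long previousSampleMs = System.currentTimeMillis();

    int threadsToStart(double currentFps) {
        long now = System.currentTimeMillis();
        double dtSeconds = Math.max((now - previousSampleMs) / 1000.0, 1e-3);
        double derivative = (currentFps - previousFps) / dtSeconds;   // fps per second

        previousFps = currentFps;
        previousSampleMs = now;

        // Positive correlation with the derivative; never a negative count.
        return (int) Math.max(0, Math.round(derivative * THREADS_PER_FPS_PER_SECOND));
    }
}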
As an example, a main thread monitors the time consumed by a first thread executing a main task to execute the main task, including: monitoring a rendering thread which renders the at least one rendering task through the scheduling processor, and determining the total rendering duration of the at least one rendering task; determining a rendering frame rate based on the total rendering duration and the number of the at least one rendering task.
In another example, a trend of change of the current rendering frame rate may be determined, e.g., a derivative indicative of the rate of change of the rendering frame rate is determined, and the number of loading threads to suspend is determined based on the derivative, such that the number of suspended loading threads is positively correlated with the derivative.
As an example, a main thread monitors the time consumed by a first thread executing a main task to execute the main task, including: monitoring rendering threads which render at least one rendering task through a scheduling processor, and recording a group of element rendering durations corresponding to each rendering task to obtain multi-element rendering durations corresponding to a plurality of rendering tasks, wherein the group of element rendering durations comprise a plurality of element rendering durations respectively corresponding to the plurality of element rendering tasks of each rendering task; and summing the rendering time lengths of the multiple elements to obtain the total rendering time length.
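By way of illustration only, the per-element bookkeeping described in this example might be realized as follows; the class and method names are hypothetical.

import java.util.ArrayList;
import java.util.List;

// A minimal sketch: record one group of element rendering durations per rendering task,
// sum them into a total rendering duration, and derive an average frame duration.
class ElementDurationRecorder {
    private final List<long[]> perTaskElementDurationsMs = new ArrayList<>();

    // One group of element rendering durations corresponds to one rendering task.
    void recordTask(long[] elementDurationsMs) {
        perTaskElementDurationsMs.add(elementDurationsMs);
    }

    long totalRenderDurationMs() {
        long total = 0;
        for (long[] group : perTaskElementDurationsMs) {
            for (long d : group) total += d;               // sum the multi-element rendering durations
        }
        return total;
    }

    // Total duration divided by the number of rendering tasks, i.e. the average frame duration.
    double averageFrameDurationMs() {
        int taskCount = perTaskElementDurationsMs.size();
        return taskCount == 0 ? 0 : (double) totalRenderDurationMs() / taskCount;
    }
}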
As an example, when the time consumption monitored by the main thread exceeds the preset first consumed time threshold, suspending a second thread satisfying a suspension condition among the second threads executing the secondary task includes: retaining the outstanding map frame loading tasks in the map frame loading queue while suspending at least one loading thread that loads those outstanding map frame loading tasks by invoking the processor.
In another implementation of the present invention, after suspending a second thread satisfying a suspension condition among second threads that are executing a secondary task, the method further includes: deleting map data loading tasks exceeding a preset life cycle in the map data loading tasks; and/or deleting tasks in the map data loading tasks that are not at the current map rendering level.
As an example, deleting a task that is not at the current map rendering level in the map data loading task further includes: and deleting the completed map frame loading task in the map frame loading queue.
In another example, a trend of change of the current rendering frame rate may be determined, e.g., a derivative indicative of a rate of change of the rendering frame rate is determined, and the number of load tasks deleted is determined based on the derivative such that the number of load tasks deleted is positively correlated with the derivative.
As an example, deleting a task that is not at the current map rendering level in the map data loading task includes: determining a part of map frame loading tasks in the uncompleted map frame loading tasks based on the preset queue length and/or the current map frame rendering level; suspending at least one loading thread that loads a portion of the map frame loading tasks by invoking the processor, and deleting remaining ones of the outstanding map frame loading tasks.
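A sketch of such pruning is given below, assuming each loading task carries a creation timestamp and the map rendering (zoom) level it belongs to; the life-cycle value of 10 seconds is an assumption.

import java.util.Deque;
import java.util.Iterator;

// A minimal sketch of deleting stale or off-level loading tasks from the map frame loading queue.
class LoadQueuePruner {
    private static final long MAX_LIFETIME_MS = 10_000;    // assumed preset life cycle

    static class LoadTask {
        final long createdAtMs;
        final int zoomLevel;                               // the map rendering level the task belongs to
        LoadTask(long createdAtMs, int zoomLevel) {
            this.createdAtMs = createdAtMs;
            this.zoomLevel = zoomLevel;
        }
    }

    void prune(Deque<LoadTask> loadQueue, int currentRenderLevel) {
        long now = System.currentTimeMillis();
        for (Iterator<LoadTask> it = loadQueue.iterator(); it.hasNext(); ) {
            LoadTask task = it.next();
            boolean expired  = now - task.createdAtMs > MAX_LIFETIME_MS;  // exceeds the preset life cycle
            boolean offLevel = task.zoomLevel != currentRenderLevel;      // not at the current map rendering level
            if (expired || offLevel) {
                it.remove();
            }
        }
    }
}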
In another implementation manner of the present invention, when there are two or more second threads executing the secondary task, suspending a second thread satisfying a suspension condition among the second threads executing the secondary task includes: suspending part or all of the second threads that are executing the secondary task.
As one example, suspending a portion or all of a second thread that is executing a secondary task includes: determining a plurality of loading threads for loading uncompleted map frame loading tasks in a map frame loading queue by calling a processor from a map frame loading thread pool; if the current rendering frame rate of the map frame rendering task is less than the rendering frame rate threshold, suspending at least one loading thread from the plurality of loading threads and maintaining other loading threads in the thread pool.
As one example, suspending a portion or all of a second thread that is executing a secondary task includes: determining a thread overhead ordering of a plurality of loading threads; if the current rendering frame rate of the map frame rendering task is smaller than the rendering frame rate threshold, determining at least one loading thread from a plurality of loading threads based on thread overhead sequencing; at least one load thread is suspended and the other load threads in the thread pool are maintained.
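The overhead-ordered selection might be sketched as follows; measuring the overhead of a loading thread by its accumulated CPU time via ThreadMXBean is only one possible interpretation and is not prescribed by the embodiments.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// A minimal sketch: rank loading threads by measured CPU time (a stand-in for "thread
// overhead") and suspend the most expensive ones, keeping the others running.
class OverheadBasedSelector {
    private final ThreadMXBean threadMx = ManagementFactory.getThreadMXBean();

    // workers.get(i) is assumed to run on the thread whose id is threadIds.get(i).
    void suspendMostExpensive(List<PausableWorker> workers, List<Long> threadIds, int howMany) {
        Integer[] order = new Integer[workers.size()];
        for (int i = 0; i < order.length; i++) order[i] = i;

        // Sort worker indices by descending CPU time.
        Arrays.sort(order, Comparator.comparingLong(
                (Integer i) -> threadMx.getThreadCpuTime(threadIds.get(i))).reversed());

        for (int k = 0; k < Math.min(howMany, order.length); k++) {
            workers.get(order[k]).pause();                 // suspend the heaviest loading threads
        }
    }
}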
FIG. 2 is a flowchart illustrating a thread load balancing method according to another embodiment of the present invention. As shown, in step 201, the current rendering frame rate is acquired, and step 202 is entered. In one example, the obtaining of the current rendering frame rate may be performed based on a preset time point. In another example, after each map frame rendering task or map frame element rendering task is completed, based on a preset task window length, a rendering frame rate indicated by the map frame rendering task or map frame element rendering task within the window length, e.g., an average rendering frame rate, may be calculated accordingly.
In step 202, it is determined whether the current rendering frame rate is less than the rendering frame rate threshold; if so, step 203 is entered, and if not, step 204 is entered. In step 203, the loading thread is suspended, and the flow returns to step 201. In one example, a trend of change of the current rendering frame rate may be determined, e.g., a derivative indicative of the rate of change of the rendering frame rate is determined, and the number of loading threads to suspend is determined based on the derivative, such that the number of suspended loading threads is positively correlated with the derivative. In another example, the number of loading tasks to delete is determined based on the derivative, such that the number of deleted loading tasks is positively correlated with the derivative.
In step 204, the loading thread is started, and the flow returns to step 201. In one example, a trend of change of the current rendering frame rate may be determined, e.g., a derivative indicative of the rate of change of the rendering frame rate is determined, and the number of loading threads to start is determined based on the derivative, such that the number of started loading threads is positively correlated with the derivative. In another example, the number of loading tasks to add is determined based on the derivative, such that the number of added loading tasks is positively correlated with the derivative.
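Taken together, steps 201 to 204 can be pictured as the control loop sketched below, reusing the FrameRateMonitor and PausableWorker sketches given earlier; the frame rate threshold and polling interval are assumptions.

import java.util.List;

// A minimal sketch of the control loop in FIG. 2: read the current rendering frame rate
// (step 201), compare it against the threshold (step 202), suspend or start the loading
// threads (steps 203 and 204), and return to step 201.
class LoadBalancingLoop implements Runnable {
    private static final double FRAME_RATE_THRESHOLD_FPS = 30.0;  // assumed threshold
    private static final long   POLL_INTERVAL_MS = 100;           // assumed polling interval

    private final FrameRateMonitor monitor;
    private final List<PausableWorker> loadWorkers;

    LoadBalancingLoop(FrameRateMonitor monitor, List<PausableWorker> loadWorkers) {
        this.monitor = monitor;
        this.loadWorkers = loadWorkers;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                double fps = monitor.framesPerSecond();             // step 201
                if (fps < FRAME_RATE_THRESHOLD_FPS) {               // step 202
                    loadWorkers.forEach(PausableWorker::pause);     // step 203: suspend loading threads
                } else {
                    loadWorkers.forEach(PausableWorker::resume);    // step 204: start (resume) loading threads
                }
                Thread.sleep(POLL_INTERVAL_MS);                     // back to step 201
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}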
FIG. 3A is a schematic diagram of a thread load balancing method according to another embodiment of the invention. As shown, the upper diagram indicates that the loading thread is started when the rendering frame rate is greater than the predetermined threshold, and the lower diagram indicates that the loading thread is suspended when the rendering frame rate is less than the predetermined threshold. In addition, when the loading thread is started, the rendering thread schedules the processor to perform the rendering processing. At the same time, the processor may control or schedule the graphics processor to perform parallel logic computations in order to speed up the rendering process described above. The processor may also schedule the rendering thread and the loading thread concurrently. Only one rendering thread and one loading thread are shown in the figure, but it should be understood that in other examples there may be multiple rendering threads and/or multiple loading threads. For example, for either kind of thread described above, a thread pool may be built, scheduling multiple rendering and loading tasks accordingly.
Fig. 3B is a schematic diagram of a thread load balancing method according to another embodiment of the invention. As shown, in this example, one rendering thread and three loading threads are included. In the case where the rendering frame rate is less than the predetermined threshold, the load thread 1 and the load thread 3 are suspended while the load thread 2 is kept. It should be understood that load thread 1 and load thread 3 may be suspended simultaneously or may be suspended sequentially. For example, load thread 1 may be suspended, then the current rendering frame rate is calculated, and if the current rendering frame rate is still less than the preset threshold, load thread 3 is suspended. In addition, after suspending the loading thread 3, the current frame rate may also be calculated, and if the current rendering frame rate is still less than the preset threshold, the loading thread 2 may be suspended. Similarly, if the current rendering frame rate is greater than the threshold, any of the load threads described above may be resumed. For example, multiple load threads may be resumed or started simultaneously, or the load threads may be resumed or started sequentially. The embodiment of the present invention is not limited thereto.
FIG. 4 is a schematic block diagram of a thread load balancing apparatus according to another embodiment of the present invention. The thread management apparatus of fig. 4 may be adapted to any suitable electronic device having data processing capabilities, including but not limited to: server, mobile terminal (such as mobile phone, PAD, etc.), PC, etc. The thread management apparatus of fig. 4 includes:
a thread monitoring module 410 for monitoring the time consumed by a first thread executing a main task to execute the main task;
and the thread control module 420 is configured to suspend, when the monitored consumed time exceeds a preset first consumed time threshold, a second thread satisfying a suspension condition among the second threads executing a secondary task, and to resume the suspended second thread once the monitored consumed time falls below a preset second consumed time threshold, wherein the primary task and the secondary task are tasks of the same application software, and the first consumed time threshold is greater than the second consumed time threshold.
In the solution of the embodiment of the present invention, when the consumed time exceeds the preset first consumed time threshold, the second threads satisfying the suspension condition among the second threads executing the secondary task are suspended, so that when the first thread needs more resources, part of the second threads are suspended and the resource demand of the first thread is met. In addition, since the first consumed time threshold is greater than the second consumed time threshold, falling below the second consumed time threshold indicates that the demand of the first thread is low; therefore, when the monitored consumed time is lower than the preset second consumed time threshold, the suspended second threads are resumed, and concurrent execution of the first thread and the second threads is ensured while the demand of the first thread is low.
In another implementation manner of the present invention, the application software is a map navigation application software, the primary task is a map rendering task, the secondary task is a map data loading task, the first thread is a map rendering thread, and the second thread is a map data loading thread.
In another implementation manner of the present invention, the application software is map navigation application software, wherein the main task is a map rendering task, the sub-task is a map expanding task, the first thread is a map rendering thread, and the second thread is a map expanding thread, or the main task is a map expanding task, the sub-task is a map data loading task, the first thread is a map expanding thread, and the second thread is a map data loading thread.
In another implementation manner of the present invention, the thread monitoring module is specifically configured to: the main thread monitors the time consumed for rendering each frame of map when the map rendering thread executes the map rendering task.
In another implementation of the present invention, the thread control module is further configured to: after suspending a second thread meeting a suspension condition in second threads which are executing secondary tasks, deleting a map data loading task exceeding a preset life cycle in the map data loading tasks; and/or deleting tasks which are not at the current map rendering level in the map data loading tasks.
In another implementation manner of the present invention, the thread control module is specifically configured to: suspend part or all of the second threads that are executing the secondary task.
The apparatus of this embodiment is used to implement the corresponding method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the functional implementation of each module in the apparatus of this embodiment can refer to the description of the corresponding part in the foregoing method embodiment, and is not described herein again.
Fig. 5 is a hardware configuration of an electronic device according to another embodiment of the present invention; as shown in fig. 5, the hardware structure of the electronic device may include: a processor 501, a communication interface 502, a computer-readable medium 503, and a communication bus 504;
wherein the processor 501, the communication interface 502 and the computer readable medium 503 are communicated with each other through a communication bus 504;
alternatively, the communication interface 502 may be an interface of a communication module;
the processor 501 may be specifically configured to: the main thread monitors the time consumed by a first thread executing the main task to execute the main task; when the main thread monitors that the consumed time exceeds a preset first consumed time threshold, suspending a second thread meeting a suspension condition in a second thread executing a secondary task until the monitored consumed time is lower than a preset second consumed time threshold, and resuming the suspended second thread, wherein the main task and the secondary task are tasks of the same application software, and the first consumed time threshold is larger than the second consumed time threshold.
The processor 501 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The computer-readable medium 503 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and the like.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code configured to perform the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium. The computer program, when executed by a central processing unit (CPU), performs the above-described functions defined in the method of the present invention. It should be noted that the computer readable medium of the present invention may be a computer readable signal medium, a computer readable storage medium, or any combination of the two. The computer readable medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wireline, optical fiber cable, RF, and the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (e.g., through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions configured to implement the specified logical function(s). In the above embodiments, specific precedence relationships are provided, but these precedence relationships are only exemplary, and in particular implementations, the steps may be fewer, more, or the execution order may be modified. That is, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The names of these modules do not in some cases constitute a limitation of the module itself.
As another aspect, the present invention also provides a computer-readable medium on which a computer program is stored, which when executed by a processor implements the method as described in the above embodiments.
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: the main thread monitors the time consumed by a first thread executing the main task to execute the main task; when the main thread monitors that the consumed time exceeds a preset first consumed time threshold, suspending a second thread meeting a suspension condition in a second thread executing a secondary task until the monitored consumed time is lower than a preset second consumed time threshold, and resuming the suspended second thread, wherein the main task and the secondary task are tasks of the same application software, and the first consumed time threshold is larger than the second consumed time threshold.
The expressions "first", "second", "said first" or "said second" used in various embodiments of the present disclosure may modify various components regardless of order and/or importance, but these expressions do not limit the respective components. The above description is only configured for the purpose of distinguishing elements from other elements. For example, the first user equipment and the second user equipment represent different user equipment, although both are user equipment. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure.
When an element (e.g., a first element) is referred to as being "operably or communicatively coupled" or "connected" (operably or communicatively) to "another element (e.g., a second element) or" connected "to another element (e.g., a second element), it is understood that the element is directly connected to the other element or the element is indirectly connected to the other element via yet another element (e.g., a third element). In contrast, it is understood that when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (a second element), no element (e.g., a third element) is interposed therebetween.
The foregoing description is only of the preferred embodiments of the present invention and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention is not limited to technical solutions formed by the specific combination of the above-mentioned features, but also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the scope of the invention as defined by the appended claims, for example, technical solutions formed by substituting the above features with (but not limited to) features having similar functions disclosed in the present invention.

Claims (10)

1. A thread management method, comprising:
the main thread monitors the time consumed by a first thread executing the main task to execute the main task;
when the main thread monitors that the consumed time exceeds a preset first consumed time threshold, suspending a second thread meeting a suspension condition in a second thread executing a secondary task until the monitored consumed time is lower than a preset second consumed time threshold, and resuming the suspended second thread, wherein the main task and the secondary task are tasks of the same application software, and the first consumed time threshold is larger than the second consumed time threshold.
2. The method of claim 1, wherein the application software is a map navigation application software, the primary task is a map rendering task, the secondary task is a map data loading task, the first thread is a map rendering thread, and the second thread is a map data loading thread.
3. The method of claim 1, wherein the application software is a map navigation application software, wherein the primary task is a map rendering task, the secondary task is a map expansion task, the first thread is a map rendering thread, the second thread is a map expansion thread, or,
the main task is a map expanding task, the secondary task is a map data loading task, the first thread is a map expanding thread, and the second thread is a map data loading thread.
4. The method of claim 2, wherein the main thread monitoring an elapsed time for a first thread executing a main task to execute the main task comprises:
the main thread monitors the time consumed for rendering each frame of map when the map rendering thread executes the map rendering task.
5. The method of claim 2, wherein after suspending a second thread satisfying a suspension condition of the second threads executing the secondary task, the method further comprises:
deleting map data loading tasks exceeding a preset life cycle from the map data loading tasks;
and/or deleting tasks which are not at the current map rendering level in the map data loading tasks.
6. The method of claim 1, wherein when there are two or more second threads executing the secondary task, suspending ones of the second threads executing the secondary task that satisfy a suspension condition, comprises:
the second thread, which is executing part or all of the second thread of the secondary task, is halted.
7. A thread management apparatus comprising:
the thread monitoring module is used for monitoring the time consumed by a first thread executing a main task to execute the main task;
and the thread control module is used for pausing a second thread meeting a pause condition in a second thread executing a secondary task when the monitored consumed time exceeds a preset first consumed time threshold, and resuming the paused second thread until the monitored consumed time is lower than a preset second consumed time threshold, wherein the main task and the secondary task are tasks of the same application software, and the first consumed time threshold is larger than the second consumed time threshold.
8. The apparatus of claim 7, wherein when there are two or more second threads executing the secondary task, the thread control module is further configured to: suspend part or all of the second threads that are executing the secondary task.
9. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
10. An application operable to perform the method of any one of claims 1 to 7.
CN202010591463.5A 2020-06-24 2020-06-24 Thread management method, thread management device, computer storage medium and application software Pending CN113835871A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010591463.5A CN113835871A (en) 2020-06-24 2020-06-24 Thread management method, thread management device, computer storage medium and application software

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010591463.5A CN113835871A (en) 2020-06-24 2020-06-24 Thread management method, thread management device, computer storage medium and application software

Publications (1)

Publication Number Publication Date
CN113835871A true CN113835871A (en) 2021-12-24

Family

ID=78964959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010591463.5A Pending CN113835871A (en) 2020-06-24 2020-06-24 Thread management method, thread management device, computer storage medium and application software

Country Status (1)

Country Link
CN (1) CN113835871A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114461323A (en) * 2022-01-26 2022-05-10 海信电子科技(深圳)有限公司 Card pause processing method and device, electronic equipment and storage medium
CN114461323B (en) * 2022-01-26 2023-04-28 海信电子科技(深圳)有限公司 Clamping and processing method and device, electronic equipment and storage medium
CN115150198A (en) * 2022-09-01 2022-10-04 国汽智控(北京)科技有限公司 Vehicle-mounted intrusion detection system, method, electronic device and storage medium
CN115150198B (en) * 2022-09-01 2022-11-08 国汽智控(北京)科技有限公司 Vehicle-mounted intrusion detection system, method, electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN109523187B (en) Task scheduling method, device and equipment
US10048979B2 (en) Managing virtual machine migration
US8615579B1 (en) Managing virtual machine migration
RU2697700C2 (en) Equitable division of system resources in execution of working process
US11182216B2 (en) Auto-scaling cloud-based computing clusters dynamically using multiple scaling decision makers
CN113835871A (en) Thread management method, thread management device, computer storage medium and application software
US11687389B2 (en) Memory crash prevention for a computing device
US11941451B2 (en) Orchestration of containerized applications
US9720494B2 (en) Managing access to data on a client device during low-power state
US9875137B2 (en) Intelligent application back stack management
US20220116478A1 (en) Microservice latency reduction
US20150234677A1 (en) Dynamically adjusting wait periods according to system performance
CN110764892A (en) Task processing method, device and computer readable storage medium
CN113448728B (en) Cloud resource scheduling method, device, equipment and storage medium
US9571418B2 (en) Method for work-load management in a client-server infrastructure and client-server infrastructure
CN115686805A (en) GPU resource sharing method and device, and GPU resource sharing scheduling method and device
CN111611086A (en) Information processing method, information processing apparatus, electronic device, and medium
CN113535251A (en) Thread management method and device
CN115421931B (en) Business thread control method and device, electronic equipment and readable storage medium
CN113076224A (en) Data backup method, data backup system, electronic device and readable storage medium
US11243598B2 (en) Proactive power management of a graphics processor
CN114116220A (en) GPU (graphics processing Unit) sharing control method, GPU sharing control device and storage medium
EP2975516B1 (en) Intelligent application back stack management
US10908962B1 (en) System and method to share GPU resources
CN115658284A (en) Resource scheduling method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination