CN117271137A - Multithreading data slicing parallel method - Google Patents

Multithreading data slicing parallel method

Info

Publication number
CN117271137A
Authority
CN
China
Prior art keywords
data
thread
threads
task
packet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311399096.9A
Other languages
Chinese (zh)
Inventor
黄羿衡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Suyun Information Technology Co ltd
Original Assignee
Jiangsu Suyun Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Suyun Information Technology Co ltd filed Critical Jiangsu Suyun Information Technology Co ltd
Priority to CN202311399096.9A
Publication of CN117271137A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5018 Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to the technical field of data processing and discloses a multithreading data slicing parallel method comprising the following steps. S1: thread creation: obtain the number of request lines and the packet size, determine the thread pool size and the thread count from them, initialize the thread pool, and create 10-12 threads. The invention improves the utilization of thread resources, reducing low-load and idle threads and the waste of thread resources; it also reduces the total number of threads that must be allocated, alleviating the frequent context switching that occurs when too many threads exist, and lets each thread process the data assigned to it.

Description

Multithreading data slicing parallel method
Technical Field
The invention relates to the technical field of data processing, in particular to a multithreading data slicing parallel method.
Background
A database is a warehouse that organizes, stores and manages data according to a data structure; its speed in querying and processing data far exceeds that of ordinary files. With the rapid growth of mobile internet services and user numbers, conventional warehousing mechanisms can hardly satisfy storage-management requirements, so methods that process data with multiple threads have been developed.
However, as databases hold ever more data, when many requests arrive at once the machine's threads are exhausted, the operating system allocates so many threads that contexts are switched frequently, and unbalanced task allocation slows request processing further. Because the tasks assigned to threads are unbalanced, some threads process many data lines and take a long time while others process few lines and finish quickly, yet results are returned only after every thread has finished, so thread resources cannot be fully utilized.
Disclosure of Invention
(I) Technical problems to be solved
Aiming at the defects of the prior art, the invention provides a multithreaded data slicing parallel method. It mainly addresses the problems that existing threads receive unbalanced task allocations (some threads process many data lines and take a long time while others process few lines and finish quickly), and that results are returned only after all threads have finished, so thread resources cannot be fully utilized.
(II) Technical scheme
In order to achieve the above purpose, the present invention provides the following technical solutions:
a multi-threaded data slicing parallel method, comprising the steps of:
S1: thread creation: obtain the number of request lines and the packet size, determine the thread pool size and the thread count from them, initialize the thread pool, and create 10-12 threads;
S2: data reception: in response to a thread start request, generate and send a data packet; the management node receives the data packet and slices it to obtain 4-6 sliced data groups, determines a target value according to the number of created threads, and allocates thread resources to each sliced data group;
S3: priority determination: according to the thread resources allocated to each sliced data group in S2, determine the group's priority and its longest waiting time in a priority queue;
S4: sliced data group management: according to the priorities and the number of priority levels determined in S3, insert each task to be executed into the queue matching its priority, attach a timestamp to it, and record the time at which it joined the queue;
S5: checking: during task scheduling, check the waiting time of every task in each non-highest-priority queue; if a task has exceeded the longest waiting time from S3 and has not yet executed, move it up to the next higher priority queue and update its timestamp (a sketch of this aging mechanism follows this list);
S6: thread management: a daemon thread maintains the number of task threads and a task-thread state table in the thread pool, recording and controlling thread state changes; if a new task joins a priority queue and no thread is currently idle, a new thread is created to execute the task; if all priority queues are empty, i.e. there are no tasks to execute, a thread enters the idle state, and a thread that stays idle longer than the configured maximum idle time terminates automatically and transitions to the terminated state.
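The disclosure stops at the level of these steps, but the S4/S5 queue mechanics map naturally onto ordinary concurrency primitives. Below is a minimal Java sketch of the timestamp-and-aging idea only; the number of priority levels, the per-level wait limits, and all identifiers are illustrative assumptions rather than the patented implementation.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of S4 (insert by priority, stamp the enqueue time) and S5 (promote
// tasks that have waited past their level's limit). Level 0 is the highest
// priority; LEVELS and MAX_WAIT_MS are assumed values, not from the source.
final class AgingScheduler {
    static final int LEVELS = 3;
    static final long[] MAX_WAIT_MS = {0L, 2_000L, 5_000L}; // limit per level (index 0 unused)

    static final class Task {
        final Runnable body;
        volatile int level;
        volatile long enqueuedAt; // timestamp recorded when the task joins a queue (S4)
        Task(Runnable body, int level) { this.body = body; this.level = level; }
    }

    private final BlockingQueue<Task>[] queues;

    @SuppressWarnings("unchecked")
    AgingScheduler() {
        queues = new BlockingQueue[LEVELS];
        for (int i = 0; i < LEVELS; i++) queues[i] = new LinkedBlockingQueue<>();
    }

    void submit(Task t) {            // S4: enqueue by priority and record the time
        t.enqueuedAt = System.currentTimeMillis();
        queues[t.level].add(t);
    }

    void promoteOverdue() {          // S5: run periodically during task scheduling
        for (int lvl = 1; lvl < LEVELS; lvl++) {
            for (Task t : queues[lvl]) {
                boolean overdue = System.currentTimeMillis() - t.enqueuedAt > MAX_WAIT_MS[lvl];
                if (overdue && queues[lvl].remove(t)) {
                    t.level = lvl - 1;
                    submit(t);       // re-enqueue one level up with a fresh timestamp
                }
            }
        }
    }

    Task take() throws InterruptedException { // workers drain higher levels first
        while (true) {
            for (BlockingQueue<Task> q : queues) {
                Task t = q.poll();
                if (t != null) return t;
            }
            Thread.sleep(10);        // simple back-off while every queue is empty
        }
    }
}
```

A production version would replace the polling loop in take() with blocking primitives, but the sketch keeps the promotion rule of S5 visible: an overdue task moves up exactly one level and its timestamp is reset.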
Further, the number of request lines represents the total number of data lines in the request data, and the packet size represents the maximum number of request lines each thread processes.
On the basis of the foregoing aspect, determining the thread count from the number of request lines and the packet size includes: when there is no interdependency among the business data, obtaining the ratio of the number of request lines to the packet size; and if the ratio is an integer, taking the ratio as the thread count.
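As a concrete reading of this rule, the sketch below computes the thread count. The integer case follows the text directly; rounding up when the ratio is not an integer is our own assumption, since the disclosure leaves that case open, and the identifiers are illustrative.

```java
// Thread count from the number of request lines and the packet size, per the
// integer-ratio rule above. The ceiling fallback for non-integer ratios is an
// assumption, not stated in the source.
static int threadCount(long requestLines, long packetSize) {
    if (packetSize <= 0) throw new IllegalArgumentException("packetSize must be positive");
    return (int) ((requestLines + packetSize - 1) / packetSize); // equals the plain ratio when it divides evenly
}
```

For example, 1,000 request lines with a packet size of 100 give exactly 10 threads, consistent with the 10-12 threads created in S1.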
As a still further aspect of the present invention, the sent data packet contains the send data of the thread and the ID value of the send data.
Further, the ID value of the send data is the virtual address of the receive-data buffer.
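As a hedged illustration of these two aspects, the sent packet can be pictured as a minimal record; the field names and types are assumptions, not from the disclosure.

```java
// Hypothetical layout of the sent data packet: the thread's payload plus an ID
// that the receiver interprets as the virtual address of its receive buffer.
record SendPacket(byte[] sendData, long idValue /* receive-buffer virtual address */) {}
```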
Based on the foregoing scheme, the target value represents the number of threads that currently need to be enabled.
As a still further scheme of the present invention, the terminal reads the device tag, time tag and data check code corresponding to each datum in the data packet; the terminal groups the data sharing the same device tag into a number of initial packets; the terminal then checks the time tags of all data in each initial packet against the current time tag and eliminates data whose time tag does not match the current time, obtaining a number of intermediate packets; finally, the terminal performs cluster analysis on the data check codes of all data in each intermediate packet and eliminates data that do not belong, obtaining a number of sliced data groups.
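The disclosure names the three filtering stages but not their data structures or algorithms. The Java sketch below shows one plausible shape of the pipeline; the record fields, the time-match window, and the majority-vote stand-in for the unspecified cluster analysis are all assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of the group -> time-filter -> check-code-filter pipeline described above.
record DataItem(String deviceTag, long timeTag, int checkCode, byte[] payload) {}

final class Slicer {
    static final long TIME_WINDOW_MS = 60_000L; // assumed meaning of "matches the current time"

    static List<List<DataItem>> slice(List<DataItem> packet, long now) {
        // 1) initial packets: group items sharing the same device tag
        Map<String, List<DataItem>> initial =
                packet.stream().collect(Collectors.groupingBy(DataItem::deviceTag));

        List<List<DataItem>> slicedGroups = new ArrayList<>();
        for (List<DataItem> group : initial.values()) {
            // 2) intermediate packets: drop items whose time tag misses the window
            List<DataItem> mid = group.stream()
                    .filter(d -> Math.abs(now - d.timeTag()) <= TIME_WINDOW_MS)
                    .collect(Collectors.toCollection(ArrayList::new));
            // 3) stand-in for the cluster analysis: keep items whose check code is
            //    the group's most frequent one and discard the rest
            mid.stream()
               .collect(Collectors.groupingBy(DataItem::checkCode, Collectors.counting()))
               .entrySet().stream()
               .max(Map.Entry.comparingByValue())
               .ifPresent(mode -> mid.removeIf(d -> d.checkCode() != mode.getKey()));
            if (!mid.isEmpty()) slicedGroups.add(mid); // 4) one sliced data group
        }
        return slicedGroups;
    }
}
```

A real implementation would substitute the actual tag formats and a proper clustering step; the sketch only fixes the group, filter, cluster order that the text describes.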
(III) Beneficial effects
Compared with the prior art, the invention provides a multithreading data slicing parallel method with the following beneficial effects:
1. By improving the utilization of thread resources, the invention reduces low-load and idle threads and the waste of thread resources; it also lowers the total number of threads that must be allocated, alleviating the frequent context switching caused by a large number of threads.
2. The invention first determines the number of threads used to process the data, creates that number of threads, and distributes the data to be processed among them so that each thread processes its own data; handling the data with multiple threads improves data-processing efficiency.
3. Because each target thread receives and processes only its own sliced data group, mutual interference and cross-contamination between different data are avoided, improving data reliability.
Drawings
Fig. 1 is a schematic flow diagram of the multi-threaded data slicing parallel method according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
Example 1
Referring to fig. 1, a multithreading data slicing parallel method includes the steps of:
S1: thread creation: obtain the number of request lines and the packet size, determine the thread pool size and the thread count from them, initialize the thread pool, and create 10 threads;
S2: data reception: in response to a thread start request, generate and send a data packet; the management node receives the data packet and slices it to obtain 4 sliced data groups, determines a target value according to the number of created threads, and allocates thread resources to each sliced data group. The number of threads used to process the data is determined first, the corresponding number of threads is created, and the data to be processed is distributed to each thread so that every thread processes its own data; handling the data with multiple threads improves data-processing efficiency;
S3: priority determination: according to the thread resources allocated to each sliced data group in S2, determine the group's priority and its longest waiting time in a priority queue. Because each target thread receives and processes only its own sliced data group, mutual interference and cross-contamination between different data are avoided, improving data reliability;
S4: sliced data group management: according to the priorities and the number of priority levels determined in S3, insert each task to be executed into the queue matching its priority, attach a timestamp to it, and record the time at which it joined the queue;
S5: checking: during task scheduling, check the waiting time of every task in each non-highest-priority queue; if a task has exceeded the longest waiting time from S3 and has not yet executed, move it up to the next higher priority queue and update its timestamp;
S6: thread management: a daemon thread maintains the number of task threads and a task-thread state table in the thread pool, recording and controlling thread state changes; if a new task joins a priority queue and no thread is currently idle, a new thread is created to execute the task; if all priority queues are empty, i.e. there are no tasks to execute, a thread enters the idle state, and a thread that stays idle longer than the configured maximum idle time terminates automatically and transitions to the terminated state. Improving thread-resource utilization in this way reduces low-load and idle threads, reduces wasted thread resources, lowers the total number of threads that must be allocated, and alleviates the frequent context switching caused by a large number of threads (a JDK-based sketch of the idle-timeout behaviour follows this list).
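The idle-timeout behaviour in S6 closely resembles what the standard JDK thread pool already provides. As an illustration only (the 30-second idle limit and the demo tasks are assumptions; the thread count and the group count come from this embodiment), a self-terminating pool can be sketched as follows:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Stand-in for the S6 daemon behaviour: workers are created while tasks arrive
// and no thread is idle, and a worker idle past the keep-alive limit ends itself.
public final class ThreadManagementSketch {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 10,                    // sized to the 10 threads created in S1 of this embodiment
                30, TimeUnit.SECONDS,      // assumed maximum idle time before a thread terminates
                new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true); // idle core threads also end after the keep-alive expires

        for (int i = 0; i < 4; i++) {      // one demo task per sliced data group of this embodiment
            int group = i;
            pool.execute(() -> System.out.println("processing sliced data group " + group));
        }
        pool.shutdown();
    }
}
```

With allowCoreThreadTimeOut(true) even the core threads terminate once the keep-alive expires, mirroring the transition to the terminated state described in S6; the state-table bookkeeping of the patented daemon thread is not reproduced here.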
In particular, in the present invention, the number of request lines represents the total number of data lines in the request data, and the packet size represents the maximum number of request lines each thread processes. Determining the thread count from the number of request lines and the packet size includes: when there is no interdependency among the business data, obtaining the ratio of the number of request lines to the packet size, and taking the ratio as the thread count if it is an integer. The sent data packet contains the send data of the thread and the ID value of the send data, and the ID value of the send data is the virtual address of the receive-data buffer; the target value represents the number of threads that currently need to be enabled. The terminal reads the device tag, time tag and data check code corresponding to each datum in the data packet, groups the data sharing the same device tag into several initial packets, then checks the time tags of all data in each initial packet against the current time tag and eliminates data whose time tag does not match the current time to obtain several intermediate packets, and finally performs cluster analysis on the data check codes of all data in each intermediate packet and eliminates data that do not belong, obtaining several sliced data groups.
Example 2
Referring to fig. 1, a multithreading data slicing parallel method includes the steps of:
S1: thread creation: obtain the number of request lines and the packet size, determine the thread pool size and the thread count from them, initialize the thread pool, and create 11 threads;
S2: data reception: in response to a thread start request, generate and send a data packet; the management node receives the data packet and slices it to obtain 5 sliced data groups, determines a target value according to the number of created threads, and allocates thread resources to each sliced data group. The number of threads used to process the data is determined first, the corresponding number of threads is created, and the data to be processed is distributed to each thread so that every thread processes its own data; handling the data with multiple threads improves data-processing efficiency;
S3: priority determination: according to the thread resources allocated to each sliced data group in S2, determine the group's priority and its longest waiting time in a priority queue. Because each target thread receives and processes only its own sliced data group, mutual interference and cross-contamination between different data are avoided, improving data reliability;
S4: sliced data group management: according to the priorities and the number of priority levels determined in S3, insert each task to be executed into the queue matching its priority, attach a timestamp to it, and record the time at which it joined the queue;
S5: checking: during task scheduling, check the waiting time of every task in each non-highest-priority queue; if a task has exceeded the longest waiting time from S3 and has not yet executed, move it up to the next higher priority queue and update its timestamp;
S6: thread management: a daemon thread maintains the number of task threads and a task-thread state table in the thread pool, recording and controlling thread state changes; if a new task joins a priority queue and no thread is currently idle, a new thread is created to execute the task; if all priority queues are empty, i.e. there are no tasks to execute, a thread enters the idle state, and a thread that stays idle longer than the configured maximum idle time terminates automatically and transitions to the terminated state. Improving thread-resource utilization in this way reduces low-load and idle threads, reduces wasted thread resources, lowers the total number of threads that must be allocated, and alleviates the frequent context switching caused by a large number of threads.
In particular, in the present invention, the number of request lines represents the total number of data lines in the request data, and the packet size represents the maximum number of request lines each thread processes. Determining the thread count from the number of request lines and the packet size includes: when there is no interdependency among the business data, obtaining the ratio of the number of request lines to the packet size, and taking the ratio as the thread count if it is an integer. The sent data packet contains the send data of the thread and the ID value of the send data, and the ID value of the send data is the virtual address of the receive-data buffer; the target value represents the number of threads that currently need to be enabled. The terminal reads the device tag, time tag and data check code corresponding to each datum in the data packet, groups the data sharing the same device tag into several initial packets, then checks the time tags of all data in each initial packet against the current time tag and eliminates data whose time tag does not match the current time to obtain several intermediate packets, and finally performs cluster analysis on the data check codes of all data in each intermediate packet and eliminates data that do not belong, obtaining several sliced data groups.
Example 3
Referring to fig. 1, a multithreading data slicing parallel method includes the steps of:
S1: thread creation: obtain the number of request lines and the packet size, determine the thread pool size and the thread count from them, initialize the thread pool, and create 12 threads;
S2: data reception: in response to a thread start request, generate and send a data packet; the management node receives the data packet and slices it to obtain 6 sliced data groups, determines a target value according to the number of created threads, and allocates thread resources to each sliced data group. The number of threads used to process the data is determined first, the corresponding number of threads is created, and the data to be processed is distributed to each thread so that every thread processes its own data; handling the data with multiple threads improves data-processing efficiency;
S3: priority determination: according to the thread resources allocated to each sliced data group in S2, determine the group's priority and its longest waiting time in a priority queue. Because each target thread receives and processes only its own sliced data group, mutual interference and cross-contamination between different data are avoided, improving data reliability;
S4: sliced data group management: according to the priorities and the number of priority levels determined in S3, insert each task to be executed into the queue matching its priority, attach a timestamp to it, and record the time at which it joined the queue;
S5: checking: during task scheduling, check the waiting time of every task in each non-highest-priority queue; if a task has exceeded the longest waiting time from S3 and has not yet executed, move it up to the next higher priority queue and update its timestamp;
S6: thread management: a daemon thread maintains the number of task threads and a task-thread state table in the thread pool, recording and controlling thread state changes; if a new task joins a priority queue and no thread is currently idle, a new thread is created to execute the task; if all priority queues are empty, i.e. there are no tasks to execute, a thread enters the idle state, and a thread that stays idle longer than the configured maximum idle time terminates automatically and transitions to the terminated state. Improving thread-resource utilization in this way reduces low-load and idle threads, reduces wasted thread resources, lowers the total number of threads that must be allocated, and alleviates the frequent context switching caused by a large number of threads.
In particular, in the present invention, the number of request lines represents the total number of data lines in the request data, and the packet size represents the maximum number of request lines each thread processes. Determining the thread count from the number of request lines and the packet size includes: when there is no interdependency among the business data, obtaining the ratio of the number of request lines to the packet size, and taking the ratio as the thread count if it is an integer. The sent data packet contains the send data of the thread and the ID value of the send data, and the ID value of the send data is the virtual address of the receive-data buffer; the target value represents the number of threads that currently need to be enabled. The terminal reads the device tag, time tag and data check code corresponding to each datum in the data packet, groups the data sharing the same device tag into several initial packets, then checks the time tags of all data in each initial packet against the current time tag and eliminates data whose time tag does not match the current time to obtain several intermediate packets, and finally performs cluster analysis on the data check codes of all data in each intermediate packet and eliminates data that do not belong, obtaining several sliced data groups.
In this description, it should be noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.

Claims (7)

1. A multi-threaded data slicing parallel method, comprising the steps of:
S1: thread creation: obtaining the number of request lines and the packet size, determining the thread pool size and the thread count from them, initializing the thread pool, and creating 10-12 threads;
S2: data reception: in response to a thread start request, generating and sending a data packet; a management node receiving the data packet and slicing it to obtain 4-6 sliced data groups, determining a target value according to the number of created threads, and allocating thread resources to each sliced data group;
S3: priority determination: according to the thread resources allocated to each sliced data group in S2, determining the group's priority and its longest waiting time in a priority queue;
S4: sliced data group management: according to the priorities and the number of priority levels determined in S3, inserting each task to be executed into the queue matching its priority, attaching a timestamp to it, and recording the time at which it joined the queue;
S5: checking: during task scheduling, checking the waiting time of every task in each non-highest-priority queue; if a task has exceeded the longest waiting time from S3 and has not yet executed, moving it up to the next higher priority queue and updating its timestamp;
S6: thread management: a daemon thread maintaining the number of task threads and a task-thread state table in the thread pool, recording and controlling thread state changes; if a new task joins a priority queue and no thread is currently idle, creating a new thread to execute the task; if all priority queues are empty, i.e. there are no tasks to execute, a thread entering the idle state, and a thread that stays idle longer than the configured maximum idle time terminating automatically and transitioning to the terminated state.
2. The multi-threaded data slicing parallel method of claim 1, wherein the number of request lines represents the total number of data lines in the request data, and the packet size represents the maximum number of request lines each thread processes.
3. The multi-threaded data slicing parallel method of claim 2, wherein determining the thread count from the number of request lines and the packet size comprises: when there is no interdependency among the business data, obtaining the ratio of the number of request lines to the packet size; and if the ratio is an integer, taking the ratio as the thread count.
4. The multi-threaded data slicing parallel method of claim 3, wherein the sent data packet contains the send data of the thread and the ID value of the send data.
5. The multi-threaded data slicing parallel method of claim 4, wherein the ID value of the send data is the virtual address of the receive-data buffer.
6. The multi-threaded data slicing parallel method of claim 1, wherein the target value indicates the number of threads currently required to be enabled.
7. The multi-threaded data slicing parallel method of claim 1, wherein a terminal reads the device tag, time tag and data check code corresponding to each datum in the data packet; the terminal groups the data sharing the same device tag into a plurality of initial packets; the terminal then checks the time tags of all data in each initial packet against the current time tag and eliminates data whose time tag does not match the current time, obtaining a plurality of intermediate packets; and finally the terminal performs cluster analysis on the data check codes of all data in each intermediate packet and eliminates data that do not belong, obtaining a plurality of sliced data groups.
CN202311399096.9A 2023-10-26 2023-10-26 Multithreading data slicing parallel method Pending CN117271137A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311399096.9A CN117271137A (en) 2023-10-26 2023-10-26 Multithreading data slicing parallel method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311399096.9A CN117271137A (en) 2023-10-26 2023-10-26 Multithreading data slicing parallel method

Publications (1)

Publication Number Publication Date
CN117271137A (en) 2023-12-22

Family

ID=89221571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311399096.9A Pending CN117271137A (en) 2023-10-26 2023-10-26 Multithreading data slicing parallel method

Country Status (1)

Country Link
CN (1) CN117271137A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117556289A (en) * 2024-01-12 2024-02-13 山东杰出人才发展集团有限公司 Enterprise digital intelligent operation method and system based on data mining
CN117556289B (en) * 2024-01-12 2024-04-16 山东杰出人才发展集团有限公司 Enterprise digital intelligent operation method and system based on data mining

Similar Documents

Publication Publication Date Title
CN1097913C (en) ATM throttling
US5925102A (en) Managing processor resources in a multisystem environment in order to provide smooth real-time data streams, while enabling other types of applications to be processed concurrently
CN101951411A (en) Cloud scheduling system and method and multistage cloud scheduling system
CN117271137A (en) Multithreading data slicing parallel method
US20130061018A1 (en) Memory access method for parallel computing
CN102891809B (en) Multi-core network device message presses interface order-preserving method and system
CN111813573B (en) Communication method of management platform and robot software and related equipment thereof
CN1787588A (en) Method for processing multiprogress message and method for processing multiprogress talk ticket
CN109542608B (en) Cloud simulation task scheduling method based on hybrid queuing network
CN1869933A (en) Computer processing system for implementing data update and data updating method
US20030158883A1 (en) Message processing
CN102457578A (en) Distributed network monitoring method based on event mechanism
CN105761039A (en) Method for processing express delivery information big data
CN101196928A (en) Contents searching method, system and engine distributing unit
CN112035255A (en) Thread pool resource management task processing method, device, equipment and storage medium
CN111913784B (en) Task scheduling method and device, network element and storage medium
CN112650449B (en) Method and system for releasing cache space, electronic device and storage medium
CN109257303A (en) QoS queue dispatching method, device and satellite communication system
CN107911484B (en) Message processing method and device
CN110737530A (en) method for improving packet receiving capability of HANDLE identifier parsing system
CN1825288A (en) Method for implementing process multi-queue dispatching of embedded SRAM operating system
CN112860391B (en) Dynamic cluster rendering resource management system and method
CN115878910A (en) Line query method, device and storage medium
CN114860449A (en) Data processing method, device, equipment and storage medium
CN113674137A (en) Model loading method for maximizing and improving video memory utilization rate based on LRU (least recently used) strategy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination