CN116302391A - Multithreading task processing method and related device - Google Patents


Info

Publication number
CN116302391A
Authority
CN
China
Prior art keywords
memory
thread
task
queue
circular queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310076347.3A
Other languages
Chinese (zh)
Inventor
李辉 (Li Hui)
尤波 (You Bo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Electronic Information Industry Co Ltd
Original Assignee
Inspur Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Electronic Information Industry Co Ltd filed Critical Inspur Electronic Information Industry Co Ltd
Priority to CN202310076347.3A
Publication of CN116302391A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication
    • G06F 9/546: Message passing systems or structures, e.g. queues
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application discloses a multithreaded task processing method, comprising the following steps: storing each received task in the circular queue corresponding to a thread, and dequeuing the tasks in the circular queue in first-in-first-out order; and executing each task dequeued from the circular queue through the corresponding thread and the memory resource allocated to that thread, to obtain an execution result. Because received tasks are stored in a circular queue, the circular queue dequeues them in order, and each dequeued task is executed by the corresponding thread using that thread's own memory resource, threads execute tasks in sequence with dedicated memory resources. This avoids contention among tasks for resources and threads, allows multithreaded access without a lock mechanism, and improves multithreaded processing efficiency. The application also discloses a multithreaded task processing device, a computing device and a computer-readable storage medium, which have the same beneficial effects.

Description

Multithreading task processing method and related device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a multithreading task processing method, a task processing device, a computing device, and a computer readable storage medium.
Background
With the continuous development of information technology, servers and storage devices must handle high-concurrency traffic accompanied by massive IO. Server and storage devices therefore have high requirements on performance and stability, and most such scenarios use multithreaded processing.
In the related art, a lock is needed when multiple threads access a shared resource, but locking introduces two problems: lock waiting and thread switching. Lock waiting increases time consumption, and thread switching adds extra scheduling overhead; both have a great impact on performance. Yet simply removing the lock can cause data pollution, so thread safety cannot be guaranteed.
Therefore, how to implement multithreaded access to shared resources without using a lock is a key issue of concern to those skilled in the art.
Disclosure of Invention
The purpose of the application is to provide a multithreaded task processing method, a task processing device, a computing device and a computer-readable storage medium, so as to implement multithreaded task processing without a lock mechanism and improve multithreaded processing efficiency.
In order to solve the above technical problems, the present application provides a multithreaded task processing method, including:
storing each received task in the circular queue corresponding to a thread;
dequeuing the tasks in the circular queue in first-in-first-out order;
executing each task dequeued from the circular queue through the corresponding thread and the memory resource of that thread, to obtain an execution result; the memory resource is a memory resource allocated for the thread.
Optionally, the process of requesting the memory resource includes:
sending a memory request to a memory management module, the memory management module being used to manage an original memory pool;
allocating, by the memory management module, a memory space from the original memory pool, and returning memory information of the memory space;
and determining the memory resource based on the memory information.
Optionally, allocating a memory space from the original memory pool and returning memory information of the memory space includes:
allocating, by the memory management module, a contiguous memory space from the original memory pool;
and returning the memory information of the memory space.
Optionally, the process of creating the circular queue includes:
and creating a corresponding circular queue based on the multithreading demand information, and setting the enqueue and dequeue operation instructions of the circular queue as a public interface.
Optionally, the process of creating the thread includes:
creating corresponding threads based on the number of the circular queues, and setting the state of each thread as a resident thread.
Optionally, the method further comprises:
and when the circular queue is an empty queue, controlling the thread corresponding to the circular queue to execute idle operation.
Optionally, storing the received task in the circular queue corresponding to the thread includes:
And storing the submitted task into the circular queue through the enqueue operation interface of the circular queue.
The application also provides a multithreaded task processing device, comprising:
the task storage module is used for storing the received task in the circular queue corresponding to the thread;
the circular queue processing module is used for dequeuing tasks in the circular queue in first-in-first-out order;
the task execution module is used for executing the tasks dequeued from the circular queue through the threads and the memory resources corresponding to the threads, to obtain an execution result; the memory resource is a memory resource allocated for the thread.
The present application also provides a computing device comprising:
a memory for storing a computer program;
a processor for implementing the steps of the task processing method as described above when executing the computer program.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the task processing method as described above.
The multithreaded task processing method provided by the application includes the following steps: storing each received task in the circular queue corresponding to a thread, and dequeuing the tasks in the circular queue in first-in-first-out order; executing each task dequeued from the circular queue through the corresponding thread and the memory resource of that thread, to obtain an execution result; the memory resource is a memory resource allocated for the thread.
Because received tasks are stored in a circular queue, the circular queue dequeues them in order, and each dequeued task is executed by the corresponding thread using that thread's own memory resource, threads execute tasks in sequence with dedicated memory resources. This avoids contention among tasks for resources and threads, allows multithreaded access without a lock mechanism, and improves multithreaded processing efficiency.
The application further provides a multithreaded task processing device, a computing device and a computer-readable storage medium, which have the above beneficial effects and are not described here again.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings may be obtained according to the provided drawings without inventive effort to a person skilled in the art.
FIG. 1 is a flowchart of a method for processing tasks in multiple threads according to an embodiment of the present application;
FIG. 2 is a data flow diagram of a multi-threaded task processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of memory management of a multi-threaded task processing method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of memory partitioning in a multi-threaded task processing method according to an embodiment of the present disclosure;
FIG. 5 is a schematic thread structure diagram of a multi-threaded task processing method according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a multi-threaded task processing device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
The core of the application is to provide a multithreaded task processing method, a task processing device, a computing device and a computer-readable storage medium, so as to implement multithreaded task processing without a lock mechanism and improve multithreaded processing efficiency.
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
In the related art, a lock is needed when multiple threads access a shared resource, but locking introduces two problems: lock waiting and thread switching. Lock waiting increases time consumption, and thread switching adds extra scheduling overhead; both have a great impact on performance. Yet simply removing the lock can cause data pollution, so thread safety cannot be guaranteed.
Therefore, the application provides a multithreaded task processing method in which received tasks are stored in a circular queue, the circular queue dequeues the tasks in order, and each dequeued task is executed by the corresponding thread with the memory resource of that thread, so as to obtain an execution result. Threads thus execute tasks in sequence with dedicated memory resources, contention among tasks for resources and threads is avoided, multithreaded access is implemented without a lock mechanism, and multithreaded processing efficiency is improved.
The following describes, by way of one embodiment, a multi-threaded task processing method provided herein.
Referring to fig. 1, fig. 1 is a flowchart of a multi-threaded task processing method according to an embodiment of the present application.
In this embodiment, the method may include:
s101, storing the received task in a circulation queue corresponding to the thread;
this step aims at storing the received task in the circular queue corresponding to the thread.
The circular queue is mainly used for storing tasks, and the circular queue is a private queue for threads. Therefore, the task to be processed by the thread is only the task dequeued from the queue, and the problem of the thread preempted by a plurality of tasks is avoided.
The threads are in one-to-one correspondence with the circular queues, and no data exchange among the threads is guaranteed.
Further, the step may include:
and storing the sent task into the circular queue through an enqueue operation interface of the circular queue.
It can be seen that this alternative is mainly to explain how tasks are deposited into the circular queue. In the alternative scheme, the sent task is stored in the circular queue through an enqueue operation interface of the circular queue. That is, the circular queue has an enqueue operation interface for enqueuing. Tasks may be deposited in the circular queue through the enqueue operation interface. Correspondingly, the circular queue also comprises a dequeue operation interface. The enqueue operation interface and the dequeue operation interface are both common operation interfaces of the circular queue.
Further, the process of creating the circular queue in this embodiment may include:
and creating a corresponding circular queue based on the multithreading demand information, and setting the enqueue and dequeue operation instructions of the circular queue as a public interface.
It can be seen that this alternative is mainly illustrative of how a circular queue is created. In the alternative scheme, a corresponding circular queue is created based on the multithreading demand information, and enqueue and dequeue operation instructions of the circular queue are set as a public interface. Since one circular queue corresponds to one queue. Thus, in this alternative it may be determined how many circular queues to create based on the multithreaded demand information. The multithreading requirement information is information for describing the number of required multithreading.
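To make the public enqueue and dequeue interface concrete, the following is a minimal sketch in C of a fixed-capacity circular queue dequeued in first-in-first-out order. The names (cq_init, cq_enqueue, cq_dequeue) and the fixed capacity are illustrative assumptions; the patent specifies no implementation.

```c
#include <stddef.h>
#include <stdbool.h>

#define CQ_CAPACITY 8

typedef struct {
    void  *items[CQ_CAPACITY];
    size_t head;   /* next slot to dequeue from */
    size_t tail;   /* next free slot to enqueue into */
    size_t count;  /* number of tasks currently queued */
} circ_queue;

void cq_init(circ_queue *q) { q->head = q->tail = q->count = 0; }

bool cq_is_empty(const circ_queue *q) { return q->count == 0; }

/* Public enqueue interface: other modules (producers) call this. */
bool cq_enqueue(circ_queue *q, void *task) {
    if (q->count == CQ_CAPACITY) return false;      /* queue full */
    q->items[q->tail] = task;
    q->tail = (q->tail + 1) % CQ_CAPACITY;
    q->count++;
    return true;
}

/* Public dequeue interface: the owning thread calls this; FIFO order. */
void *cq_dequeue(circ_queue *q) {
    if (q->count == 0) return NULL;                 /* empty queue */
    void *task = q->items[q->head];
    q->head = (q->head + 1) % CQ_CAPACITY;
    q->count--;
    return task;
}
```

Because each queue is private to exactly one consuming thread, the dequeue path needs no lock; a setup with several concurrent producers would need additional synchronization on the enqueue path.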
Further, the process of creating a thread in this embodiment may include:
corresponding threads are created based on the number of circular queues and the state of each thread is set to a resident thread.
It can be seen that this alternative is mainly illustrative of how the corresponding thread is created. In this alternative, the corresponding thread may be created by one sentence of the number of circular queues. At the same time, the state of each thread is set to a resident thread. Where a resident process refers to whether or not the process has a task executing, the process can be maintained in the system without releasing the process because of the lack of a task.
Further, on the basis of the above alternative, this embodiment may further include:
and when the circular queue is an empty queue, controlling the thread corresponding to the circular queue to execute idle operation.
It can be seen that this alternative is mainly illustrative of how processing can be done when there are no tasks in the queue. In this alternative, when the circular queue is an empty queue, the thread corresponding to the circular queue is controlled to execute the idle operation.
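The resident-thread behavior above can be sketched as a polling loop, here in C11 with atomics (an illustration under assumptions, not the patent's code). In the real design each such loop would run forever on its own thread created at startup; a stop flag is added here only so the loop can terminate for demonstration, and it takes effect only once the queue has been drained.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define N_SLOTS 64

/* Per-thread context: the private circular queue plus bookkeeping.
 * In the design described above, one resident thread owns one context. */
typedef struct {
    int tasks[N_SLOTS];
    atomic_size_t head;   /* consumer cursor, advanced by the owning thread */
    atomic_size_t tail;   /* producer cursor, advanced by other modules */
    atomic_bool stop;     /* demonstration-only; real threads stay resident */
    long result;          /* sum of executed tasks, standing in for the result */
    long idle_spins;      /* how many times the idle operation ran */
} worker_ctx;

/* Producer side: other modules enqueue tasks via this public interface. */
bool enqueue_task(worker_ctx *w, int task) {
    size_t t = atomic_load(&w->tail);
    if (t - atomic_load(&w->head) == N_SLOTS) return false;   /* queue full */
    w->tasks[t % N_SLOTS] = task;
    atomic_store(&w->tail, t + 1);
    return true;
}

/* The resident worker loop: poll the private queue, idle when empty. */
void worker_main(worker_ctx *w) {
    for (;;) {
        size_t h = atomic_load(&w->head);
        if (h == atomic_load(&w->tail)) {       /* empty queue */
            if (atomic_load(&w->stop)) break;   /* never taken when resident */
            w->idle_spins++;                    /* the idle operation */
            continue;
        }
        w->result += w->tasks[h % N_SLOTS];     /* execute the dequeued task */
        atomic_store(&w->head, h + 1);
    }
}
```

Because only the owning thread writes head and only producers write tail, a single-producer setup needs no lock, matching the design's per-thread queues.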
S102, dequeuing tasks in a circular queue according to the first-in-first-out order;
on the basis of S101, this step aims at dequeuing tasks in the circular queue in a first-in-first-out order.
That is, the tasks stored in the circular queue are dequeued in a first-in-first-out order. Meanwhile, the enqueue and dequeue operations can be set as a public interface, so that the operation of the thread on the circular queue is realized. Further, enqueuing may be implemented in other modules, with the circular queue being polled in the current module.
Further, the threads, queues and other modules conform to a producer consumer model, the threads serve as consumers, the elements are dequeued after the tasks are executed, and the other modules serve as consumers and enqueue the tasks to be processed.
S103, executing the tasks dequeued from the circular queue through the threads and the memory resources corresponding to the threads, to obtain an execution result; the memory resource is a memory resource allocated for the thread.
Based on S102, this step executes each task dequeued from the circular queue through the corresponding thread and that thread's memory resource, to obtain an execution result.
The memory resource is the portion of memory dedicated to the thread. Other threads do not preempt this memory resource, which enables lock-free multithreaded access.
Further, the process of applying for the memory resource in this embodiment may include:
step 1, sending a memory application to a memory management module; the memory management module is used for managing the original memory pool;
step 2, the memory management module allocates memory space from the original memory pool and returns the memory information of the memory space;
and step 3, determining memory resources based on the memory information.
It can be seen that this alternative is mainly to explain how to apply for the memory resource. In the alternative scheme, a memory application is sent to a memory management module; the memory management module is used for managing the original memory pool; the memory management module allocates a memory space from the original memory pool and returns memory information of the memory space; memory resources are determined based on the memory information.
Further, step 2 in the above alternative may include:
step 1, a memory management module allocates continuous memory space from an original memory pool;
and step 2, returning the memory information of the memory space.
It can be seen that this alternative is mainly to explain how to return the memory information. In this alternative, the memory management module allocates contiguous memory space from the original memory pool; and returning the memory information of the memory space.
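The allocation step above can be sketched as a bump allocator over a pre-reserved pool (a minimal illustration in C; the names mem_pool, pool_init and pool_alloc are assumptions, and the 4 KB minimum unit is taken from the embodiment described later). Blocks are carved contiguously and never freed during the run, matching the design.

```c
#include <stddef.h>
#include <stdint.h>

#define UNIT 4096u   /* minimum allocation unit: 4 KB */

typedef struct {
    uint8_t *base;   /* start of the original memory pool */
    size_t   size;   /* total pool size in bytes */
    size_t   used;   /* bytes handed out so far */
} mem_pool;

void pool_init(mem_pool *p, void *base, size_t size) {
    p->base = base;
    p->size = size;
    p->used = 0;
}

/* Round the request up to the 4 KB unit and return a contiguous block,
 * or NULL if the pool is exhausted.  There is no free: memory is held
 * until the owning module exits, as in the run-phase rule above. */
void *pool_alloc(mem_pool *p, size_t request) {
    size_t rounded = (request + UNIT - 1) / UNIT * UNIT;
    if (p->used + rounded > p->size) return NULL;
    void *block = p->base + p->used;
    p->used += rounded;
    return block;
}
```

Since nothing is released during the run phase, a single cursor suffices and no free-list or lock is needed.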
In summary, in this embodiment, received tasks are stored in a circular queue, the circular queue dequeues them in order, and each dequeued task is executed by the corresponding thread with that thread's memory resource, so as to obtain an execution result. Threads thus execute tasks in sequence with dedicated memory resources, contention among tasks for resources and threads is avoided, multithreaded access is implemented without a lock mechanism, and multithreaded processing efficiency is improved.
The following further describes a multithreading task processing method according to another specific embodiment.
Referring to fig. 2, fig. 2 is a data flow chart of a multi-threaded task processing method according to an embodiment of the present application.
In this embodiment, the method may include:
step 1, storing received tasks into a circulation queue corresponding to a thread;
step 2, dequeuing the tasks in the circular queue according to the first-in-first-out order;
step 3, executing the tasks queued from the cyclic queue through the threads and the memory resources corresponding to the threads to obtain an execution result; the memory resource is a memory resource applied for the thread.
Obviously, the above process is that the received tasks are stored in the circular queue, then the circular queue sequentially dequeues the tasks, and finally, each dequeued task is executed by the corresponding thread and the memory resource corresponding to the thread, so as to obtain an execution result, the task is executed by the thread according to the sequence by adopting the corresponding memory resource, the situation that the task is preempted by the resource and the thread is avoided, the multithreading access is realized without using a lock mechanism, and the processing efficiency of the multithreading is improved.
Specifically, the method of dividing the memory in advance to ensure that the exclusive thread accesses the exclusive resource and the thread polls the circular queue is used, the circular queue is used as the unique task queue of the thread, each thread polls the own private circular queue, the shared resource is distributed according to the private queue, the data structure is planned in an overall mode, the thread private data is set, the thread communication is not performed, and the like, so that the problems of frequent thread switching and lock waiting are avoided, and further the multithreading still ensures thread safety without performance reduction under the condition of unlocking. The method is suitable for the scenes that the multithreading does not carry out thread communication, the tasks are relatively independent and the concurrency is high.
Referring to fig. 3, fig. 3 is a schematic diagram of memory management of a multi-threaded task processing method according to an embodiment of the present application.
During memory requests, a memory management module (Memory Management) is first created to implement kernel-mode and user-mode memory request interfaces.
Memory requests fall into two broad categories: direct memory access and ordinary memory. Direct memory access is used for IO and does not require CPU participation, reducing CPU occupation; ordinary memory requires CPU intervention.
Memory is requested during the system initialization stage, not during the run stage. The memory size required by each module is calculated in advance; at initialization, the Memory Management module reserves a large section of memory as the original memory pool. When other modules need memory, they send a memory request to the Memory Management module, indicating the required memory type through carried parameters, and the Memory Management module carves a block (the minimum allocation unit is 4 KB) out of the original memory pool and returns its memory information.
As for memory request and release: once a module has obtained memory, it holds that memory long-term and returns it only when the module exits. Memory Management does not release memory during the run stage.
Memory requests ask the kernel for memory in 4 KB pages, which guarantees memory contiguity and facilitates memory offset operations.
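The 4 KB page request above can be sketched as follows (in C11; aligned_alloc stands in for the kernel page request described in the text, which is an assumption, not the patent's interface). Whole, page-aligned pages keep the region contiguous so block offsets can be computed with simple pointer arithmetic.

```c
#include <stddef.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u

/* Reserve n_pages contiguous, page-aligned 4 KB pages for use as an
 * original memory pool.  Returns NULL on failure; the caller frees the
 * pool only at shutdown, never during the run stage. */
void *reserve_pool(size_t n_pages) {
    /* C11 aligned_alloc requires the size to be a multiple of the
     * alignment, which whole pages satisfy by construction. */
    return aligned_alloc(PAGE_SIZE, n_pages * PAGE_SIZE);
}
```

On Linux a direct mmap of anonymous pages would serve the same purpose; aligned_alloc is used here only to keep the sketch portable standard C.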
Referring to fig. 4, fig. 4 is a schematic diagram illustrating memory division of a multi-threaded task processing method according to an embodiment of the present application.
And finally, memory is divided among the modules. As shown in fig. 4, after each module obtains its memory, the memory is divided into a plurality of areas, each area is placed on one queue, and the same area is never placed on two queues.
Wherein, thread and queue design:
the process of creating the circular queue can realize the enqueuing and dequeuing operations of the queue and set the enqueuing and dequeuing operations as a public interface; the enqueuing operation is realized in other modules, and the queue is polled in the module; when the circulation queue is empty, the task is considered to be absent, and when the circulation queue is not empty, the elements in the queue are sequentially processed, and after the processing is finished, the elements are dequeued.
Referring to fig. 5, fig. 5 is a schematic thread structure diagram of a multi-thread task processing method according to an embodiment of the present application.
In the thread-creation process, a corresponding number of threads is created according to the number of queues; all threads are resident and are never recycled or destroyed, and threads are allowed to idle when there is no task. The one-to-one correspondence of threads to queues is shown in FIG. 5, ensuring that no data is exchanged between threads.
The threads, the queues and the other modules follow a producer-consumer model: the threads act as consumers, dequeuing each element after its task has been executed, while the other modules act as producers, enqueuing the tasks to be processed.
Finally, data structure and function design:
the definition of the data structure is unified with the thread identification, and when other modules add tasks into the queue, the same memory is ensured to be executed in only one queue.
If the number of the data structures in each module is smaller than the number of threads, a certain number of threads are specified to process, if the number of the data structures is larger than the number of threads, the data structures can be distributed on all threads according to the conditions and requirements, and if the data structures are equal to the number of the threads, the data structures can be in one-to-one correspondence.
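The mapping rule above can be sketched as a pure function (illustrative; the patent leaves the exact distribution policy open, so round-robin is assumed here for the case of more structures than threads):

```c
#include <stddef.h>

/* Map a data structure to a thread under the three cases described
 * above: fewer structures than threads, each is pinned to its own
 * designated thread; equal counts correspond one-to-one; more
 * structures than threads are spread across all threads (round-robin
 * here, as one possible policy). */
size_t assign_thread(size_t struct_index, size_t n_structs, size_t n_threads) {
    if (n_structs <= n_threads)
        return struct_index;           /* one-to-one, or pinned subset */
    return struct_index % n_threads;   /* distribute over all threads */
}
```

A fixed mapping like this guarantees that each structure's memory is only ever touched from one thread's queue, which is what makes the lock-free design safe.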
Function 0 shown in fig. 2 may be executed on all threads; when the function is executed, the validity of the memory is checked, and only then is execution allowed.
Therefore, by allocating memory in advance and having each thread access its dedicated resources, this embodiment achieves a lock-free design that still guarantees thread safety, avoiding problems such as frequent thread switching and lock waiting, improving performance and stability, and improving product competitiveness. The shared resources are divided in advance and each thread accesses only its own private data, so locking operations are avoided and multithreading guarantees thread safety without locks.
Thus, in this embodiment, received tasks are stored in a circular queue, the circular queue dequeues them in order, and each dequeued task is executed by the corresponding thread with that thread's memory resource, so as to obtain an execution result. Threads execute tasks in sequence with dedicated memory resources, preemption of resources and threads among tasks is avoided, multithreaded access is implemented without a lock mechanism, and multithreaded processing efficiency is improved.
The multi-threaded task processing device provided in the embodiments of the present application is described below, and the multi-threaded task processing device described below and the multi-threaded task processing method described above may be referred to correspondingly.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a multi-threaded task processing device according to an embodiment of the present application.
In this embodiment, the apparatus may include:
the task storage module 100 is configured to store the received task in the circular queue corresponding to the thread;
the circular queue processing module 200 is configured to dequeue tasks in the circular queue in first-in-first-out order;
the task execution module 300 is configured to execute the tasks dequeued from the circular queue through the threads and the memory resources corresponding to the threads, to obtain an execution result; the memory resource is a memory resource allocated for the thread.
Optionally, this embodiment may further include: a memory request module, configured to send a memory request to the memory management module, the memory management module being used to manage the original memory pool; the memory management module allocates a memory space from the original memory pool and returns memory information of the memory space; the memory resource is determined based on the memory information.
Optionally, allocating a memory space from the original memory pool and returning memory information of the memory space includes:
allocating, by the memory management module, a contiguous memory space from the original memory pool, and returning the memory information of the memory space.
Optionally, this embodiment may further include: a queue creation module, configured to create corresponding circular queues based on the multithreading demand information and to expose the enqueue and dequeue operations of each circular queue as a public interface.
Optionally, the present embodiment may further include: and the thread application module is used for creating corresponding threads based on the number of the circular queues and setting the state of each thread as a resident thread.
Optionally, the present embodiment may further include: and the empty queue processing module is used for controlling the thread corresponding to the circular queue to execute idle operation when the circular queue is empty.
Optionally, the task depositing module 100 is specifically configured to deposit the sent task into the circular queue through an enqueue operation interface of the circular queue.
It can thus be seen that, in this embodiment, received tasks are stored in the circular queue, the circular queue dequeues the tasks in order, and each dequeued task is executed by the corresponding thread using the memory resource corresponding to that thread, so as to obtain an execution result. Since each thread executes its tasks in order with its own memory resource, tasks do not preempt one another for resources or threads, multithreaded access is achieved without a lock mechanism, and the efficiency of multitask processing is improved.
The present application further provides a computing device. Referring to fig. 7, which is a schematic structural diagram of a computing device provided in an embodiment of the present application, the computing device may include:
a memory for storing a computer program;
a processor for implementing the steps of any of the multithreaded task processing methods described above when executing a computer program.
As shown in fig. 7, the computing device may include: a processor 10, a memory 11, a communication interface 12 and a communication bus 13, where the processor 10, the memory 11 and the communication interface 12 communicate with one another through the communication bus 13.
In the present embodiment, the processor 10 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device.
The processor 10 may call a program stored in the memory 11; in particular, the processor 10 may perform the operations in the embodiments of the multithreaded task processing method.
The memory 11 is used for storing one or more programs, which may include program code comprising computer operation instructions. In this embodiment, the memory 11 stores at least a program implementing the following functions:
storing the received task to the circular queue corresponding to the thread;
dequeuing tasks from the circular queue in first-in-first-out order;
executing the task dequeued from the circular queue through the thread and the memory resource corresponding to the thread to obtain an execution result; the memory resource is a memory resource applied for the thread.
In one possible implementation, the memory 11 may include a storage program area and a storage data area, where the storage program area may store an operating system, and at least one application program required for functions, etc.; the storage data area may store data created during use.
In addition, the memory 11 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device or another non-volatile solid-state storage device.
The communication interface 12 may be an interface of a communication module for interfacing with other devices or systems.
Of course, it should be noted that the structure shown in fig. 7 does not limit the computing device in the embodiments of the present application; in practical applications, the computing device may include more or fewer components than shown in fig. 7, or combine certain components.
It can thus be seen that, in this embodiment, received tasks are stored in the circular queue, the circular queue dequeues the tasks in order, and each dequeued task is executed by the corresponding thread using the memory resource corresponding to that thread, so as to obtain an execution result. Since each thread executes its tasks in order with its own memory resource, tasks do not preempt one another for resources or threads, multithreaded access is achieved without a lock mechanism, and the efficiency of multitask processing is improved.
The present application also provides a computer readable storage medium having a computer program stored thereon, which when executed by a processor, performs the steps of any of the multithreaded task processing methods described above.
The computer-readable storage medium may include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code.
For the description of the computer-readable storage medium provided in the present application, reference is made to the above method embodiments, and the description is omitted herein.
It can thus be seen that, in this embodiment, received tasks are stored in the circular queue, the circular queue dequeues the tasks in order, and each dequeued task is executed by the corresponding thread using the memory resource corresponding to that thread, so as to obtain an execution result. Since each thread executes its tasks in order with its own memory resource, tasks do not preempt one another for resources or threads, multithreaded access is achieved without a lock mechanism, and the efficiency of multitask processing is improved.
In this description, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and relevant details can be found in the description of the method.
Those skilled in the art will further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both; to clearly illustrate this interchangeability of hardware and software, the various illustrative elements and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above describes in detail a multithreaded task processing method, a task processing device, a computing device, and a computer-readable storage medium provided in the present application. Specific examples are set forth herein to illustrate the principles and embodiments of the present application, and the description of the examples above is only intended to assist in understanding the methods of the present application and their core ideas. It should be noted that it would be obvious to those skilled in the art that various improvements and modifications can be made to the present application without departing from the principles of the present application, and such improvements and modifications fall within the scope of the claims of the present application.

Claims (10)

1. A multithreaded task processing method, comprising:
storing the received task to a circular queue corresponding to a thread;
dequeuing tasks from the circular queue in a first-in-first-out order;
executing the task dequeued from the circular queue through the thread and a memory resource corresponding to the thread to obtain an execution result; wherein the memory resource is a memory resource applied for the thread.
2. The task processing method according to claim 1, wherein the process of applying for the memory resource includes:
sending a memory application to a memory management module; the memory management module is used for managing an original memory pool;
the memory management module allocates a memory space from the original memory pool and returns memory information of the memory space;
and determining the memory resource based on the memory information.
3. The task processing method according to claim 2, wherein the memory management module allocates a memory space from the original memory pool and returns memory information of the memory space, including:
the memory management module allocates continuous memory space from the original memory pool;
and returning the memory information of the memory space.
4. The task processing method according to claim 1, wherein the process of creating the circular queue includes:
and creating a corresponding circular queue based on the multithreading demand information, and setting the enqueue and dequeue operation instructions of the circular queue as a public interface.
5. The task processing method according to claim 1, wherein the process of creating the thread includes:
creating corresponding threads based on the number of the circular queues, and setting the state of each thread as a resident thread.
6. The task processing method according to claim 5, characterized by further comprising:
and when the circular queue is an empty queue, controlling the thread corresponding to the circular queue to execute idle operation.
7. The task processing method according to claim 1, wherein storing the received task in a circular queue corresponding to the thread, comprises:
and storing the received task into the circular queue through an enqueue operation interface of the circular queue.
8. A multi-threaded task processing device, comprising:
the task storage module is used for storing the received task to a circular queue corresponding to a thread;
the circular queue processing module is used for dequeuing tasks in the circular queue according to the first-in-first-out sequence;
the task execution module is used for executing the task dequeued from the circular queue through the thread and a memory resource corresponding to the thread to obtain an execution result; wherein the memory resource is a memory resource applied for the thread.
9. A computing device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the task processing method according to any one of claims 1 to 7 when executing said computer program.
10. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps of the task processing method as claimed in any one of claims 1 to 7.
CN202310076347.3A 2023-01-30 2023-01-30 Multithreading task processing method and related device Pending CN116302391A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310076347.3A CN116302391A (en) 2023-01-30 2023-01-30 Multithreading task processing method and related device


Publications (1)

Publication Number Publication Date
CN116302391A true CN116302391A (en) 2023-06-23

Family

ID=86778830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310076347.3A Pending CN116302391A (en) 2023-01-30 2023-01-30 Multithreading task processing method and related device

Country Status (1)

Country Link
CN (1) CN116302391A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117614906A (en) * 2024-01-23 2024-02-27 珠海星云智联科技有限公司 Method, computer device and medium for multi-thread multi-representation oral package
CN117614906B (en) * 2024-01-23 2024-04-19 珠海星云智联科技有限公司 Method, computer device and medium for multi-thread multi-representation oral package


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination