CN114546651A - Multithreading operation method, device, equipment and storage medium - Google Patents

Multithreading operation method, device, equipment and storage medium

Info

Publication number
CN114546651A
CN114546651A
Authority
CN
China
Prior art keywords
thread
data
memory
current data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210171314.2A
Other languages
Chinese (zh)
Inventor
钟丹东
袁海洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Baowangda Software Technology Co ltd
Original Assignee
Jiangsu Baowangda Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Baowangda Software Technology Co ltd filed Critical Jiangsu Baowangda Software Technology Co ltd
Priority to CN202210171314.2A priority Critical patent/CN114546651A/en
Publication of CN114546651A publication Critical patent/CN114546651A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiment of the invention discloses a method, a device, equipment and a storage medium for multithreaded operation. The method comprises the following steps: a thread of unprocessed data, among at least two threads, acquires the shared resource from the memory as its expected value; according to a preset data acquisition condition, the thread of unprocessed data acquires the current data of the shared resource from the memory and judges whether the current data is consistent with its expected value; if so, the thread of unprocessed data is determined to be the target thread, the target thread processes the current data according to a preset data processing rule to obtain target data, and the target data is stored in the memory as the current data. Safe multithreaded operation is thus ensured without locking, and the overhead of thread switching is reduced.

Description

Multithreading operation method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a method, a device, equipment and a storage medium for multi-thread operation.
Background
With the rapid development of computer hardware, CPUs have evolved from single-core to multi-core, and making full use of the CPU so that a system runs more efficiently has become a pressing need. Multithreading technology can exploit the full performance of the CPU through system scheduling.
However, introducing multithreading inevitably raises the problem of thread safety, and in the prior art, locking shared resources is the common way of ensuring it. Direct locking is a typical pessimistic lock: it is strongly exclusive, increases the overhead of thread switching, and degrades the running efficiency of the threads.
Disclosure of Invention
The embodiment of the invention provides a method, a device, equipment and a storage medium for multithread operation, which are used for improving the operation efficiency of multithread.
According to an aspect of the invention, there is provided a method of multi-threaded execution, the method comprising:
the thread which does not process data in the at least two threads acquires the shared resource from the memory as the expected value of the thread;
according to preset data acquisition conditions, a thread of unprocessed data acquires current data of shared resources from a memory, and whether the current data is consistent with an expected value of the thread is judged;
if so, determining the thread of the unprocessed data as a target thread, processing the current data by the target thread according to a preset data processing rule to obtain target data, and storing the target data serving as the current data into a memory.
According to another aspect of the present invention, there is provided an apparatus for multithread execution, the apparatus comprising:
the expected value determining module is used for a thread of unprocessed data, among at least two threads, to acquire the shared resource from the memory as the expected value of the thread;
the expected value judgment module is used for acquiring the current data of the shared resource from the memory by the thread of unprocessed data according to the preset data acquisition condition, and judging whether the current data is consistent with the expected value of the thread;
and the data processing module is used for, if the current data is consistent with the expected value, determining the thread of unprocessed data as a target thread, processing the current data by the target thread according to a preset data processing rule to obtain target data, and storing the target data as the current data in the memory.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform a method of multi-threaded execution as described in any of the embodiments of the invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement a method of multi-threaded execution according to any one of the embodiments of the present invention when executed.
According to the technical scheme of the embodiment of the invention, several threads of unprocessed data obtain the data to be processed in the shared resource, and each determines the obtained data as its expected value. Each thread then acquires the data of the shared resource from the memory again; the current data so acquired may or may not equal the expected value. If the current data acquired by a thread of unprocessed data equals its expected value, that thread is the target thread and may process the current data. The processed target data is stored in the memory as the current data, i.e. the current data of the shared resource is updated. The target thread that processes the data can thus be determined without locking the shared resource, which solves the prior-art problem that direct locking increases the cost of thread switching. Moreover, since the target thread is determined by comparing the expected value with the current data, the safety of multithreaded operation is ensured and its efficiency improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart illustrating a method for multithread execution according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for multithread execution according to a second embodiment of the present invention;
FIG. 3 is a block diagram of an apparatus for multithread execution according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device implementing a method of multi-threaded execution according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be understood that the terms "current," "target," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a flowchart illustrating a method for multithread execution according to an embodiment of the present invention, where the embodiment is applicable to data processing in a multithread environment, and the method may be executed by an apparatus for multithread execution, where the apparatus may be implemented in hardware and/or software. As shown in fig. 1, the method includes:
s110, the thread which does not process data in the at least two threads acquires the shared resource from the memory as the expected value of the thread.
The CPU can run a plurality of threads, and multithreading technology can exploit the full performance of the CPU through system scheduling. However, introducing multithreading inevitably raises the issue of thread safety. To ensure safe multithreaded operation, direct locking can be adopted; direct locking is a typical pessimistic lock, is strongly exclusive, and increases the overhead of thread switching.
In order to reduce the probability of locking shared resources, reduce the expense of thread switching, and ensure safe operation in a multithreaded environment, in the embodiment of the present invention each thread, when processing the shared resource in the memory or the hard disk, may first acquire the data of the shared resource from the memory or the hard disk and use the acquired data as its own expected value. Each thread records its own expected value.
In a multithreaded environment, the CPU may run at least two threads. When none of the threads has yet processed the shared resource, each thread may acquire the shared resource and determine its expected value. If some threads are processing or have already processed the shared resource, the threads that have not yet processed data may still acquire the shared resource to determine their expected values. The shared resource in the memory may be the initial, unprocessed data, or data that one or more threads have already processed and stored back into the memory. For example, suppose the initial shared resource is "1" and there are three threads (thread one, thread two and thread three), each of which adds one to the shared resource when processing it. While no thread has processed the data, each thread acquires the initial shared resource and takes "1" as its expected value. If thread one adds one to "1" to obtain "2" and stores "2" in the memory, thread two and thread three can then acquire the shared resource "2" and take "2" as their expected value. If thread two adds one to "2" to obtain "3" and stores "3" in the memory, thread three can acquire "3" and take it as its expected value. That is, a thread's expected value may be acquired multiple times and may change.
And S120, according to preset data acquisition conditions, the thread which does not process the data acquires the current data of the shared resource from the memory, and whether the current data is consistent with the expected value of the thread is judged.
The data acquisition condition may be preset; it is the condition under which a thread that has obtained its expected value may acquire the shared resource from the memory again. For example, a period may be preset for each thread, with the shared resource acquired from the memory once per period. As another example, since threads process data at different speeds, while some thread is processing data the other threads do not acquire the shared resource in the memory, and once no thread is processing data, a thread of unprocessed data may acquire it. The current data that a thread of unprocessed data obtains may be the initial shared resource, or data that one or more threads have processed and stored back in the memory. That is, the current data may or may not be consistent with the expected value.
After a thread of unprocessed data acquires the current data, it compares the current data with its expected value and judges whether they are consistent. For example, thread one and thread two acquire the initial shared resource "1" as their expected value; thread one then acquires the current data of the shared resource from the memory according to the preset data acquisition condition, and the current data is also "1", i.e. consistent with the expected value. If the current data of the shared resource in the memory is the data "2" obtained after thread one's processing, thread two acquires the current data "2" of the shared resource according to the preset data acquisition condition; i.e. the current data acquired by thread two is inconsistent with thread two's expected value "1".
And S130, if so, determining that the thread without processing the data is the target thread, processing the current data by the target thread according to a preset data processing rule to obtain target data, and storing the target data serving as the current data into the memory.
If the current data of the thread which does not process the data is consistent with the expected value of the thread, the thread can process the current data and determine the thread as the target thread. And the target thread processes the current data according to a preset data processing rule. The data processing rule is a predetermined processing procedure defined for different data, and may define, for example, addition and subtraction operations to be performed on data. And after the target thread processes the current data, the processed data is used as target data, and the target data is stored in the memory to update the current data of the shared resource. For example, if the expected value of the thread one is the initial data "1" of the shared resource, and the current data obtained by the thread one from the memory is still "1", the target data "2" may be obtained according to a preset plus-one processing rule, and the "2" is stored as the current data in the memory. And other threads which do not process data can acquire the updated current data from the memory when the data acquisition condition is met. And if the thread which does not process the data does not exist, ending the running process of the shared resources by the multiple threads.
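The single success path of S110 to S130 can be sketched as follows. This is a minimal illustrative sketch, not code from the patent: the names (`SharedMemory`, `compare_and_set`) are assumptions, and an internal lock merely stands in for the atomicity that a hardware compare-and-swap instruction would provide.

```python
import threading

class SharedMemory:
    """Holds the shared resource; compare_and_set models an atomic CAS."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()  # stands in for hardware atomicity

    def read(self):
        with self._lock:
            return self._value

    def compare_and_set(self, expected, new):
        # store `new` as the current data only if the value is still `expected`
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

memory = SharedMemory(1)
expected = memory.read()      # S110: snapshot the shared resource as the expected value
current = memory.read()       # S120: fetch the current data again
if current == expected:       # S130: consistent, so this is the target thread
    target = current + 1      # preset processing rule from the example: add one
    memory.compare_and_set(expected, target)  # store target data as current data
```

After this step `memory` holds "2", which a subsequent thread of unprocessed data would read as the updated current data.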
In this embodiment, optionally, after storing the target data as the current data in the memory, the method further includes: and executing the thread which does not process the data to obtain the current data of the shared resource from the memory according to the preset data obtaining condition, and judging whether the current data is consistent with the expected value of the thread.
Specifically, after the target data is stored in the memory as the current data, each thread of unprocessed data continues to acquire the current data of the shared resource from the memory according to the preset data acquisition condition, and judges whether the current data is consistent with its expected value, until no thread of unprocessed data remains. The benefit is that the threads process data automatically and in turn, each target thread processes different current data, and data-processing errors caused by different threads repeatedly processing the same current data are avoided. Data can be processed only when the current data is consistent with the expected value, so safe operation of the multithreaded environment is ensured without locking, and the program runs more efficiently.
Since the target data updates the current data of the shared resource, after the current data is updated, the current data acquired by the thread which does not process the data is inconsistent with the expected value of the thread.
In this embodiment, optionally, after judging whether the current data is consistent with the expected value of the thread, the method further includes: if they are inconsistent, the thread of unprocessed data acquires the current data from the memory as its new expected value; according to the preset data acquisition condition, the thread of unprocessed data acquires new current data of the shared resource from the memory, and judges whether the new current data is consistent with the new expected value; if so, the thread of unprocessed data is determined to be a target thread, the target thread processes the new current data according to the preset data processing rule to obtain new target data, and the new target data is stored in the memory as the new current data.
Specifically, if the current data of the thread which does not process the data is not consistent with the expected value of the thread, it is determined that the thread cannot process the data, and the thread needs to acquire the current data from the memory as a new expected value of the thread, that is, replace the original expected value. And each thread of unprocessed data acquires new current data from the memory again according to preset data acquisition conditions, wherein the new current data can be consistent with the new expected value or not. If a thread is the first thread of all unprocessed data to acquire new current data, the new current data acquired by the thread is consistent with the new expected value. If a thread is not the first thread to obtain new current data, i.e., the new current data may have already been processed, the new current data obtained by the thread may not match the new expected value.
And after any thread which does not process data acquires new current data, comparing the new current data with the new expected value, and if the new current data and the new expected value are consistent, determining that the thread is a target thread. The target thread can process the new current data according to a preset data processing rule to obtain new target data. And storing the new target data as new current data in a memory to realize the replacement of the new current data, so that a subsequent unprocessed data thread can acquire the updated new current data. That is, the new current data acquired by the subsequent thread of unprocessed data may not be consistent with the new expected value.
And if the new current data acquired by the thread which does not process the data is inconsistent with the new expected value, updating the expected value again, replacing the new expected value with the new current data, and performing data processing until the current data acquired by the thread is consistent with the expected value. The method has the advantages that each thread can determine whether the thread can process the shared resource without locking, and only when the current data is consistent with the expected value, the processing operation is carried out, so that the thread switching expense is reduced, the operation safety of multiple threads is ensured, and the operation efficiency of multiple threads is improved.
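The retry behaviour described above (replacing a stale expected value with the current data and trying again) can be sketched as follows. `SharedCell` and `compare_and_set` are illustrative names, and the single-threaded model only imitates an atomic CAS so the failure-then-retry sequence is visible:

```python
class SharedCell:
    """Single-threaded model of a shared resource with CAS semantics."""
    def __init__(self, value):
        self.value = value

    def compare_and_set(self, expected, new):
        # succeeds only if the current data still equals the expected value
        if self.value == expected:
            self.value = new
            return True
        return False

cell = SharedCell(1)
expected = cell.value      # original expected value: 1
cell.value = 2             # meanwhile another target thread stored "2"

attempts = 0
while not cell.compare_and_set(expected, expected + 1):
    expected = cell.value  # mismatch: adopt the current data as the new expected value
    attempts += 1
# the first attempt fails (expected 1, current 2); the retry with the
# new expected value 2 succeeds and stores 3 as the new current data
```

The loop terminates as soon as the current data a thread reads is the data it then swaps, which is exactly the consistency check the method uses in place of a lock.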
According to the embodiment of the invention, several threads of unprocessed data obtain the data to be processed in the shared resource, and each determines the obtained data as its expected value. Each thread then acquires the data of the shared resource from the memory again; the current data so acquired may or may not equal the expected value. If the current data acquired by a thread of unprocessed data equals its expected value, that thread is the target thread and may process the current data. The processed target data is stored in the memory as the current data, i.e. the current data of the shared resource is updated. The thread that processes the data can thus be determined without locking the shared resource, which solves the prior-art problem that direct locking increases the cost of thread switching. Since the target thread is determined through the expected value, the safety of multithreaded operation is ensured and its efficiency improved.
Example two
Fig. 2 is a flowchart of a method for multithread execution according to a second embodiment of the present invention, which is an alternative embodiment based on the foregoing embodiment, and the method can be executed by a multithread execution apparatus.
In this embodiment, according to a preset data obtaining condition, obtaining, by a thread that does not process data, current data of a shared resource from a memory includes: judging whether a target thread occupying shared resources for data processing exists or not; if not, any thread which does not process the data acquires the current data of the shared resource from the memory.
As shown in fig. 2, the method includes:
s210, the thread which does not process data in the at least two threads acquires the shared resource from the memory as the expected value of the thread.
S220, judging whether a target thread occupying shared resources for data processing exists or not.
After obtaining the expected value, each thread of unprocessed data can detect whether a thread occupies the shared resource to run in real time or at regular time, and the thread occupying the shared resource is a target thread.
In this embodiment, optionally, after judging whether there is a target thread that is occupying the shared resource for data processing, the method further includes: if a target thread is occupying the shared resource for data processing, the threads of unprocessed data are in a preset spin running state.
Specifically, if a target thread occupies the shared resource, the threads of unprocessed data run in the preset spin running state. The spin running state is a preset default state in which a thread continually checks whether the target thread has finished running, without performing any data processing. A thread of unprocessed data leaves the spin running state only when the target thread finishes. Setting the spin running state keeps threads from contending for the shared resource without any locking, ensures safe operation of the threads, and improves the efficiency of the multithreaded operation.
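The preset spin running state can be sketched as follows. This is an illustrative assumption, not the patent's implementation: a `threading.Event` stands in for the "target thread is occupying the shared resource" flag, and the spinning thread polls it without doing any data processing.

```python
import threading
import time

occupied = threading.Event()
occupied.set()                  # a target thread currently holds the shared resource
result = []

def spinning_thread():
    # spin running state: repeatedly poll, performing no data processing
    while occupied.is_set():
        time.sleep(0)           # yield the CPU between polls
    result.append("resumed")    # spinning ends once the target thread finishes

t = threading.Thread(target=spinning_thread)
t.start()
time.sleep(0.05)                # simulate the target thread's processing time
occupied.clear()                # the target thread finishes running
t.join()
```

The spinning thread never touches the shared data while the flag is set, which is the property the patent relies on to avoid locking.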
And S230, if not, any thread which does not process the data acquires the current data of the shared resource from the memory.
If no target thread occupies the shared resource, it is determined that the threads of unprocessed data may acquire the current data from the memory. Each thread of unprocessed data may obtain the current data, but different threads may acquire and process it at different speeds, so one thread of unprocessed data may fetch the current data first.
In this embodiment, optionally, the obtaining, by any thread without processing data, current data of a shared resource from a memory includes: determining a thread to be executed from threads of unprocessed data according to a preset thread selection algorithm; and the thread to be executed acquires the current data of the shared resource from the memory, and the thread of unprocessed data except the thread to be executed is in a preset spinning running state.
Specifically, a thread selection algorithm may be preset to determine the thread to be executed from among all threads of unprocessed data; the thread to be executed then acquires the current data from the memory. For example, the thread selection algorithm may choose the fastest-running thread as the thread to be executed, while the other threads of unprocessed data remain in the preset spin running state until they in turn become the thread to be executed and acquire the current data. Alternatively, every thread of unprocessed data may act as a thread to be executed and try to acquire the current data, with the time and speed of acquisition determined by each thread's own attributes. Once one thread of unprocessed data has acquired the current data and processes it as the target thread, the other threads of unprocessed data detect that a target thread exists and enter the spin running state. The benefit is that each thread of unprocessed data can run as the target thread in turn, conflicts between threads are avoided without locking, the overhead of thread switching is reduced, and the running efficiency of the multithreaded environment is improved.
And S240, judging whether the current data is consistent with the expected value of the thread.
The thread to be executed judges whether the current data is consistent with its expected value; if consistent, it processes the data, and if inconsistent, it acquires the current data as a new expected value. For example, with thread one, thread two and thread three, while thread one occupies the shared resource, thread two and thread three are in the spin running state. When thread one is detected to have finished running, thread two and thread three attempt to run by CAS (Compare and Swap), and the thread whose CAS succeeds occupies the shared resource for data processing; a thread whose CAS fails continues to spin. CAS success means the current data was acquired and is consistent with the thread's expected value; CAS failure means the current data was not acquired, or was acquired but is inconsistent with the expected value.
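The CAS contention described above can be sketched as follows: three threads each add one to the shared data many times via compare-and-swap retries, and no update is lost. The names are illustrative assumptions, and the lock inside `compare_and_swap` only models the atomicity of the hardware CAS instruction; the data-processing loop itself takes no lock.

```python
import threading

class SharedMemory:
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()  # models the atomicity of a hardware CAS

    def read(self):
        with self._lock:
            return self._value

    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True   # CAS success: this thread occupied the resource
            return False      # CAS failure: the data changed, retry

memory = SharedMemory(0)
N = 1000

def worker():
    for _ in range(N):
        while True:
            expected = memory.read()  # refresh the expected value each attempt
            if memory.compare_and_swap(expected, expected + 1):
                break                 # processed the current data exactly once

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# despite three threads racing, every increment survives: final value is 3 * N
```

Contrast with pessimistic locking: here a failed thread simply retries with fresh data rather than being suspended and rescheduled, which is the thread-switching overhead the patent aims to avoid.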
In this embodiment, optionally, the method further includes: judging whether the time any thread of unprocessed data has spent in the spin running state exceeds a preset spin time threshold; if so, determining that thread to be a thread to be scheduled, locking and blocking the thread to be scheduled, and waiting for the system to schedule and run it.
Specifically, each thread may monitor in real time how long it has remained in the spin running state. A spin time threshold is preset; if a thread of unprocessed data has been spinning longer than this threshold, it is determined to be a thread to be scheduled. In a multithreaded environment, to ensure safe operation and reduce CPU consumption, the thread to be scheduled may be locked and blocked to wait for system scheduling, and unlocked when the system schedules it. The benefit is that when there are too many threads and spin times grow too long, the threads that have spun too long are locked and blocked to await scheduling. That is, a thread that cannot obtain the shared resource for a long time in the lock-free state falls back to blocking, reducing pointless waste of CPU resources.
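The spin-then-block fallback can be sketched as follows. This is an illustrative assumption: the threshold value, the function names, and the use of a `threading.Lock` for the blocking wait are not taken from the patent, which only specifies that a thread spinning past a preset threshold is locked, blocked, and left to system scheduling.

```python
import threading
import time

SPIN_TIME_THRESHOLD = 0.01  # assumed preset spin time threshold, in seconds

def acquire_with_spin_fallback(lock):
    """Spin up to the threshold; past it, block and wait to be scheduled."""
    deadline = time.monotonic() + SPIN_TIME_THRESHOLD
    while time.monotonic() < deadline:
        if lock.acquire(blocking=False):  # spin: non-blocking attempts
            return "spin"
        time.sleep(0)                     # yield the CPU between attempts
    lock.acquire()                        # threshold exceeded: block instead of spinning
    return "blocked"

lock = threading.Lock()

mode_uncontended = acquire_with_spin_fallback(lock)  # free resource: spin path wins
lock.release()

lock.acquire()                            # simulate a long-running occupying thread
holder = threading.Timer(0.05, lock.release)
holder.start()
mode_contended = acquire_with_spin_fallback(lock)    # spins past threshold, then blocks
lock.release()
holder.join()
```

When the resource is free the thread succeeds while spinning; when it is held well past the threshold the thread stops burning CPU and sleeps in the blocked state until scheduled, matching the trade-off described above.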
And S250, if so, determining that the thread without processing the data is a target thread, processing the current data by the target thread according to a preset data processing rule to obtain target data, and storing the target data serving as the current data into a memory.
According to the embodiment of the invention, several threads of unprocessed data obtain the data to be processed in the shared resource, and each determines the obtained data as its expected value. When no target thread currently occupies the shared resource, a thread of unprocessed data acquires the data of the shared resource from the memory again; the current data so acquired may or may not equal the expected value. If the current data acquired by a thread of unprocessed data equals its expected value, that thread is the target thread and may process the current data. The processed target data is stored in the memory as the current data, i.e. the current data of the shared resource is updated. The threads that process the data can thus be determined without locking the shared resource, and conflicts between threads acquiring the shared resource are avoided. This solves the prior-art problem that direct locking increases the cost of thread switching. Since the target thread is determined through the expected value, the safety of multithreaded operation is ensured and its efficiency improved.
EXAMPLE III
Fig. 3 is a schematic structural diagram of an apparatus for multithread execution according to a third embodiment of the present invention. As shown in fig. 3, the apparatus includes:
an expected value determining module 301, configured to obtain the shared resource from the memory as the expected value of each of the at least two threads that has not processed data;
an expected value judgment module 302, configured to have a thread that has not processed data obtain the current data of the shared resource from the memory according to a preset data obtaining condition, and judge whether the current data is consistent with the thread's own expected value;
and a data processing module 303, configured to determine, if the current data is consistent with the expected value, that the thread that has not processed data is the target thread, have the target thread process the current data according to a preset data processing rule to obtain target data, and store the target data in the memory as the current data.
Optionally, the apparatus further comprises:
a current data judging module, configured to, after the target data is stored in the memory as the current data, have the thread that has not processed data obtain the current data of the shared resource from the memory according to the preset data obtaining condition, and judge whether the current data is consistent with its expected value.
Optionally, the apparatus further comprises:
a new expected value determining module, configured to, after it is judged whether the current data is consistent with the thread's expected value, have the thread that has not processed data obtain the current data from the memory as its new expected value if the two are inconsistent;
a new expected value judging module, configured to have the thread that has not processed data obtain new current data of the shared resource from the memory according to the preset data obtaining condition, and judge whether the new current data is consistent with the new expected value;
and, if they are consistent, determining the thread that has not processed data as the target thread, processing the new current data by the target thread according to the preset data processing rule to obtain new target data, and storing the new target data in the memory as the new current data.
Optionally, the expected value judgment module 302 includes:
a target thread judging unit, configured to judge whether there is a target thread occupying the shared resource for data processing;
and a current data obtaining unit, configured to, if no such target thread exists, have any thread that has not processed data obtain the current data of the shared resource from the memory.
Optionally, the current data obtaining unit is specifically configured to:
determine a thread to be executed from the threads that have not processed data according to a preset thread selection algorithm;
and have the thread to be executed obtain the current data of the shared resource from the memory, while the threads that have not processed data other than the thread to be executed remain in a preset spin running state.
Optionally, the expected value judgment module 302 further includes:
a spin running state determining unit, configured to, after it is judged whether there is a target thread occupying the shared resource for data processing, place the threads that have not processed data in the preset spin running state if such a target thread exists.
Optionally, the apparatus further comprises:
a spin time judging module, configured to judge whether the time any thread that has not processed data spends in the spin running state exceeds a preset spin time threshold;
and a thread locking module, configured to, if the threshold is exceeded, determine that thread as a thread to be scheduled, lock and block the thread to be scheduled, and wait for the system to schedule and run it.
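Taken together, the new-expected-value modules above describe a refresh-and-retry loop: when the comparison fails, the thread re-reads the current data as its new expected value and tries again. A minimal sketch, assuming an emulated compare-and-store (CPython exposes no hardware CAS, so an internal lock models atomicity purely for illustration) and a hypothetical `process` function as the preset data processing rule:

```python
import threading

class AtomicCell:
    """Shared-resource holder with an emulated compare-and-store."""

    def __init__(self, value):
        self._value = value
        self._guard = threading.Lock()  # models hardware atomicity only

    def load(self):
        with self._guard:
            return self._value

    def compare_and_store(self, expected, target):
        with self._guard:
            if self._value == expected:
                self._value = target
                return True
            return False

def process_with_retry(cell: AtomicCell, process):
    """Keep refreshing the expected value until the compare-and-store wins."""
    while True:
        expected = cell.load()        # (new) expected value
        target = process(expected)    # preset data processing rule
        if cell.compare_and_store(expected, target):
            return target             # this thread acted as the target thread
        # Current data changed in the meantime: loop, taking the fresh
        # current data as the new expected value.
```

Under contention every failed store simply refreshes the expected value, so no update is lost even though no thread ever holds a lock across the whole read-process-store sequence.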
In the embodiment of the invention, multiple threads that have not processed data obtain the to-be-processed data of the shared resource, and each takes the obtained data as its own expected value. The data of the shared resource is then re-acquired from the memory, and the current data obtained may or may not equal the expected value. If the current data obtained by a thread that has not processed data equals its expected value, that thread is the target thread and may process the current data. The processed target data is stored in the memory as the current data, i.e., the current data of the shared resource is updated. The thread that processes the data is thus determined without locking the shared resource. This solves the prior-art problem that direct locking increases thread-switching overhead. Determining the target thread through the expected value both guarantees the safety of multithreaded operation and improves its efficiency.
The multithreaded execution apparatus provided by the embodiment of the invention can execute the multithreaded execution method provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to the executed method.
Example four
FIG. 4 shows a schematic block diagram of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 4, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the ROM 12 or loaded from a storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data necessary for the operation of the electronic device 10. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 11 performs the various methods and processes described above, such as a method of multi-threaded execution.
In some embodiments, the method of multi-threaded execution may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the above described method of multi-threaded execution may be performed. Alternatively, in other embodiments, the processor 11 may be configured by any other suitable means (e.g., by means of firmware) to perform a method of multi-threaded execution.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system, overcoming the defects of difficult management and weak service scalability of traditional physical hosts and VPS services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of multi-threaded execution, the method comprising:
the thread which does not process data in the at least two threads acquires the shared resource from the memory as the expected value of the thread;
according to preset data acquisition conditions, a thread of unprocessed data acquires current data of shared resources from a memory, and whether the current data is consistent with an expected value of the thread is judged;
if so, determining the thread of the unprocessed data as a target thread, processing the current data by the target thread according to a preset data processing rule to obtain target data, and storing the target data serving as the current data into a memory.
2. The method of claim 1, after storing the target data as current data in memory, further comprising:
and executing the thread which does not process the data to obtain the current data of the shared resource from the memory according to a preset data obtaining condition, and judging whether the current data is consistent with the expected value of the thread.
3. The method according to claim 1 or 2, after determining whether the current data is consistent with the expected value of itself, further comprising:
if the current data is not consistent with the current expected value, the thread of the unprocessed data acquires the current data from the memory as a new expected value of the thread of the unprocessed data;
according to preset data acquisition conditions, the thread of unprocessed data acquires new current data of shared resources from a memory, and whether the new current data is consistent with the new expected value is judged;
and if so, determining the thread of the unprocessed data as a target thread, processing the new current data by the target thread according to a preset data processing rule to obtain new target data, and storing the new target data serving as new current data in a memory.
4. The method of claim 1, wherein the obtaining, by the thread not processing data, current data of the shared resource from the memory according to a preset data obtaining condition comprises:
judging whether a target thread occupying shared resources for data processing exists or not;
if not, any thread which does not process the data acquires the current data of the shared resource from the memory.
5. The method of claim 4, wherein any thread that is not processing data obtains current data of the shared resource from the memory, comprising:
determining a thread to be executed from the thread of the unprocessed data according to a preset thread selection algorithm;
and the thread to be executed acquires the current data of the shared resource from the memory, and the thread of unprocessed data except the thread to be executed is in a preset spinning running state.
6. The method of claim 4, after determining whether there is a target thread occupying the shared resource for data processing, further comprising:
and if the target thread occupies the shared resource for data processing, the thread which does not process the data is in a preset spinning running state.
7. The method of claim 5 or 6, further comprising:
judging whether the time of the spin running state of any thread of unprocessed data exceeds a preset spin time threshold value;
if so, determining the thread that has not processed data as a thread to be scheduled, locking and blocking the thread to be scheduled, and waiting for the system to schedule and run it.
8. An apparatus for multi-threaded execution, the apparatus comprising:
the expected value determining module is used for acquiring the shared resource from the memory as the expected value of the thread without processing data in at least two threads;
the expected value judgment module is used for acquiring the current data of the shared resource from the memory by the thread of unprocessed data according to the preset data acquisition condition and judging whether the current data is consistent with the expected value of the current data;
and the data processing module is used for determining the thread of the unprocessed data as a target thread if the thread of the unprocessed data is the target thread, processing the current data by the target thread according to a preset data processing rule to obtain target data, and storing the target data serving as the current data into the memory.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of multithreaded execution as set forth in any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions for causing a processor to perform a method of multithreaded execution as recited in any of claims 1-7.
Publications (1)

Publication Number Publication Date
CN114546651A true CN114546651A (en) 2022-05-27



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination