CN115599527A - Data processing method, cluster and storage medium


Info

Publication number
CN115599527A
CN115599527A
Authority
CN
China
Prior art keywords
data
thread
processed
intersection
basic set
Prior art date
Legal status
Pending
Application number
CN202211360707.4A
Other languages
Chinese (zh)
Inventor
曹春辉
Current Assignee
Beijing Shangyin Microchip Technology Co ltd
Original Assignee
Beijing Shangyin Microchip Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Shangyin Microchip Technology Co ltd filed Critical Beijing Shangyin Microchip Technology Co ltd
Priority to CN202211360707.4A
Publication of CN115599527A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the invention provide a data processing method, a cluster, and a storage medium. The method comprises: in response to a data processing request, calling a target thread to add a plurality of data items to be processed carried by the data processing request to a current temporary set; judging whether an intersection exists between a basic set and the current temporary set, wherein the basic set comprises the data currently to be processed by each thread; if no intersection exists, adding the data to be processed in the current temporary set to the basic set at one time, adding an identifier of the target thread to a thread task queue, and calling the target thread to process the data to be processed added to the basic set this time; and if an intersection exists, determining the data to be processed in the intersection as occupied data held by threads other than the target thread. The invention locks data in batches and improves the efficiency and stability with which a Redis cluster processes service data.

Description

Data processing method, cluster and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data processing method, a cluster, and a storage medium.
Background
In high-concurrency business scenarios, data consistency problems must be avoided when the same data is processed by multiple threads. This problem is usually solved by locking the data. Existing locking approaches, such as Redisson-based locking, lock the data to be processed one by one in a fixed order. However, locking one by one keeps the data locked for a long time during business processing, which reduces the efficiency and stability with which the Redis cluster processes service data.
Disclosure of Invention
Embodiments of the present invention provide a data processing method, a cluster, and a storage medium, so as to lock data in batches and improve the efficiency and stability with which a Redis cluster processes service data. The specific technical solutions are as follows:
a method of data processing, the method comprising:
responding to a data processing request, and calling a target thread to add a plurality of data to be processed carried by the data processing request to a current temporary set;
judging whether an intersection exists between a basic set and the current temporary set, wherein the basic set comprises to-be-processed data of each thread at the current time;
if the intersection does not exist, adding the data to be processed in the current temporary set into the basic set at one time, adding the identifier of the target thread into a thread task queue, and calling the target thread to process each data to be processed added into the basic set at this time;
and if the intersection exists, determining the data to be processed in the intersection as occupied data occupied by other threads except the target thread.
Optionally, the method further includes:
if the intersection exists, repeatedly executing, up to a preset maximum number of repetitions, the step of judging whether an intersection exists between the basic set and the current temporary set;
if the result of the current execution of the step of judging whether an intersection exists between the basic set and the current temporary set is that no intersection exists, executing the step of adding the data to be processed in the current temporary set to the basic set at one time, adding the identification of the target thread to a thread task queue, and calling the target thread to process the data to be processed added to the basic set this time.
Optionally, the method further includes:
if the number of times of repeatedly executing the step of judging whether the intersection exists between the basic set and the current temporary set reaches the preset maximum number of times of repetition and each judgment result is that the intersection exists, acquiring the identifiers of other threads and judging whether the identifiers of other threads exist in the thread task queue;
if the identification of the other threads exists, deleting the data to be processed added by the other threads in the basic set according to the identification of the other threads, deleting the historical temporary set corresponding to the identification of the other threads, and returning to the step of judging whether the basic set and the current temporary set have intersection.
Optionally, after determining the data to be processed in the intersection as occupied data occupied by other threads except the target thread, the method further includes:
accessing the thread task queue according to a preset time interval, and according to the sequence of the enqueue time, for each thread of the thread task queue:
judging whether the existing time length of the thread in the thread task queue is greater than a preset time length threshold value or not, and if so, determining the thread as an overtime thread;
and deleting each to-be-processed data added by the overtime thread in the basic set according to the identifier of the overtime thread, deleting the historical temporary set corresponding to the identifier of the overtime thread, and deleting the data of the thread in the thread task queue.
Optionally, the method further includes:
after the target thread finishes processing the data to be processed added to the basic set, generating a data unlocking request;
in response to the data unlocking request, deleting each to-be-processed data added by the target thread in the basic set, deleting the current temporary set corresponding to the identification of the target thread, and deleting the data of the target thread in the thread task queue,
and generating an unlocking completion message.
A data processing cluster, the cluster being arranged to:
responding to a data processing request, and calling a target thread to add a plurality of data to be processed carried by the data processing request to a current temporary set;
judging whether an intersection exists between a basic set and the current temporary set, wherein the basic set comprises to-be-processed data of each thread at the current time;
if the intersection does not exist, adding the data to be processed in the current temporary set to the basic set at one time, adding the identifier of the target thread to a thread task queue, and calling the target thread to process the data to be processed added to the basic set at this time;
and if the intersection exists, determining the data to be processed in the intersection as occupied data occupied by other threads except the target thread.
Optionally, the cluster is further configured to:
if the intersection exists, repeatedly executing, up to a preset maximum number of repetitions, the step of judging whether an intersection exists between the basic set and the current temporary set;
if the result of the current execution of the step of judging whether an intersection exists between the basic set and the current temporary set is that no intersection exists, executing the step of adding each piece of to-be-processed data in the current temporary set to the basic set at one time, adding the identifier of the target thread to a thread task queue, and calling the target thread to process each piece of to-be-processed data added to the basic set this time.
Optionally, the cluster is further configured to:
if the number of times of repeatedly executing the step of judging whether the intersection exists between the basic set and the current temporary set reaches the preset maximum number of times of repetition and each judgment result is that the intersection exists, acquiring the identifiers of other threads and judging whether the identifiers of other threads exist in the thread task queue;
if the identification of the other threads exists, deleting the data to be processed added by the other threads in the basic set according to the identification of the other threads, deleting the historical temporary set corresponding to the identification of the other threads, and returning to the step of judging whether the basic set and the current temporary set have intersection.
Optionally, the cluster is further configured to:
after the data to be processed in the intersection is determined to be occupied data held by threads other than the target thread, accessing the thread task queue according to a preset time interval and, in order of enqueue time, for each thread in the thread task queue:
judging whether the existing time length of the thread in the thread task queue is greater than a preset time length threshold value or not, and if so, determining the thread as an overtime thread;
and deleting each to-be-processed data added by the overtime thread in the basic set according to the identifier of the overtime thread, deleting the historical temporary set corresponding to the identifier of the overtime thread, and deleting the data of the thread in the thread task queue.
Optionally, the data processing cluster is further configured to:
after the target thread finishes processing the data to be processed added to the basic set, generating a data unlocking request;
in response to the data unlocking request, deleting each to-be-processed data added by the target thread in the basic set, deleting the current temporary set corresponding to the identification of the target thread, and deleting the data of the target thread in the thread task queue,
and generating an unlocking completion message.
A computer readable storage medium, instructions of which, when executed by a processor of a data processing cluster of a starter system, enable the data processing cluster to perform a data processing method as any one of the above.
According to the data processing method, the cluster, and the storage medium, the data to be processed by different threads are managed through the basic set, so that multiple threads cannot occupy the same data; because the data to be processed are added to the basic set at one time, the data are locked in batches, which improves the efficiency with which the Redis cluster processes services. In addition, because batch locking is achieved by adding the data to the basic set in bulk, a large number of locking messages no longer need to be generated, compared with the prior art, which improves the stability with which the Redis cluster processes services. The invention therefore locks data in batches and improves the efficiency and stability with which the Redis cluster processes service data.
Of course, it is not necessary for any product or method to achieve all of the above-described advantages at the same time for practicing the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present invention;
fig. 2 is a flowchart of a data processing method according to an alternative embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a data processing method, as shown in fig. 1, the data processing method includes:
s101, responding to a data processing request, and calling a target thread to add a plurality of data to be processed carried by the data processing request to a current temporary set.
The data processing method shown in fig. 1 is a method applied to a Redis cluster.
In an optional embodiment of the present invention, the data processing request may be a request generated by a task thread in a distributed cluster after acquiring data to be processed. The data processing request comprises a plurality of data items to be processed and the identifier of the thread that will process them.
It should be noted that, in an actual application scenario, the current temporary set may be built on the Set type of the Redis cluster. The Set type is an unordered collection in Redis. Because the elements of a Set are unique, a thread is prevented from processing the same data multiple times, which improves the service processing efficiency of the Redis cluster.
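To make step S101 concrete, the sketch below (not part of the patent) builds the per-thread current temporary set with the Redis Set type via the redis-py client; the key layout tmp:<thread_id> and the helper name build_temp_set are assumptions introduced for this example.

```python
# Minimal sketch of step S101, assuming redis-py and a hypothetical key layout
# "tmp:<thread_id>" for the per-thread current temporary set.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def build_temp_set(thread_id: str, pending_items: list[str]) -> str:
    """Add all data items carried by a processing request to the thread's temporary set."""
    tmp_key = f"tmp:{thread_id}"          # hypothetical key naming
    r.delete(tmp_key)                     # start from an empty temporary set for this request
    if pending_items:
        r.sadd(tmp_key, *pending_items)   # SADD keeps each item only once (Set uniqueness)
    return tmp_key
```

For instance, build_temp_set("thread-42", ["order:1001", "order:1002"]) leaves both identifiers in tmp:thread-42, with any duplicates in the request collapsed by the Set.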
S102, judging whether an intersection exists between the basic set and the current temporary set, wherein the basic set comprises to-be-processed data of each thread at the current time.
In an actual application scenario, the basic set may likewise be built on the Set type.
It should be noted that, in an actual application scenario, step S102 shown in fig. 1 may be implemented by calling the intersection function of the Set type. Because the basic set stores the data currently to be processed by every thread, finding an intersection between the basic set and the current temporary set shows that at least one data item in the current temporary set is being processed by another thread. The subsequent steps therefore prevent the target thread from processing the data in the intersection a second time, which avoids the risk of data inconsistency caused by multiple threads repeatedly processing the same data in a high-concurrency scenario.
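A minimal sketch of the intersection test of step S102 is given below, again assuming redis-py and a hypothetical base-set key such as "base"; it simply wraps the SINTER command, which is one way (not necessarily the patent's) to realize the intersection function of the Set.

```python
# Minimal sketch of step S102: SINTER between the base set and the temporary set.
import redis

r = redis.Redis(decode_responses=True)

def occupied_items(base_key: str, tmp_key: str) -> set[str]:
    """Items present in both sets are already held by other threads; an empty set means no conflict."""
    return r.sinter(base_key, tmp_key)
```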
As will be understood by those skilled in the art, in an actual application scenario, the basic set may be created automatically after the Redis cluster starts running, and the current temporary set may be created automatically once the target thread is detected to have acquired data. Both the basic set and the current temporary set are built on the Set type. Therefore, the present invention does not further limit or describe the specific construction process of the basic set and the current temporary set.
S103, if no intersection exists, adding each piece of data to be processed in the current temporary set to the basic set at one time, adding the identification of the target thread to the thread task queue, and calling the target thread to process each piece of data to be processed added to the basic set at this time.
It should be noted that, in an actual application scenario, the data to be processed can only be processed by the corresponding thread after being added to the base set, and the data to be processed in the base set are all different. Therefore, the basic set is arranged to manage the data to be processed of different threads, occupation of the same data by multiple threads is avoided, and batch locking of the data to be processed is realized by adding the data to be processed into the basic set at one time, so that the service processing efficiency of the Redis cluster is improved. Meanwhile, the data to be processed is added into the basic set in batches to realize batch data locking, so that compared with the prior art, the method and the device do not need to generate a large amount of locking messages, and the processing stability of the Redis cluster on the service is improved.
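The sketch below illustrates one possible realization of step S103. To keep the intersection check and the one-time batch add from racing with other threads, it wraps both in a single Lua script executed by Redis; this atomicity choice, the key names base / tmp:<id> / task_queue, and the helper name try_batch_lock are assumptions of this example rather than requirements stated in the patent.

```python
# Sketch of step S103: if no intersection, add the whole temporary set to the base
# set at once and enqueue the thread identifier. A Lua script keeps the check and
# the batch add atomic on the Redis side (an assumption of this example).
import redis

r = redis.Redis(decode_responses=True)

ACQUIRE_LUA = """
local clash = redis.call('SINTER', KEYS[1], KEYS[2])
if #clash > 0 then
  return 0                                   -- intersection exists: occupied data
end
local items = redis.call('SMEMBERS', KEYS[2])
if #items > 0 then
  redis.call('SADD', KEYS[1], unpack(items)) -- one-time batch add (the "batch lock")
end
redis.call('RPUSH', KEYS[3], ARGV[1])        -- record the target thread in the task queue
return 1
"""

def try_batch_lock(base_key: str, tmp_key: str, queue_key: str, thread_id: str) -> bool:
    """Return True if the target thread acquired all of its data in one shot."""
    return r.eval(ACQUIRE_LUA, 3, base_key, tmp_key, queue_key, thread_id) == 1
```

Running the whole check-and-add on the Redis side means either every item in the temporary set enters the basic set, or none does, which matches the batch-locking behaviour described above.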
And S104, if the intersection exists, determining the data to be processed in the intersection as occupied data occupied by other threads except the target thread.
The method and the device manage the data to be processed of different threads by setting the basic set, avoid the occupation of the same data by multiple threads, realize the batch locking of the data to be processed by adding the data to be processed into the basic set at one time, and further improve the processing efficiency of the Redis cluster on the service. Meanwhile, the batch data locking is realized by adding the data to be processed into the basic set in batch, so that compared with the prior art, the method and the device do not need to generate a large amount of locking messages, and the processing stability of the Redis cluster on the service is improved. Therefore, the method and the device realize the locking of the batch data and improve the processing efficiency and stability of the Redis cluster to the service data.
Optionally, the data processing method shown in fig. 1 further includes:
if the intersection exists, repeatedly executing, up to a preset maximum number of repetitions, the step of judging whether an intersection exists between the basic set and the current temporary set;
if the result of the current execution of the step of judging whether an intersection exists between the basic set and the current temporary set is that no intersection exists, executing the steps of adding all the data to be processed in the current temporary set to the basic set at one time, adding the identification of the target thread to the thread task queue, and calling the target thread to process all the data to be processed added to the basic set this time.
In an optional embodiment of the present invention, it cannot be determined whether the other threads have finished processing the data in the intersection. Forcing an unlock while the data in the intersection is still being processed by other threads would risk the same data being occupied by multiple threads. By repeating the judging step, the invention therefore reserves buffer time in which the other threads can finish processing the data in the intersection, avoiding the risk that a forced unlock leaves the same data occupied by multiple threads.
In an optional embodiment of the present invention, the preset maximum number of repetitions may be determined from the total time a thread needs to process its data to be processed and the time taken by one execution of the step of judging whether the basic set and the current temporary set intersect.
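A sketch of the retry behaviour described above follows; it reuses the hypothetical try_batch_lock helper from the earlier example, and the maximum number of repetitions and the pause between attempts are placeholder values.

```python
# Sketch of the retry described above: re-check the intersection up to a preset
# maximum number of repetitions before treating the data as still occupied.
import time

def lock_with_retries(base_key, tmp_key, queue_key, thread_id,
                      max_retries: int = 5, pause_s: float = 0.2) -> bool:
    """Reuses try_batch_lock() from the earlier sketch (an assumed helper)."""
    for _ in range(max_retries):
        if try_batch_lock(base_key, tmp_key, queue_key, thread_id):
            return True              # acquired: every item was added to the base set at once
        time.sleep(pause_s)          # buffer time so other threads can finish their items
    return False                     # still occupied after the preset maximum repetitions
```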
It should be noted that, in an actual application scenario, a network delay may prevent the Redis cluster from releasing in time the data that a thread has already finished processing, so other threads that need that data cannot proceed, which causes deadlock. To avoid deadlock caused by network problems, expiration durations can therefore be set for the basic set, the temporary sets, and the thread task queue, so that data stored longer than its expiration duration is deleted. This avoids deadlock and improves the reliability with which the Redis cluster processes service data.
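Following the expiry suggestion above, a minimal sketch that attaches a time-to-live to the basic set, a temporary set, and the thread task queue; the 300-second duration is a placeholder, not a value taken from the patent.

```python
# Sketch of the expiry safeguard described above: give the shared structures a
# TTL so stale entries disappear even if a release message is lost to the network.
import redis

r = redis.Redis(decode_responses=True)

def arm_expiry(base_key: str, tmp_key: str, queue_key: str, ttl_s: int = 300) -> None:
    """Apply the same (assumed) expiration duration to all three structures."""
    for key in (base_key, tmp_key, queue_key):
        r.expire(key, ttl_s)         # key is removed automatically after ttl_s seconds
```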
Optionally, the data processing method shown in fig. 1 further includes:
if the times of repeatedly executing the step of judging whether the intersection exists between the basic set and the current temporary set reach the preset maximum repeated times and each judgment result is that the intersection exists, acquiring the identifiers of other threads and judging whether the identifiers of other threads exist in the thread task queue;
if the identification of other threads exists, deleting each data to be processed added by other threads in the basic set according to the identification of other threads, deleting the historical temporary set corresponding to the identification of other threads, and returning to execute the step of judging whether the basic set and the current temporary set have intersection.
In an optional embodiment of the present invention, the thread task queue may be a task queue that stores the identifier of each thread together with its enqueue time. The time a thread needs to process a batch of data to be processed has an upper limit, namely the time taken to execute the above steps the preset maximum number of repetitions. Therefore, once the number of repetitions reaches the preset maximum, the invention checks whether the identifiers of the other threads corresponding to the occupied data are still in the thread task queue, thereby detecting whether a deadlock has occurred, and forcibly unlocks the deadlocked threads and the data they occupy. This releases the occupied data and improves the stability with which the Redis cluster processes service data.
It should be noted that, in an actual application scenario, the correspondence between a thread and its temporary set may be implemented by using the identifier of the thread as the key of the temporary set. The specific construction of this correspondence is not further described or limited.
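The forced release described above might look like the following sketch, which assumes the tmp:<thread_id> key layout from the earlier examples; if the blocking thread's identifier is still in the task queue after the retries are exhausted, its items are removed from the basic set, its historical temporary set is deleted, and its queue entry is dropped.

```python
# Sketch of the forced release described above, assuming the "tmp:<thread_id>"
# key layout and redis-py; if a blocking thread is still registered in the task
# queue after the retries are exhausted, its items are removed from the base set.
import redis

r = redis.Redis(decode_responses=True)

def force_release(base_key: str, queue_key: str, other_thread_id: str) -> bool:
    """Release the data still held by another (presumed deadlocked) thread."""
    if other_thread_id not in r.lrange(queue_key, 0, -1):
        return False                                 # not queued: nothing to release here
    other_tmp_key = f"tmp:{other_thread_id}"         # thread id doubles as the temp-set key
    items = r.smembers(other_tmp_key)
    if items:
        r.srem(base_key, *items)                     # drop its items from the base set
    r.delete(other_tmp_key)                          # drop its historical temporary set
    r.lrem(queue_key, 0, other_thread_id)            # drop its entry from the task queue
    return True
```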
Optionally, after determining the data to be processed in the intersection as occupied data occupied by other threads except the target thread, the data processing method shown in fig. 1 further includes:
accessing the thread task queue according to a preset time interval and, in order of enqueue time, for each thread in the thread task queue:
judging whether the existing time length of the thread in the thread task queue is greater than a preset time length threshold value or not, and if so, determining the thread as an overtime thread;
and deleting each to-be-processed data added by the overtime thread in the basic set according to the identifier of the overtime thread, deleting the historical temporary set corresponding to the identifier of the overtime thread, and deleting the data of the thread in the thread task queue.
In another optional embodiment of the present invention, when a server of the Redis cluster fails, no instruction can be issued through the cluster to detect and release the deadlock. Therefore, the preset duration threshold can be set to the upper limit of the time a thread needs to process a batch of data to be processed, and a timed task can periodically check whether each thread has finished executing, so that the data occupied by a thread that has timed out because of the failure is released. This improves the stability with which the Redis cluster processes service data.
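A sketch of the timed cleanup described above is given below. The patent only states that the queue stores each thread's identifier and enqueue time; this example keeps the enqueue timestamps in a companion hash ("enqueue_at"), which is an implementation assumption, and the 300-second threshold is a placeholder.

```python
# Sketch of the timed cleanup described above: walk the task queue in enqueue
# order and force-release any thread that has been queued longer than a preset
# threshold. Enqueue timestamps are assumed to live in a companion hash
# ("enqueue_at"), which is an implementation choice of this example.
import time
import redis

r = redis.Redis(decode_responses=True)

def reap_timed_out_threads(base_key: str, queue_key: str,
                           enqueue_hash: str = "enqueue_at",
                           timeout_s: int = 300) -> None:
    for thread_id in r.lrange(queue_key, 0, -1):     # oldest enqueue first
        enqueued_at = r.hget(enqueue_hash, thread_id)
        if enqueued_at is None:
            continue
        if time.time() - float(enqueued_at) <= timeout_s:
            break                                    # later entries are even younger
        tmp_key = f"tmp:{thread_id}"
        items = r.smembers(tmp_key)
        if items:
            r.srem(base_key, *items)                 # release the timed-out thread's items
        r.delete(tmp_key)
        r.lrem(queue_key, 0, thread_id)
        r.hdel(enqueue_hash, thread_id)
```

In practice this function would be driven by a scheduler (for example a cron job or a timer thread) at the preset time interval.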
Optionally, the data processing method shown in fig. 1 further includes:
after the target thread finishes processing the data to be processed added to the basic set, generating a data unlocking request;
in response to the data unlocking request, deleting each data to be processed added by the target thread in the basic set, deleting the current temporary set corresponding to the identification of the target thread, deleting the data of the target thread in the thread task queue,
and generating an unlocking completion message.
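The unlock path described above might be sketched as follows, reusing the hypothetical key layout from the earlier examples; the returned string merely stands in for the unlocking-completion message.

```python
# Sketch of the unlock path described above: after the target thread finishes,
# remove its items from the base set, drop its temporary set, and dequeue it.
import redis

r = redis.Redis(decode_responses=True)

def unlock(base_key: str, queue_key: str, thread_id: str) -> str:
    tmp_key = f"tmp:{thread_id}"                 # hypothetical key layout from earlier sketches
    items = r.smembers(tmp_key)
    if items:
        r.srem(base_key, *items)                 # release this thread's locked items
    r.delete(tmp_key)                            # drop the current temporary set
    r.lrem(queue_key, 0, thread_id)              # drop this thread's queue entry
    return f"unlock-complete:{thread_id}"        # stand-in for the unlock-completion message
```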
It should be noted that, in an actual application scenario, the data processing method can be implemented in various ways; one example is provided here:
fig. 2 is a flowchart of a data processing method according to an alternative embodiment of the present invention, where the steps include:
step S201, in response to the data processing request, creating a current temporary set, setting an identifier of a target thread in the data processing request as a key value of the current temporary set, and invoking the target thread to add a plurality of to-be-processed data carried by the data processing request to the current temporary set. And triggers step S202.
Step S202, judging whether an intersection exists between the basic set and the current temporary set. If not, step S203 is triggered, and if so, step S204 is triggered.
Step S203, adding each data to be processed in the current temporary set to the basic set at one time, adding the identifier of the target thread to the thread task queue, and calling the target thread to process each data to be processed added to the basic set at this time. And triggers step S205.
Step S204, determining the data to be processed in the intersection as occupation data. And triggers step S206.
Step S205, generating a data unlocking request after the target thread completes processing of each to-be-processed data, deleting each to-be-processed data added by the target thread in the basic set according to the identifier of the target thread in the data unlocking request, deleting the identifier of the target thread in the thread task queue, deleting the current temporary set corresponding to the identifier of the target thread, and generating an unlocking completion message. And triggers step S201.
Step S206, determining whether the repetition number is greater than a preset maximum repetition number. If not, the method returns to the triggering step S202. If yes, step S207 is triggered.
Step S207, acquiring the identifier of the other thread, and determining whether the identifier of the other thread exists in the thread task queue. If not, step S208 is triggered. If yes, step S209 is triggered.
And step S208, generating an abnormal message comprising the identification of other threads, and sending the abnormal message to a preset message receiving port. And returns to the triggering step S201.
Step S209, according to the identifiers of other threads, deleting each to-be-processed data added by other threads in the basic set, deleting the historical temporary set corresponding to the identifiers of other threads, and deleting the identifiers of other threads in the thread task queue. And returns to the triggering step S202.
It should be noted that, in an actual application scenario, the step of accessing the thread task queue according to the preset time interval and performing the corresponding operation on each thread of the thread task queue according to the sequence of the enqueue time may be implemented by setting an additional timing task, and therefore, the description is not given in the optional embodiment of the present invention shown in fig. 2.
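Putting the hypothetical helpers from the earlier sketches together, a minimal end-to-end flow that mirrors Fig. 2 could look as follows; handle_request and process_items are illustrative names, and process_items stands in for the actual business processing.

```python
# Illustrative end-to-end flow mirroring Fig. 2, built from the earlier sketches;
# all helper names and key names are assumptions, not the patent's API.
def handle_request(thread_id: str, pending_items: list[str]) -> bool:
    base_key, queue_key = "base", "task_queue"
    tmp_key = build_temp_set(thread_id, pending_items)                   # S201
    if not lock_with_retries(base_key, tmp_key, queue_key, thread_id):   # S202 / S206
        for other_id in r.lrange(queue_key, 0, -1):                      # S207
            if other_id != thread_id:
                force_release(base_key, queue_key, other_id)             # S209
        if not try_batch_lock(base_key, tmp_key, queue_key, thread_id):
            return False              # give up; caller may report the exception (S208)
    try:
        process_items(pending_items)  # placeholder for the actual business processing
    finally:
        unlock(base_key, queue_key, thread_id)                           # S205
    return True
```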
Correspondingly to the above method embodiment, the present invention further provides a data processing cluster, where the data processing cluster is configured to:
responding to the data processing request, and calling a target thread to add a plurality of data to be processed carried by the data processing request to the current temporary set;
judging whether an intersection exists between a basic set and a current temporary set, wherein the basic set comprises to-be-processed data of each thread at the current time;
if the intersection does not exist, adding each piece of data to be processed in the current temporary set into the basic set at one time, adding the identifier of the target thread into the thread task queue, and calling the target thread to process each piece of data to be processed added into the basic set at this time;
and if the intersection exists, determining the data to be processed in the intersection as occupied data occupied by other threads except the target thread.
Optionally, the data processing cluster is further configured to:
if the intersection exists, repeatedly executing, up to a preset maximum number of repetitions, the step of judging whether an intersection exists between the basic set and the current temporary set;
if the result of the current execution of the step of judging whether an intersection exists between the basic set and the current temporary set is that no intersection exists, executing the steps of adding each piece of to-be-processed data in the current temporary set to the basic set at one time, adding the identifier of the target thread to the thread task queue, and calling the target thread to process each piece of to-be-processed data added to the basic set this time.
Optionally, the data processing cluster is further configured to:
if the times of repeatedly executing the step of judging whether the intersection exists between the basic set and the current temporary set reach the preset maximum repeated times and each judgment result is that the intersection exists, acquiring the identifiers of other threads and judging whether the identifiers of other threads exist in the thread task queue;
if the identification of other threads exists, deleting each data to be processed added by other threads in the basic set according to the identification of other threads, deleting the historical temporary set corresponding to the identification of other threads, and returning to execute the step of judging whether the basic set and the current temporary set have intersection.
Optionally, the data processing cluster is further configured to:
after data to be processed in the intersection is determined to be occupied data occupied by other threads except the target thread, the thread task queue is accessed according to a preset time interval, and each thread of the thread task queue is accessed according to the sequence of the enqueue time:
judging whether the existing time length of the thread in the thread task queue is greater than a preset time length threshold value or not, and if so, determining the thread as an overtime thread;
and deleting each to-be-processed data added by the overtime thread in the basic set according to the identifier of the overtime thread, deleting the historical temporary set corresponding to the identifier of the overtime thread, and deleting the data of the thread in the thread task queue.
Optionally, the data processing cluster is further configured to:
after the target thread finishes processing the data to be processed added to the basic set, generating a data unlocking request;
in response to the data unlocking request, deleting each data to be processed added by the target thread in the basic set, deleting the current temporary set corresponding to the identification of the target thread, deleting the data of the target thread in the thread task queue,
and generating an unlocking completion message.
Embodiments of the present invention further provide a computer-readable storage medium, where when instructions in the computer-readable storage medium are executed by a processor of a data processing cluster of a starter system, the data processing cluster is enabled to execute any one of the data processing methods described above.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
It should be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A method of data processing, the method comprising:
responding to a data processing request, and calling a target thread to add a plurality of data to be processed carried by the data processing request to a current temporary set;
judging whether an intersection exists between a basic set and the current temporary set, wherein the basic set comprises to-be-processed data of each thread at the current time;
if the intersection does not exist, adding the data to be processed in the current temporary set to the basic set at one time, adding the identifier of the target thread to a thread task queue, and calling the target thread to process the data to be processed added to the basic set at this time;
and if the intersection exists, determining the data to be processed in the intersection as occupied data occupied by other threads except the target thread.
2. The method of claim 1, further comprising:
if the intersection exists, repeatedly executing, up to a preset maximum number of repetitions, the step of judging whether an intersection exists between the basic set and the current temporary set;
if the result of the current execution of the step of judging whether an intersection exists between the basic set and the current temporary set is that no intersection exists, executing the step of adding the data to be processed in the current temporary set to the basic set at one time, adding the identification of the target thread to a thread task queue, and calling the target thread to process the data to be processed added to the basic set this time.
3. The method of claim 2, further comprising:
if the number of times of repeatedly executing the step of judging whether the intersection exists between the basic set and the current temporary set reaches the preset maximum number of times of repetition and each judgment result is that the intersection exists, acquiring the identifiers of other threads and judging whether the identifiers of other threads exist in the thread task queue;
if the identification of the other threads exists, deleting the data to be processed added by the other threads in the basic set according to the identification of the other threads, deleting the historical temporary set corresponding to the identification of the other threads, and returning to the step of judging whether the basic set and the current temporary set have intersection.
4. The method of claim 1, wherein after determining the data to be processed in the intersection as occupied data occupied by other threads other than the target thread, the method further comprises:
accessing the thread task queue according to a preset time interval, and according to the sequence of the enqueue time, for each thread of the thread task queue:
judging whether the existing time length of the thread in the thread task queue is greater than a preset time length threshold value or not, and if so, determining the thread as an overtime thread;
and deleting each to-be-processed data added by the overtime thread in the basic set according to the identifier of the overtime thread, deleting the historical temporary set corresponding to the identifier of the overtime thread, and deleting the data of the thread in the thread task queue.
5. The method of claim 1, further comprising:
after the target thread finishes processing the data to be processed added to the basic set, generating a data unlocking request;
in response to the data unlocking request, deleting each piece of to-be-processed data added by the target thread in the basic set, deleting the current temporary set corresponding to the identification of the target thread, and deleting the data of the target thread in the thread task queue,
and generating an unlocking completion message.
6. A data processing cluster, characterized in that the cluster is arranged to:
responding to a data processing request, and calling a target thread to add a plurality of data to be processed carried by the data processing request to a current temporary set;
judging whether an intersection exists between a basic set and the current temporary set, wherein the basic set comprises to-be-processed data of each thread at the current time;
if the intersection does not exist, adding the data to be processed in the current temporary set to the basic set at one time, adding the identifier of the target thread to a thread task queue, and calling the target thread to process the data to be processed added to the basic set at this time;
and if the intersection exists, determining the data to be processed in the intersection as occupied data occupied by other threads except the target thread.
7. A cluster according to claim 6, further arranged to:
if the intersection exists, repeatedly executing, up to a preset maximum number of repetitions, the step of judging whether an intersection exists between the basic set and the current temporary set;
if the result of the current execution of the step of judging whether an intersection exists between the basic set and the current temporary set is that no intersection exists, executing the step of adding the data to be processed in the current temporary set to the basic set at one time, adding the identification of the target thread to a thread task queue, and calling the target thread to process the data to be processed added to the basic set this time.
8. The cluster of claim 7, wherein the cluster is further configured to:
if the number of times of repeatedly executing the step of judging whether the intersection exists between the basic set and the current temporary set reaches the preset maximum number of times of repetition and each judgment result is that the intersection exists, acquiring the identifiers of other threads and judging whether the identifiers of other threads exist in the thread task queue;
if the identification of the other threads exists, deleting the data to be processed added by the other threads in the basic set according to the identification of the other threads, deleting the historical temporary set corresponding to the identification of the other threads, and returning to the step of judging whether the basic set and the current temporary set have intersection.
9. A cluster according to claim 5, wherein the cluster is further arranged to:
after the data to be processed in the intersection is determined to be occupied data held by threads other than the target thread, accessing the thread task queue according to a preset time interval and, in order of enqueue time, for each thread in the thread task queue:
judging whether the existing time length of the thread in the thread task queue is greater than a preset time length threshold value or not, and if so, determining the thread as an overtime thread;
and deleting each to-be-processed data added by the overtime thread in the basic set according to the identifier of the overtime thread, deleting the historical temporary set corresponding to the identifier of the overtime thread, and deleting the data of the thread in the thread task queue.
10. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of a data processing cluster of a starter system, enable the data processing cluster to perform the data processing method of any of claims 1 to 5.
CN202211360707.4A 2022-11-02 2022-11-02 Data processing method, cluster and storage medium Pending CN115599527A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211360707.4A CN115599527A (en) 2022-11-02 2022-11-02 Data processing method, cluster and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211360707.4A CN115599527A (en) 2022-11-02 2022-11-02 Data processing method, cluster and storage medium

Publications (1)

Publication Number Publication Date
CN115599527A true CN115599527A (en) 2023-01-13

Family

ID=84850139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211360707.4A Pending CN115599527A (en) 2022-11-02 2022-11-02 Data processing method, cluster and storage medium

Country Status (1)

Country Link
CN (1) CN115599527A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination