CN112131012B - Token processing method, token processing device and computer readable storage medium - Google Patents

Token processing method, token processing device and computer readable storage medium

Info

Publication number
CN112131012B
Authority
CN
China
Prior art keywords
token
mutex
request process
linked list
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011342393.6A
Other languages
Chinese (zh)
Other versions
CN112131012A (en)
Inventor
黄野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiuzhou Huaxing Integrated Circuit Design Beijing Co ltd
Original Assignee
Jiuzhou Huaxing Integrated Circuit Design Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiuzhou Huaxing Integrated Circuit Design Beijing Co ltd
Priority to CN202011342393.6A
Publication of CN112131012A
Application granted
Publication of CN112131012B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F 9/526 Mutual exclusion algorithms

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Storage Device Security (AREA)

Abstract

The application provides a token processing method, a token processing device and a computer readable storage medium. The processing method comprises the following steps: receiving a requesting process; locking and protecting the requesting process with a mutex; determining whether a token linked list is empty, wherein the token linked list comprises a plurality of nodes and each node corresponds to one token ID; and, when the token linked list is not empty, distributing one token ID to the requesting process. According to this scheme, each requesting process is guaranteed to be distributed exactly one token ID, and once all token IDs on the token linked list have been distributed, a newly arriving requesting process cannot be assigned a token ID and can only enter a waiting state. This solves the prior-art problem that, when many requesting processes exist simultaneously, limited resources are contended for and data blocking occurs.

Description

Token processing method, token processing device and computer readable storage medium
Technical Field
The present application relates to the field of token processing, and in particular, to a token processing method, a token processing apparatus, a computer-readable storage medium, and a token processor.
Background
A processor chip can process only a limited number of operation requests at any one time. When a large number of operation requests are initiated, the processor's limited resources are contended for by a large number of data requests, and if this is not handled, data blocking occurs.
For example, the 20-core concurrent architecture of a Serica Gemini 3XXX series RSA chip allows at most 40 concurrent RSA operation requests to be cached; the chip can process 20 operation requests simultaneously, and all requests share the same data transmission path. Limited resources may therefore be contended for by a large number of data requests, and if this is not handled, data blocking will occur.
Disclosure of Invention
The present application mainly aims to provide a token processing method, a token processing apparatus, a computer-readable storage medium, and a token processor, so as to solve the prior-art problem that limited resources are contended for by a large number of data requests and data becomes blocked.
In order to achieve the above object, according to one aspect of the present application, a token processing method is provided, comprising: receiving a requesting process; locking and protecting the requesting process with a mutex; determining whether a token linked list is empty, wherein the token linked list comprises a plurality of nodes and each node corresponds to one token ID; and, when the token linked list is not empty, distributing one token ID to the requesting process.
Further, after distributing one token ID to the requesting process, the method further comprises: deleting the node corresponding to the token ID from the token linked list; and releasing the mutex.
Further, after releasing the mutex, the method further comprises: performing an operation on the token ID and the requesting process to obtain an operation result, wherein the operation result includes the token ID.
Further, after the operation on the token ID and the requesting process is finished, the processing method further includes: locking and protecting the token linked list with the mutex; recycling the token ID to its corresponding node of the token linked list; and releasing the mutex.
Further, during recycling of the token ID, the processing method further includes: determining whether the token ID has been used; refusing reclamation if the token ID has not been used; and recycling the token ID if it has been used.
According to another aspect of the present application, a token processing apparatus is provided, comprising: a receiving unit, configured to receive a requesting process; a locking unit, configured to lock and protect the requesting process with a mutex; a first determining unit, configured to determine whether a token linked list is empty, wherein the token linked list comprises a plurality of nodes and each node corresponds to one token ID; and a distribution unit, configured to distribute one token ID to the requesting process when the token linked list is not empty.
Further, the apparatus further comprises: a deleting unit, configured to delete the node corresponding to the token ID from the token linked list after one token ID has been distributed to the requesting process; and a first releasing unit, configured to release the mutex after one token ID has been distributed to the requesting process.
Further, the apparatus further includes an operation unit, configured to, after the mutex is released, perform an operation on the token ID and the requesting process to obtain an operation result, wherein the operation result includes the token ID.
According to yet another aspect of the present application, there is provided a computer-readable storage medium comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform any one of the token processing methods.
According to a further aspect of the present application, there is provided a processor configured to run a program, wherein the program, when run, performs any one of the token processing methods described above.
According to this technical solution, a requesting process is first received, the requesting process is then locked and protected with a mutex, and it is determined whether the token linked list is empty. When the token linked list is not empty, it still contains nodes available for allocation, and one token ID is distributed to the requesting process. This guarantees that each requesting process is distributed one token ID; once all token IDs on the token linked list have been distributed, a newly arriving requesting process cannot be assigned a token ID and can only enter a waiting state.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
FIG. 1 shows a flow chart of a method of processing a token according to an embodiment of the application;
Fig. 2 shows a schematic diagram of a token processing apparatus according to an embodiment of the application.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of this application are used to distinguish between similar elements and are not necessarily intended to describe a particular sequential or chronological order. It should be understood that data so used may be interchanged under appropriate circumstances, such that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It will be understood that when an element such as a layer, film, region, or substrate is referred to as being "on" another element, it can be directly on the other element or intervening elements may also be present. Also, in the specification and claims, when an element is described as being "connected" to another element, the element may be "directly connected" to the other element or "connected" to the other element through a third element.
As described in the background, in the prior art a processor chip can process only a limited number of operation requests at the same time; after a large number of operation requests are initiated, the processor's limited resources are contended for by a large number of data requests, and if this is not handled, data blocking occurs.
According to an embodiment of the present application, a method of processing a token is provided.
Fig. 1 is a flow chart of a token processing method according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
step S101, receiving a request process;
step S102, using a mutex to lock and protect the requested process;
step S103, determining whether a token linked list is empty, wherein the token linked list comprises a plurality of nodes, and one node corresponds to one token ID;
and step S104, distributing one token ID for the request process under the condition that the token linked list is not empty.
Specifically, the mutex ensures the atomicity of operations on the token linked list during token application. The mutex locks and protects the requesting process: once a requesting process has been received, it is locked and protected, and even if a new requesting process arrives before a token ID has been distributed to the current one, the new requesting process is not yet handled. This ensures that each request is processed independently and prevents confusion when token IDs are distributed to requesting processes.
Specifically, the requesting process described above corresponds to the operation request described above.
Specifically, the token ID refers to a token including ID information, and the ID information is used as a unique identifier of the token and may directly correspond to the requesting process, that is, a unique token is distributed to each requesting process.
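As a concrete illustration only, the token pool, its linked-list nodes, and the protecting mutex described above might be declared as follows in C (a user-space pthreads sketch); the names token_node, token_pool, token_pool_init, and MAX_TOKENS are assumptions made for this sketch rather than identifiers taken from the patent, and node_flags anticipates the global flag variable discussed later in the description.

#include <pthread.h>

#define MAX_TOKENS 40  /* e.g. at most 40 cached concurrent requests, as in the background example */

/* One node of the token linked list; each node corresponds to exactly one token ID. */
struct token_node {
    int token_id;               /* unique identifier of the token */
    struct token_node *next;    /* next free token in the list */
};

/* The token pool: a singly linked list of free tokens protected by one mutex. */
struct token_pool {
    struct token_node nodes[MAX_TOKENS];     /* pre-allocated storage, one node per token ID */
    struct token_node *head;                 /* head of the free-token linked list */
    pthread_mutex_t lock;                    /* mutex guarding all linked-list operations */
    volatile unsigned long long node_flags;  /* bits [0:39]: 1 = token ID currently in use */
};

/* Initialize the pool: all token IDs 0..MAX_TOKENS-1 start out free. */
static void token_pool_init(struct token_pool *pool)
{
    pool->head = NULL;
    pool->node_flags = 0;
    pthread_mutex_init(&pool->lock, NULL);
    for (int i = MAX_TOKENS - 1; i >= 0; i--) {
        pool->nodes[i].token_id = i;
        pool->nodes[i].next = pool->head;
        pool->head = &pool->nodes[i];
    }
}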
Specifically, the execution body that carries out the token processing method of this scheme includes, but is not limited to, the 20-core concurrent architecture of a Serica Gemini 3XXX series RSA chip.
In this scheme, a requesting process is first received, the requesting process is then locked and protected with the mutex, and it is determined whether the token linked list is empty. When the token linked list is not empty, it still contains nodes available for allocation, and one token ID is distributed to the requesting process. This guarantees that each requesting process is distributed one token ID; once all token IDs on the token linked list have been distributed, a newly arriving requesting process cannot be assigned a token ID and can only enter a waiting state. This solves the prior-art problem that, when many requesting processes exist simultaneously, limited resources are contended for and data blocking occurs.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
In an embodiment of the application, after distributing one token ID to the requesting process, the method further includes: deleting the node corresponding to the token ID from the token linked list; and releasing the mutex. That is, after a token ID is distributed to a requesting process, the node corresponding to that token ID is deleted from the token linked list, which prevents the already-distributed token ID from being distributed again to another requesting process. The mutex is then released so that the subsequent distribution of token IDs to other requesting processes is not affected, tokens are distributed smoothly, and each requesting process corresponds to exactly one token.
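A minimal allocation sketch consistent with steps S101 to S104 and with the node deletion and mutex release just described might look like the following, building on the token_pool structures sketched above; token_alloc and its return convention are assumptions for illustration, not the patent's actual driver code.

/* Distribute one token ID to a requesting process.
 * Returns the token ID on success, or -1 if the token linked list is empty,
 * in which case the caller enters a waiting state and retries later. */
static int token_alloc(struct token_pool *pool)
{
    struct token_node *node;
    int id = -1;

    pthread_mutex_lock(&pool->lock);        /* lock and protect the request with the mutex */
    node = pool->head;
    if (node != NULL) {                     /* the token linked list is not empty */
        pool->head = node->next;            /* delete the node corresponding to this token ID */
        node->next = NULL;
        id = node->token_id;
        pool->node_flags |= 1ULL << id;     /* mark this token ID as in use */
    }
    pthread_mutex_unlock(&pool->lock);      /* release the mutex */
    return id;
}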
In one embodiment of the present application: a requesting process is received; the critical section of the token pool is locked and protected with a mutex so that operations on the token linked list are exclusive; it is determined whether the token linked list is empty, where the token linked list may contain a plurality of token nodes and each token has a unique ID; and, when the token linked list is not empty, a token is distributed to the requesting process and the distributed token is removed from the linked list. This scheme guarantees the uniqueness of token distribution: when the token pool has no distributable tokens and a new requesting process arrives, the scheduler puts the request into a waiting state until another process finishes its operation and returns a token. This solves the prior-art problem that, when many requesting processes exist simultaneously, limited resources are contended for and request conflicts occur.
In an embodiment of the present application, after releasing the mutex, the method further comprises: performing an operation on the token ID and the requesting process to obtain an operation result, wherein the operation result includes the token ID. When the requesting process is operated on, it carries its token ID through the operation, and the operation result carries the same token ID, so the correspondence between a requesting process and its operation result can be recovered regardless of the order in which operations complete. This solves the problem that, because different requesting processes need different amounts of computation time, data throughput does not follow a first-in-first-out order: the correspondence between input and output can still be found.
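Purely to illustrate that correspondence, an operation descriptor might carry the token ID alongside the request data so a result can be matched to its originating request even when operations complete out of order; the struct and field names below are assumptions for this sketch and are not defined by the patent.

/* Hypothetical operation descriptor: the token ID travels with the request data
 * and is echoed back with the result, so an out-of-order completion can still be
 * matched to the requesting process that submitted it. */
struct token_op {
    int token_id;          /* token ID distributed to this requesting process */
    const void *request;   /* input data of the operation request */
    void *result;          /* output buffer; delivered together with token_id */
};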
In an embodiment of the application, after the operation on the token ID and the requesting process is finished, the processing method further includes: locking and protecting the token linked list with the mutex; recycling the token ID to its corresponding node of the token linked list; and releasing the mutex. The mutex prevents deletions from and additions to the token linked list from happening at the same time, which would corrupt the list pointers; locking with the mutex both when distributing a token ID and when recycling one prevents confusion between distribution and recycling. After the token ID has been recycled to its corresponding node of the token linked list, the mutex is released to remove the protection, which prevents the node from being preempted by other token IDs while this token ID is being recycled.
Specifically, a recycle function is used to recycle the token ID. The recycle function has two input parameters: the head pointer of the token linked list and the token ID number to be recycled. The function first checks whether the ID is outside the range [0:39] and also checks whether the ID is in use; if the ID is not in use, recycling is refused. The function then acquires the mutex and adds the token with this ID back into the token linked list. The mutex here is the same mutex used during token application, and it prevents the linked list from being deleted from and added to at the same time, which would corrupt the list pointers.
In an embodiment of the application, during recycling of the token ID, the processing method further includes: determining whether the token ID has been used; refusing to recycle the token ID if it has not been used; and recycling the token ID if it has been used. "Not used" means the node corresponding to the token ID is still in the token linked list; since that node has never been distributed, recycling is refused, and the token ID can be recycled only when its corresponding node is absent from the list. Specifically, to avoid the same token ID being added back multiple times during recycling, a volatile integer global variable node_flags is introduced, whose bits [0:39] identify whether each token ID is in use (for example, bit 0 set to 1 means a requesting process has applied for the token with ID 0 and has not yet returned it); if the token ID is marked as not in use, adding it back into the token pool is refused.
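A recycle sketch that follows the description above (range check against [0:39], node_flags used-bit check, mutex held while the node is added back at the list head) could look like this, again building on the earlier structures; token_recycle is an assumed name rather than the patent's actual function, and the storage-by-index scheme is a simplification of the head-pointer parameter described above.

/* Recycle a token ID back into the token linked list.
 * Returns 0 on success, or -1 if the ID is outside [0:39] or not currently in use,
 * in which case reclamation is refused so the same token cannot be added back twice. */
static int token_recycle(struct token_pool *pool, int token_id)
{
    if (token_id < 0 || token_id >= MAX_TOKENS)        /* ID outside the range [0:39] */
        return -1;
    if (!(pool->node_flags & (1ULL << token_id)))      /* token ID not in use: refuse recycling */
        return -1;

    pthread_mutex_lock(&pool->lock);                   /* lock and protect the token linked list */
    pool->node_flags &= ~(1ULL << token_id);           /* mark the token ID as returned */
    pool->nodes[token_id].next = pool->head;           /* add the node back at the list head */
    pool->head = &pool->nodes[token_id];
    pthread_mutex_unlock(&pool->lock);                 /* release the mutex */
    return 0;
}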
The embodiment of the present application further provides a token processing apparatus, and it should be noted that the token processing apparatus according to the embodiment of the present application may be used to execute the token processing method according to the embodiment of the present application. The following describes a processing apparatus for a token provided in an embodiment of the present application.
Fig. 2 is a schematic diagram of a token processing apparatus according to an embodiment of the application. As shown in fig. 2, the apparatus includes:
a receiving unit 10, configured to receive a request process;
a locking unit 20 for locking and protecting the requested process by using a mutex;
a first determining unit 30, configured to determine whether a token linked list is empty, where the token linked list includes a plurality of nodes, and each node corresponds to one token ID;
a distribution unit 40, configured to distribute one token ID to the requesting process when the token linked list is not empty.
Specifically, the mutex ensures the atomicity of operations on the token linked list during token application. The mutex locks and protects the requesting process: once a requesting process has been received, it is locked and protected, and even if a new requesting process arrives before a token ID has been distributed to the current one, the new requesting process is not yet handled. This ensures that each request is processed independently and prevents confusion when token IDs are distributed to requesting processes.
Specifically, the requesting process described above corresponds to the operation request described above.
Specifically, the token ID refers to a token including ID information, and the ID information is used as a unique identifier of the token and may directly correspond to the requesting process, that is, a unique token is distributed to each requesting process.
Specifically, the execution body that carries out the token processing method of this scheme includes, but is not limited to, the 20-core concurrent architecture of a Serica Gemini 3XXX series RSA chip.
In this scheme, the receiving unit receives the requesting process, the locking unit locks and protects the requesting process with the mutex, and the first determining unit determines whether the token linked list is empty. When the token linked list is not empty, it still contains nodes available for allocation, and one token ID is distributed to the requesting process. This guarantees that each requesting process is distributed one token ID; once all token IDs on the token linked list have been distributed, a newly arriving requesting process cannot be assigned a token ID and can only enter a waiting state. This solves the prior-art problem that, when many requesting processes exist simultaneously, limited resources are contended for and data blocking occurs.
In an embodiment of the application, the apparatus further includes a deleting unit and a first releasing unit. The deleting unit is configured to delete the node corresponding to the token ID from the token linked list after the token ID has been distributed to the requesting process; the first releasing unit is configured to release the mutex after one token ID has been distributed to the requesting process. That is, after a token ID is distributed to a requesting process, the node corresponding to that token ID is deleted from the token linked list, which prevents the already-distributed token ID from being distributed again to another requesting process. The mutex is then released so that the subsequent distribution of token IDs to other requesting processes is not affected, tokens are distributed smoothly, and each requesting process corresponds to exactly one token.
In an embodiment of the application, the apparatus further includes an operation unit, configured to, after the mutex is released, perform an operation on the token ID and the requesting process to obtain an operation result, wherein the operation result includes the token ID. When the requesting process is operated on, it carries its token ID through the operation, and the operation result carries the same token ID, so the correspondence between a requesting process and its operation result can be recovered regardless of the order in which operations complete. This solves the problem that, because different requesting processes need different amounts of computation time, data throughput does not follow a first-in-first-out order: the correspondence between input and output can still be found.
In an embodiment of the application, the processing apparatus further includes a protection unit, a recovery unit, and a second releasing unit. The protection unit is configured to lock and protect the token linked list with the mutex after the operation on the token ID and the requesting process is finished; the recovery unit is configured to recycle the token ID to its corresponding node of the token linked list after the operation on the token ID and the requesting process is finished; and the second releasing unit is configured to release the mutex after the operation on the token ID and the requesting process is finished. The mutex prevents deletions from and additions to the token linked list from happening at the same time, which would corrupt the list pointers; locking with the mutex both when distributing a token ID and when recycling one prevents confusion between distribution and recycling. After the token ID has been recycled to its corresponding node of the token linked list, the mutex is released to remove the protection, which prevents the node from being preempted by other token IDs while this token ID is being recycled.
Specifically, a recycle function is used to recycle the token ID. The recycle function has two input parameters: the head pointer of the token linked list and the token ID number to be recycled. The function first checks whether the ID is outside the range [0:39] and also checks whether the ID is in use; if the ID is not in use, recycling is refused. The function then acquires the mutex and adds the token with this ID back into the token linked list. The mutex here is the same mutex used during token application, and it prevents the linked list from being deleted from and added to at the same time, which would corrupt the list pointers.
In an embodiment of the present application, the processing apparatus further includes a second determining unit, a first processing unit, and a second processing unit. The second determining unit is configured to determine, during recycling of the token ID, whether the token ID has been used; the first processing unit is configured to refuse recycling if the token ID has not been used; and the second processing unit is configured to recycle the token ID if it has been used. "Not used" means the node corresponding to the token ID is still in the token linked list; since that node has never been distributed, recycling is refused, and the token ID can be recycled only when its corresponding node is absent from the list. Specifically, to avoid the same token ID being added back multiple times during recycling, a volatile integer global variable node_flags is introduced, whose bits [0:39] identify whether each token ID is in use (for example, bit 0 set to 1 means a requesting process has applied for the token with ID 0 and has not yet returned it); if the token ID is marked as not in use, adding it back into the token pool is refused.
The technical solution of the present application has been implemented in a Linux driver for Serica Gemini series products. It ensures that the hardware operates normally (reaching its design performance, for example 100,000 operations per second, without errors that would indicate data blocking caused by resource contention) over long runtimes (7×24 hours), large data volumes (requests on the order of 10^9), and high concurrency (40 or more processes).
The token processing apparatus includes a processor and a memory, wherein the receiving unit, the locking unit, the first determining unit, the distributing unit, and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to implement corresponding functions.
The processor comprises a kernel, which fetches the corresponding program unit from the memory. One or more kernels may be provided; by adjusting kernel parameters, data blocking is prevented when limited resources are contended for by a large number of data requests.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
The embodiment of the invention provides a computer-readable storage medium, which comprises a stored program, wherein when the program runs, a device where the computer-readable storage medium is located is controlled to execute the processing method of the token.
The embodiment of the invention provides a processor, which is used for running a program, wherein the processing method of the token is executed when the program runs.
The embodiment of the invention provides equipment, which comprises a processor, a memory and a program which is stored on the memory and can run on the processor, wherein when the processor executes the program, at least the following steps are realized:
step S101, receiving a request process;
step S102, using a mutex to lock and protect the requested process;
step S103, determining whether a token linked list is empty, wherein the token linked list comprises a plurality of nodes, and one node corresponds to one token ID;
and step S104, distributing one token ID for the request process under the condition that the token linked list is not empty.
The device herein may be a server, a PC, a PAD, a mobile phone, etc.
The present application further provides a computer program product which, when executed on a data processing device, is adapted to execute a program that carries out at least the following method steps:
step S101, receiving a request process;
step S102, using a mutex to lock and protect the requested process;
step S103, determining whether a token linked list is empty, wherein the token linked list comprises a plurality of nodes, and one node corresponds to one token ID;
and step S104, distributing one token ID for the request process under the condition that the token linked list is not empty.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
From the above description, it can be seen that the above-described embodiments of the present application achieve the following technical effects:
1) The token processing method first receives a requesting process, then locks and protects the requesting process with a mutex, and then determines whether the token linked list is empty. When the token linked list is not empty, it still contains nodes available for allocation, and one token ID is distributed to the requesting process. This guarantees that each requesting process is distributed one token ID; once all token IDs on the token linked list have been distributed, a newly arriving requesting process cannot be assigned a token ID and can only enter a waiting state.
2) The token processing apparatus comprises a receiving unit, a locking unit, a first determining unit, and a distribution unit. The receiving unit receives the requesting process, the locking unit locks and protects the requesting process with a mutex, and the first determining unit determines whether the token linked list is empty. When the token linked list is not empty, it still contains nodes available for allocation, and the distribution unit distributes one token ID to the requesting process. This guarantees that each requesting process is distributed one token ID; once all token IDs on the token linked list have been distributed, a newly arriving requesting process cannot be assigned a token ID and can only enter a waiting state.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (5)

1. A method for processing a token, comprising:
receiving a request process;
locking the requested process using a mutex;
determining whether a token linked list is empty, wherein the token linked list comprises a plurality of nodes, and one node corresponds to one token ID;
distributing one token ID for the request process under the condition that the token linked list is not empty;
after distributing one of the token IDs for the requesting process, the method further comprises:
deleting the node corresponding to the token ID from the token linked list;
releasing the mutex;
after releasing the mutex, the method further comprises:
performing an operation on the token ID and the request process to obtain an operation result, wherein the operation result comprises the token ID;
after the operation of the token ID and the request process is finished, the processing method further includes:
locking and protecting the token linked list by using the mutex;
recycling the token ID to the corresponding node of the token linked list;
releasing the mutex.
2. The processing method according to claim 1, wherein in recovering the token ID, the processing method further comprises:
determining whether the token ID has been used;
refusing reclamation in the event that the token ID is not used;
in the case where the token ID has been used, the token ID is recycled.
3. An apparatus for processing a token, comprising:
a receiving unit, configured to receive a request process;
a locking unit, configured to lock and protect the requested process using a mutex;
a first determining unit, configured to determine whether a token linked list is empty, where the token linked list includes a plurality of nodes, and one node corresponds to one token ID;
a distribution unit, configured to distribute one token ID for the request process when the token linked list is not empty;
the device further comprises:
a deleting unit, configured to delete the node corresponding to the token ID from the token linked list after distributing one token ID to the request process;
a first releasing unit, configured to release the mutex after distributing one of the token IDs for the requesting process;
the device also comprises an arithmetic unit, wherein the arithmetic unit is used for carrying out operation on the token ID and the request process after the mutex is released to obtain an operation result, and the operation result comprises the token ID;
the device further comprises:
the protection unit is used for performing locking protection on the token linked list by using the mutex after the operation of the token ID and the request process is finished;
a recovery unit, configured to recover the token ID to the corresponding node of the token linked list;
and the second releasing unit is used for releasing the mutex.
4. A computer-readable storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the method of processing a token of any one of claims 1 to 2.
5. A processor, characterized in that the processor is configured to run a program, wherein the program is configured to execute the method of processing a token according to any one of claims 1 to 2 when running.
CN202011342393.6A 2020-11-26 2020-11-26 Token processing method, token processing device and computer readable storage medium Active CN112131012B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011342393.6A CN112131012B (en) 2020-11-26 2020-11-26 Token processing method, token processing device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011342393.6A CN112131012B (en) 2020-11-26 2020-11-26 Token processing method, token processing device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112131012A CN112131012A (en) 2020-12-25
CN112131012B true CN112131012B (en) 2021-07-20

Family

ID=73852325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011342393.6A Active CN112131012B (en) 2020-11-26 2020-11-26 Token processing method, token processing device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112131012B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101082870A (en) * 2007-07-20 2007-12-05 中兴通讯股份有限公司 Method for restricting parallel execution of shell script

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1331531A (en) * 2000-06-29 2002-01-16 杨震 ATM token-ring network
US7831974B2 (en) * 2002-11-12 2010-11-09 Intel Corporation Method and apparatus for serialized mutual exclusion
CN104468302B (en) * 2014-10-16 2018-03-30 深圳市金证科技股份有限公司 A kind of processing method and processing device of token
CN108897628B (en) * 2018-05-25 2020-06-26 北京奇艺世纪科技有限公司 Method and device for realizing distributed lock and electronic equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101082870A (en) * 2007-07-20 2007-12-05 中兴通讯股份有限公司 Method for restricting parallel execution of shell script

Also Published As

Publication number Publication date
CN112131012A (en) 2020-12-25


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 100080 s1601-s1605, 16 / F, block C, No.2, south academy of Sciences Road, Haidian District, Beijing

Applicant after: Saixin semiconductor technology (Beijing) Co.,Ltd.

Address before: 100080 s1601-s1605, 16 / F, block C, No.2, south academy of Sciences Road, Haidian District, Beijing

Applicant before: JIUZHOU HUAXING INTEGRATED CIRCUIT DESIGN (BEIJING) Co.,Ltd.

CP02 Change in the address of a patent holder
CP02 Change in the address of a patent holder

Address after: S1601, 16 / F, block C, No. 2, South Road, Academy of Sciences, Haidian District, Beijing 100080

Patentee after: Saixin semiconductor technology (Beijing) Co.,Ltd.

Address before: 100080 s1601-s1605, 16 / F, block C, No.2, south academy of Sciences Road, Haidian District, Beijing

Patentee before: Saixin semiconductor technology (Beijing) Co.,Ltd.