Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments and the accompanying drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are merely some, but not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without any creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, such that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It will be understood that when an element such as a layer, film, region, or substrate is referred to as being "on" another element, it can be directly on the other element or intervening elements may also be present. Also, in the specification and claims, when an element is described as being "connected" to another element, the element may be "directly connected" to the other element or "connected" to the other element through a third element.
As described in the background art, in the prior art, a processor chip can process only a limited number of operation requests at the same time. After a large number of operation requests are initiated, the limited resources of the processor are contended for by a large number of data requests, and if the operation requests are not properly handled, data blocking occurs.
According to an embodiment of the present application, a method of processing a token is provided.
Fig. 1 is a flow chart of a token processing method according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
step S101, receiving a request process;
step S102, using a mutex to lock and protect the request process;
step S103, determining whether a token linked list is empty, wherein the token linked list comprises a plurality of nodes, and one node corresponds to one token ID;
step S104, distributing one token ID to the request process when the token linked list is not empty.
Specifically, the mutex ensures the atomicity of operations on the token linked list during token application. Locking the request process with the mutex means that, once a request process is received, it is protected by the lock: even if a new request process arrives before a token ID has been distributed to the current one, the new request process is not processed. This ensures that each request is handled independently and prevents confusion when token IDs are distributed to request processes.
Specifically, the request process described above is equivalent to the operation request described in the background art.
Specifically, the token ID refers to a token including ID information, and the ID information is used as a unique identifier of the token and may directly correspond to the requesting process, that is, a unique token is distributed to each requesting process.
Specifically, an execution body for the token processing method in the present scheme includes, but is not limited to, the 20-core concurrent architecture of a Serica Gemini 3XXX series RSA chip.
In this scheme, a request process is first received; the request process is then locked and protected with the mutex, and it is determined whether the token linked list is empty. If the token linked list is not empty, it contains nodes available for allocation, and one token ID is distributed to the request process, ensuring that each request process receives exactly one token ID. Once all token IDs on the token linked list have been distributed, no token ID can be allocated even if a new request process arrives; the new process can only enter a waiting state. This solves the prior-art problems that, when multiple request processes exist simultaneously, limited resources are contended for and data blocking occurs.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
In an embodiment of the application, after distributing one of the token IDs to the requesting process, the method further includes: deleting the node corresponding to the token ID from the token linked list; and releasing the mutex. That is, after a token ID is distributed to a requesting process, the node corresponding to the distributed token ID is deleted from the token linked list, which prevents the already-distributed token ID from being distributed to another requesting process. The mutex is then released so that token IDs can subsequently be distributed to other request processes, ensuring that tokens are distributed smoothly and that each request process corresponds to exactly one token.
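The allocation path of steps S101 to S104, together with the node deletion and mutex release just described, can be sketched in userspace C with POSIX threads. This is only an illustrative sketch: the actual implementation is a Linux driver, and the names `token_pool`, `token_node`, `token_pool_init`, and `token_alloc`, the 40-token pool size, and the use of pthread primitives are assumptions, not the driver's actual API.

```c
#include <pthread.h>
#include <stdlib.h>

#define NUM_TOKENS 40  /* token IDs 0..39, matching the [0:39] range in the text */

struct token_node {
    int id;                      /* one node corresponds to one token ID */
    struct token_node *next;
};

struct token_pool {
    pthread_mutex_t lock;        /* the mutex protecting the token linked list */
    struct token_node *head;     /* head of the free-token linked list */
};

/* Build the initial pool with one node per token ID (IDs come out in order). */
void token_pool_init(struct token_pool *p)
{
    pthread_mutex_init(&p->lock, NULL);
    p->head = NULL;
    for (int id = NUM_TOKENS - 1; id >= 0; id--) {
        struct token_node *n = malloc(sizeof(*n));
        n->id = id;
        n->next = p->head;
        p->head = n;
    }
}

/* Steps S102-S104: lock, check whether the list is empty, pop a node, unlock.
 * Returns a token ID, or -1 when the list is empty (the caller would then
 * enter a waiting state until a token is returned). */
int token_alloc(struct token_pool *p)
{
    pthread_mutex_lock(&p->lock);        /* S102: lock protection */
    if (p->head == NULL) {               /* S103: token linked list empty? */
        pthread_mutex_unlock(&p->lock);
        return -1;                       /* no token available: caller must wait */
    }
    struct token_node *n = p->head;      /* S104: distribute one token ID */
    p->head = n->next;                   /* delete the node from the list */
    int id = n->id;
    free(n);
    pthread_mutex_unlock(&p->lock);      /* release the mutex */
    return id;
}
```

Because the list is popped under the mutex, two concurrent requesters can never receive the same ID, and a requester arriving after all 40 IDs are handed out simply gets the "must wait" result.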
In one embodiment of the present application, a request process is received; a critical section of the token pool is locked and protected with a mutex so that the token linked list is operated on exclusively; it is determined whether the token linked list is empty, wherein the token linked list can contain a plurality of token nodes, each token having a unique ID; and, when the token linked list is not empty, a token is distributed to the requesting process and the distributed token is removed from the linked list. This scheme ensures the uniqueness of token distribution: when the token pool has no distributable tokens, a newly arriving request process is placed in a waiting state by the scheduler until another process completes its operation and returns its token. This solves the prior-art problems that, when multiple request processes exist simultaneously, limited resources are contended for and request conflicts occur.
In an embodiment of the present application, after releasing the mutex, the method further includes: performing the operation on the request process together with its token ID to obtain an operation result, wherein the operation result carries the token ID. Because the request process carries its token ID through the operation and the result carries the same token ID, the correspondence between a request process and its operation result can be found regardless of the order in which operations complete. This solves the problem that, because different request processes require different operation times, data throughput does not follow a first-in-first-out order; the correspondence between input and output can still be recovered.
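The idea of carrying the token ID through the operation can be illustrated with a minimal sketch. The struct and function names below, and the placeholder doubling computation standing in for the chip operation, are assumptions made for illustration only:

```c
/* The result carries the same token ID as the request, so even when results
 * complete out of order, each result can be matched back to its request. */
struct op_result {
    int  token_id;   /* copied from the request: the correlation key */
    long value;      /* the computed result payload */
};

/* Stand-in for the chip operation: the token ID travels with the data.
 * The real operation would be an RSA computation; doubling is a placeholder. */
struct op_result do_operation(int token_id, long operand)
{
    struct op_result r;
    r.token_id = token_id;   /* result carries the request's token ID */
    r.value    = operand * 2;
    return r;
}
```

A caller that submitted many requests concurrently would look only at `token_id` in each returned `op_result` to decide which request it answers, rather than relying on completion order.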
In an embodiment of the application, after the operation on the token ID and the request process is finished, the processing method further includes: locking and protecting the token linked list with a mutex; recycling the token ID to the corresponding node of the token linked list; and releasing the mutex. The mutex prevents simultaneous deletion from and addition to the token linked list, which would corrupt the linked-list pointers. Locking with the mutex both when a token ID is distributed and when it is recycled prevents confusion during distribution and recycling. After the token ID is recycled to its corresponding node in the token linked list, the mutex is released to remove the protection, which prevents the node from being preempted by other token IDs while the token ID is being recycled.
Specifically, a recovery function is used to recycle the token ID. The recovery function takes two input parameters: the head pointer of the token linked list and the number of the token ID to be recycled. The function first checks whether the ID is outside the range [0:39] and whether the ID has been used; if the ID has not been used, recycling is refused. It then acquires the mutex and adds the token with that ID back to the token linked list. This mutex is the same mutex used in the application process, and it prevents simultaneous deletion from and addition to the linked list from corrupting the list pointers.
In an embodiment of the application, in the process of recycling the token ID, the processing method further includes: determining whether the token ID has been used; refusing to recycle the token ID when it has not been used; and recycling the token ID when it has been used. A token ID that has not been used means that its corresponding node is still in the token linked list and has not been distributed, so recycling is refused; the token ID can be recycled only when its corresponding node is absent from the token linked list. Specifically, to avoid the same token ID being added back multiple times during recycling, a volatile integer global variable node_flags is introduced, in which bits [0:39] identify whether each token ID is in use (for example, bit 0 set to 1 indicates that a request process has applied for the token with ID 0 and has not yet returned it). If the token ID has been used, it is added back into the token pool; if it has not been used, recycling is refused.
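The recovery function described above can be sketched as follows, under the same illustrative assumptions as the earlier allocation sketch (the names `token_node`, `token_free`, and `token_mark_used` are hypothetical, and a 64-bit integer stands in for the volatile `node_flags` variable whose bits [0:39] track in-use IDs):

```c
#include <pthread.h>
#include <stdlib.h>

#define NUM_TOKENS 40  /* valid token IDs are 0..39 */

struct token_node {
    int id;
    struct token_node *next;
};

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static struct token_node *pool_head = NULL;
/* Bits [0:39]: bit set = that token ID has been handed out and not returned. */
static volatile unsigned long long node_flags = 0;

/* Mark an ID as handed out (this would happen at allocation time). */
void token_mark_used(int id)
{
    node_flags |= 1ULL << id;
}

/* Recovery function: two inputs, the list head pointer and the ID to recycle.
 * Range-check the ID, refuse recycling when the ID is not in use (so the same
 * ID cannot be added back twice), then take the same mutex used at allocation
 * and add the node back to the list. Returns 0 on success, -1 on refusal. */
int token_free(struct token_node **head, int id)
{
    if (id < 0 || id >= NUM_TOKENS)        /* outside [0:39]: refuse */
        return -1;
    if (!(node_flags & (1ULL << id)))      /* ID not in use: refuse recycling */
        return -1;

    pthread_mutex_lock(&pool_lock);        /* same mutex as the allocation path */
    struct token_node *n = malloc(sizeof(*n));
    n->id = id;
    n->next = *head;                       /* add the token's node back */
    *head = n;
    node_flags &= ~(1ULL << id);           /* clear the in-use bit */
    pthread_mutex_unlock(&pool_lock);
    return 0;
}
```

The in-use bitmap makes a double recycle of the same ID fail fast: after the first successful `token_free`, the bit is clear, so a second call with the same ID is refused before the list is touched.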
The embodiment of the present application further provides a token processing apparatus, and it should be noted that the token processing apparatus according to the embodiment of the present application may be used to execute the token processing method according to the embodiment of the present application. The following describes a processing apparatus for a token provided in an embodiment of the present application.
Fig. 2 is a schematic diagram of a token processing apparatus according to an embodiment of the application. As shown in fig. 2, the apparatus includes:
a receiving unit 10, configured to receive a request process;
a locking unit 20, configured to lock and protect the request process by using a mutex;
a first determining unit 30, configured to determine whether a token linked list is empty, where the token linked list includes a plurality of nodes, and each node corresponds to one token ID;
a distribution unit 40, configured to distribute one token ID to the request process when the token linked list is not empty.
Specifically, the mutex ensures the atomicity of operations on the token linked list during token application. Locking the request process with the mutex means that, once a request process is received, it is protected by the lock: even if a new request process arrives before a token ID has been distributed to the current one, the new request process is not processed. This ensures that each request is handled independently and prevents confusion when token IDs are distributed to request processes.
Specifically, the request process described above is equivalent to the operation request described in the background art.
Specifically, the token ID refers to a token including ID information, and the ID information is used as a unique identifier of the token and may directly correspond to the requesting process, that is, a unique token is distributed to each requesting process.
Specifically, an execution body for the token processing method in the present scheme includes, but is not limited to, the 20-core concurrent architecture of a Serica Gemini 3XXX series RSA chip.
In this scheme, the receiving unit receives the request process, the locking unit locks and protects the request process with the mutex, and the first determining unit determines whether the token linked list is empty. If the token linked list is not empty, it contains nodes available for allocation, and one token ID is distributed to the request process, ensuring that each request process receives exactly one token ID. Once all token IDs on the token linked list have been distributed, no token ID can be allocated even if a new request process arrives; the new process can only enter a waiting state. This solves the prior-art problems that, when multiple request processes exist simultaneously, limited resources are contended for and data blocking occurs.
In an embodiment of the application, the apparatus further includes a deleting unit and a first releasing unit, where the deleting unit is configured to delete the node corresponding to the token ID from the token linked list after the token ID is distributed to the request process, and the first releasing unit is configured to release the mutex after one of the token IDs is distributed to the requesting process. That is, after a token ID is distributed to a requesting process, the node corresponding to the distributed token ID is deleted from the token linked list, which prevents the already-distributed token ID from being distributed to another requesting process. The mutex is then released so that token IDs can subsequently be distributed to other request processes, ensuring that tokens are distributed smoothly and that each request process corresponds to exactly one token.
In an embodiment of the application, the apparatus further includes an operation unit, where the operation unit is configured to, after the mutex is released, perform the operation on the request process together with its token ID to obtain an operation result, where the operation result carries the token ID. Because the request process carries its token ID through the operation and the result carries the same token ID, the correspondence between a request process and its operation result can be found regardless of the order in which operations complete. This solves the problem that, because different request processes require different operation times, data throughput does not follow a first-in-first-out order; the correspondence between input and output can still be recovered.
In an embodiment of the application, the processing apparatus further includes a protection unit, a recovery unit, and a second release unit, where the protection unit is configured to lock and protect the token linked list with a mutex after the operation on the token ID and the request process is completed; the recovery unit is configured to recycle the token ID to the corresponding node of the token linked list after the operation is finished; and the second release unit is configured to release the mutex after the operation is finished. The mutex prevents simultaneous deletion from and addition to the token linked list, which would corrupt the linked-list pointers; locking with the mutex both when a token ID is distributed and when it is recycled prevents confusion during distribution and recycling. After the token ID is recycled to its corresponding node in the token linked list, the mutex is released to remove the protection, which prevents the node from being preempted by other token IDs while the token ID is being recycled.
Specifically, a recovery function is used to recycle the token ID. The recovery function takes two input parameters: the head pointer of the token linked list and the number of the token ID to be recycled. The function first checks whether the ID is outside the range [0:39] and whether the ID has been used; if the ID has not been used, recycling is refused. It then acquires the mutex and adds the token with that ID back to the token linked list. This mutex is the same mutex used in the application process, and it prevents simultaneous deletion from and addition to the linked list from corrupting the list pointers.
In an embodiment of the present application, the processing apparatus further includes a second determining unit, a first processing unit, and a second processing unit, where the second determining unit is configured to determine, in the process of recycling the token ID, whether the token ID has been used; the first processing unit is configured to refuse to recycle the token ID when it has not been used; and the second processing unit is configured to recycle the token ID when it has been used. A token ID that has not been used means that its corresponding node is still in the token linked list and has not been distributed, so recycling is refused; the token ID can be recycled only when its corresponding node is absent from the token linked list. Specifically, to avoid the same token ID being added back multiple times during recycling, a volatile integer global variable node_flags is introduced, in which bits [0:39] identify whether each token ID is in use (for example, bit 0 set to 1 indicates that a request process has applied for the token with ID 0 and has not yet returned it). If the token ID has been used, it is added back into the token pool; if it has not been used, recycling is refused.
The technical solution of the present application has been implemented in a Linux driver for Serica Gemini series products. It ensures that the hardware operates normally (reaching design performance, for example 100,000 operations per second, without errors indicating data blocking caused by resource contention) under conditions of long duration (7×24 hours), large data volume (on the order of 10^9 requests), and high concurrency (40 or more processes).
The token processing apparatus includes a processor and a memory, wherein the receiving unit, the locking unit, the first determining unit, the distributing unit, and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to implement corresponding functions.
The processor includes a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be provided, and by adjusting kernel parameters, data blocking is prevented when limited resources are contended for by a large number of data requests.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
The embodiment of the invention provides a computer-readable storage medium, which comprises a stored program, wherein when the program runs, a device where the computer-readable storage medium is located is controlled to execute the processing method of the token.
The embodiment of the invention provides a processor, which is used for running a program, wherein the processing method of the token is executed when the program runs.
The embodiment of the invention provides equipment, which comprises a processor, a memory and a program which is stored on the memory and can run on the processor, wherein when the processor executes the program, at least the following steps are realized:
step S101, receiving a request process;
step S102, using a mutex to lock and protect the request process;
step S103, determining whether a token linked list is empty, wherein the token linked list comprises a plurality of nodes, and one node corresponds to one token ID;
step S104, distributing one token ID to the request process when the token linked list is not empty.
The device herein may be a server, a PC, a PAD, a mobile phone, etc.
The present application further provides a computer program product adapted to perform a program of initializing at least the following method steps when executed on a data processing device:
step S101, receiving a request process;
step S102, using a mutex to lock and protect the request process;
step S103, determining whether a token linked list is empty, wherein the token linked list comprises a plurality of nodes, and one node corresponds to one token ID;
step S104, distributing one token ID to the request process when the token linked list is not empty.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
From the above description, it can be seen that the above-described embodiments of the present application achieve the following technical effects:
1) In the token processing method, a request process is first received; the request process is then locked and protected with a mutex, and it is determined whether the token linked list is empty. If the token linked list is not empty, it contains nodes available for allocation, and one token ID is distributed to the request process, ensuring that each request process receives exactly one token ID. Once all token IDs on the token linked list have been distributed, no token ID can be distributed even if a new request process arrives; the new process can only enter a waiting state.
2) The token processing apparatus includes a receiving unit, a locking unit, a first determining unit, and a distribution unit. The receiving unit receives a request process, the locking unit locks and protects the request process with a mutex, and the first determining unit determines whether the token linked list is empty. If the token linked list is not empty, it contains nodes available for allocation, and the distribution unit distributes one token ID to the request process, ensuring that each request process receives one token ID. Once all token IDs on the token linked list have been distributed, no token ID can be distributed even if a new request process arrives; the new process can only enter a waiting state.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.