CN110348250B - Hardware overhead optimization method and system of multi-chain hash stack

Info

Publication number
CN110348250B
CN110348250B
Authority
CN
China
Prior art keywords
hash
item
queue
bit
chain
Prior art date
Legal status
Active
Application number
CN201910559199.4A
Other languages
Chinese (zh)
Other versions
CN110348250A
Inventor
陈李维
许奇臻
李锦峰
史岗
孟丹
Current Assignee
Institute of Information Engineering of CAS
Original Assignee
Institute of Information Engineering of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Information Engineering of CAS filed Critical Institute of Information Engineering of CAS
Priority to CN201910559199.4A
Publication of CN110348250A
Application granted
Publication of CN110348250B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/71 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
    • G06F21/72 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information in cryptographic circuits

Abstract

Embodiments of the invention provide a hardware overhead optimization method and system for a multi-chain hash stack. The method comprises: adding a transmission queue in hardware, the queue comprising a plurality of sequentially arranged items, where each item represents one chain hash stack and stores the input and output of one hash operation; when the input of a hash operation is sent into its corresponding item in the transmission queue, setting that item to the queuing calculation state; after the hash module finishes its current calculation, querying the transmission queue for items still in the queuing calculation state; and, if any exist, having the hash module calculate the hash operation of the next item in queue order. Compared with the prior-art scheme that uses N hash modules, the embodiments reduce hardware overhead while retaining the acceleration of the multi-chain and shadow-cache schemes.

Description

Hardware overhead optimization method and system of multi-chain hash stack
Technical Field
The invention relates to the technical field of computers, in particular to a hardware overhead optimization method and system of a multi-chain hash stack.
Background
The chain hash stack is a recent defense for protecting function return addresses. Two registers, top and salt, are added in hardware: top stores the running hash value and salt stores the key of the hash function. When a sub-function is called, the current hash value and the return address are pushed onto the stack in sequence; the key, hash value, and return address are then hashed together, and the result updates the top register. When the function returns, the hash value and return address are popped together and hashed again, and the result is compared with the value in the top register. If they differ, an exception is thrown; if they match, the top register is restored from the hash value on the stack and control returns to the parent function.
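As a concrete illustration, the call/return protocol just described can be modeled in software. This is only a minimal sketch, not the patented hardware: `mix` stands in for the unspecified keyed hash primitive, and the register names follow the text above.

```python
# Minimal software model of the chain hash stack protocol. The real
# mechanism lives in hardware; `mix` is a stand-in for the unspecified
# keyed hash function, and `salt`/`top` model the two added registers.

def mix(salt, hash_val, ret_addr):
    # Placeholder for the hardware hash primitive (illustrative only).
    return (salt * 0x9E3779B1 + hash_val * 31 + ret_addr) & 0xFFFFFFFF

class ChainHashStack:
    def __init__(self, salt):
        self.salt = salt      # key register
        self.top = 0          # running hash register
        self.stack = []       # models the in-memory call stack

    def call(self, ret_addr):
        # On a call: push (old hash, return address), then chain them
        # into a new top value.
        self.stack.append((self.top, ret_addr))
        self.top = mix(self.salt, self.top, ret_addr)

    def ret(self):
        # On return: pop, recompute the hash, and compare against top.
        old_hash, ret_addr = self.stack.pop()
        if mix(self.salt, old_hash, ret_addr) != self.top:
            raise RuntimeError("return-address corruption detected")
        self.top = old_hash   # restore the parent's chain value
        return ret_addr

s = ChainHashStack(salt=0xDEADBEEF)
s.call(0x400123)
s.call(0x400456)
assert s.ret() == 0x400456
assert s.ret() == 0x400123
```

Tampering with a saved `(hash, return address)` pair on the stack makes the recomputed hash disagree with `top`, so the `ret` check fires, which is the property the defense relies on.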
The chain hash stack is highly secure: even an attacker who can read and write the memory address space at will cannot bypass the defense. However, as the latency of the hash function grows, the performance loss rises sharply. For example, when one hash operation takes 80 clock cycles, the performance loss is about 20%, which limits the scheme's practicality.
To reduce this performance loss, the prior art proposes a multi-chain optimization that places N hash modules and N top/salt register pairs in hardware. Each time a hash operation is encountered, the 1st, 2nd, ..., Nth hash module is used in turn; when a top value is fed to a hash module, the m-th top register is selected, where m is the remainder of the function's depth in the call stack divided by N. This multi-chain arrangement avoids the pipeline stalls caused by long hash latency and reduces performance loss, but implementing the N-chain scheme requires N-1 extra hash modules and top/salt register pairs, plus control logic, which imposes a large hardware overhead.
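The dispatch rule of this prior-art scheme, round-robin module selection plus depth-mod-N register selection, can be sketched as follows. The module count and names are illustrative, not taken from the patent:

```python
# Sketch of the prior-art N-chain dispatch: each new hash operation
# goes to the next hash module in round-robin order, and the top/salt
# register pair is chosen by call depth modulo N.

N = 4
next_module = 0           # round-robin hash-module selector

def dispatch(depth):
    """Return (hash module index, top/salt register index) for one op."""
    global next_module
    module = next_module              # 1st, 2nd, ... module in turn
    next_module = (next_module + 1) % N
    register = depth % N              # m-th top register, m = depth mod N
    return module, register

assert dispatch(0) == (0, 0)
assert dispatch(1) == (1, 1)
assert dispatch(5) == (2, 1)   # depth 5 -> register 5 % 4 == 1
```

The sketch makes the cost visible: N modules and N register pairs exist purely so that consecutive operations never wait on the same module, which is exactly the overhead the transmission-queue scheme below removes.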
Disclosure of Invention
To solve, or at least partially solve, the above problems, embodiments of the present invention provide a hardware overhead optimization method and system for a multi-chain hash stack.
According to a first aspect of the embodiments of the present invention, a hardware overhead optimization method for a multi-chain hash stack is provided, the method including: adding a transmission queue in hardware, the queue comprising a plurality of sequentially arranged items, each item representing one chain hash stack that stores the input and output of one hash operation; when the input of a hash operation is sent into its corresponding item in the transmission queue, setting that item to the queuing calculation state; after the hash module finishes its current calculation, querying whether any item in the queuing calculation state remains in the transmission queue; and, if so, having the hash module calculate the hash operation of the next item in queue order.
According to a second aspect of the embodiments of the present invention, a hardware overhead optimization system for a multi-chain hash stack is provided, the system including: an adding module for adding a transmission queue in hardware, the queue comprising a plurality of sequentially arranged items, each item representing one chain hash stack that stores the input and output of one hash operation; a queuing module for setting an item to the queuing calculation state when the input of a hash operation is sent into its corresponding item in the transmission queue; and a query module for querying, after the hash module finishes its current calculation, whether any item in the queuing calculation state remains in the transmission queue, in which case the hash module calculates the hash operation of the next item in queue order.
According to a third aspect of the embodiments of the present invention, there is provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the hardware overhead optimization method for a multi-chain hash stack according to any one of the various possible implementations of the first aspect.
According to a fourth aspect of embodiments of the present invention, there is provided a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for hardware overhead optimization of a multi-chain hash stack as provided in any one of the various possible implementations of the first aspect.
In the hardware overhead optimization method and system of the multi-chain hash stack provided above, a transmission queue is added in hardware and only one hash module is used; while the hash module is busy calculating, subsequently arriving hash operations are added to the transmission queue to await calculation. The hash operations of the queued items are thus calculated in turn by a single hash module without stalling the pipeline, which, compared with the prior-art scheme using N hash modules, reduces hardware overhead while retaining the acceleration of the multi-chain and shadow-cache schemes.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from these without inventive effort.
Fig. 1 is a schematic flowchart of a hardware overhead optimization method for a multi-chain hash stack according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a transmit queue according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a state change of a finite state machine according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a hardware overhead optimization system of a multi-chain hash stack according to an embodiment of the present invention;
fig. 5 is a schematic physical structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a hardware overhead optimization method of a multi-chain hash stack, which is shown in fig. 1 and includes but is not limited to the following steps:
Step 101: adding a transmission queue in hardware. The transmission queue comprises a plurality of sequentially arranged items; each item represents one chain hash stack, and each chain hash stack stores the input and the output of one hash operation.
Specifically, because the prior-art multi-chain scheme uses N hash modules and N top/salt register pairs, it incurs a large hardware overhead. The embodiment of the invention therefore uses only one hash module but adds a transmission queue in hardware. The transmission queue includes a plurality of items; each item is one row in fig. 2 and represents one chain, storing the inputs and outputs of a hash operation.
Step 102: when the input of a hash operation is sent into the corresponding item in the transmission queue, setting that item to the queuing calculation state.
Specifically, once the transmission queue of step 101 is in place, even while the single hash module is busy calculating, as long as the queue is not full each newly arriving hash operation is added to the transmission queue to await calculation (i.e., its item enters the queuing calculation state) without stalling the pipeline.
Step 103: after the hash module finishes its current calculation, querying whether any item in the queuing calculation state remains in the transmission queue; if so, the hash module calculates the hash operation of the next item in queue order.
Specifically, each time the hash module finishes a hash operation, it can query whether queued work (i.e., an item in the queuing calculation state) remains in the transmission queue. If so, the hash module proceeds to calculate the hash operation of the next item; if not, it may stop calculating.
A finite state machine can be added before step 103 to control the calculation flow of the transmission queue. Its state changes are shown in fig. 3: the machine has only two states, query and calculate. In the query state, if any item is waiting to be calculated (i.e., is in the queuing calculation state), the machine enters the calculate state. When the calculation completes, the machine returns to the query state and, depending on whether queued items remain, either starts processing the next waiting item or stays in the query state.
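The two-state controller just described can be sketched as a small step function. The state names follow fig. 3 as described in the text; the encoding and return convention are illustrative:

```python
# Two-state controller for the transmission queue, as in fig. 3: the
# machine idles in a query state and moves to a calculate state
# whenever some item is queued. Representation is illustrative.

QUERY, CALC = "query", "calculate"

def step(state, queued_items):
    """Advance the FSM one step; returns (next_state, item_started)."""
    if state == QUERY:
        if queued_items:
            return CALC, queued_items[0]   # start the oldest queued item
        return QUERY, None                 # nothing waiting: keep polling
    # CALC: the hash operation finishes, the machine falls back to QUERY,
    # and the next step decides whether another item is waiting.
    return QUERY, None

state, started = step(QUERY, ["op1", "op2"])
assert (state, started) == (CALC, "op1")
state, started = step(state, ["op2"])      # calculation completes
assert (state, started) == (QUERY, None)
state, started = step(state, [])           # queue empty: stay in query
assert (state, started) == (QUERY, None)
```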
In the hardware overhead optimization method of the multi-chain hash stack described above, a transmission queue is added in hardware and only one hash module is used; while the hash module is busy calculating, subsequently arriving hash operations are added to the transmission queue to await calculation. The hash operations of the queued items are thus calculated in turn by a single hash module without stalling the pipeline, which, compared with the prior-art scheme using N hash modules, reduces hardware overhead while retaining the acceleration of the multi-chain and shadow-cache schemes.
Based on the above embodiment, as an alternative embodiment and referring to fig. 2, each item includes: a queue bit, a callret bit, a ret_addr bit, an old_hash bit and a new_hash bit.
The queue bit indicates whether the item is in the queuing calculation state; it is a single bit whose two values represent queued and not queued.
The callret bit indicates the type of the hash operation held in the item, which is either a hash value updating operation or a hash value checking operation; its two values distinguish the types. For example, callret = 1 indicates the hash value update performed when a return address is pushed, and callret = 0 indicates the hash value check performed when a return address is popped.
The ret_addr bit stores the return address.
The old_hash bit stores the original hash value.
The new_hash bit stores the new hash value obtained by performing the hash value updating operation on the original hash value.
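Gathering the five fields above, one item of the transmission queue can be modeled as follows. The container type and zero defaults are illustrative; the callret encoding (1 = update on push, 0 = check on pop) follows the example in the text:

```python
# One transmission-queue item with the five fields listed above.
from dataclasses import dataclass

@dataclass
class QueueItem:
    queue: int = 0       # 1 while the item waits for the hash module
    callret: int = 0     # 1 = hash value update, 0 = hash value check
    ret_addr: int = 0    # return address fed into the hash
    old_hash: int = 0    # hash value before the operation
    new_hash: int = 0    # hash value produced by an update

item = QueueItem(queue=1, callret=1, ret_addr=0x400123, old_hash=0xABCD)
assert item.queue == 1 and item.new_hash == 0  # result not yet written
```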
Based on the foregoing embodiment, as an alternative embodiment, the transmission queue includes a head pointer and a calculating pointer.
The head pointer points to the item into which the next hash operation will be stored: when a function is called, the head pointer is incremented by 1; when a function returns, it is decremented by 1. In other words, head points to the location where the next hash operation is to be stored, i.e., to the corresponding chain hash stack.
The calculating pointer points to the item whose hash operation is currently being calculated.
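The pointer behavior above can be sketched as follows. The queue size and the wrap-around arithmetic are assumptions, since the text only specifies the +1 movement on call and -1 on return:

```python
# Head-pointer bookkeeping for the transmission queue: head moves
# forward on a call and back on a return, always pointing at the slot
# for the next hash operation; calc trails it, pointing at the item
# currently in the hash module. Queue size is illustrative.

QUEUE_SIZE = 8

class Pointers:
    def __init__(self):
        self.head = 0   # next item to be filled
        self.calc = 0   # item currently being hashed

    def on_call(self):
        self.head = (self.head + 1) % QUEUE_SIZE

    def on_return(self):
        self.head = (self.head - 1) % QUEUE_SIZE

p = Pointers()
p.on_call(); p.on_call()
assert p.head == 2
p.on_return()
assert p.head == 1
```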
Based on the foregoing embodiments, as an alternative embodiment, a method is provided for setting an item to the queuing calculation state when the input of a hash operation is sent into the corresponding item of the transmission queue; it includes, but is not limited to, the following steps:
Step 1: send the input of the hash value updating operation at the head of a function into an item in the transmission queue, set the item's queue bit to the value indicating the queuing calculation state, and continue execution without stalling the pipeline.
Specifically, as long as the transmission queue is not full, once the hash value updating operation at the head of the function has sent its input into the transmission queue, the item's queue bit is set to 1 and the pipeline continues executing without a stall.
Step 2: after the hash value updating operation finishes, store the resulting new hash value into the new_hash bit.
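Steps 1 and 2 above amount to latching the inputs, raising the queue bit, and later writing back the result. A software sketch, with `fake_hash` standing in for the hardware hash module and a dictionary standing in for the item's fields:

```python
# Enqueue of a function-head hash value update (steps 1 and 2): the
# input is written into an item, the queue bit is raised, and the
# pipeline continues; when the hash finishes, new_hash is filled in
# and the queue bit drops. `fake_hash` is an illustrative stand-in.

def fake_hash(old_hash, ret_addr):
    return (old_hash * 31 + ret_addr) & 0xFFFFFFFF

def enqueue_update(item, old_hash, ret_addr):
    # Step 1: latch inputs, mark the item as queued. No pipeline stall.
    item.update({"callret": 1, "old_hash": old_hash,
                 "ret_addr": ret_addr, "queue": 1})

def finish_update(item):
    # Step 2: the hash module stores its result and clears the bit.
    item["new_hash"] = fake_hash(item["old_hash"], item["ret_addr"])
    item["queue"] = 0

item = {"queue": 0, "callret": 0, "ret_addr": 0, "old_hash": 0, "new_hash": 0}
enqueue_update(item, old_hash=0x1234, ret_addr=0x400500)
assert item["queue"] == 1          # queued; pipeline keeps running
finish_update(item)
assert item["queue"] == 0 and item["new_hash"] == fake_hash(0x1234, 0x400500)
```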
Based on the foregoing embodiment, as an optional embodiment, after the transmission queue is added in hardware, the method further includes:
when the hash module performs the hash value checking operation at the tail of a function, if the callret bit of the item indicates that the hash operation held in the item is a hash value updating operation, matching whether the return address and hash value popped from the chain hash stack are the same as the input stored in the transmission queue;
if they are the same, setting the queue bit to the value indicating that the item is not in the queuing calculation state, so that the item is no longer queued for calculation, and assigning the value in the old_hash bit to the new_hash bit; if they are not the same, throwing an exception.
Specifically, when the hash module encounters a hash value checking operation at the tail of a function: if the callret bit is 1, the popped return address and hash value are matched directly against the input stored in the transmission queue (a shadow-cache mechanism may be used for this; the embodiment of the present invention does not restrict it). If they match, the item's queue bit is set to 0 so it is no longer queued, and the value of old_hash is assigned to new_hash; if they do not match, an exception is thrown. If the callret bit is 0, then when the input of the hash operation to be checked is sent into the corresponding item of the transmission queue, the item's queue bit is set to 1 and the pipeline continues without stalling. After the hash value checking operation finishes, its result is compared with the item's new_hash; if they are the same, new_hash is updated with the item's old_hash and the queue bit is set to 0; if not, an error is reported.
Based on the foregoing embodiment, as an alternative embodiment, in the matching case above (the queue bit is cleared, the item is no longer queued for calculation, and the value in the old_hash bit is assigned to the new_hash bit), the method further includes: if the hash module is currently calculating this item, terminating the hash module's calculation.
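The fast path for the callret = 1 case, comparing the popped values directly against the inputs latched in the transmission queue instead of re-hashing, can be sketched as follows. The field layout mirrors the item fields above; names are illustrative:

```python
# Fast-path check at a function tail when the matching item's callret
# bit is 1: the popped return address and hash value are compared
# directly with the inputs latched in the transmission queue, which
# skips a second hash computation entirely.

def check_against_queue(item, popped_hash, popped_ret_addr):
    assert item["callret"] == 1     # item holds a pending/finished update
    if (item["old_hash"], item["ret_addr"]) == (popped_hash, popped_ret_addr):
        # Match: cancel any queued work and roll the chain value back.
        item["queue"] = 0
        item["new_hash"] = item["old_hash"]
        return True
    raise RuntimeError("hash stack check failed")

item = {"queue": 1, "callret": 1, "ret_addr": 0x400500,
        "old_hash": 0x1234, "new_hash": 0}
assert check_against_queue(item, 0x1234, 0x400500)
assert item["queue"] == 0 and item["new_hash"] == 0x1234
```

This is the sense in which the scheme retains a shadow-cache-like speedup: a matched pop is resolved by a plain comparison, and any in-flight hash for that item can simply be abandoned.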
Based on the foregoing embodiment, as an optional embodiment, after the transmission queue is added in hardware, the method further includes:
if all items hold hash operations of the same type, whether all hash value updating operations or all hash value checking operations, a subsequent hash operation of that same type can enter the transmission queue only after the hash operation of the earliest-queued item has finished; and/or,
if the hash module is performing a hash value checking operation and the subsequent hash operation is a hash value updating operation, the subsequent operation can enter the transmission queue only after the checking operation has finished.
Specifically, the pipeline needs to stall in the two cases above. In the first case, when the queue bits of all items are 1 and every item holds a hash value update (or every item holds a hash value check), an incoming operation of the same type must wait for some item to finish before it can enter the transmission queue. In the second case, while a hash value check is being calculated, an incoming hash value updating operation can enter the transmission queue only after the check completes.
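The two stall conditions can be expressed as a predicate over the queue state. The representation below is an illustrative sketch, not the hardware control logic; `items` is a list of per-item field dictionaries (callret: 1 = update, 0 = check) and `busy_op` is the operation type the hash module is currently computing, or None:

```python
# The two pipeline-stall conditions described above, as a predicate.

def must_stall(items, new_op, busy_op=None):
    # Case 1: every item is queued with the same operation type as the
    # incoming one, so it must wait for the oldest item to finish.
    if items and all(i["queue"] == 1 and i["callret"] == new_op for i in items):
        return True
    # Case 2: a check is in flight and an update arrives; the update
    # may enter the queue only after the check completes.
    if busy_op == 0 and new_op == 1:
        return True
    return False

full_of_updates = [{"queue": 1, "callret": 1} for _ in range(4)]
assert must_stall(full_of_updates, new_op=1)           # all updates, update arrives
assert must_stall([], new_op=1, busy_op=0)             # check in flight, update arrives
assert not must_stall([{"queue": 0, "callret": 1}], new_op=1)  # free slot exists
```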
Based on the foregoing embodiment, an embodiment of the present invention provides a hardware overhead optimization system for a multi-chain hash stack, configured to execute the hardware overhead optimization method of the foregoing method embodiment. Referring to fig. 4, the system includes an adding module 301, a queuing module 302 and a query module 303. The adding module 301 is configured to add a transmission queue in hardware; the transmission queue comprises a plurality of sequentially arranged items, each item representing one chain hash stack that stores the input and output of one hash operation. The queuing module 302 is configured to set an item to the queuing calculation state when the input of a hash operation is sent into the corresponding item of the transmission queue. The query module 303 is configured to query, after the hash module finishes its current calculation, whether any item in the queuing calculation state remains in the transmission queue; if so, the hash module calculates the hash operation of the next item in queue order.
Specifically, in the embodiment of the present invention only one hash module is used, and the adding module 301 adds one transmission queue in hardware. The transmission queue includes a plurality of items; each item is one row in fig. 2 and represents one chain, storing the inputs and outputs of a hash operation. Even while the single hash module is busy calculating, as long as the transmission queue is not full, each newly arriving hash operation is added to the queue by the queuing module 302 to await calculation (i.e., its item enters the queuing calculation state) without stalling the pipeline. Each time the hash module finishes a hash operation, the query module 303 queries whether queued work (i.e., an item in the queuing calculation state) remains in the transmission queue; if so, the hash module calculates the hash operation of the next item; if not, it may stop calculating.
In the hardware overhead optimization system of the multi-chain hash stack described above, a transmission queue is added in hardware and only one hash module is used; while the hash module is busy calculating, subsequently arriving hash operations are added to the transmission queue to await calculation. The hash operations of the queued items are thus calculated in turn by a single hash module without stalling the pipeline, which, compared with the prior-art scheme using N hash modules, reduces hardware overhead while retaining the acceleration of the multi-chain and shadow-cache schemes.
An embodiment of the present invention provides an electronic device, as shown in fig. 5, including: a processor (processor) 501, a communication interface (Communications Interface) 502, a memory (memory) 503, and a communication bus 504, where the processor 501, the communication interface 502, and the memory 503 communicate with each other via the communication bus 504. The processor 501 may call a computer program stored in the memory 503 and runnable on the processor 501 to execute the hardware overhead optimization method of the multi-chain hash stack provided by the foregoing embodiments, the method including, for example: adding a transmission queue in hardware, the queue comprising a plurality of sequentially arranged items, each item representing one chain hash stack that stores the input and output of one hash operation; when the input of a hash operation is sent into the corresponding item in the transmission queue, setting that item to the queuing calculation state; after the hash module finishes its current calculation, querying whether any item in the queuing calculation state remains in the transmission queue; and if so, the hash module calculates the hash operation of the next item in queue order.
In addition, the logic instructions in the memory 503 may be implemented in the form of software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
An embodiment of the present invention further provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the hardware overhead optimization method of the multi-chain hash stack provided by the foregoing embodiments, the method including: adding a transmission queue in hardware, the queue comprising a plurality of sequentially arranged items, each item representing one chain hash stack that stores the input and output of one hash operation; when the input of a hash operation is sent into the corresponding item in the transmission queue, setting that item to the queuing calculation state; after the hash module finishes its current calculation, querying whether any item in the queuing calculation state remains in the transmission queue; and if so, the hash module calculates the hash operation of the next item in queue order.
The above-described embodiments of the electronic device and the like are merely illustrative, and units illustrated as separate components may or may not be physically separate, and components displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the various embodiments or some parts of the methods of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A hardware overhead optimization method of a multi-chain hash stack is characterized by comprising the following steps:
adding a transmission queue in hardware; the transmission queue comprises a plurality of sequentially arranged items, each item representing one chain hash stack, and each chain hash stack being used for storing the input and the output of one hash operation;
when the input of a hash operation is sent into the corresponding item in the transmission queue, setting the item to a queuing calculation state;
after a hash module finishes its calculation state, inquiring whether an item in the queuing calculation state still exists in the transmission queue; if yes, the hash module calculates the hash operation in the next item according to the queuing sequence;
wherein each of the items comprises: a queue bit, a callret bit, a ret_addr bit, an old_hash bit and a new_hash bit;
the queue bit is used for indicating whether the item is in the queuing calculation state;
the callret bit is used for indicating the type of the hash operation in the item, the type of the hash operation comprising a hash value updating operation or a hash value checking operation;
the ret_addr bit is used for storing a return address;
the old_hash bit is used for storing an original hash value;
the new_hash bit is used for storing a new hash value obtained after the hash value updating operation is carried out on the original hash value;
after the transmission queue is added in the hardware, the method further comprises: if the type of the hash operation of all the items is the hash value updating operation, or the type of the hash operation of all the items is the hash value checking operation, a subsequent hash operation of the same type can enter the transmission queue only after the hash operation of the earliest-queued item is finished; and/or,
if the hash module is performing the hash value checking operation and the type of the subsequent hash operation is the hash value updating operation, the subsequent hash operation can enter the transmission queue only after the hash value checking operation is finished.
2. The hardware overhead optimization method of the multi-chain hash stack of claim 1, wherein the transmission queue comprises a head pointer and a calculating pointer;
the head pointer is used for pointing to the item into which the next hash operation is to be stored; the calculating pointer is used for pointing to the item on which the hash operation is currently being carried out.
3. The hardware overhead optimization method of the multi-chain hash stack according to claim 1, wherein sending the input of the hash operation to the corresponding item in the transmission queue and setting the item to the queuing calculation state comprises:
sending the input of the hash value update operation at the function head into one item of the transmission queue, setting the queue bit of the item to the value indicating the queuing calculation state, and continuing execution without stalling the pipeline;
and after the hash value update operation is finished, storing the obtained new hash value into the new_hash bit.
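The non-blocking update path above can be sketched as follows. This is illustrative Python under stated assumptions: fake_hash is a stand-in for the multi-cycle hash hardware, entries are plain dicts, and clearing the queue bit on completion is an assumption (the claim itself only states that the result is stored in new_hash).

```python
def fake_hash(old_hash, ret_addr):
    # Stand-in for the real multi-cycle hash hardware.
    return (old_hash * 31 + ret_addr) & 0xFFFFFFFF

def issue_update(entry, old_hash, ret_addr):
    """At a function head, fill a free entry with the update's inputs
    and mark it queued; the pipeline does not stall on hash latency."""
    entry.update(queue=1, callret='update',
                 ret_addr=ret_addr, old_hash=old_hash, new_hash=0)

def complete_update(entry):
    """Later, when the hash module services this entry, store the
    result into new_hash (claim 3, last step) and release the entry."""
    entry['new_hash'] = fake_hash(entry['old_hash'], entry['ret_addr'])
    entry['queue'] = 0  # assumed: entry freed once the result is stored
    return entry['new_hash']
```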
4. The hardware overhead optimization method of the multi-chain hash stack according to claim 1, wherein after adding the transmission queue on the hardware, the method further comprises:
when the hash module performs the hash value check operation at the function tail, if the callret bit indicates that the hash operation of the item is a hash value update operation, matching whether the return address and the hash value in the chain hash stack are the same as the input stored in the transmission queue;
if they are the same, setting the queue bit to the value indicating that the item is not in the queuing calculation state, so that the item is no longer queued for calculation, and assigning the value in the old_hash bit to the new_hash bit; if they are not the same, throwing an exception;
if the callret bit indicates that the hash operation of the item is a hash value check operation, sending the input of the hash operation to be checked to the corresponding item in the transmission queue and setting the item to the queuing calculation state comprises:
sending the input of the hash value check operation at the function tail into one item of the transmission queue, setting the queue bit of the item to the value indicating the queuing calculation state, and continuing execution without stalling the pipeline;
after the hash value check operation is finished, comparing the computed hash value with the new_hash bit of the item; if they are the same, updating the new_hash bit with the old_hash bit of the item and setting the queue bit of the item to 0, where 0 indicates that the item is no longer queued for calculation; if they are not the same, reporting an error.
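The fast path above — a return whose inputs match a still-queued update entry cancels that entry and rolls new_hash back to old_hash without computing any hash — can be sketched as follows. Field names and the exception type are illustrative, and the check-type branch (which follows the same enqueue path as claim 3) is omitted.

```python
def check_return(entry, ret_addr, cur_hash):
    """Function-tail check against a queued update entry: compare the
    return address and current chain-hash-stack value with the inputs
    stored in the transmission queue."""
    if entry['callret'] == 'update':
        if entry['ret_addr'] == ret_addr and entry['old_hash'] == cur_hash:
            # Matched call/return pair: cancel the queued update and
            # restore the pre-call hash (old_hash -> new_hash).
            entry['queue'] = 0
            entry['new_hash'] = entry['old_hash']
            return entry['new_hash']
        raise RuntimeError('return address or hash mismatch')
```

Pairing a return with its still-pending call this way means neither the update nor the check hash ever needs to be computed for short call/return sequences, which is where the hardware saving comes from.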
5. The method of claim 4, wherein assigning the value in the old_hash bit to the new_hash bit when they are the same further comprises:
if the hash module is currently computing the item, terminating the computation of the hash module.
6. A hardware overhead optimization system for a multi-chain hash stack, comprising:
an adding module, used for adding a transmission queue on hardware; the transmission queue comprises a plurality of sequentially arranged items, each item represents a chain hash stack, and each chain hash stack is used for storing the input and the output of one hash operation;
a queuing module, used for setting an item to the queuing calculation state when the input of a hash operation is sent to the corresponding item in the transmission queue;
a query module, used for querying, after the hash module finishes a computation, whether any item in the queuing calculation state exists in the transmission queue; if yes, the hash module performs the hash operation of the next item in queuing order;
the adding module is further configured such that each of the items comprises: a queue bit, a callret bit, a ret_addr bit, an old_hash bit and a new_hash bit;
the queue bit is used for indicating whether the item is in the queuing calculation state;
the callret bit is used for indicating the type of the hash operation in the item, the type being either a hash value update operation or a hash value check operation;
the ret_addr bit is used for storing a return address;
the old_hash bit is used for storing an original hash value;
the new_hash bit is used for storing the new hash value obtained after the hash value update operation is performed on the original hash value;
if the hash operations of all the items are hash value update operations, or the hash operations of all the items are hash value check operations, a subsequent hash operation of the same type can only enter the transmission queue after the hash operation of the earliest queued item has finished; and/or,
if the hash module is performing a hash value check operation and the subsequent hash operation is a hash value update operation, the subsequent hash operation can only enter the transmission queue after the hash value check operation has finished.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of the method for hardware overhead optimization of a multi-chain hash stack according to any of claims 1 to 5.
8. A non-transitory computer readable storage medium, having stored thereon a computer program, which when executed by a processor, performs the steps of the method for hardware overhead optimization of a multi-chain hash stack according to any of claims 1 to 5.
CN201910559199.4A 2019-06-26 2019-06-26 Hardware overhead optimization method and system of multi-chain hash stack Active CN110348250B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910559199.4A CN110348250B (en) 2019-06-26 2019-06-26 Hardware overhead optimization method and system of multi-chain hash stack

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910559199.4A CN110348250B (en) 2019-06-26 2019-06-26 Hardware overhead optimization method and system of multi-chain hash stack

Publications (2)

Publication Number Publication Date
CN110348250A CN110348250A (en) 2019-10-18
CN110348250B true CN110348250B (en) 2020-12-29

Family

ID=68183090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910559199.4A Active CN110348250B (en) 2019-06-26 2019-06-26 Hardware overhead optimization method and system of multi-chain hash stack

Country Status (1)

Country Link
CN (1) CN110348250B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102742320A (en) * 2009-03-20 2012-10-17 Telefonaktiebolaget LM Ericsson (publ) Active queue management for wireless communication network uplink
CN107992577A (en) * 2017-12-04 2018-05-04 Qi An Xin Technology Hash table data conflict processing method and device
CN109889449A (en) * 2019-02-03 2019-06-14 Tsinghua University Packet forwarding method and system with low storage overhead

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100449986C (en) * 2003-01-28 2009-01-07 Huawei Technologies Co., Ltd. Method for raising operational speed of key-hashing method
US7290253B1 (en) * 2003-09-30 2007-10-30 Vmware, Inc. Prediction mechanism for subroutine returns in binary translation sub-systems of computers
US7523298B2 (en) * 2006-05-04 2009-04-21 International Business Machines Corporation Polymorphic branch predictor and method with selectable mode of prediction
US8196110B2 (en) * 2007-11-30 2012-06-05 International Business Machines Corporation Method and apparatus for verifying a suspect return pointer in a stack
US10360374B2 (en) * 2017-05-25 2019-07-23 Intel Corporation Techniques for control flow protection
CN109508539A (en) * 2018-09-21 2019-03-22 Institute of Information Engineering, Chinese Academy of Sciences Chained stack structure for detecting tampering of return addresses in the stack


Also Published As

Publication number Publication date
CN110348250A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
US10884830B1 (en) Method and apparatus for multithreaded data transmission in a tee system
US10943006B2 (en) Method and apparatus for multithreaded data transmission in a TEE system
US20130326154A1 (en) Cache system optimized for cache miss detection
US20210109920A1 (en) Method for Validating Transaction in Blockchain Network and Node for Configuring Same Network
US20180173638A1 (en) Method and apparatus for data access
EP3058481B1 (en) Acceleration based on cached flows
EP2858024A1 (en) An asset management device and method in a hardware platform
US20150331712A1 (en) Concurrently processing parts of cells of a data structure with multiple processes
US20210019415A1 (en) Method and apparatus for data transmission in a tee system
CN116909943B (en) Cache access method and device, storage medium and electronic equipment
US9419646B2 (en) Hardware compression to find backward references with multi-level hashes
CN110363006B (en) Multi-chain Hash stack structure and method for detecting function return address being tampered
US11093245B2 (en) Computer system and memory access technology
JP7461895B2 (en) Network Packet Templating for GPU-Driven Communication
CN110348250B (en) Hardware overhead optimization method and system of multi-chain hash stack
CN110362503B (en) Optimization method and optimization system of chain hash stack
US20180143890A1 (en) Simulation apparatus, simulation method, and computer readable medium
JP6189266B2 (en) Data processing apparatus, data processing method, and data processing program
CN115794677A (en) Cache data verification method and device, electronic equipment and storage medium
US10789176B2 (en) Technologies for a least recently used cache replacement policy using vector instructions
CN111680289B (en) Chained hash stack operation method and device
CN110378109A (en) Reduce the method and system of chain type Hash stack performance loss
CN110362502B (en) Shadow cache optimization method and device of chained hash stack
US10146820B2 (en) Systems and methods to access memory locations in exact match keyed lookup tables using auxiliary keys
US20210405969A1 (en) Computer-readable recording medium recording arithmetic processing program, arithmetic processing method, and arithmetic processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant