CN111240748B - Multi-core-based thread-level speculative parallelism method


Info

Publication number
CN111240748B
CN111240748B (application CN202010054734.3A)
Authority
CN
China
Prior art keywords
thread
state
core unit
sequence
threads
Prior art date
Legal status
Active
Application number
CN202010054734.3A
Other languages
Chinese (zh)
Other versions
CN111240748A (en)
Inventor
李远成
施佳琪
王朝闻
冯茹
蒋林
Current Assignee
Xian University of Science and Technology
Original Assignee
Xian University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Science and Technology
Priority to CN202010054734.3A
Publication of CN111240748A
Application granted
Publication of CN111240748B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38 - Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F 9/3877 - Concurrent instruction execution, e.g. pipeline or look ahead using a slave processor, e.g. coprocessor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 - Partitioning or combining of resources
    • G06F 9/5066 - Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 - Indexing scheme relating to G06F9/00
    • G06F 2209/50 - Indexing scheme relating to G06F9/50
    • G06F 2209/5018 - Thread allocation
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)
  • Devices For Executing Special Programs (AREA)

Abstract

The invention relates to a multi-core-based thread-level speculative parallelization method comprising the following steps: judging whether each core unit needs to wait after it finishes executing a thread in the first state, the threads in the first state having a preset first order; if waiting is needed, inserting the thread in the first state at a preset position of a verification queue and pointing the pointer at that position to a storage unit, where the verification queue has a plurality of sequentially arranged positions, the position at which a thread is inserted corresponds to the thread's place in the first order, each position holds a preset pointer to the core unit corresponding to that position, and the verification queue is stored in the storage unit; and verifying the threads in the first state, in the first order, in the storage unit or a core unit according to where the pointers in the verification queue point, to obtain a verification result.

Description

Multi-core-based thread-level speculative parallelism method
Technical Field
The invention relates to a method for thread-level speculative parallelism based on multiple cores.
Background
With Thread Level Parallelism (TLP), a computer executes two or more threads at the same time, and data consistency during thread execution is guaranteed by a verification mechanism. In the conventional chip multiprocessor (CMP) model, when parallelizing irregular programs, code segments are verified as follows: verification is carried out between processing units, and whether the data used by a thread spawned on a processing unit is consistent is checked only after that thread finishes executing. If the data is consistent, commit authority is handed over to the spawned thread; otherwise the spawned thread is cancelled (squashed) and the deterministic thread continues executing the original code.
As shown in FIG. 1, in the conventional execution model a thread in a given state verifies the next thread; if a subsequent thread executes quickly, its core unit must wait until the thread in that state has verified it before it can continue executing other code.
When the conventional model is used to parallelize irregular programs, this waiting causes load imbalance and reduces computational performance.
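Purely as a non-authoritative illustration of the conventional scheme described above, the following C++ sketch models the verify-then-commit-or-squash decision performed when a spawned thread finishes; the names Segment and commit_or_squash are assumptions introduced for illustration and do not come from the patent.

    #include <cstdio>

    // Conventional CMP scheme (sketch): the deterministic thread checks the data
    // a spawned speculative thread used, but only after that thread has finished,
    // so a fast speculative thread stalls in its core unit until this check runs.
    enum class State { Speculative, Deterministic };

    struct Segment {
        State state = State::Speculative;
        bool finished = false;         // the core unit has finished executing it
        bool data_consistent = false;  // outcome of the data-consistency check
    };

    // Returns true if commit authority is handed over to the spawned thread,
    // false if it is squashed and the deterministic thread re-runs the code.
    bool commit_or_squash(Segment& spawned) {
        if (!spawned.finished) return false;   // verification only runs at thread end
        if (spawned.data_consistent) {
            spawned.state = State::Deterministic;
            return true;
        }
        return false;
    }

    int main() {
        Segment s{State::Speculative, /*finished=*/true, /*data_consistent=*/true};
        std::printf("committed: %d\n", commit_or_squash(s));
        return 0;
    }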
Disclosure of Invention
(I) Technical problem to be solved
In order to solve the above problems in the prior art, the present invention provides a multi-core-based thread-level speculative parallelism method that balances the load.
(II) Technical solution
In order to achieve the above object, the present invention provides a multi-core-based thread-level speculative parallelism method applied to a plurality of core units, each core unit executing one thread segment; the method comprises the following steps:
A1, judging whether each core unit needs to wait after it finishes executing a thread in the first state;
the thread segments in the first state have a preset first order;
A2, if waiting is needed, inserting the thread in the first state at a preset position of a verification queue, and pointing the pointer at that position of the verification queue to a storage unit;
the verification queue has a plurality of sequentially arranged positions;
the position at which a thread is inserted into the verification queue corresponds to the thread's place in the first order;
each position in the verification queue holds a preset pointer to the core unit corresponding to that position;
the verification queue is stored in the storage unit;
A3, verifying the threads in the first state, in the first order, in the storage unit or a core unit according to where the pointers in the verification queue point, to obtain a verification result.
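As a non-authoritative C++ sketch of the data structure implied by steps A1 to A3, the verification queue below keeps one position per first-state thread, in the first order; each position holds a pointer target that is preset to the corresponding core unit and is redirected to the storage unit when the thread must wait. The identifiers (PointerTarget, QueuePosition, VerificationQueue) are assumptions introduced for illustration.

    #include <cstddef>
    #include <vector>

    enum class PointerTarget { CoreUnit, StorageUnit };

    struct QueuePosition {
        int thread_id;                                   // the thread's place in the first order
        int core_id;                                     // core unit preset for this position
        PointerTarget target = PointerTarget::CoreUnit;  // where verification will be carried out
    };

    // The verification queue itself resides in the storage unit; its positions
    // are kept sequentially arranged according to the first order.
    struct VerificationQueue {
        std::vector<QueuePosition> positions;

        // Step A2: when a first-state thread has to wait, it is inserted at its
        // preset position and the pointer there is redirected to the storage unit.
        void mark_waiting(std::size_t position_index) {
            positions.at(position_index).target = PointerTarget::StorageUnit;
        }
    };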
Preferably, step A1 comprises:
A1-1, obtaining the time for which a core unit executes the thread segment in the second state and the time for which each core unit executes its thread in the first state;
A1-2, based on the time for which each core unit executes its thread in the first state, obtaining, at the moment each such thread finishes, the state of that thread and the state of the adjacent previous thread in the first order;
A1-3, determining whether each core unit needs to wait after it finishes executing its thread in the first state, based on the time for which the core unit executes the thread segment in the second state, the time for which each core unit executes its thread in the first state, and the states, at the moment each first-state thread finishes, of that thread and of the adjacent previous thread in the first order.
Preferably, step A1-3 comprises:
when a core unit finishes executing its thread in the first state, if the state of the adjacent previous thread in the first order is the first state, determining that the core unit needs to wait.
Preferably, step A1-3 comprises:
when a core unit finishes executing its thread in the first state, if the state of the adjacent previous thread in the first order is the second state and the time for which the core unit executed its thread in the first state is shorter than the executed time of that adjacent previous thread, determining that the core unit needs to wait after its thread in the first state finishes.
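The two waiting conditions above can be summarised in a small C++ decision function. This is only a sketch under the assumption that "executed time" means the time the adjacent previous thread has been running when the current thread finishes; the names ThreadInfo and needs_to_wait are hypothetical.

    enum class ThreadState { Speculative /* first state */, Deterministic /* second state */ };

    struct ThreadInfo {
        ThreadState state;
        double executed_time;  // for a finished thread, its total execution time (arbitrary units)
    };

    // Called when a core unit finishes its first-state thread `cur`; `prev` is
    // the adjacent previous thread in the first order.
    bool needs_to_wait(const ThreadInfo& cur, const ThreadInfo& prev) {
        // Condition 1: the previous thread is still in the first (speculative)
        // state, so it cannot verify `cur` yet.
        if (prev.state == ThreadState::Speculative) return true;
        // Condition 2: the previous thread is in the second (deterministic) state
        // but `cur` finished in less time than `prev` has executed for, i.e. the
        // predecessor has not yet reached the point where it can verify `cur`.
        return cur.executed_time < prev.executed_time;
    }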
Preferably, step A3 comprises:
if the pointer at a sequentially arranged position of the verification queue points to a core unit, verifying the thread corresponding to that core unit in the core unit to obtain a verification result.
Preferably, step A3 comprises:
if the pointer at a sequentially arranged position of the verification queue points to the storage unit, verifying the thread corresponding to that position in the storage unit to obtain a verification result.
Preferably, the method further comprises, before step A1, the following steps:
B1, determining, according to a preset program to be executed, a plurality of thread segments corresponding to the order of the program to be executed;
the plurality of thread segments comprise a plurality of thread segments in the first state and one thread segment in the second state;
B2, based on the thread in the second state, spawning the plurality of thread segments in the first state using a preset out-of-order spawning strategy.
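A minimal C++ sketch of steps B1 and B2 follows. The patent does not detail the out-of-order spawning strategy, so dispatching each speculative segment to whichever core unit is currently free is only an assumed stand-in; partition and spawn_out_of_order are hypothetical names.

    #include <utility>
    #include <vector>

    enum class SegState { Speculative /* first state */, Deterministic /* second state */ };

    struct ThreadSegment {
        int order;       // place in the first order (matches program order)
        SegState state;
    };

    // B1: split the program to be executed into segments in program order; the
    // first segment runs in the second (deterministic) state, the rest in the
    // first (speculative) state.
    std::vector<ThreadSegment> partition(int num_segments) {
        std::vector<ThreadSegment> segments;
        for (int i = 0; i < num_segments; ++i)
            segments.push_back({i, i == 0 ? SegState::Deterministic : SegState::Speculative});
        return segments;
    }

    // B2: spawn the speculative segments onto core units. "Out of order" is
    // modelled here simply as taking the next free core unit; the real preset
    // strategy is not specified in the text.
    std::vector<std::pair<int, int>> spawn_out_of_order(const std::vector<ThreadSegment>& segments,
                                                        std::vector<int> free_cores) {
        std::vector<std::pair<int, int>> plan;  // (core unit id, segment order)
        for (const ThreadSegment& s : segments) {
            if (s.state != SegState::Speculative || free_cores.empty()) continue;
            plan.emplace_back(free_cores.back(), s.order);
            free_cores.pop_back();
        }
        return plan;
    }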
Preferably, the preset first order corresponds to the order of the program to be executed.
Preferably,
the first state is a speculative state;
the second state is a deterministic state.
(III) Advantageous effects
The beneficial effects of the invention are as follows: by means of the verification queue, verification is moved from the core units to the storage unit, which eliminates load imbalance among threads and increases the effective execution rate of the core units.
Drawings
FIG. 1 is a diagram illustrating parallel execution in the prior art;
FIG. 2 is a flowchart of a method for thread-level speculative parallelism based on multiple cores according to the present invention;
FIG. 3 is a diagram illustrating parallel execution according to an embodiment of the present invention.
[Description of reference numerals]
1: the first thread segment in the first order;
2: the second thread segment in the first order;
3: the third thread segment in the first order;
4: the fourth thread segment in the first order;
5: the fifth thread segment in the first order;
6: the sixth thread segment in the first order.
Detailed Description
For a better understanding of the present invention, reference will now be made in detail to the present embodiments of the invention, which are illustrated in the accompanying drawings.
Embodiment 1
Referring to FIG. 2, the multi-core-based thread-level speculative parallelism method of this embodiment is applied to a plurality of core units, each executing one thread segment; the method comprises the following steps.
step 1, in this embodiment, according to a preset program to be executed, a plurality of sections of threads corresponding to the sequence of the program to be executed are determined.
The multi-segment threads in the embodiment comprise a plurality of segments of threads in a first state and a segment of threads in a second state. The first state in this embodiment is a presumptive state, and the second state is a definitive state.
And 2, based on the thread in the second state, adopting a preset out-of-order excitation strategy to excite the multiple sections of threads in the first state. The thread is launched and executed by the core unit corresponding to the thread.
Step 3: judging whether each core unit needs to wait after it finishes executing its thread in the first state.
The thread segments in the first state have a preset first order.
Preferably, the preset first order corresponds to the order of the program to be executed.
Preferably, step 3 in this embodiment comprises:
obtaining the time for which a core unit executes the thread segment in the second state and the time for which each core unit executes its thread in the first state;
based on the time for which each core unit executes its thread in the first state, obtaining, at the moment each such thread finishes, the state of that thread and the state of the adjacent previous thread in the first order;
determining whether each core unit needs to wait after it finishes executing its thread in the first state, based on those times and on the states, at that moment, of the thread and of the adjacent previous thread in the first order.
In this embodiment, when a core unit finishes executing its thread in the first state, if the state of the adjacent previous thread in the first order is the first state, it is determined that the core unit needs to wait.
In this embodiment, when a core unit finishes executing its thread in the first state, if the state of the adjacent previous thread in the first order is the second state and the time for which the core unit executed its thread is shorter than the executed time of that adjacent previous thread, it is determined that the core unit needs to wait after its thread in the first state finishes.
Step 4: if waiting is needed, the thread in the first state is inserted at a preset position of the verification queue, and the pointer at that position is pointed to the storage unit.
The verification queue has a plurality of sequentially arranged positions.
The position at which a thread is inserted into the verification queue corresponds to the thread's place in the first order.
Each position in the verification queue holds a preset pointer to the core unit corresponding to that position.
The verification queue is stored in the storage unit.
Step 5: the threads in the first state are verified, in the first order, in the storage unit or a core unit according to where the pointers in the verification queue point, and a verification result is obtained.
Preferably, step 5 comprises:
if the pointer at a sequentially arranged position of the verification queue points to a core unit, verifying the thread corresponding to that core unit in the core unit to obtain a verification result;
if the pointer at a sequentially arranged position of the verification queue points to the storage unit, verifying the thread corresponding to that position in the storage unit to obtain a verification result.
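As an illustrative C++ sketch of step 5, the loop below walks the verification-queue positions in the first order and performs the check either on the core unit or in the storage unit, depending on where each pointer points. The two verify_* functions are placeholders for whatever data-consistency check the system actually performs; all identifiers are assumptions.

    #include <cstdio>
    #include <vector>

    enum class PointerTarget { CoreUnit, StorageUnit };

    struct QueuePosition {
        int thread_id;
        int core_id;
        PointerTarget target;
    };

    // Placeholder consistency checks; in this sketch they always succeed.
    bool verify_in_core_unit(int /*core_id*/, int /*thread_id*/) { return true; }
    bool verify_in_storage_unit(int /*thread_id*/) { return true; }

    // Step 5: positions are already kept in the first order, so iterating the
    // queue verifies the speculative threads in that order.
    void verify_in_first_order(const std::vector<QueuePosition>& queue) {
        for (const QueuePosition& pos : queue) {
            const bool ok = (pos.target == PointerTarget::CoreUnit)
                                ? verify_in_core_unit(pos.core_id, pos.thread_id)
                                : verify_in_storage_unit(pos.thread_id);
            std::printf("thread %d verified: %s\n", pos.thread_id, ok ? "ok" : "failed");
        }
    }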
In this embodiment, verification is transferred from the core unit to the storage unit through the verification queue, so a thread that finishes early does not have to wait in its core unit for verification, which improves efficiency.
Embodiment 2
Referring to FIG. 2 and FIG. 3, the multi-core-based thread-level speculative parallelism method of this embodiment specifically comprises: determining six thread segments having a first order based on the program data to be executed. In this embodiment the threads are numbered according to their places in the first order, and the numbers are placed, in sequence, in a sequence queue; the sequence queue is stored in a preset data structure.
The six thread segments comprise one deterministic-state thread and five speculative-state threads, and the first order corresponds to the order of the program. In this embodiment the first thread segment in the first order is numbered 1 and denoted thread 1; the second is numbered 2 and denoted thread 2; the third is numbered 3 and denoted thread 3; the fourth is numbered 4 and denoted thread 4; the fifth is numbered 5 and denoted thread 5; and the sixth is numbered 6 and denoted thread 6.
In this embodiment the first thread segment in the first order is the deterministic-state thread, that is, thread 1 is the deterministic-state thread, while threads 2, 3, 4, 5 and 6 are all speculative-state threads. Each thread is executed by its own core unit, and the five speculative-state threads are spawned onto the core units in an out-of-order firing order. Once a speculative-state thread has been verified, its state becomes the deterministic state.
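For concreteness, a C++ sketch of this setup might look as follows: six thread segments numbered 1 to 6 in the first order, thread 1 in the deterministic state, threads 2 to 6 in the speculative state, and the numbers placed in order into a sequence queue held in a preset data structure. The identifiers are illustrative assumptions, not taken from the patent.

    #include <deque>
    #include <vector>

    enum class State { Speculative, Deterministic };

    struct NumberedThread {
        int number;   // 1..6, the thread's place in the first order
        State state;
    };

    int main() {
        std::vector<NumberedThread> threads;
        std::deque<int> sequence_queue;  // the sequence queue of thread numbers

        for (int n = 1; n <= 6; ++n) {
            threads.push_back({n, n == 1 ? State::Deterministic : State::Speculative});
            sequence_queue.push_back(n);  // numbers are placed in the first order
        }
        return 0;
    }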
Further comprising the steps of:
A1: obtaining, for each of the six thread segments, its executed time.
In this embodiment, referring to FIG. 3, the executed times of threads 1, 2, 3, 4, 5 and 6, that is, the times during which the program data in each thread is computed, are obtained.
A2: based on the executed time of each of the six thread segments, when any speculative-state thread finishes executing, that is, when any one of threads 2, 3, 4, 5 and 6 finishes, the state of that thread at the end of its execution and the state of the adjacent previous thread in the first order are obtained. As can be seen from FIG. 3 in this embodiment: after thread 2 finishes, the states of threads 1 and 2 at that moment are obtained; after thread 3 finishes, the states of threads 2 and 3 are obtained; after thread 4 finishes, the states of threads 3 and 4 are obtained; after thread 5 finishes, the states of threads 4 and 5 are obtained; and after thread 6 finishes, the states of threads 5 and 6 are obtained.
A3: for each speculative-state thread, based on its state at the end of its execution, the state of the adjacent previous thread in the first order, and the executed times of the six thread segments, determining whether the speculative-state thread needs to wait to be verified.
Referring to FIG. 3, if the state of the adjacent previous thread in the first order is the speculative state, it is determined that the thread needs to wait to be verified.
For example, if the state of thread 3 is still the speculative state when thread 4 finishes executing, it is determined that thread 4 needs to wait to be verified.
If the state of the adjacent previous thread in the first order is the deterministic state and the executed time of the thread is shorter than the executed time of that adjacent previous thread, it is determined that the thread needs to wait to be verified.
For example, if the state of thread 3 is the deterministic state when thread 4 finishes executing but thread 3 has not yet finished, it is determined that thread 4 needs to wait to be verified.
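Reusing the needs_to_wait sketch from the disclosure above, the two thread-3 / thread-4 scenarios can be reproduced as a small, self-contained C++ usage example; the numeric times are invented purely for illustration.

    #include <cstdio>

    enum class ThreadState { Speculative, Deterministic };

    struct ThreadInfo {
        ThreadState state;
        double executed_time;  // arbitrary units
    };

    bool needs_to_wait(const ThreadInfo& cur, const ThreadInfo& prev) {
        if (prev.state == ThreadState::Speculative) return true;  // predecessor not yet deterministic
        return cur.executed_time < prev.executed_time;            // predecessor deterministic but still running
    }

    int main() {
        ThreadInfo thread4{ThreadState::Speculative, 3.0};        // thread 4 has just finished
        ThreadInfo thread3_spec{ThreadState::Speculative, 5.0};   // scenario 1: thread 3 still speculative
        ThreadInfo thread3_det{ThreadState::Deterministic, 5.0};  // scenario 2: thread 3 deterministic, not yet finished
        std::printf("scenario 1: thread 4 waits? %d\n", needs_to_wait(thread4, thread3_spec));  // prints 1
        std::printf("scenario 2: thread 4 waits? %d\n", needs_to_wait(thread4, thread3_det));   // prints 1
        return 0;
    }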
A4: if the thread needs to wait to be verified, its data is inserted into the verification queue in the preset data structure at the position corresponding to the thread's place in the first order, and the pointer at that position is pointed to the storage unit;
the verification queue has a plurality of sequentially arranged positions;
each position of the verification queue holds a pointer to the core unit corresponding to that position;
the verification queue is stored in the storage unit.
In this embodiment, the method further comprises the following step:
A5: according to the pointers at the sequentially arranged positions of the verification queue, each speculative thread is verified, in the first order, in the storage unit or in a core unit to obtain a verification result.
Referring to FIG. 3, step A5 in this embodiment specifically comprises:
if the pointer at a sequentially arranged position of the verification queue points to a core unit, verifying the thread corresponding to that core unit in the core unit to obtain a verification result;
if the pointer at a sequentially arranged position of the verification queue points to the storage unit, verifying the thread corresponding to that position in the storage unit to obtain a verification result.
In this embodiment, the verification queue transfers verification from the core units to the storage unit, which eliminates load imbalance among threads and increases the effective execution rate of the core units.
The technical principles of the present invention have been described above in connection with specific embodiments. These descriptions are intended only to explain the principles of the present invention and should not be construed as limiting its scope in any way. Based on the explanations herein, those skilled in the art will be able to conceive of other embodiments of the present invention without inventive effort, and such embodiments shall fall within the scope of the present invention.

Claims (9)

1. A multi-core-based thread-level speculative parallelization method, characterized in that it is applied to a plurality of core units, each core unit executing one thread segment, the method comprising the following steps:
A1, judging whether each core unit needs to wait after it finishes executing a thread in a first state;
the thread segments in the first state have a preset first order;
A2, if waiting is needed, inserting the thread in the first state at a preset position of a verification queue, and pointing the pointer at that position of the verification queue to a storage unit;
the verification queue has a plurality of sequentially arranged positions;
the position at which a thread is inserted into the verification queue corresponds to the thread's place in the first order;
each position in the verification queue holds a preset pointer to the core unit corresponding to that position;
the verification queue is stored in the storage unit;
A3, verifying the threads in the first state, in the first order, in the storage unit or a core unit according to where the pointers in the verification queue point, to obtain a verification result.
2. The method according to claim 1, wherein step A1 comprises:
A1-1, obtaining the time for which a core unit executes the thread segment in a second state and the time for which each core unit executes its thread in the first state;
A1-2, based on the time for which each core unit executes its thread in the first state, obtaining, at the moment each such thread finishes, the state of that thread and the state of the adjacent previous thread in the first order;
A1-3, determining whether each core unit needs to wait after it finishes executing its thread in the first state, based on the time for which the core unit executes the thread segment in the second state, the time for which each core unit executes its thread in the first state, and the states, at the moment each first-state thread finishes, of that thread and of the adjacent previous thread in the first order.
3. The method according to claim 2, wherein step A1-3 comprises:
when a core unit finishes executing its thread in the first state, if the state of the adjacent previous thread in the first order is the first state, determining that the core unit needs to wait.
4. The method according to claim 3, wherein step A1-3 comprises:
when a core unit finishes executing its thread in the first state, if the state of the adjacent previous thread in the first order is the second state and the time for which the core unit executed its thread in the first state is shorter than the executed time of that adjacent previous thread, determining that the core unit needs to wait after its thread in the first state finishes.
5. The method according to claim 1, wherein step A3 comprises:
if the pointer at a sequentially arranged position of the verification queue points to a core unit, verifying the thread corresponding to that core unit in the core unit to obtain a verification result.
6. The method according to claim 1, wherein step A3 comprises:
if the pointer at a sequentially arranged position of the verification queue points to the storage unit, verifying the thread corresponding to that position in the storage unit to obtain a verification result.
7. The method according to claim 1, further comprising, before step A1, the following steps:
B1, determining, according to a preset program to be executed, a plurality of thread segments corresponding to the order of the program to be executed;
the plurality of thread segments comprise a plurality of thread segments in the first state and thread segments in the second state;
B2, based on the thread in the second state, spawning the plurality of thread segments in the first state using a preset out-of-order spawning strategy.
8. The method of claim 7, wherein the preset first order corresponds to the order of the program to be executed.
9. The method of claim 2, wherein
the first state is a speculative state;
the second state is a deterministic state.
Application CN202010054734.3A, filed 2020-01-17 (priority date 2020-01-17): Multi-core-based thread-level speculative parallelism method, granted as CN111240748B, status Active.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010054734.3A (granted as CN111240748B) | 2020-01-17 | 2020-01-17 | Multi-core-based thread-level speculative parallelism method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010054734.3A (granted as CN111240748B) | 2020-01-17 | 2020-01-17 | Multi-core-based thread-level speculative parallelism method

Publications (2)

Publication Number | Publication Date
CN111240748A (en) | 2020-06-05
CN111240748B (en) | 2023-04-07

Family

ID=70879575

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010054734.3A (Active, granted as CN111240748B) | Multi-core-based thread-level speculative parallelism method | 2020-01-17 | 2020-01-17

Country Status (1)

Country Link
CN (1) CN111240748B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010060084A2 (en) * 2008-11-24 2010-05-27 Intel Corporation Systems, methods, and apparatuses to decompose a sequential program into multiple threads, execute said threads, and reconstruct the sequential execution
CN110543395A (en) * 2019-08-30 2019-12-06 北京中科寒武纪科技有限公司 verification method, verification device and related product

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8443375B2 (en) * 2009-12-14 2013-05-14 Verisign, Inc. Lockless queues

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010060084A2 (en) * 2008-11-24 2010-05-27 Intel Corporation Systems, methods, and apparatuses to decompose a sequential program into multiple threads, execute said threads, and reconstruct the sequential execution
CN110543395A (en) * 2019-08-30 2019-12-06 北京中科寒武纪科技有限公司 verification method, verification device and related product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Extension and Implementation of the OpenMP Task Scheduling Mechanism for DSWP Parallelism (面向DSWP并行的OpenMP任务调度机制的扩展与实现); Liu Xiaoxian et al.; Computer Science (计算机科学); 2013-09-15, No. 09; full text *

Also Published As

Publication number Publication date
CN111240748A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
US8555039B2 (en) System and method for using a local condition code register for accelerating conditional instruction execution in a pipeline processor
EP0106671B1 (en) Prefetching instructions in computer
US6732297B2 (en) Pipeline testing method, pipeline testing system, pipeline test instruction generation method and storage method
US20050251655A1 (en) Multi-scalar extension for SIMD instruction set processors
CN109101276B (en) Method for executing instruction in CPU
TW201030606A (en) Optimizing performance of instructions based on sequence detection or information associated with the instructions
JP2002508564A (en) Processor with multiple program counters and trace buffers outside execution pipeline
JP2014532221A (en) Apparatus and method for providing interaction service for infants and system using the same
CN111240748B (en) Multi-core-based thread-level speculative parallelism method
JP4334598B1 (en) Information processing apparatus and error correction method
EP3329364B1 (en) Data processing
JP4610218B2 (en) Information processing device
US8601488B2 (en) Controlling the task switch timing of a multitask system
CN116324718A (en) Processor with multiple fetch and decode pipelines
CN111240747B (en) Instruction generation method and device, test framework and electronic equipment
US8656393B2 (en) Multi-core system
US20110197049A1 (en) Two pass test case generation using self-modifying instruction replacement
CN109634666B (en) Method for fusing BTBs (Branch target bus) under prefetching mechanism
CN112445528B (en) Result self-checking instruction sequence filling method based on pipeline constraint
CN113918225A (en) Instruction prediction method, instruction data processing apparatus, processor, and storage medium
US20020129292A1 (en) Clock control method and information processing device employing the clock control method
EP1591886A2 (en) Register management in a simulation environment
CN110347400B (en) Compile acceleration method, routing unit and cache
JP6617511B2 (en) Parallelization method, parallelization tool, in-vehicle device
CN111738710B (en) Method and processor for resource deduction of execution of intelligent contract

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant