CN110557261A - Node data processing method, device and system - Google Patents

Node data processing method, device and system

Info

Publication number
CN110557261A
CN110557261A
Authority
CN
China
Prior art keywords
data
parallel
accelerated
input data
block header
Prior art date
Legal status
Pending
Application number
CN201910824956.6A
Other languages
Chinese (zh)
Inventor
胡均浩
李振中
葛维
唐平
石玲宁
Current Assignee
Ziguang Zhanrui (chongqing) Technology Co Ltd
Unisoc Chongqing Technology Co Ltd
Original Assignee
Ziguang Zhanrui (chongqing) Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Ziguang Zhanrui (Chongqing) Technology Co Ltd
Priority to CN201910824956.6A
Publication of CN110557261A
Legal status: Pending


Classifications

    • G06Q40/04: Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • H04L9/3236: Cryptographic mechanisms including means for message authentication using cryptographic hash functions
    • H04L9/3239: using non-keyed hash functions, e.g. modification detection codes [MDCs], MD5, SHA or RIPEMD
    • H04L2209/125: Parallelization or pipelining, e.g. for accelerating processing of cryptographic operations
    • H04L9/50: using hash chains, e.g. blockchains or hash trees


Abstract

The disclosure relates to a node data processing method, device, and system. The method includes: inputting the second fields of N different pieces of block header data into N accelerated processor systems in parallel, wherein each accelerated processor system receives one piece of block header data and one group of first input data midstate, each group of first input data comprises K different first input data, N ≥ 2, and K ≥ 2; each accelerated processor system uses its M accelerated processors to perform inter-partition parallel processing of the random number in the second field of the block header data it receives, wherein M ≥ 2; and the K parallel branches in each accelerated processor process the K first input data midstate in parallel. The implementations provided by the embodiments of the present disclosure improve data processing efficiency and reduce storage space, thereby reducing power consumption.

Description

Node data processing method, device and system
Technical Field
The present disclosure relates to the field of computer data processing, and in particular, to a method, an apparatus, and a system for processing node data.
Background
In the bitcoin mining mechanism, a hash algorithm is used to create a small digital 'fingerprint' from arbitrary data. A hash function compresses a message into a digest of fixed format, so the amount of data becomes small. The function scrambles the data and produces a fingerprint called a hash value. The hash value is typically represented as a short string of seemingly random letters and digits. For a message of any length, SHA-256 generates a 256-bit hash value, called the message digest.
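The fixed-length property described above can be sketched with Python's standard hashlib; this is an illustrative aside, not part of the patent:

```python
import hashlib

# SHA-256 maps input of any length to a fixed 256-bit (32-byte) digest.
short_digest = hashlib.sha256(b"a").hexdigest()
long_digest = hashlib.sha256(b"a" * 1_000_000).hexdigest()

# 64 hex characters = 32 bytes = 256 bits, regardless of input size.
assert len(short_digest) == 64
assert len(long_digest) == 64
```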
In the prior art, a mining pool collects bitcoin transactions and must run a full node, and the miners' work is assigned by the pool. The pool parses the full node's data and periodically sends miners the block header data, except for the root node hash value Merkle Root and the random number Nonce. After receiving the information sent by the pool, a miner combines it with a Merkle Root and a continually changing Nonce to form a complete block header, and computes the hash value of the complete header. The miner traverses the Nonce and submits a result only when it satisfies the pool's mining difficulty. The pool verifies each submission promptly; if it meets the requirement, the labor contribution is recorded as a Share, and the pool also checks whether the whole-network difficulty is satisfied. If the whole-network difficulty is satisfied, the result is broadcast and published, so a new block is mined, and each miner's share of the coins is distributed according to the recorded number of Shares.
However, the prior art has two shortcomings. On the one hand, no mining-machine implementation supports the ASICBoost optimization. On the other hand, the prior-art operation structure yields low hash rate, requires large storage space, and consumes much power.
Disclosure of Invention
The present disclosure provides a node data processing method, device, and system that improve data processing efficiency and reduce storage space, thereby reducing power consumption.
According to a first aspect of the present disclosure, there is provided a node data processing method, the method including:
inputting the second fields of N different pieces of block header data into N accelerated processor systems in parallel, wherein each accelerated processor system receives one piece of block header data and one group of first input data midstate, each group of first input data comprises K different first input data, N ≥ 2, and K ≥ 2;
each accelerated processor system uses M accelerated processors in that system to perform inter-partition parallel processing of the random number in the second field of the block header data it receives, wherein M ≥ 2;
wherein the K parallel branches in each accelerated processor process the K first input data midstate in parallel.
In a possible implementation, the first input data midstate of each parallel branch is different, while the second input data message is the same.
In a possible implementation, the root node hash values merkle root of the N pieces of block header data are different.
In a possible implementation, using the M accelerated processors in the accelerated processor system to perform inter-partition parallel processing of the random number in the second field of the received block header data includes:
for the random number segment of the received block header data, using the M accelerated processors to perform in parallel, over a plurality of different parallel intervals, the accumulation iteration from a starting-position random number nonce_sta to an ending-position random number nonce_fin.
In a possible implementation, the plurality of different parallel intervals comprise iteration intervals obtained by dividing the iteration interval from the initial starting-position random number to the initial ending-position random number.
According to a second aspect of the present disclosure, there is provided a node data processing apparatus, the apparatus comprising:
N accelerated processor systems configured to receive, in parallel, the second fields of N different pieces of block header data, wherein each accelerated processor system receives one piece of block header data and one group of first input data midstate, each group of first input data comprises K different first input data, N ≥ 2, and K ≥ 2;
each accelerated processor system uses M accelerated processors in that system to perform inter-partition parallel processing of the random number in the second field of the block header data it receives, wherein M ≥ 2;
wherein the K parallel branches in each accelerated processor process the K first input data midstate in parallel.
In a possible implementation, the first input data midstate of each parallel branch is different, while the second input data message is the same.
In a possible implementation, the root node hash values merkle root of the N pieces of block header data are different.
In a possible implementation, using the M accelerated processors in the accelerated processor system to perform inter-partition parallel processing of the random number in the second field of the received block header data includes:
for the random number segment of the received block header data, using the M accelerated processors to perform in parallel, over a plurality of different parallel intervals, the accumulation iteration from a starting-position random number nonce_sta to an ending-position random number nonce_fin.
In a possible implementation, the plurality of different parallel intervals comprise iteration intervals obtained by dividing the iteration interval from the initial starting-position random number to the initial ending-position random number.
In a possible implementation, the apparatus further includes:
an input data storage unit for storing the input data;
a control unit for allocating the operation intervals of the M accelerated processors according to the divided intervals;
an output data storage unit for storing hit-data digest information;
and an interface unit for reading and writing data and configuring parameters.
According to a third aspect of the present disclosure, there is provided a node data processing system comprising a plurality of processors and a memory for storing processor-executable instructions, wherein the instructions, when executed by the processors, implement the above method.
According to the implementations provided by embodiments of the above aspects of the disclosure, ASICBoost mining-machine optimization can be realized through the parallel accelerated processor systems apu_sys. Moreover, the multiple accelerated processor systems apu_sys process multiple pieces of block header data in parallel; the multiple accelerated processors within each system process the block header data received by their system in parallel; and the multiple parallel branches within each accelerated processor process the multiple first input data midstate in parallel. This three-level parallelism effectively increases hash rate while reducing storage space and power consumption.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a node data processing method according to an embodiment of the present disclosure.
Fig. 2 is a schematic block diagram of a node data processing apparatus according to an embodiment of the present disclosure.
Fig. 3 is a schematic structural diagram of a node data processing system according to an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram illustrating parallel processing of N block header data according to an embodiment of the present disclosure.
Fig. 5 shows an algorithm flow diagram of the SHA-256 algorithm according to an embodiment of the present disclosure.
Fig. 6 is a data flow diagram illustrating parallel processing of block header data according to an embodiment of the present disclosure.
Fig. 7 shows a data flow diagram for parallel processing of the first input data according to an embodiment of the present disclosure.
Fig. 8 shows a block diagram of an apparatus 800 for performing the above-described method according to an example embodiment.
Fig. 9 is a block diagram illustrating an apparatus 1900 for performing the above-described method according to an example embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
A hashing algorithm, also known as a hash function, maps a binary value of arbitrary length to a shorter, fixed-length binary value; this small binary value is called a hash value.
There are many different hash algorithms; typical ones include MD2, MD4, MD5, SHA-1, SHA-2, SHA-256, SHA-512, SHA-3, RIPEMD-160, and the SCRYPT algorithm (used by Litecoin and Dogecoin), among others. Bitcoin mainly uses the SHA-256 algorithm; the RIPEMD-160 algorithm is additionally used only when generating a coin address from a public key, and SHA-256 is generally used wherever a hash is needed elsewhere. Its characteristic is that any string is transformed into 256 seemingly random bits of 0 or 1.
The bottom layer of the mining machine continually changes the original data, repeatedly computes the hash value under the SHA-256 algorithm, and stops computing when certain conditions are met.
The original data, i.e., the 80-byte block header, is fed through the SHA-256 algorithm. These 80 bytes are divided into the following six parts:
1) Version number (Version): 4 bytes; varies over time with protocol voting;
2) Previous block hash (Prev Hash): 32 bytes; changes with each new block;
3) Transaction tree root (MerkleRoot): 32 bytes; varies with the transactions;
4) TimeStamp: 4 bytes; follows the current time with slight variation;
5) Current difficulty value (Bits): 4 bytes; adjusted roughly once every two weeks;
6) Random number (Nonce): 4 bytes; can change at any time.
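The six-field layout above can be sketched in Python. The field values below are placeholders, and the little-endian integer packing follows the Bitcoin wire format; this is a sketch for illustration, not the patent's implementation:

```python
import hashlib
import struct

def block_header(version, prev_hash, merkle_root, timestamp, bits, nonce):
    """Pack the six block-header fields into the 80-byte layout above
    (integer fields little-endian, as in the Bitcoin wire format)."""
    return (struct.pack("<I", version)      # 1) Version: 4 bytes
            + prev_hash                     # 2) previous block hash: 32 bytes
            + merkle_root                   # 3) MerkleRoot: 32 bytes
            + struct.pack("<I", timestamp)  # 4) TimeStamp: 4 bytes
            + struct.pack("<I", bits)       # 5) difficulty Bits: 4 bytes
            + struct.pack("<I", nonce))     # 6) Nonce: 4 bytes

# Placeholder values just to exercise the layout.
header = block_header(2, b"\x00" * 32, b"\x11" * 32, 1_567_000_000, 0x1D00FFFF, 0)
assert len(header) == 80

# Mining computes the double SHA-256 of the 80-byte header.
digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
```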
The prior art exploits the internal computation rule of the SHA-256 algorithm: data is first grouped into 64-byte blocks, and each block into 4-byte words. The ASICBoost optimization swaps transaction positions, without modifying the coinbase, to quickly obtain multiple Merkle Roots whose last 4 bytes are identical, so that hardware can cache and reuse the first-stage SHA-256 computation over the first block of the header, i.e., speed up SHA256(SHA256(blockheader)).
With the ASICBoost-optimized SHA-256 scheme, the pool reorders transactions so that a large number of Merkle Roots can be generated quickly; from these, a set of Merkle Roots whose last 4 bytes are identical is selected and issued to the miners. This is primarily the mining pool's operation, so a mining machine that supports ASICBoost requires the pool's cooperation.
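The pool-side screening described above amounts to bucketing candidate Merkle roots by their trailing bytes. A minimal sketch, with a stand-in hash instead of a real Merkle-tree computation, and bucketing by the last byte (rather than the last 4 bytes) so a small demo is guaranteed to produce collisions:

```python
import hashlib
from collections import defaultdict
from itertools import permutations

def candidate_root(tx_order):
    # Stand-in for a real Merkle-root computation over an ordered
    # transaction list; only the bucketing idea matters here.
    return hashlib.sha256(b"".join(tx_order)).digest()

txs = [bytes([i]) * 32 for i in range(8)]   # 8 dummy 32-byte transactions

buckets = defaultdict(list)
for order in permutations(txs, 3):          # 8*7*6 = 336 candidate orderings
    root = candidate_root(order)
    buckets[root[-1:]].append(root)         # bucket by trailing byte (demo-sized)

# Buckets holding more than one root play the role of the "same trailing
# bytes" Merkle roots that the pool would issue to ASICBoost miners.
# 336 roots into at most 256 one-byte buckets must collide somewhere.
collisions = {tail: roots for tail, roots in buckets.items() if len(roots) > 1}
assert collisions
```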
Fig. 1 is a flowchart of a node data processing method according to an embodiment of the present disclosure. Specifically, as shown in fig. 1, the method may include:
S110: inputting the second fields of N different pieces of block header data into N accelerated processor systems in parallel, wherein each accelerated processor system receives one piece of block header data and one group of first input data midstate, each group of first input data comprises K different first input data, N ≥ 2, and K ≥ 2.
The first field of the block header data generates the first input data midstate, and the second field is the data screened so that the first field data changes while the root node hash value merkle root in the second field does not change.
In an embodiment of the present disclosure, the root node hash values merkle roots of the N block header data are different.
Fig. 4 is a schematic diagram illustrating parallel processing of N pieces of block header data according to an embodiment of the present disclosure. As shown in fig. 4, N accelerated processor systems APU_SYS process N groups of block header data in parallel. Each accelerated processor system APU_SYS is fed a different root node hash value merkle root, together with the group of first input data generated from the block header corresponding to that merkle root; each group contains K different first input data Midstate. For example, the input of APU_SYS_0 is the K different first input data Midstate_0_0, Midstate_0_1 … Midstate_0_K plus the shared second input data Message_0, and the other accelerated processor systems APU_SYS_1 … APU_SYS_N likewise each receive a group of K different first input data Midstate and the corresponding second input data Message.
s120: for each acceleration processor system, utilizing M acceleration processors in the acceleration processing system to perform parallel processing on the block header data and the first input data midstate received by the acceleration processing system, wherein M is more than or equal to 2; wherein the K parallel branches in each turbo processor process the K first input data midstate in parallel.
Wherein, the input data processed by each accelerator is k sets of midstate and the second field data of the block head after the partition. The K parallel branches are processing K sets of midates, the second field of the block header data of each parallel branch being the same.
In an embodiment of the present disclosure, using the M accelerated processors in the accelerated processor system to perform inter-partition parallel processing of the random number in the second field of the received block header data may include:
for the random number segment of the received block header data, using the M accelerated processors to perform in parallel, over a plurality of different parallel intervals, the accumulation iteration from a starting-position random number nonce_sta to an ending-position random number nonce_fin.
In an embodiment of the present disclosure, the plurality of different parallel intervals are obtained by dividing the iteration interval from the initial starting-position random number to the initial ending-position random number.
Specifically, the parallel intervals of the parallel branches may be divided as follows:
1. Input the initial starting-position random number nonce_sta and the initial ending-position random number nonce_fin; the overall data interval to be iterated is offset = initial nonce_fin - initial nonce_sta;
2. According to the parallelism M of the M accelerated processors (i.e., the value of M, the number of accelerated processors), divide the offset interval equally into the parallel intervals (nonce_sta_0, nonce_fin_0) … (nonce_sta_M, nonce_fin_M), one per accelerated processor; each accelerated processor then computes the nonce accumulation iterations of its own parallel interval in parallel.
For example, suppose the initial nonce_sta is 0, the initial nonce_fin is 99, and the number of accelerated processors is 4, i.e., M = 4. Four parallel intervals are divided: accelerated processor 1 processes interval 1 (nonce_sta_0 = 0, nonce_fin_0 = 24), accelerated processor 2 processes interval 2 (nonce_sta_1 = 25, nonce_fin_1 = 49), accelerated processor 3 processes interval 3 (nonce_sta_2 = 50, nonce_fin_2 = 74), and accelerated processor 4 processes interval 4 (nonce_sta_3 = 75, nonce_fin_3 = 99). Each accelerated processor iterates from nonce_sta to nonce_fin over its assigned parallel interval, i.e., iterates over the random number segment nonce. Because the input data of the first-stage hash operation is identical except for the nonce value, the iterations can run in parallel, hit fields can be found faster, and the hash rate of the mining machine is improved; the time consumed by iterative computation is reduced, and data processing efficiency improves.
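The interval division in the worked example above can be sketched as follows; `split_nonce_range` is a hypothetical helper, not from the patent:

```python
def split_nonce_range(nonce_sta, nonce_fin, m):
    """Divide the inclusive range [nonce_sta, nonce_fin] into m near-equal
    parallel intervals, one per accelerated processor."""
    total = nonce_fin - nonce_sta + 1
    size, extra = divmod(total, m)
    intervals, start = [], nonce_sta
    for i in range(m):
        # The first `extra` intervals absorb one leftover nonce each.
        end = start + size - 1 + (1 if i < extra else 0)
        intervals.append((start, end))
        start = end + 1
    return intervals

# The worked example above: nonces 0..99 over M = 4 processors.
print(split_nonce_range(0, 99, 4))
# → [(0, 24), (25, 49), (50, 74), (75, 99)]
```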
Of course, the above configuration is only exemplary. In other embodiments of the present disclosure, the specific parallel-processing allocation may follow the actual number of accelerated processors and the overall data interval offset of the initial random number nonce that needs to be iterated; the present disclosure does not limit this.
In the above embodiments, the accumulation iterations from nonce_sta to nonce_fin in the different parallel intervals may all be processed with the SHA-256 mining algorithm. Fig. 5 is a schematic flow diagram of an embodiment of the SHA-256 algorithm provided in the present disclosure. Specifically, as shown in fig. 5, the SHA-256 mining algorithm divides data of any length into N 64-byte blocks, padding the remainder, then computes a 32-byte data digest by accumulating the result over the N blocks in 4-byte words.
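The block division and padding step can be sketched as follows. `sha256_pad` is a hypothetical helper implementing the standard SHA-256 padding rule (append 0x80, zero-fill, append the 64-bit big-endian bit length), under which an 80-byte block header becomes exactly two 64-byte blocks:

```python
import struct

def sha256_pad(message: bytes) -> bytes:
    """Pad a message to a multiple of 64 bytes per the SHA-256 rule."""
    bit_len = len(message) * 8
    padded = message + b"\x80"                       # mandatory 0x80 byte
    padded += b"\x00" * ((56 - len(padded) % 64) % 64)  # zero-fill to 56 mod 64
    return padded + struct.pack(">Q", bit_len)       # 64-bit bit length

# An 80-byte block header pads out to two 64-byte blocks (128 bytes),
# which is why the first block's midstate can be cached and reused.
assert len(sha256_pad(b"\x00" * 80)) == 128
```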
Fig. 6 is a data flow diagram illustrating parallel processing of block header data according to an embodiment of the present disclosure. Specifically, as shown in fig. 6, an accelerated processor system apu_sys_n contains M accelerated processors apu, each of which processes in parallel the block header data and first input data midstate received by apu_sys_n. The block header data processed by each accelerated processor apu is the same, as are the K first input data (midstate_n_0, midstate_n_1 … midstate_n_k) and the second input data (message_n); the difference is that the M processors handle, in parallel, the accumulation iterations from the starting-position random number nonce_sta to the ending-position random number nonce_fin of M different parallel intervals.
Further, fig. 7 shows a data flow diagram of parallel processing of the first input data according to an embodiment of the present disclosure. The K first input data (midstate_n_0, midstate_n_1 … midstate_n_k) are processed in parallel by the K parallel branches of the accelerated processor apu, while the second input data fed to all K branches is the same (Message_n).
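Putting figs. 4, 6, and 7 together, the three levels of parallelism can be sketched as sequential loops. This is a purely illustrative model: the names and the stand-in `branch_work` function are assumptions, and real hardware runs all three levels concurrently:

```python
# N accelerated processor systems, M accelerated processors per system,
# K parallel branches per processor; sequential loops only show how the
# inputs are shared across the three levels.
N, M, K = 2, 4, 4
NONCE_STA, NONCE_FIN = 0, 99

def branch_work(midstate, message, nonce_lo, nonce_hi):
    # Stand-in for the second-stage hash iteration over one nonce interval.
    return [(midstate, message, n) for n in range(nonce_lo, nonce_hi + 1)]

results = []
for n in range(N):                          # level 1: one system per block header
    midstates = [f"midstate_{n}_{k}" for k in range(K)]
    message = f"message_{n}"                # one message stored per system
    step = (NONCE_FIN - NONCE_STA + 1) // M
    for m in range(M):                      # level 2: one nonce interval per processor
        lo = NONCE_STA + m * step
        hi = lo + step - 1
        for midstate in midstates:          # level 3: K branches share the interval
            results.extend(branch_work(midstate, message, lo, hi))

# Every (system, processor, branch) combination covers its interval once.
assert len(results) == N * K * (NONCE_FIN - NONCE_STA + 1)
```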
In the above embodiments, each accelerated processor system apu_sys only needs to store K midstates and one message (merkle root, time, nbits); it does not need to store a duplicate copy of the message for every group of midstates. When the number of accelerated processor systems is N, N − 1 sets of storage space can be saved, so the storage space can be reduced, further reducing energy consumption.
Fig. 2 is a schematic block diagram of a node data processing apparatus according to an embodiment of the present disclosure. Specifically, as shown in fig. 2, the apparatus may include:
N accelerated processor systems 101, which may be configured to receive, in parallel, the second fields of N different pieces of block header data, wherein each accelerated processor system receives one piece of block header data and one group of first input data midstate, each group of first input data comprises K different first input data, N ≥ 2, and K ≥ 2;
each accelerated processor system uses M accelerated processors in that system to perform inter-partition parallel processing of the random number in the second field of the block header data it receives, wherein M ≥ 2;
wherein the K parallel branches in each accelerated processor process the K first input data midstate in parallel.
In an embodiment of the present disclosure, the first input data midstate of each parallel branch is different, while the second input data message is the same.
In an embodiment of the present disclosure, the root node hash values merkle root of the N pieces of block header data are different.
In an embodiment of the present disclosure, using the M accelerated processors in the accelerated processor system to perform inter-partition parallel processing of the random number in the second field of the received block header data may include:
for the random number segment of the received block header data, using the M accelerated processors to perform in parallel, over a plurality of different parallel intervals, the accumulation iteration from a starting-position random number nonce_sta to an ending-position random number nonce_fin.
In an embodiment of the present disclosure, the plurality of different parallel intervals comprise iteration intervals obtained by dividing the iteration interval from the initial starting-position random number to the initial ending-position random number.
In an embodiment of the present disclosure, the apparatus may further include:
an input data storage unit 102, which may be used to store the input data;
a control unit 103, configured to allocate the operation intervals of the M accelerated processors according to the divided intervals;
an output data storage unit 104, which may be used to store hit-data digest information;
and an interface unit 105, which may be used to read and write data and to configure parameters.
The processes in the above apparatus that are the same as or similar to those in the embodiments shown in figs. 1, 3, 4, 5, 6, and 7 may be implemented according to the implementations provided in the embodiments corresponding to figs. 1, 3, 4, 5, 6, and 7.
Fig. 3 is a schematic structural diagram of a node data processing system according to an embodiment of the present disclosure. It should be noted that the above-mentioned device or system may also include other implementation manners according to the description of the method embodiment, and specific implementation manners may refer to the description of the related method embodiment, which is not described in detail herein.
Fig. 8 is a block diagram illustrating an apparatus 800 for performing the above-described method according to an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the apparatus 800. Examples of such data include instructions for any application or method operating on the apparatus 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
The power component 806 provides power to the various components of the apparatus 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the apparatus 800 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 800 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing various aspects of status assessment for the apparatus 800. For example, the sensor component 814 may detect the open/closed status of the apparatus 800 and the relative positioning of components, such as the display and keypad of the apparatus 800; the sensor component 814 may also detect a change in the position of the apparatus 800 or a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in the temperature of the apparatus 800. The sensor component 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the device 800 to perform the above-described methods.
Fig. 9 is a block diagram illustrating an apparatus 1900 for performing the above-described method according to an exemplary embodiment. For example, the apparatus 1900 may be provided as a server. Referring to Fig. 9, the apparatus 1900 includes a processing component 1922, which further includes one or more processors, and memory resources, represented by the memory 1932, for storing instructions (e.g., applications) executable by the processing component 1922. The applications stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described method.
The apparatus 1900 may also include a power component 1926 configured to perform power management of the apparatus 1900, a wired or wireless network interface 1950 configured to connect the apparatus 1900 to a network, and an input/output (I/O) interface 1958. The apparatus 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the apparatus 1900 to perform the above-described methods.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry may execute the computer-readable program instructions to implement aspects of the present disclosure.
various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems which perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. A method of node data processing, the method comprising:
Inputting second fields of N different block header data into N accelerated processing systems in parallel, wherein each accelerated processing system receives one block header data and a group of first input data midstate, each group of first input data comprises K different first input data, N is greater than or equal to 2, and K is greater than or equal to 2;
Each accelerated processing system performs, by using M accelerated processors in the accelerated processing system, inter-partition parallel processing on the random number in the second field of the block header data received by the accelerated processing system, wherein M is greater than or equal to 2;
Wherein the K parallel branches in each accelerated processor process the K first input data midstate in parallel.
2. The node data processing method according to claim 1, wherein the first input data midstate of each of said parallel branches is different and the second input data message is the same.
3. The node data processing method according to claim 1, wherein the root node hash values (merkle roots) of said N block header data are different.
4. The node data processing method according to claim 1, wherein said performing, by using the M accelerated processors in the accelerated processing system, inter-partition parallel processing on the random number in the second field of the block header data received by the accelerated processing system comprises:
For the random number field of the received block header data, performing in parallel, with the M accelerated processors, an accumulation iteration from a starting position random number nonce_sta to an ending position random number nonce_fin over a plurality of different parallel intervals.
5. The node data processing method according to claim 4, wherein the plurality of different parallel intervals comprise a plurality of intervals obtained by dividing an iteration interval from an initial starting position random number to an initial ending position random number.
6. A node data processing apparatus, characterized in that the apparatus comprises:
N accelerated processing systems configured to receive, in parallel, second fields of N different block header data, wherein each accelerated processing system receives one block header data and a group of first input data midstate, each group of first input data comprises K different first input data, N is greater than or equal to 2, and K is greater than or equal to 2;
Each accelerated processing system performs, by using M accelerated processors in the accelerated processing system, inter-partition parallel processing on the random number in the second field of the block header data received by the accelerated processing system, wherein M is greater than or equal to 2;
Wherein the K parallel branches in each accelerated processor process the K first input data midstate in parallel.
7. The node data processing apparatus according to claim 6, wherein the first input data midstate of each of said parallel branches is different and the second input data message is the same.
8. The node data processing apparatus according to claim 6, wherein the root node hash values (merkle roots) of said N block header data are different.
9. The node data processing apparatus according to claim 8, wherein said performing, by using the M accelerated processors in the accelerated processing system, inter-partition parallel processing on the random number in the second field of the block header data received by the accelerated processing system comprises:
For the random number field of the received block header data, performing in parallel, with the M accelerated processors, an accumulation iteration from a starting position random number nonce_sta to an ending position random number nonce_fin over a plurality of different parallel intervals.
10. The node data processing apparatus according to claim 9, wherein the plurality of different parallel intervals comprise a plurality of intervals obtained by dividing an iteration interval from an initial starting position random number to an initial ending position random number.
11. The node data processing apparatus according to any one of claims 6 to 10, wherein the apparatus further comprises:
An input data storage unit configured to store the input data;
A control unit configured to allocate operation intervals to the M accelerated processors according to the divided intervals;
An output data storage unit configured to store summary information of hit data; and
An interface unit configured to read and write data and to configure parameters.
12. A node data processing system, the system comprising a plurality of processors and a memory for storing processor-executable instructions, wherein the processors, when executing the instructions, implement the method of any one of claims 1 to 5.
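The scheme recited in the claims — dividing the nonce iteration interval among M accelerated processors (claims 4 and 5) and reusing a precomputed midstate per branch (claim 1) — can be illustrated in software. The following Python sketch is illustrative only, not the patented hardware: the names `make_midstate`, `split_intervals`, and `scan_interval` are hypothetical, and the `copy()` of a `hashlib` hash object stands in for a hardware midstate register.

```python
import hashlib
import struct

def make_midstate(header_first64: bytes):
    # Hash the first 64-byte block of an 80-byte block header once;
    # copies of this object reuse the internal SHA-256 state (the "midstate"),
    # so only the 16-byte second field must be re-hashed per nonce.
    assert len(header_first64) == 64
    h = hashlib.sha256()
    h.update(header_first64)
    return h

def split_intervals(nonce_sta: int, nonce_fin: int, m: int):
    # Divide the iteration interval [nonce_sta, nonce_fin] into m
    # near-equal parallel intervals, one per accelerated processor.
    step = (nonce_fin - nonce_sta + m) // m  # ceiling division of the interval length
    return [(lo, min(lo + step - 1, nonce_fin))
            for lo in range(nonce_sta, nonce_fin + 1, step)]

def scan_interval(midstate, tail12: bytes, nonce_sta: int, nonce_fin: int, target: int):
    # Accumulation iteration over one parallel interval: for each nonce,
    # resume from the midstate, hash only the second field (12 bytes of
    # header tail plus the 4-byte little-endian nonce), then apply the
    # second SHA-256 pass (Bitcoin-style double hashing) and record hits.
    hits = []
    for nonce in range(nonce_sta, nonce_fin + 1):
        h = midstate.copy()
        h.update(tail12 + struct.pack('<I', nonce))
        digest = hashlib.sha256(h.digest()).digest()
        if int.from_bytes(digest, 'little') < target:
            hits.append((nonce, digest))
    return hits
```

In the claimed apparatus, N such scans would run concurrently on N accelerated processing systems, each fed a block header with a different merkle root (claim 3) and hence a different midstate, and each accelerated processor would carry K such branches in parallel rather than a Python loop.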
CN201910824956.6A 2019-09-02 2019-09-02 Node data processing method, device and system Pending CN110557261A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910824956.6A CN110557261A (en) 2019-09-02 2019-09-02 Node data processing method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910824956.6A CN110557261A (en) 2019-09-02 2019-09-02 Node data processing method, device and system

Publications (1)

Publication Number Publication Date
CN110557261A true CN110557261A (en) 2019-12-10

Family

ID=68738755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910824956.6A Pending CN110557261A (en) 2019-09-02 2019-09-02 Node data processing method, device and system

Country Status (1)

Country Link
CN (1) CN110557261A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4276598A4 (en) * 2021-02-20 2024-06-05 Bitmain Tech Inc Computing apparatus for proof of work, and asic chip and computing method for proof of work


Similar Documents

Publication Publication Date Title
CN111832067B (en) Data processing method and device and data processing device
CN110543481B (en) Data processing method and device, computer equipment and storage medium
CN112241250B (en) Data processing method and device and data processing device
CN111597029B (en) Data processing method and device, electronic equipment and storage medium
CN112861175B (en) Data processing method and device for data processing
CN106991018B (en) Interface skin changing method and device
CN110826697B (en) Method and device for acquiring sample, electronic equipment and storage medium
CN107066502B (en) Multimedia content editing method and device
CN115085912A (en) Ciphertext computing method and device for ciphertext computing
CN110557261A (en) Node data processing method, device and system
US11494117B2 (en) Method and system for data processing
CN113051610A (en) Data processing method and device and data processing device
CN112163046A (en) Block chain-based equipment data storage method, device and system
CN113239389B (en) Data processing method and device and data processing device
CN112131999B (en) Identity determination method and device, electronic equipment and storage medium
CN112468290B (en) Data processing method and device and data processing device
CN112861145B (en) Data processing method and device for data processing
CN112580064B (en) Data processing method and device and data processing device
CN116489247A (en) Device and method for editing random network protocol message programmable in operation
CN110990357A (en) Data processing method, device and system, electronic equipment and storage medium
CN114118397A (en) Neural network method and apparatus, electronic device, and storage medium
CN111369438B (en) Image processing method and device, electronic equipment and storage medium
CN110765943A (en) Network training and recognition method and device, electronic equipment and storage medium
CN111695158B (en) Operation method and device
CN112286456B (en) Storage method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20191210