CN112182111A - Block chain based distributed system layered processing method and electronic equipment - Google Patents


Info

Publication number
CN112182111A
CN112182111A (application CN202011091586.9A)
Authority
CN
China
Prior art keywords
data
sample data
group
initial
sample
Prior art date
Legal status
Granted
Application number
CN202011091586.9A
Other languages
Chinese (zh)
Other versions
CN112182111B (en)
Inventor
赵书鹏
Current Assignee
Qilu Yunshang Digital Technology Co ltd
Original Assignee
Ningbo Golden Lion Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Ningbo Golden Lion Technology Co Ltd
Priority to CN202011091586.9A
Publication of CN112182111A
Application granted
Publication of CN112182111B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/23: Updating
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638: Organizing or formatting or addressing of data
    • G06F 3/064: Management of blocks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038: Allocation of resources considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F9/00
    • G06F 2209/50: Indexing scheme relating to G06F9/50
    • G06F 2209/5021: Priority

Abstract

The embodiment of the disclosure discloses a block chain-based distributed system hierarchical processing method and electronic equipment. One embodiment of the method comprises: acquiring a target data block set to be processed and an initial data information set, wherein each piece of initial data information comprises the data type and the storage address of a target data block; processing the initial data information set based on the data types to obtain a process data information group set; processing the target data block set based on the process data information group set; and updating the block chain. The method hierarchically divides the target data block set to be processed according to the data types in the initial data information set to obtain the process data information group set, and then processes the target data block set separately for each process data information group in the set. This layered processing mode improves the processing efficiency of the target data block set processing task, ensures that all tasks are completed in a timely and effective manner, and improves the processing performance of the block chain-based distributed system.

Description

Block chain based distributed system layered processing method and electronic equipment
Technical Field
The embodiment of the disclosure relates to the field of block chains and distributed systems, in particular to a distributed task scheduling method based on block chains and electronic equipment.
Background
For the task scheduling problem of distributed systems, how application programs can make full use of processors for high-performance computation has become a highly practical research field. The main purpose of existing parallel task scheduling optimization mechanisms is to reduce the scheduling length of the whole parallel application program while, to a certain extent, improving the operating efficiency of the system and balancing the system load.
Specifically, when the distributed scheduling of the data processing task is completed in the actual service, the following technical problems are faced:
First, a predetermined task scheduling policy cannot cope with the real-time task processing requirements in a distributed system and cannot actually match the processing priority requirements of different tasks.
Second, the traditional clustering-based hierarchical task scheduling method divides the processing task, via clustering, into a plurality of subtasks, and then allocates the subtasks to appropriate electronic devices based on a certain scheduling strategy to obtain the fastest response. The clustering effect is limited by the influence of the preset clustering center points, so the processing task cannot be reasonably divided into different subtasks.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a block chain based distributed system hierarchical processing method and an electronic device to solve one or more of the technical problems mentioned in the above background section.
In a first aspect, some embodiments of the present disclosure provide a block chain-based hierarchical processing method for a distributed system, where the method includes: acquiring a target data block set to be processed and an initial data information set, wherein the initial data information comprises the data type of the target data block and the storage address of the target data block; processing the initial data information set based on the data type to obtain a process data information group set; processing the set of target data blocks based on the set of process data information sets; and updating the block chain.
In a second aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as in any one of the first aspects.
In a third aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, wherein the program when executed by a processor implements a method as in any one of the first aspect.
The above embodiments of the present disclosure have the following advantages. First, the method hierarchically divides the target data block set to be processed according to the data types in the initial data information set to obtain a process data information group set, and processes the target data block set separately for each process data information group in the set. Second, the method uses a dynamic threshold adjustment method to realize the hierarchical division of the target data block set. Through dynamic threshold adjustment, the number and composition of the process data information group clusters are optimized, and the influence of fixed clustering center points on the clustering effect is eliminated to a certain extent. Outlying, uncertain process data are separated and deferred, and the final hierarchical division is determined after secondary clustering and re-division. This layered processing mode improves the processing efficiency of the target data block set processing task, ensures that all tasks are completed in a timely and effective manner, and improves the processing performance of the block chain-based distributed system.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is an architectural diagram of an exemplary system in which some embodiments of the present disclosure may be applied;
FIG. 2 is a flow diagram of some embodiments of a block chain based distributed system hierarchical processing method, in accordance with some embodiments of the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a block chain based distributed system hierarchical processing method, according to some embodiments of the present disclosure;
FIG. 4 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the blockchain-based distributed system hierarchical processing method of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a data storage application, a data encryption application, a task scheduling application, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various terminal devices having a display screen, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When they are software, they can be installed in the terminal devices listed above and may be implemented as multiple pieces of software or software modules (e.g., to provide target block data input) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server that provides various services, such as a server that performs task processing on target data input by the terminal apparatuses 101, 102, 103, and the like. The server can perform the processing of segmentation, distribution, storage and the like on the received target data and feed back the processing result to the terminal equipment.
It should be noted that the hierarchical processing method for the distributed system based on the block chain provided by the embodiment of the present disclosure may be executed by the server 105, or may be executed by the terminal device.
It should be noted that the server 105 may also process local data directly; for example, the server 105 may extract local data and obtain the data block set through slicing. In this case, the exemplary system architecture 100 may not include the terminal devices 101, 102, 103 and the network 104.
It should be noted that the terminal devices 101, 102, and 103 may also have a task scheduling application installed therein, and in this case, the task scheduling method may also be executed by the terminal devices 101, 102, and 103. At this point, the exemplary system architecture 100 may also not include the server 105 and the network 104.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules (for example, for providing a task scheduling service), or as a single software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of electronic devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of some embodiments of a blockchain-based distributed system hierarchical processing method in accordance with the present disclosure is shown. The block chain-based distributed system layered processing method comprises the following steps:
step 201, acquiring a target data block set to be processed and an initial data information set.
In some embodiments, an executing body (e.g., the electronic device shown in fig. 1) of the block chain-based distributed system hierarchical processing method obtains a target data block set and an initial data information set to be processed. The initial data information comprises the data type of the target data block and the storage address of the target data block.
Optionally, before acquiring the target data block set and the initial data information set to be processed, the executing body acquires a sample data set and generates a sample classification data set based on it. Specifically, a distance threshold is determined, the number of sample categories in the sample data set is determined, and sample data equal in number to the sample categories are randomly selected from the sample data set as initial center values, yielding an initial center value set.
A first process sample data group set is determined based on the initial center value set. Specifically, the first process sample data group set includes as many first process sample data groups as there are sample categories, and each first process sample data group initially consists of exactly one element, namely its initial center value.
All initial center values in the initial center value set are then removed from the sample data set to obtain a first candidate sample data set. For each first candidate sample data in the first candidate sample data set, based on the first process sample data group set and the distance threshold, the first candidate sample data is placed into the corresponding first process sample data group using the following formula, so as to obtain the first process sample data group set:
C = { C_j = { x_i | d(x_i, u_j) ≤ D } },
where d denotes the distance function, x_i denotes the i-th first candidate sample data (i being the first candidate sample data count), u_j denotes the j-th initial center value (j being the initial center value count), D denotes the distance threshold, d(x_i, u_j) denotes the distance between x_i and u_j, C_j denotes the j-th first process sample data group with initial center value u_j, and C denotes the first process sample data group set.
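As written, the rule C_j = {x_i | d(x_i, u_j) ≤ D} allows a candidate to fall into every group whose center lies within the distance threshold, and candidates farther than D from every center remain unassigned. A minimal sketch of this assignment step, assuming scalar sample data and absolute-difference (Euclidean) distance; the function and variable names are illustrative, not from the patent text:

```python
def assign_by_threshold(candidates, centers, distance_threshold):
    """Place each candidate into every group whose center is within the
    distance threshold D, mirroring C_j = {x_i | d(x_i, u_j) <= D}."""
    # Each group starts with its own (initial) center value.
    groups = {j: [c] for j, c in enumerate(centers)}
    for x in candidates:
        for j, u in enumerate(centers):
            if abs(x - u) <= distance_threshold:  # assumed distance function d
                groups[j].append(x)
    return groups
```

Candidates that match no group (here, 5.0 with centers 0.0 and 10.0 and D = 2.0) are the outlying samples that the later neighborhood step defers for re-division.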
For each first process sample data group in the first process sample data group set, the mean value of the group is determined as a process center value, yielding a process center value set. A second process sample data group set is determined based on the process center value set. Specifically, the second process sample data group set includes as many second process sample data groups as there are sample categories, and each second process sample data group initially consists of exactly one element, namely its process center value.
And removing all process center values in the process center value set from the sample data set to obtain a second candidate sample data set. For each second candidate sample data in the second candidate sample data set, based on the second process sample data set and the distance threshold, putting the second candidate sample data into the corresponding second process sample data set by using the following formula, so as to obtain a second process sample data set:
M = { M_j = { y_i | d(y_i, v_j) ≤ D } },
where d denotes the distance function, y_i denotes the i-th second candidate sample data (i being the second candidate sample data count), v_j denotes the j-th process center value (j being the process center value count), D denotes the distance threshold, d(y_i, v_j) denotes the distance between y_i and v_j, M_j denotes the j-th second process sample data group with process center value v_j, and M denotes the second process sample data group set.
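The second clustering pass can be sketched as follows, under the same assumptions as before (scalar data, absolute-difference distance, illustrative names): each first-pass group's mean becomes a process center value v_j, the centers are removed from the sample data, and the remaining samples are reassigned with the rule M_j = {y_i | d(y_i, v_j) ≤ D}.

```python
def second_pass(first_groups, sample_data, distance_threshold):
    """Recompute centers as group means, then reassign remaining samples."""
    # Mean of each first-process group becomes the process center value v_j.
    centers = [sum(g) / len(g) for g in first_groups]
    # Remove the process center values from the sample data set.
    remaining = [s for s in sample_data if s not in centers]
    # Each second-process group starts with its process center value.
    second_groups = [[v] for v in centers]
    for y in remaining:
        for j, v in enumerate(centers):
            if abs(y - v) <= distance_threshold:
                second_groups[j].append(y)
    return centers, second_groups
```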
A neighborhood value is determined. And generating a sample classification dataset based on the second process sample dataset set and the neighborhood values. For each second set of process sample data in the second set of process sample data sets, determining a first result set and a second result set of the second set of process sample data based on the neighborhood value to obtain a first result set and a second result set. Specifically, based on the neighborhood value, a neighborhood set of the second process sample data set is determined. Second process sample data having a distance from the process center value of the second process sample data group smaller than the neighborhood value belongs to the neighborhood set of the second process sample data group. And removing all sample data in the second process sample data group from the sample data set to obtain a third candidate sample data set.
For each third candidate sample data in the third set of candidate sample data, placing the third candidate sample data in the first set of results in response to the third candidate sample data being in the neighborhood set. For each second process sample data in the second set of process sample data, placing the second process sample data in a second set of results in response to the second process sample data being in the neighborhood set. For each second process sample data in the second process sample data group, in response to the second process sample data not being in the neighborhood set, placing the second process sample data in the first result group. A set of the first set of result sets and the second set of result sets is determined as the sample classification dataset.
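One plausible reading of the neighborhood-based separation above is that samples within the neighborhood value of a group's process center value are kept as confident members (the second result group), while the rest are deferred for re-division (the first result group). A hedged sketch under that reading, again with scalar data and illustrative names:

```python
def split_by_neighborhood(group, center, neighborhood_value):
    """Split a second-process group into deferred (first result) and
    confident (second result) samples by distance to the center."""
    first_result, second_result = [], []
    for s in group:
        if abs(s - center) < neighborhood_value:  # inside the neighborhood set
            second_result.append(s)
        else:                                     # outlying, deferred sample
            first_result.append(s)
    return first_result, second_result
```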
Based on the sample classification data set, a clustering template is generated: the data types of the sample data in the sample classification data set are recorded into the template. In particular, the process of dividing a collection of physical or abstract objects into classes composed of similar objects is called clustering; generating the first and second process sample data group sets is such a clustering process.
The above formula is an inventive point of the embodiments of the present disclosure and solves the second technical problem mentioned in the background section. First, the method generates a first process sample data group set from the sample data set by clustering. Second, it generates a second process sample data group set from the first process sample data group set by clustering again. Then a neighborhood value is determined, and a sample classification data set is generated based on the second process sample data group set and the neighborhood value. Finally, a clustering template is generated based on the sample classification data set. The method uses dynamic threshold adjustment to realize the hierarchical division of the target data block set: the number of process data information group clusters is optimized, and the influence of fixed clustering center points on the clustering effect is eliminated to a certain extent. Outlying, uncertain process data are separated and deferred, and the final hierarchical division is determined after secondary clustering and re-division. Because the method is not limited by preset clustering center points and divides the to-be-processed target data set task into different subtasks through secondary clustering, the second technical problem is solved.
Step 202, processing the initial data information set based on the data type to obtain a process data information group set.
In some embodiments, for each initial data information in the initial data information set, the execution body matches the data type of the initial data information against the clustering template and determines the category information of the initial data information, so as to obtain a category information set.
For each category information in the category information set, dividing all initial data information corresponding to the category information into a process data information group corresponding to the category information to obtain a process data information group set. Specifically, the initial data information having the same category constitutes a process data information group corresponding to the category information. Each process data information group in the process data information group set corresponds to different category information. The process data information in the different process data information groups corresponds to different category information.
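The grouping in step 202 can be sketched as follows. The dictionary-based template and field names such as "data_type" and "address" are assumptions for illustration, not the patent's concrete data format:

```python
from collections import defaultdict

def group_by_category(initial_data_infos, cluster_template):
    """Match each record's data type against the clustering template and
    collect records with the same category into one process data info group."""
    groups = defaultdict(list)
    for info in initial_data_infos:
        # Category information looked up from the clustering template.
        category = cluster_template.get(info["data_type"], "unknown")
        groups[category].append(info)
    return dict(groups)
```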
Step 203, processing the target data block set based on the process data information group set.
In some embodiments, the execution body determines a result data block group set based on the process data information group set and the target data block set. Each result data block group in the result data block group set corresponds to a process data information group in the process data information group set.
And processing the result data block group set. Optionally, for each result data block group in the result data block group set, each result data block in the result data block group is processed in turn.
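The group-by-group processing described above can be sketched as follows; the `process_block` callback and the ordering of the groups are assumptions, since the patent does not fix a concrete per-block operation:

```python
def process_hierarchically(result_groups, process_block):
    """Handle each result data block group in turn, and each block within
    a group sequentially, collecting the per-block results."""
    results = []
    for group in result_groups:   # one group per process data info group
        for block in group:       # blocks within a group processed in turn
            results.append(process_block(block))
    return results
```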
The above method is an inventive point of the embodiments of the present disclosure and solves the first technical problem mentioned in the background section. First, the initial data information set of the target data block set to be processed is processed by data type, with reference to the sample classification data set, to obtain the process data information group set, which contains the target data blocks classified by data type. Second, processing the target data blocks according to the process data information group set realizes layered scheduling. This hierarchical processing mode improves the processing efficiency of the target data block set processing task and ensures that all tasks are completed in a timely and effective manner. Because the layered scheduling is performed according to data types matched in real time, the actual processing priority requirements of real-time tasks can be met, thereby solving the first technical problem.
Step 204, update block chain.
In some embodiments, the execution body updates the block chain. Optionally, an intelligent contract is invoked. An intelligent contract is a set of commitments defined in digital form; it can control data in the block chain and stipulate the rights and obligations of each participating terminal, and it may be executed automatically by the computer system. The intelligent contract comprises intelligent contract code, an instance, and execution data: the intelligent contract code may be the source code of the contract, i.e., a piece of code that the computer system is capable of executing; an instance may be an actual service in the block chain running the intelligent contract; and the execution data may be the data that remains in the block chain after an instance is executed.
And running the intelligent contract code, and recording the hierarchical execution result of the target data block set into the block chain. Specifically, the hierarchical execution result of the target data block set may be determined as one block. And running intelligent contract codes to add the block to the block chain. Specifically, the intelligent contract for recording the block generates an instance and executes data during the operation process. The instance and the execution data are recorded in a blockchain.
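The final step, packing the hierarchical execution result into a block and appending it to the chain, might look like the following generic sketch. The hash-linked block layout is a standard illustration of how a block can be recorded, not the patent's smart-contract runtime; all names are assumptions:

```python
import hashlib
import json

def append_result_block(chain, execution_result):
    """Pack the hierarchical execution result into a block linked to the
    previous block by hash, and append it to the chain (a list of dicts)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"result": execution_result, "prev": prev_hash},
                         sort_keys=True)
    block = {"result": execution_result,
             "prev": prev_hash,
             "hash": hashlib.sha256(payload.encode()).hexdigest()}
    chain.append(block)
    return block
```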
The embodiment presented in fig. 2 has the following beneficial effects. First, the method hierarchically divides the target data block set to be processed according to the data types in the initial data information set to obtain a process data information group set, and processes the target data block set separately for each process data information group in the set. Second, the method uses dynamic threshold adjustment to realize the hierarchical division of the target data block set: the number of process data information group clusters is optimized, and the influence of fixed clustering center points on the clustering effect is eliminated to a certain extent. Outlying, uncertain process data are separated and deferred, and the final hierarchical division is determined after secondary clustering and re-division. This layered processing mode improves the processing efficiency of the target data block set processing task, ensures that all tasks are completed in a timely and effective manner, and improves the processing performance of the block chain-based distributed system.
With continued reference to fig. 3, a schematic diagram of one application scenario of a block chain based distributed system hierarchical processing method in accordance with the present disclosure is shown.
In the application scenario of fig. 3, a user sends a set of target data blocks to be processed and a set of initial data information 301 to a server. After receiving the target data block set and the initial data information set, the server processes the initial data information set based on the data type in the initial data information set to obtain a process data information set 302. The server processes the set of target data blocks 303 hierarchically based on the set of process data information sets. The server updates the blockchain and issues the results of the set of hierarchically processed target data blocks into the blockchain 304.
According to the block chain-based hierarchical processing method for the distributed system, firstly, clustering processing is carried out on a target data block set according to the data type in an initial data information set so as to obtain a process data information group set. And performing hierarchical processing on the target data based on the process data information group set, and issuing a processing result to the block chain. The layered processing mode can improve the processing efficiency of the target data block set processing task, ensure that all tasks are completed timely and effectively, and improve the processing performance of the distributed system based on the block chain.
Referring now to FIG. 4, a block diagram of a computer system 400 suitable for use in implementing a server of an embodiment of the present disclosure is shown. The server shown in fig. 4 is only an example, and should not bring any limitation to the function and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the computer system 400 includes a Central Processing Unit (CPU) 401 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage section 406 into a Random Access Memory (RAM) 403. The RAM 403 also stores the various programs and data necessary for the operation of the system 400. The CPU 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An Input/Output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: a storage section 406 including a hard disk and the like; and a communication section 407 including a network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 407 performs communication processing via a network such as the Internet. A drive 408 is also connected to the I/O interface 405 as needed. A removable medium 409, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 408 as necessary, so that a computer program read out therefrom is installed into the storage section 406 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 407 and/or installed from the removable medium 409. The above-described functions defined in the method of the present disclosure are performed when the computer program is executed by a Central Processing Unit (CPU) 401. It should be noted that the computer readable medium in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the C language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to technical solutions formed by the specific combination of the above-mentioned features, and also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with features of similar function disclosed in (but not limited to) the present disclosure.

Claims (10)

1. A hierarchical processing method for a distributed system based on a block chain comprises the following steps:
acquiring a target data block set to be processed and an initial data information set, wherein the initial data information comprises a data type of the target data block and a storage address of the target data block;
processing the initial data information set based on the data type to obtain a process data information group set;
processing the set of target data blocks based on the set of process data information sets;
and updating the block chain.
2. The method of claim 1, wherein prior to obtaining the set of target data blocks to be processed and the set of initial data information, the method further comprises:
acquiring a sample data set;
generating a sample classification dataset based on the sample dataset;
and generating the clustering template based on the sample classification data set, wherein the clustering template records the data type of the sample data in the sample data set.
3. The method of claim 2, wherein said generating a sample classification dataset based on said sample dataset comprises:
determining a distance threshold;
determining a number of sample categories in the sample data set;
randomly selecting, from the sample data set, a number of sample data equal to the number of sample categories as initial central values to obtain an initial central value set;
determining a first process sample data group set based on the initial central value set, wherein the first process sample data group set comprises a number of first process sample data groups equal to the number of sample categories, and each first process sample data group consists of 1 initial central value;
removing all initial central values in the initial central value set from the sample data set to obtain a first candidate sample data set;
for each first candidate sample data in the first candidate sample data set, putting the first candidate sample data into the corresponding first process sample data group based on the first process sample data group set and the distance threshold by using the following formula, so as to obtain a first process sample data group set:
C={Cj={xi|d(xi,uj)≤D}},
where d denotes a distance function, x denotes first candidate sample data, i denotes the first candidate sample data index, D denotes the distance threshold, xi denotes the ith first candidate sample data, u denotes an initial central value, j denotes the initial central value index, uj denotes the jth initial central value, d(xi, uj) denotes the distance between xi and uj, Cj denotes the jth first process sample data group, whose initial central value is uj, and C denotes the first process sample data group set;
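The assignment rule of the formula can be written directly in code. The sketch below assumes numeric sample data and the absolute difference as the distance function d; note that, as in the formula, one candidate may fall into several groups when it is within the threshold of more than one initial central value.

```python
def first_process_groups(candidates, centers, threshold):
    """C = {Cj}, where Cj = {xi | d(xi, uj) <= D}."""
    return [
        [x for x in candidates if abs(x - u) <= threshold]  # d(xi, uj) <= D
        for u in centers
    ]
```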
for each first process sample data group in the first process sample data group set, determining a mean value of the process sample data group as a process central value to obtain a process central value set;
determining a second process sample data group set based on the process central value set, wherein the second process sample data group set comprises a number of second process sample data groups equal to the number of sample categories, and each second process sample data group comprises 1 process central value;
removing all process central values in the process central value set from the sample data set to obtain a second candidate sample data set;
for each second candidate sample data in the second candidate sample data set, based on the second process sample data group set and the distance threshold, putting the second candidate sample data into the corresponding second process sample data group to obtain a second process sample data group set;
determining a neighborhood value;
generating the sample classification dataset based on the second set of process sample data groups and the neighborhood value.
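The second pass of the procedure above (recomputing each group's center as its mean, then reassigning the remaining samples) can be sketched as follows; numeric samples and an absolute-difference distance are assumptions.

```python
def second_pass(sample_data, first_groups, threshold):
    """Recompute each first-pass group's center as its mean (the process
    central value), then reassign all non-center samples within `threshold`."""
    process_centers = [sum(g) / len(g) for g in first_groups]
    # Remove the process central values from the sample data set to obtain
    # the second candidate sample data set.
    candidates = [x for x in sample_data if x not in process_centers]
    second_groups = [[c] for c in process_centers]
    for x in candidates:
        for group, c in zip(second_groups, process_centers):
            if abs(x - c) <= threshold:  # put into every matching group
                group.append(x)
    return process_centers, second_groups
```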
4. The method of claim 3, wherein said generating the sample classification dataset based on the second process sample data group set and the neighborhood value comprises:
for each second process sample data group in the second set of process sample data groups, determining a first result group and a second result group of the second process sample data group based on a neighborhood value to obtain a first result group set and a second result group set;
determining a set of the first set of result groups and the second set of result groups as the sample classification dataset.
5. The method of claim 4, wherein said determining the first result group and the second result group of the second process sample data group based on the neighborhood value comprises:
determining a neighborhood set of the second process sample data set based on the neighborhood value;
removing all second process sample data in the second process sample data group from the sample data set to obtain a third candidate sample data set;
for each third candidate sample data in the third candidate sample data set, in response to the third candidate sample data being in the neighborhood set, placing the third candidate sample data in a first result set;
for each second process sample data in the second process sample data group, placing the second process sample data in a second result group in response to the second process sample data being in the neighborhood set;
for each second process sample data in the second process sample data group, in response to the second process sample data not being in the neighborhood set, placing the second process sample data in the first result group.
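The splitting rule of claim 5 can be sketched as follows. The construction of the neighborhood set is not specified beyond the neighborhood value, so the sketch assumes it contains every sample within the neighborhood value of the group's center; numeric samples are likewise an assumption.

```python
def split_result_groups(sample_data, group, center, neighborhood_value):
    """Split a second process sample data group into a first result group
    (uncertain members and nearby outsiders) and a second result group
    (confirmed members)."""
    neighborhood = {x for x in sample_data
                    if abs(x - center) <= neighborhood_value}
    outside = [x for x in sample_data if x not in group]
    first_result, second_result = [], []
    for x in outside:                 # third candidate sample data
        if x in neighborhood:
            first_result.append(x)
    for x in group:                   # second process sample data
        if x in neighborhood:
            second_result.append(x)
        else:
            first_result.append(x)    # uncertain member, re-examined later
    return first_result, second_result
```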
6. The method of any of claims 2-5, wherein said processing the initial data information set based on the data type to obtain a process data information group set comprises:
for each piece of initial data information in the initial data information set, matching the data type of the initial data information with the clustering template to determine the category information of the initial data information, so as to obtain a category information set;
and for each category information in the category information set, dividing all initial data information corresponding to the category information into a process data information group corresponding to the category information to obtain the process data information group set.
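The matching and grouping steps of this claim can be sketched as follows; representing the clustering template as a mapping from data type to category information is an assumption.

```python
def build_process_groups(initial_info, cluster_template):
    """Group initial data information by the category its data type
    matches in the clustering template (assumed here to be a dict)."""
    groups = {}
    for info in initial_info:                 # info = (data_type, storage_address)
        category = cluster_template[info[0]]  # match data type against template
        groups.setdefault(category, []).append(info)
    return groups
```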
7. The method of claim 6, wherein said processing the set of target data blocks based on the set of process data information groups comprises:
determining a result data block group set based on the process data information group set and the target data block set, wherein each result data block group in the result data block group set corresponds to a process data information group in the process data information group set;
and processing the result data block group set.
8. The method of claim 7, wherein the processing the set of result data chunks comprises:
and for each result data block group in the result data block group set, sequentially processing each result data block in the result data block group.
9. A first terminal device comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
10. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-8.
CN202011091586.9A 2020-10-13 2020-10-13 Block chain based distributed system layered processing method and electronic equipment Active CN112182111B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011091586.9A CN112182111B (en) 2020-10-13 2020-10-13 Block chain based distributed system layered processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011091586.9A CN112182111B (en) 2020-10-13 2020-10-13 Block chain based distributed system layered processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN112182111A true CN112182111A (en) 2021-01-05
CN112182111B CN112182111B (en) 2022-07-19

Family

ID=73951116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011091586.9A Active CN112182111B (en) 2020-10-13 2020-10-13 Block chain based distributed system layered processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN112182111B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113434471A (en) * 2021-06-24 2021-09-24 平安国际智慧城市科技股份有限公司 Data processing method, device, equipment and computer storage medium
CN116566995A (en) * 2023-07-10 2023-08-08 安徽中科晶格技术有限公司 Block chain data transmission method based on classification and clustering algorithm

Citations (4)

Publication number Priority date Publication date Assignee Title
CN109034809A (en) * 2018-08-16 2018-12-18 北京京东尚科信息技术有限公司 Generation method, device, block chain node and the storage medium of block chain
CN110334053A (en) * 2019-05-09 2019-10-15 哈尔滨理工大学 A kind of data based on block chain deposit card data processing method
US20190332966A1 (en) * 2018-04-27 2019-10-31 Seal Software Ltd. Generative adversarial network model training using distributed ledger
CN111629063A (en) * 2020-05-29 2020-09-04 宁波富万信息科技有限公司 Block chain based distributed file downloading method and electronic equipment

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US20190332966A1 (en) * 2018-04-27 2019-10-31 Seal Software Ltd. Generative adversarial network model training using distributed ledger
CN109034809A (en) * 2018-08-16 2018-12-18 北京京东尚科信息技术有限公司 Generation method, device, block chain node and the storage medium of block chain
CN110334053A (en) * 2019-05-09 2019-10-15 哈尔滨理工大学 A kind of data based on block chain deposit card data processing method
CN111629063A (en) * 2020-05-29 2020-09-04 宁波富万信息科技有限公司 Block chain based distributed file downloading method and electronic equipment

Non-Patent Citations (1)

Title
GAO FEI: "Design of an automatic smart contract classification system based on blockchain technology", PLATEAU SCIENCE RESEARCH *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN113434471A (en) * 2021-06-24 2021-09-24 平安国际智慧城市科技股份有限公司 Data processing method, device, equipment and computer storage medium
CN116566995A (en) * 2023-07-10 2023-08-08 安徽中科晶格技术有限公司 Block chain data transmission method based on classification and clustering algorithm
CN116566995B (en) * 2023-07-10 2023-09-22 安徽中科晶格技术有限公司 Block chain data transmission method based on classification and clustering algorithm

Also Published As

Publication number Publication date
CN112182111B (en) 2022-07-19

Similar Documents

Publication Publication Date Title
US11762697B2 (en) Method and apparatus for scheduling resource for deep learning framework
CN108536650B (en) Method and device for generating gradient lifting tree model
US8572614B2 (en) Processing workloads using a processor hierarchy system
CN112182111B (en) Block chain based distributed system layered processing method and electronic equipment
US20230033019A1 (en) Data processing method and apparatus, computerreadable medium, and electronic device
CN111427971B (en) Business modeling method, device, system and medium for computer system
US20150195344A1 (en) Method and System for a Scheduled Map Executor
CN110852882A (en) Packet consensus method, apparatus, device, and medium for blockchain networks
CN111985831A (en) Scheduling method and device of cloud computing resources, computer equipment and storage medium
CN111611622A (en) Block chain-based file storage method and electronic equipment
CN111629063A (en) Block chain based distributed file downloading method and electronic equipment
CN111680799A (en) Method and apparatus for processing model parameters
CN110046670B (en) Feature vector dimension reduction method and device
CN116820714A (en) Scheduling method, device, equipment and storage medium of computing equipment
CN115730812A (en) Index information generation method and device, electronic equipment and readable medium
CN110716809A (en) Method and device for scheduling cloud resources
CN111951112A (en) Intelligent contract execution method based on block chain, terminal equipment and storage medium
CN114595047A (en) Batch task processing method and device
CN113760497A (en) Scheduling task configuration method and device
CN113760680A (en) Method and device for testing system pressure performance
CN114189518A (en) Communication method and communication device applied to computer cluster
CN109408716B (en) Method and device for pushing information
CN112547569A (en) Article sorting equipment control method, device, equipment and computer readable medium
CN110825920A (en) Data processing method and device
US11481130B2 (en) Method, electronic device and computer program product for processing operation commands

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220630

Address after: 255100 east section of Shuangshan Road, Zhonglou sub district office, Zichuan District, Zibo City, Shandong Province

Applicant after: Qilu yunshang Digital Technology Co.,Ltd.

Address before: Room 135, auxiliary building, 38 Hengjie South Road, Hengjie Town, Haishu District, Ningbo City, Zhejiang Province

Applicant before: Ningbo Golden Lion Technology Co.,Ltd.

GR01 Patent grant