WO2020062211A1 - Method and system for tamper-resistant log storage in a mimic storage system integrating blockchain technology - Google Patents
Method and system for tamper-resistant log storage in a mimic storage system integrating blockchain technology
- Publication number
- WO2020062211A1 (PCT/CN2018/109007)
- Authority
- WO
- WIPO (PCT)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/40—Network security protocols
Definitions
- The invention belongs to the field of Internet technology improvement, and particularly relates to a method and system for tamper-resistant log storage in a mimic storage system integrating blockchain technology.
- A distributed file system can effectively meet the enormous storage and computation demands of massive data, and offers advantages such as scalability and low cost.
- The growing demand of business systems for uninterrupted file storage can be addressed by expanding the storage capacity of the original system.
- A distributed file system is a basic application system built on single-machine operating systems and a computer network environment. It uses multiple servers to share the storage load, manages file information with global metadata, and relies on a large number of complex algorithms, such as consistency, fault-tolerance, and availability algorithms, to ensure data security.
- A distributed file system gives much consideration to data redundancy mechanisms, but for some core confidential data the security requirements are extremely demanding. How a distributed file system can continue to provide uninterrupted service in the face of intruder attacks is a problem that clearly cannot be solved with traditional data disaster-recovery solutions.
- The mimic distributed file storage system is an active defense against unknown vulnerabilities and backdoors in the system.
- In the current Internet environment, the core technologies and industrial foundation of China's information field lag seriously behind, and national security needs must be addressed as soon as possible. The security strategy of mimic security defense was proposed against this background.
- Mimic security defense emphasizes dynamically and pseudo-randomly selecting different software and hardware variants for execution under active and passive triggering conditions, using heterogeneity and diversity to break the similarity and uniformity of the system. The software and hardware execution environment that an attacker observes through vulnerabilities or backdoors inside or outside the system thus becomes highly uncertain, making it difficult to build an attack chain based on those vulnerabilities or backdoors and ultimately reducing the security risk of the system.
- The DHR defense mechanism derives from the innovative theory and technology of mimic security defense. Based on the DHR mechanism, the mimic distributed file system architecture uses a reliable access control mechanism, a dynamic defense transformation mechanism, a heterogeneous redundancy mechanism, and a blockchain log mechanism to identify and track unknown threats and to block and disrupt various attack methods, finally achieving the goal of effectively reducing system security risk.
- Log data is an indispensable feature of a complete storage system.
- The back-end structure of Internet systems has become complicated, and it is difficult for system administrators to go directly to a particular node to query log information. It is therefore necessary to design a dedicated log management system that helps administrators efficiently monitor system operation and find and handle abnormal conditions in time.
- Log data is generally divided into user access logs, application logs, and system logs.
- User access logs record user login and logout activities. This type of log is used to track and analyze user behavior and is often used for user data mining.
- The system log records system startup, shutdown, and failure information, which is critical to the system's operating status and security.
- Traditional logging systems do not take security into account, and their log information is easy to tamper with. Designing a safe and reliable log system therefore improves the security of the mimic storage system.
- Chukwa is an open-source data collection system for monitoring large distributed systems, developed by Apache. It is built on the HDFS and MapReduce frameworks and inherits Hadoop's excellent scalability and robustness. For data analysis, Chukwa provides a flexible and powerful set of tools for monitoring and analyzing results, so that the collected data can be put to better use. Its architecture is shown in Figure 1.
- the main components are:
- Adaptors: interfaces and tools for directly collecting data.
- One Agent can manage data collection for multiple Adaptors.
- Chukwa uses Agents to collect the data it is interested in.
- Each type of data is implemented by an Adaptor, and the data type (Data Model) is specified in the corresponding configuration.
- Chukwa's Agent uses a so-called 'watchdog' mechanism, which automatically restarts terminated data-collection processes to prevent the loss of raw data.
- Hadoop clusters are good at processing a small number of large files, but processing large numbers of small files is not their strength.
- Chukwa therefore introduces the Collector role, which partially merges data before writing it to the cluster, preventing the writing of large numbers of small files.
- Chukwa allows and encourages setting up multiple Collectors. Agents randomly select a Collector from the Collector list to transmit data; if one Collector fails or is busy, the next Collector is used instead, so that load balancing is achieved. Practice has shown that the load across multiple Collectors is almost even.
- Kafka is a messaging system originally developed at LinkedIn and used as the basis for LinkedIn's activity stream and operational data processing pipeline. It is now used by many different types of companies for multiple kinds of data pipelines and messaging systems. Its architecture is shown in Figure 2.
- A Kafka cluster contains one or more servers, called brokers.
- Topic: every message published to a Kafka cluster has a category, called its Topic. (Messages of different Topics are stored separately physically. Logically, although a Topic's messages are stored on one or more brokers, users only need to specify the Topic to produce or consume data, without caring where the data is stored.)
- Partition: a physical concept. Each Topic contains one or more Partitions.
- Consumer Group: each Consumer belongs to a specific Consumer Group (a group name can be specified for each Consumer; if none is specified, it belongs to the default group).
- A typical Kafka cluster contains several Producers (which can be page views generated by the web front end, server logs, system CPU or memory metrics, etc.), several brokers (Kafka supports horizontal scaling; generally, the more brokers, the higher the cluster throughput), several Consumer Groups, and a ZooKeeper cluster. Kafka uses ZooKeeper to manage the cluster configuration, elect leaders, and rebalance when a Consumer Group changes. Producers publish messages to brokers in push mode, and Consumers subscribe to and consume messages from brokers in pull mode.
- The purpose of the present invention is to provide a method for tamper-resistant log storage in a mimic storage system integrating blockchain technology, aiming to solve the technical problem that when a storage node storing logs fails or is maliciously attacked, data is easily lost or tampered with.
- The present invention is implemented as a method for tamper-resistant log storage integrating blockchain technology, which includes the following steps:
- S1. Collect the logs of each heterogeneous executor in the mimic storage system and convert them into logs in a standard format;
- S2. Send the converted standard logs to a blockchain network node, which encapsulates each log as a transaction;
- S3. The blockchain network node sends the transaction to the steward node, and the steward node stores the log in a pre-block;
- S4. The steward node sends the pre-block to the member nodes, and each member node verifies the pre-block and sends its signature to the steward node;
- S5. The steward node judges whether signatures from more than half of the members have been collected. If so, it publishes the formal block to all nodes and executes the next step; if not, it abandons publication and returns to step S4;
- S6. Each node synchronizes the received new block into its blockchain.
- The log collection in step S1 uses two message queues, UDP and TCP, and UDP or TCP is selected for transmission according to the size of the log data.
- A further technical solution of the present invention is that log query is divided into quick query and secure query. The quick query includes the following steps:
- The quick query database feeds back the query result to the query unit;
- S713. Verify in the blockchain the consistency between the log and its hash in the query result. If they are consistent, execute the next step; if not, return to step S712 and feed back the error information to the administrator;
- The quick query database returns the query result, and the consistency between the log and its hash in the result is verified. If consistent, execute the next step; if not, send a log query request to any blockchain network node and jump to S727;
- the query unit sends the log hash and block number as verification information to any blockchain network node;
- the blockchain network node that received the information forwards the verification information to all nodes in the blockchain network.
- All blockchain network nodes that receive the verification request check whether a log with the specified hash exists in the specified block. If it exists, they return verification-pass information to the query unit; if it does not exist, they return a verification-error message to the query unit.
- If the query unit receives verification-pass information from more than half of the blockchain network nodes within the valid time, it converts the standard log into the specified type of log, feeds it back to the administrator, and ends the query; otherwise, it sends a query request to any blockchain node;
- The blockchain network node that receives the query request forwards it to the blockchain network. All nodes that receive the query request look up the corresponding log in their own blockchain database and return the log to the query unit.
- the node performing the self-check sends a genesis block request to the blockchain network
- the blockchain network node that receives the request feeds back the genesis block to the database node;
- If the self-checking node receives the same block from more than half of the nodes within the valid time, it compares that block with the genesis block in its own database. If they are not the same, it replaces the genesis block in its database with the correct genesis block, and then, starting from the genesis block, verifies by hash values that each block in the blockchain is correct;
- Another object of the present invention is to provide a system for tamper-resistant log storage integrating blockchain technology, comprising:
- an acquisition and conversion module, for collecting the logs of each heterogeneous executor in the mimic storage system and converting them into logs in a standard format;
- an encapsulation module, for sending the converted standard logs to a blockchain network node, which encapsulates each log as a transaction;
- a node verification and signature sending module, by which the steward node sends the pre-block to the member nodes, and each member node verifies the pre-block and sends its signature to the steward node;
- a judging module, by which the steward node judges whether signatures from more than half of the members have been collected. If so, it publishes the formal block to all nodes and executes the next step; if not, it abandons publication and returns to the node verification and signature sending module;
- a storage module, for each node to synchronize the received new block into its blockchain;
- a query, self-check and repair module, for querying, self-checking and repairing the logs stored in the blockchain network.
- A further technical solution of the present invention is that the log collection in the acquisition and conversion module uses two message queues, UDP and TCP, and selects UDP or TCP for transmission according to the size of the log data.
- a further technical solution of the present invention is that the log query is divided into a quick query and a secure query, and the quick query includes:
- a quick request sending unit, for sending a query request through the query unit;
- a quick feedback unit, for the quick query database to feed back query results to the query unit;
- a quick judgment unit, for verifying in the blockchain the consistency between the log and its hash in the query result. If consistent, the next step is executed; if not, it returns to the quick feedback unit and feeds back the error information to the administrator;
- a quick conversion and return unit, for converting the queried standard logs into the specified type of logs and returning them to the administrator;
- the security query includes:
- a security request unit, for sending a query request to the quick query database through the query unit;
- a security judgment unit, for the quick query database to return query results and verify the consistency between the log and its hash. If consistent, the security verification unit is executed; if not, a log query request is sent to any blockchain network node and execution jumps to the secure block query return unit;
- a security verification unit, for the query unit to send the log hash and block number as verification information to any blockchain network node;
- a verification information forwarding unit, for the blockchain network node that received the information to forward the verification information to all nodes in the blockchain network;
- a security conversion log unit, for converting the standard log into the specified type of log and feeding it back to the administrator if the query unit receives verification-pass information from more than half of the blockchain network nodes within the valid time; otherwise, a query request is sent to any blockchain node;
- a secure block query return unit, for the blockchain network node that receives the query request to forward it to the blockchain network; all nodes that receive the query request look up the corresponding logs in their own blockchain database and return them to the query unit;
- an administrator unit, for converting the log and feeding it back to the administrator if the query unit receives the same log from more than half of the nodes within the valid time; otherwise, a query failure is returned to the administrator.
- the self-test and repair include:
- a genesis block request sending unit, for the node performing the self-check to send a genesis block request to the blockchain network;
- a node return unit, for the blockchain network nodes that received the request to feed back the genesis block to the self-checking node;
- a comparison unit, by which the self-checking node, if it receives the same block from more than half of the nodes within the valid time, compares that block with the genesis block in its own database; if they differ, it replaces the genesis block in the database with the correct one and then, starting from the genesis block, verifies the correctness of each block in the blockchain through hash values;
- an update unit, for synchronizing a block from the blockchain network and continuing verification if that block is found to be incorrect during the verification process. After verification is complete, if the node is also a quick query database node, the content of the database needs to be updated during the self-check synchronization process.
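The self-check and repair procedure described by these units can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the block fields, the SHA-256 hash, and the `fetch_correct_block` callback (standing in for requesting the correct block from other nodes) are all illustrative choices.

```python
import hashlib
from collections import Counter

def block_hash(block):
    # Hash the block number, previous hash, and payload (sketch format).
    payload = f"{block['number']}|{block['prev_hash']}|{block['data']}"
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(number, prev_hash, data):
    block = {'number': number, 'prev_hash': prev_hash, 'data': data}
    block['hash'] = block_hash(block)
    return block

def majority_genesis(responses, total_nodes):
    """Accept a genesis block only if more than half of the queried nodes
    returned the same block within the valid time; otherwise return None."""
    serialized, votes = Counter(responses).most_common(1)[0]
    return serialized if votes > total_nodes // 2 else None

def self_check(chain, fetch_correct_block):
    """Walk the chain from the genesis block. A block whose stored hash or
    prev_hash link is wrong is replaced via fetch_correct_block, and the
    verification then continues from the repaired block."""
    for i in range(1, len(chain)):
        valid = (chain[i]['hash'] == block_hash(chain[i])
                 and chain[i]['prev_hash'] == chain[i - 1]['hash'])
        if not valid:
            chain[i] = fetch_correct_block(i)
    return chain
```

Because every block links to the hash of its predecessor, a single tampered block invalidates the stored hash at its own position, so the walk localizes the damage and repairs only that block.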
- the beneficial effect of the present invention is that the blockchain is used as the log storage module of the mimic storage system to protect the logs generated by the mimic storage system from being tampered with.
- the blockchain uses a PoV consensus algorithm to ensure the consistency and security of the stored data.
- The communication complexity of the PoV block generation process is low, its performance is good, and its scalability is strong, making it convenient to increase or decrease the number of cluster nodes.
- Two query methods are used to provide quick query and secure query to meet different query requirements. Quick query interacts directly with the quick query database, while secure query needs to interact with the blockchain network and the quick query database to complete.
- The nodes in the blockchain network periodically perform self-checks and repairs. First, the correctness of the genesis block is determined by majority vote; then each block is checked through the blockchain's chain structure, and any incorrect block is repaired by requesting the correct block from other nodes in the network.
- Figure 1 is a schematic diagram of Chukwa's basic architecture.
- FIG. 2 is a schematic diagram of Kafka's basic architecture.
- FIG. 3 is a block chain log system architecture provided by an embodiment of the present invention.
- FIG. 4 is a log collection process provided by an embodiment of the present invention.
- FIG. 5 is a process of storing logs to a blockchain and quickly querying a database according to an embodiment of the present invention.
- FIG. 6 is a running process of a PoV algorithm according to an embodiment of the present invention.
- FIG. 7 shows two different query modes provided by the embodiment of the present invention.
- Log collection is responsible for collecting the log data of different modules. It can filter out invalid log data, transform the log format, and then randomly select a blockchain network node to publish the log data to. Random selection prevents blockchain nodes from processing too many requests at the same time, so log publishing achieves load balancing.
- the log query and analysis unit can query log records from the blockchain. It supports two types of query operations: fast query and secure query. The specific architecture is shown in Figure 3.
- Logs can come from a variety of sources, such as the logs of the various heterogeneous executors in the mimic storage system, the logs of the configuration manager, and so on.
- the log formats produced by various components are also different.
- the log collection unit collects logs generated by various log generation sources, and after filtering incorrect logs, performs format conversion on the correct logs, converts the logs into standard formats, and finally publishes them to the blockchain network for storage.
- Log collection can use either active collection or passive receiving. Multiple collection units can be set up, and each log source can send logs to any collection unit. The collection unit randomly sends each log to a blockchain network node, and the log is stored in the blockchain through the block generation process, as shown in Figure 4.
- The log collection unit includes two message queues, UDP and TCP. Selecting UDP or TCP for transmission according to the size of the log data increases communication efficiency and reduces network traffic. Because the blockchain network cannot guarantee that all received data is stored in the blockchain, the collection unit uses a cache-and-retransmission strategy. On startup, the collection unit registers a block subscription service with the blockchain network, so it receives every new block the network generates. When logs are sent, they are cached at the same time and a timeout period is set. When the collection unit receives a newly released block, it extracts the logs from the block and deletes the matching logs from the cache. If logs remain in the cache after the timeout, they are resent to the blockchain and the timeout is reset.
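The cache-and-retransmission strategy above can be sketched as follows. The class shape, the timeout value, and the `send` callback (standing in for publishing a log to a random blockchain node) are illustrative assumptions, not the patent's interfaces.

```python
import time

class LogCollector:
    """Sketch of the collection unit's cache-and-retransmit strategy."""

    def __init__(self, send, timeout=30.0):
        self.send = send          # publishes one log to a blockchain node
        self.timeout = timeout    # seconds before an unconfirmed log is resent
        self.cache = {}           # log -> time it was last (re)sent

    def publish(self, log):
        # Send the log and cache it at the same time, starting its timer.
        self.send(log)
        self.cache[log] = time.monotonic()

    def on_new_block(self, block_logs):
        # Logs that appear in a newly released block are confirmed on-chain,
        # so they are removed from the retransmission cache.
        for log in block_logs:
            self.cache.pop(log, None)

    def check_timeouts(self, now=None):
        # Any cached log not yet seen in a block after the timeout is resent
        # and its timer is reset.
        now = time.monotonic() if now is None else now
        for log, sent_at in list(self.cache.items()):
            if now - sent_at > self.timeout:
                self.send(log)
                self.cache[log] = now
```

In the patent's design the timeout would be tied to the steward rotation cycle described later, so that the cache is swept exactly once per rotation.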
- The standard log format should include information beyond the log content itself, including the type of the log, the application or service that generated it, and the node on which it was generated.
- the standard format is shown in the following table.
- The hash of the log is computed over the log in its standard format and is used for subsequent query verification and for duplicate checking when generating a block.
- The converted log content also contains the information that any type of log may carry, so standard logs can be converted losslessly back into the corresponding type of log.
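Computing the log hash over the standard-format log might look like the following sketch. The field names stand in for the patent's Table 1 (which is not reproduced in this text), and the canonical JSON serialization and SHA-256 are assumptions made for illustration.

```python
import hashlib
import json

def standard_log(log_type, app, node, content):
    # Illustrative standard-format fields: log type, generating application
    # or service, generating node, and the log content itself.
    return {'type': log_type, 'app': app, 'node': node, 'content': content}

def log_hash(log):
    # Serialize deterministically (sorted keys, fixed separators) so that
    # every node computes the same hash for the same standard log, then
    # hash it for query verification and duplicate detection.
    canonical = json.dumps(log, sort_keys=True, separators=(',', ':'))
    return hashlib.sha256(canonical.encode()).hexdigest()
```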
- The blockchain is essentially an append-only distributed storage system that only allows data to be added, not modified or deleted.
- Each node in the blockchain network has a copy of the same complete blockchain.
- data is also backed up by introducing redundant strategies.
- the blockchain guarantees the consistency of the data of each node through a consistency protocol.
- the consistency protocol is also called a consensus algorithm.
- The present invention selects the voting-based algorithm PoV (Proof of Vote) as the consistency algorithm of the blockchain network.
- The PoV consensus protocol specifies three types of identity nodes: steward candidate nodes, steward nodes, and member nodes. These special nodes jointly implement the consensus protocol to produce qualified blocks.
- the blockchain network implementing this protocol is equivalent to a state machine determined by the blockchain information, and each newly generated block will modify the state of the entire system.
- the state information maintained by the PoV protocol includes information such as the member list, the steward candidate list, the steward list, the next steward, and the start time of the next block generation.
- Member nodes are the highest-authority nodes and are equal to one another. They make decisions on consensus matters by joint voting.
- The steward nodes are responsible for generating blocks.
- The number of steward nodes is fixed; denote it as N_b.
- Steward nodes are elected by the votes of the member nodes, but not every node is eligible; only steward candidate nodes can be elected as stewards.
- Any non-candidate node can apply to become a steward candidate node and joins after more than half of the member nodes consent.
- Member nodes, which hold the highest decision-making power, can likewise be applied for by other nodes and joined with the consent of all members.
- A steward candidate node or a member node can withdraw from its identity at any time without the consent of the member nodes.
- All node identity change information is stored in blocks in the form of special transactions.
- When a node receives a newly issued block, it updates the node lists it maintains based on the information in these special transactions.
- Each member node selects a fixed-length voting list from all steward candidates according to their scores or its own preferences and sends it to all steward nodes.
- The on-duty steward (the steward responsible for generating the block) counts the votes after receiving the ballots of all members.
- The candidates with the highest vote counts become the next group of stewards; the number of stewards is determined by a transaction in the first block.
- The ballots, the voting result, and the ballot signatures are put into a transaction and placed in the pre-block. Finally, the pre-block is sent to all members for signature.
- Each steward has a fixed time limit for producing its pre-block. If no legal block is released within this time, the steward with the duty number +1 takes over producing the pre-block, and so on. In addition, each group of stewards serves for a limited term: after generating a certain number of blocks, the member nodes vote on the steward candidates again to elect new stewards. The time limit for each block and the number of blocks each steward group must generate are recorded in the first block. The duty number of the steward responsible for producing the pre-block is determined by the member signatures of the previous block; because the members' signatures are random, the steward that packs the next block is also random. The specific process is shown in Figure 6.
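The rotation rule above (a duty number derived from the random member signatures of the previous block, passed to the next steward on each timeout) can be sketched as follows. Deriving the base duty number from a hash of the concatenated signatures is an assumption; the patent only states that the choice follows from the signatures' randomness.

```python
import hashlib

def duty_steward(prev_block_signatures, num_stewards, timeouts=0):
    """Sketch of on-duty steward selection. The base duty number comes
    from the member signatures of the previous block (unpredictable in
    advance, since the signatures are random); each timeout without a
    legal block passes duty to the steward with number +1, modulo the
    fixed steward count N_b."""
    joined = ''.join(sorted(prev_block_signatures)).encode()
    base = hashlib.sha256(joined).digest()[0] % num_stewards
    return (base + timeouts) % num_stewards
```

Every node can evaluate this function locally over the same previous block, so all nodes agree on who is on duty without extra communication.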
- At the end of each steward term, a special block is generated that contains only the transaction selecting the next stewards and no other transactions. All functional transactions (including apply-to-become-member, withdraw-from-member, apply-to-become-steward-candidate, and withdraw-from-steward-candidate transactions) and ordinary transactions are packed into ordinary blocks. Ordinary transactions are user-defined transactions that can contain arbitrary content, so the blockchain can be used for multiple purposes; in this solution, the logs generated by the mimic system are stored in ordinary transactions.
- The generation of ordinary blocks is similar to that of special blocks. It starts when a steward node receives a newly released block and finds that it has become the on-duty steward; the node then waits for a delay time before starting to produce the pre-block. This delay time is also recorded in the first block. Setting a block generation delay effectively slows down block production and increases the number of transactions packed per block; a properly chosen delay time increases the throughput of the blockchain network. The on-duty steward takes transactions from the transaction cache pools and stores them in the block: when an ordinary block is generated, all transactions in the functional transaction cache pool and a certain number of transactions in the ordinary transaction cache pool are stored in the block.
- After generating the pre-block, the steward node sends it to all members.
- The members verify the correctness of the block information, sign the block header, and return their signatures to the on-duty steward. After obtaining the signatures of more than half of the members, the on-duty steward puts the members' signatures into the block header and releases the block, using the time of the last member signature as the block generation time.
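The signature-collection rule above (release only after more than half of the members have signed, with the last signature's time as the block generation time) can be sketched as follows; the data shapes are illustrative assumptions, not the patent's wire format.

```python
def release_block(pre_block, signatures, num_members):
    """Sketch of the on-duty steward's release step. `signatures` maps
    a member id to a (signature, sign_time) pair. The block is released
    only if more than half of the member nodes have signed; the member
    signatures go into the header, and the latest signing time becomes
    the block generation time."""
    if len(signatures) <= num_members // 2:
        return None  # not enough signatures: the block is not released
    header = dict(pre_block['header'])
    header['member_signatures'] = {m: sig for m, (sig, _) in signatures.items()}
    header['timestamp'] = max(t for _, t in signatures.values())
    return {**pre_block, 'header': header}
```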
- The PoV blockchain network uses time division to limit the generation time of each block and the steward rotation time. Based on this feature, the invention sets the timeout retransmission period of the collection unit to the steward rotation cycle, because at each steward rotation the transactions in the cache are cleaned once, retrieving the data that has already been chained. If the query transaction period is set to the block generation period, the system can query a log promptly once new data is added to the blockchain, without causing network congestion through frequent polling operations.
- PoV achieves strong fault tolerance and high throughput performance through partial decentralization.
- The communication complexity of block generation is only O(3m), where m is the number of member nodes, so scalability is strong.
- There are no restrictions on ordinary nodes joining or leaving the blockchain network, and the joining and leaving of member nodes and steward candidate nodes are recorded in the blockchain as special transactions after certain procedures.
- Steward nodes are a fixed-size subset of the network. While reducing the communication burden, this also ensures to the greatest extent that logs are packed into blocks.
- the blockchain network provides block subscription services externally.
- When a node subscribes to the blockchain service, all nodes in the blockchain network add the node's IP to their subscription list.
- When a new block is released in the blockchain network, each node that received the newly issued block sends it to all nodes that have subscribed to the service.
- the quick query database node subscribes to a block service from the blockchain network. When the node receives a newly released block, it extracts data from the block and stores it in the quick query database.
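The subscription service and the quick query database's use of it can be sketched as follows. The callback-based subscription and the hash-indexed table are illustrative assumptions; the patent's Table 2 storage format is not reproduced in this text.

```python
class BlockchainNode:
    """Minimal sketch of a node offering the block subscription service."""

    def __init__(self):
        self.subscribers = []   # callbacks standing in for subscriber IPs

    def subscribe(self, callback):
        # In the patent, every network node adds the subscriber's IP to
        # its subscription list; here a callback plays that role.
        self.subscribers.append(callback)

    def release_block(self, block):
        # When a new block is released, forward it to every subscriber.
        for callback in self.subscribers:
            callback(block)

class QuickQueryDB:
    """Sketch of the quick query database node: it subscribes to the block
    service and indexes each log from newly released blocks by its hash."""

    def __init__(self):
        self.by_hash = {}

    def on_block(self, block):
        for tx in block['transactions']:
            self.by_hash[tx['log_hash']] = {'log': tx['log'],
                                            'block_number': block['number']}
```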
- the logs in the quick query database are stored in the following Table 2 format:
- Node failures in distributed storage are unavoidable.
- each node in the blockchain network needs periodic self-checks and simultaneous repairs.
- Queries can be performed either as quick queries through the quick query database or as secure queries through the blockchain network.
- The quick query database provides an efficient and fast query service, while querying through the blockchain network ensures the security of the obtained data.
- Since each blockchain network node stores a copy of the complete blockchain, in theory any node can provide data query services.
- However, the data of a single node is unreliable, and querying through a single node cannot ensure the correctness of the result; the present invention therefore ensures the correctness of the final query result by majority voting.
- The quick query and secure query processes are shown in Figure 7.
- the query unit sends a query request
- the query unit sends a query request to the quick query database
- The node receiving the verification information forwards it to all nodes in the network;
- All nodes receiving the verification information check whether a log with the specified hash exists in the corresponding block and return the result to the query unit;
- If more than half of the verifications pass, the standard log is converted into the specified type of log and returned to the administrator, ending the query; otherwise, a query request is sent to any blockchain network node;
- The node that receives the query request forwards it to the blockchain network. All nodes that receive the query request look up the corresponding log in their own blockchain database and return the log to the query unit;
- If the query unit receives the same log from more than half of the nodes before the timeout, it converts the log and returns it to the administrator; otherwise, a query failure is returned to the administrator.
- Quick queries focus more on efficiency and interact directly with the quick query database, while secure queries focus more on the correctness of the query results.
- The secure query process has two stages. The first stage queries the log from the quick query database and confirms it through the blockchain network. If the result is incorrect, a query request is sent to each node in the blockchain network, and a majority decision is made on the query result.
- The secure query can ensure a correct result as long as the number of invalid or malicious nodes does not exceed half. Because the logs stored in the database are formatted standard logs, the system needs to convert them into the required log types after the logs are returned to the querier.
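The two-stage secure query with a final majority decision can be sketched as follows. The thresholds follow the text above, but the function shapes (`verify_nodes` confirming a hash/block pair, `query_nodes` returning a log by hash) are illustrative assumptions, not the patent's interfaces.

```python
from collections import Counter

def secure_query(log_hash, quick_db, verify_nodes, query_nodes):
    """Stage 1: fetch the record from the quick query database and ask the
    blockchain nodes to confirm that the (hash, block number) pair exists
    on-chain; accept if more than half confirm. Stage 2: otherwise, query
    every node directly and accept the log returned by more than half."""
    record = quick_db.get(log_hash)
    if record is not None:
        confirmed = sum(1 for verify in verify_nodes
                        if verify(log_hash, record['block_number']))
        if confirmed > len(verify_nodes) // 2:
            return record['log']
    # Stage 2: majority decision over the answers of all queried nodes.
    answers = [a for a in (query(log_hash) for query in query_nodes)
               if a is not None]
    if not answers:
        return None
    log, votes = Counter(answers).most_common(1)[0]
    return log if votes > len(query_nodes) // 2 else None
```

Both thresholds use strict majorities, which is what gives the correctness guarantee stated above when fewer than half of the nodes are invalid or malicious.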
- the blockchain is used as the log storage module of the mimic storage system to protect the logs generated by the mimic storage system from tampering.
- the blockchain uses a PoV consensus algorithm to ensure the consistency and security of the stored data.
- the communication complexity of PoV's block generation process is low, the performance is good, and the scalability is strong, which is convenient to increase or decrease the number of cluster nodes.
- Two query methods are used to provide quick query and secure query to meet different query requirements.
- Quick query interacts directly with the quick query database, while secure query needs to interact with the blockchain network and the quick query database to complete.
- the nodes in the blockchain network periodically perform self-checks and repairs. First, the correctness of the genesis block is determined by a majority vote, and then each block is checked through the blockchain's chain structure; for any erroneous block, the correct block is requested from other nodes in the network for repair.
Abstract
A method for tamper-resistant log storage in a mimic storage system incorporating blockchain technology, comprising: S1, collecting the logs of each heterogeneous executor in the mimic storage system and converting them into logs of a standard format; S2, sending the converted standard logs to a blockchain network node, which packages the logs as transactions; S3, the blockchain network node sending the transactions to the butler nodes, a butler node storing the logs into a pre-block; S4, the butler node sending the pre-block to the commissioner nodes, the commissioner nodes verifying the pre-block and sending their signatures to the butler node; S5, the butler node determining whether the collected commissioner signatures exceed half; S6, each node synchronizing the received new block into its blockchain. The blockchain serves as the log storage module of the mimic storage system, protecting the mimic storage logs from tampering. The PoV algorithm used to produce blocks has low communication complexity, good performance, and strong scalability, making it easy to increase or decrease the number of cluster nodes.
Description
The invention belongs to the field of Internet technology improvement, and in particular relates to a method and system for tamper-resistant log storage in a mimic storage system incorporating blockchain technology.
Distributed file systems can effectively meet the enormous storage and computation demands of massive data, and offer advantages such as scalability and low cost. The ever-growing demand of business systems for uninterrupted file storage can be met by expanding the storage capacity of the original system. A distributed file system is a foundational application system built on single-machine operating systems and a computer network environment. It uses multiple servers to share the storage load, manages file information with global metadata, and employs a large number of sophisticated algorithms, such as consistency, fault-tolerance, and availability algorithms, to safeguard data.
A distributed file system uses the network as its transmission medium for data exchange, yet rarely has defenses against network attacks. It is extremely fragile when facing such attacks, risking data theft, loss of data integrity, and even the downing of the entire storage cluster. In addition, operating-system security problems such as viruses and trojans also put the data on a distributed file system at risk.
Beyond known attack methods, traditional distributed storage systems also lack good responses to unknown system vulnerabilities. As the carrier of a distributed file system, the operating system inevitably contains vulnerabilities. From the perspective of national information security, there are currently very few domestically produced operating systems available for commercial use; the available options are essentially all foreign products. Influenced by commercial factors or national strategy, some operating systems may have deliberately planted backdoors, leading to frequent vulnerability incidents on computer systems. Therefore, how to guarantee data security against both operating-system attacks and network attacks is an important problem that distributed file systems must consider going forward.
As a solution for massive data storage, distributed file systems devote much attention to data redundancy mechanisms. For some core confidential data, however, the security requirements are extremely demanding: how can a distributed file system continue to provide normal, uninterrupted service while under attack by intruders? Clearly this problem cannot be solved by traditional disaster-recovery solutions.
A mimic distributed file storage system adopts active defense measures against unknown vulnerabilities and backdoors in the system. In today's Internet environment, the core technologies and industrial foundations of China's information sector lag seriously behind, and there is a severe asymmetry between the cost of attacking and the cost of defending cyberspace; national security needs must be addressed as soon as possible. The security strategy of mimic security defense was proposed against this background. Mimic security defense emphasizes dynamically and pseudo-randomly selecting different software and hardware variants for execution under both active and passive trigger conditions, using heterogeneity and diversity to break the similarity and uniformity of the system. As a result, the software and hardware execution environment that an attacker observes through vulnerabilities or backdoors inside or outside the system becomes highly uncertain, making it difficult to construct an attack chain based on those vulnerabilities or backdoors and ultimately reducing the system's security risk. The DHR defense mechanism derives from this innovative theory and technology of mimic security defense. A mimic distributed file system architecture based on the DHR mechanism uses reliable access-control mechanisms, dynamic defensive transformation mechanisms, heterogeneous redundancy mechanisms, and a blockchain log mechanism to identify and trace unknown threats and to block and disrupt various attack methods, ultimately achieving the goal of effectively reducing the system's security risk.
At the same time, logging is an indispensable function of any complete storage system. With the development of big data, the back-end architectures of Internet systems have become complex, and it is difficult for system administrators to query log information directly on a particular node. A dedicated log management system is therefore needed to help administrators efficiently monitor system operation and promptly detect and handle abnormal conditions. In addition, with the development of data mining and machine learning, it has become popular for Internet companies to analyze user behavior from access logs and make product recommendations, so log data has attracted attention. Log data is usually divided into user access logs, application logs, and system logs. User access logs record user login and logout activity; this kind of log is used for tracking and analyzing user behavior, often for user data mining. System logs record system startup, shutdown, and fault information, and are vital to the system's operating state and security. Traditional logging systems, however, do not take security into account, and log information is easily tampered with. Designing a secure and reliable log system can therefore further improve the security of a mimic storage system.
Chukwa is an open-source data collection system developed by Apache for monitoring large distributed systems. It is built on top of HDFS and the Map/Reduce framework and inherits Hadoop's excellent scalability and robustness. For data analysis, Chukwa provides a flexible and powerful toolset for monitoring and analyzing results to make better use of the collected data. Its architecture is shown in Figure 1.
Its main components are:
1. Agents: collect the rawest data and send it to the Collectors.
2. Adaptors: interfaces and tools that collect data directly; one Agent can manage data collection by multiple Adaptors.
3. Collectors: gather the data sent by the Agents and periodically write it to the cluster.
4. Map/Reduce Jobs: started periodically; responsible for classifying, sorting, de-duplicating, and merging the data in the cluster.
5. HICC (Hadoop Infrastructure Care Center): responsible for data presentation.
At each data source (essentially every node in the cluster), Chukwa uses an Agent to collect the data it is interested in. Each kind of data is handled by an Adaptor, and the data type (Data Model) is specified in the corresponding configuration. To guard against failures of the collection-side Agent, Chukwa's Agent uses a so-called 'watchdog' mechanism that automatically restarts terminated collection processes to prevent the loss of raw data.
On the other hand, data collected more than once is automatically de-duplicated during Chukwa's data processing. Identical Agents can therefore be deployed on multiple machines for critical data, providing fault tolerance.
The data collected by the Agents is stored on a Hadoop cluster. Hadoop clusters are good at handling a small number of large files, while handling a large number of small files is not their strength. For this reason, Chukwa introduces the Collector role, which partially merges the data before writing it to the cluster, preventing the writing of large numbers of small files.
Furthermore, to prevent a Collector from becoming a performance bottleneck or a single point of failure, Chukwa allows and encourages multiple Collectors. Agents randomly select a Collector from the Collector list to transmit data; if a Collector fails or is busy, they switch to the next one, thereby achieving load balancing. Practice has shown that the load across multiple Collectors is nearly even.
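The Agent-side collector selection described above can be sketched as follows. This is an illustrative reconstruction of the policy, not Chukwa's actual code; the function and parameter names are assumptions:

```python
import random

def send_with_failover(record, collectors, try_send):
    """Send one record to a randomly chosen collector; on failure or
    busy status, fall back to the remaining collectors in turn.
    `try_send(collector, record)` returns True on success."""
    order = random.sample(collectors, len(collectors))  # random starting point balances load
    for collector in order:
        if try_send(collector, record):
            return collector                            # report which collector accepted it
    raise RuntimeError("all collectors failed or busy")
```

Because each Agent starts from a random collector, the load spreads roughly evenly across collectors, matching the observation in the text.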
The drawback of this log collection system is that it has no security mechanism and only a single read/write entry point; when the system is maliciously attacked, the log data is easily tampered with.
Kafka is a messaging system originally developed at LinkedIn as the foundation of LinkedIn's activity stream and operational data processing pipeline. It is now used by many companies of various kinds for multiple types of data pipelines and messaging systems. Its architecture is shown in Figure 2.
1. Broker: a Kafka cluster contains one or more servers, called brokers.
2. Topic: every message published to a Kafka cluster has a category, called its Topic. (Physically, messages of different Topics are stored separately; logically, although a Topic's messages are kept on one or more brokers, users only need to specify the Topic to produce or consume data, without caring where the data is stored.)
3. Partition: a physical concept; each Topic contains one or more Partitions.
4. Producer: responsible for publishing messages to a Kafka broker.
5. Consumer: a message consumer; a client that reads messages from a Kafka broker.
6. Consumer Group: each Consumer belongs to a particular Consumer Group (a group name can be specified for each Consumer; Consumers without a specified group name belong to the default group).
A typical Kafka cluster contains several Producers (which may be page views generated by web front ends, server logs, or system CPU and memory metrics), several brokers (Kafka supports horizontal scaling; generally, the more brokers, the higher the cluster throughput), several Consumer Groups, and a Zookeeper cluster. Kafka uses Zookeeper to manage cluster configuration, elect leaders, and rebalance when Consumer Groups change. Producers publish messages to brokers in push mode, and Consumers subscribe to and consume messages from brokers in pull mode.
Using this messaging system as a log system introduces no redundancy: if some nodes fail, logs will be lost. Moreover, the system relies on Zookeeper for configuration and consistency; if a Zookeeper node is maliciously attacked, incorrect logs may be written into the system or existing logs tampered with.
Summary of the Invention
The purpose of the present invention is to provide a method for tamper-resistant log storage in a mimic storage system incorporating blockchain technology, aiming to solve the technical problem that when a storage node holding the logs fails or is maliciously attacked, the data is easily lost or tampered with.
The present invention is realized as follows: a method for tamper-resistant log storage in a mimic storage system incorporating blockchain technology, the method comprising the following steps:
S1. Collect the logs of each heterogeneous executor in the mimic storage system and convert them into logs of a standard format;
S2. Send the converted standard logs to a blockchain network node, which packages the logs as transactions;
S3. Send the transactions to the butler nodes; a butler node stores the logs into a pre-block;
S4. The butler node sends the pre-block to the commissioner nodes; the commissioner nodes verify the pre-block and send their signatures to the butler node;
S5. The butler node determines whether the collected commissioner signatures exceed half; if so, it publishes the formal block to all nodes and proceeds to the next step; if not, it abandons publication and returns to step S4;
S6. Each node synchronizes the received new block into its blockchain.
A further technical solution of the present invention is that the method further comprises the following step:
S7. Query, self-check, and repair the logs stored in the blockchain network.
A further technical solution of the present invention is that the log collection in step S1 includes two message queues, UDP and TCP, and UDP or TCP is selected for transmission according to the size of the log data.
A further technical solution of the present invention is that log queries are divided into fast queries and secure queries, the fast query comprising the following steps:
S711. Send a query request through the query unit;
S712. The fast-query database returns the query result to the query unit;
S713. Verify in the blockchain the consistency between the log and the hash in the query result; if consistent, proceed to the next step; if not, return to step S712 and report the error to the query administrator;
S714. Convert the retrieved standard log into a log of the specific type and return it to the administrator.
A further technical solution of the present invention is that the secure query comprises the following steps:
S721. Send a query request to the fast-query database through the query unit;
S722. The fast-query database returns the query result, and the consistency between the log and the hash in the result is checked; if consistent, proceed to the next step; if not, send a log query request to any blockchain network node and jump to S727;
S723. The query unit sends the log hash and the block number as verification information to any blockchain network node;
S724. The blockchain network node that receives this information forwards the verification information to all nodes in the blockchain network;
S725. All blockchain network nodes that receive the verification request check whether a log with the specified hash exists in the corresponding block; if it exists, they return verification success to the query unit; if not, they return a verification error to the query unit;
S726. If the query unit receives verification-success messages from more than half of the blockchain network nodes within the valid time, it converts the standard log into a log of the specified type, returns it to the administrator, and ends the query; otherwise it sends a query request to any blockchain node;
S727. The blockchain network node that receives the query request forwards it to the blockchain network; all nodes that receive the query request look up the corresponding log in their own blockchain databases and return the log to the query unit;
S728. If the query unit receives identical logs from more than half of the nodes within the valid time, it converts the log and returns it to the administrator; otherwise it reports query failure to the administrator.
A further technical solution of the present invention is that the self-check and repair comprises the following steps:
S731. The node performing the self-check sends a genesis-block request to the blockchain network;
S732. The blockchain network nodes that receive the request return the genesis block to the database node;
S733. If the self-checking node receives identical blocks from more than half of the nodes within the valid time, it compares that block with the block in its own database; if they differ, it replaces the genesis block in its database with the correct one, and then, starting from the genesis block, verifies via the hash values whether every block in the blockchain is correct;
S734. If any block proves incorrect during verification, the node synchronizes that block from the blockchain network and continues verifying until the check is complete; if the node is also a node of the fast-query database, the database contents must be updated during the self-check synchronization.
Another purpose of the present invention is to provide a system for tamper-resistant log storage in a mimic storage system incorporating blockchain technology, the system comprising:
a collection and conversion module for collecting the logs of each heterogeneous executor in the mimic storage system and converting them into logs of a standard format;
a packaging module for sending the converted standard logs to a blockchain network node, which packages the logs as transactions;
a storage module for sending the transactions to the butler nodes, a butler node storing the logs into a pre-block;
a node verification and signature sending module, by which the butler node sends the pre-block to the commissioner nodes, and the commissioner nodes verify the pre-block and send their signatures to the butler node;
a judgment module, by which the butler node determines whether the collected commissioner signatures exceed half; if so, the formal block is published to all nodes and the next step is executed; if not, publication is abandoned and control returns to the node verification and signature sending module;
a synchronization module, by which each node synchronizes the received new block into its blockchain.
A further technical solution of the present invention is that the system further comprises:
a query, self-check, and repair module for querying, self-checking, and repairing the logs stored in the blockchain network.
A further technical solution of the present invention is that the log collection in the collection and conversion module includes two message queues, UDP and TCP, and UDP or TCP is selected for transmission according to the size of the log data.
A further technical solution of the present invention is that log queries are divided into fast queries and secure queries, the fast query comprising:
a fast request sending unit for sending a query request through the query unit;
a fast feedback unit, by which the fast-query database returns the query result to the query unit;
a fast judgment unit for verifying in the blockchain the consistency between the log and the hash in the query result; if consistent, the next unit is executed; if not, control returns to the fast feedback unit and the error is reported to the query administrator;
a fast conversion and return unit for converting the retrieved standard log into a log of the specific type and returning it to the administrator;
the secure query comprising:
a secure request unit for sending a query request to the fast-query database through the query unit;
a secure judgment unit, by which the fast-query database returns the query result and the consistency between the log and the hash in the result is checked; if consistent, the secure verification unit is executed; if not, a log query request is sent to any blockchain network node and control jumps to the secure block query return unit;
a secure verification unit, by which the query unit sends the log hash and the block number as verification information to any blockchain network node;
a verification information forwarding unit, by which the blockchain network node that receives the information forwards the verification information to all nodes in the blockchain network;
a secure result return unit, by which all blockchain network nodes that receive the verification request check whether a log with the specified hash exists in the corresponding block; if it exists, verification success is returned to the query unit; if not, a verification error is returned to the query unit;
a secure log conversion unit, by which, if the query unit receives verification-success messages from more than half of the blockchain network nodes within the valid time, the standard log is converted into a log of the specified type and returned to the administrator, ending the query; otherwise a query request is sent to any blockchain node;
a secure block query return unit, by which the blockchain network node that receives the query request forwards it to the blockchain network, and all nodes that receive the query request look up the corresponding log in their own blockchain databases and return it to the query unit;
an administrator return unit, by which, if the query unit receives identical logs from more than half of the nodes within the valid time, the log is converted and returned to the administrator; otherwise query failure is returned to the administrator;
the self-check and repair comprising:
a genesis block request sending unit, by which the self-checking node sends a genesis-block request to the blockchain network;
a node return unit, by which the blockchain network nodes that receive the request return the genesis block to the database node;
a comparison unit, by which, if the self-checking node receives identical blocks from more than half of the nodes within the valid time, it compares that block with the block in its own database; if they differ, the genesis block in the database is replaced with the correct one, and then, starting from the genesis block, every block in the blockchain is verified via the hash values;
an update unit, by which, if any block proves incorrect during verification, that block is synchronized from the blockchain network and verification continues until the check is complete; if the node is also a node of the fast-query database, the database contents are updated during the self-check synchronization.
The beneficial effects of the present invention are as follows. The blockchain serves as the log storage module of the mimic storage system, protecting the logs the system produces from tampering. The blockchain uses the PoV consensus algorithm to guarantee the consistency and security of the stored data; PoV's block-generation process has low communication complexity and good performance, along with strong scalability, making it easy to increase or decrease the number of cluster nodes. Two query methods are adopted, fast query and secure query, to meet different query needs: fast queries interact directly with the fast-query database, while secure queries require interaction with both the blockchain network and the fast-query database. The nodes in the blockchain network periodically self-check and repair: the correctness of the genesis block is first determined by majority vote, then every block is checked through the blockchain's chain structure, and for any erroneous block the correct block is requested from other nodes in the network for repair.
Figure 1 is a schematic diagram of the basic architecture of Chukwa.
Figure 2 is a schematic diagram of the basic architecture of Kafka.
Figure 3 shows the architecture of the blockchain log system provided by an embodiment of the present invention.
Figure 4 shows the log collection process provided by an embodiment of the present invention.
Figure 5 shows the process of storing logs into the blockchain and the fast-query database provided by an embodiment of the present invention.
Figure 6 shows the operation of the PoV algorithm provided by an embodiment of the present invention.
Figure 7 shows the two different query methods provided by an embodiment of the present invention.
The method for tamper-resistant log storage in a mimic storage system incorporating blockchain technology provided by the present invention is detailed as follows:
Overall architecture of the log module
Like traditional log systems, the proposed log system consists of three main parts: log collection, log storage, and log query. To improve the security of log storage, we use a blockchain network for storage; to improve query efficiency, we also add a fast-query database that provides efficient query service to the outside. The log collection unit gathers log data from different modules; it filters out invalid log data, transforms the log format, and then randomly selects a blockchain network node to publish the log data. To keep any one blockchain node from handling too many requests at once, log publication is load balanced. The log query and analysis unit can retrieve log records from the blockchain and supports two query operations, fast query and secure query. The overall architecture is shown in Figure 3.
Log collection unit
Logs can come from many sources, such as the logs of each heterogeneous executor in the mimic storage system and the logs of the configuration manager, and the log formats produced by different components differ. The log collection unit gathers the logs produced by each source, filters out incorrect logs, converts the correct logs into the standard format, and finally publishes them to the blockchain network for storage. Collection may be active or passive (receiving); multiple collection units may be deployed, and each log source may send its logs to any collection unit. The collection unit sends each log to a randomly chosen blockchain network node, and the log is stored in the blockchain through the block generation process, as shown in Figure 4.
The log collection unit contains two message queues, UDP and TCP, and chooses UDP or TCP according to the size of the log data, which improves communication efficiency and reduces network traffic. Because the blockchain network cannot guarantee that all received data will be stored in the blockchain, the collection unit adopts a cache-and-resend strategy. At startup, the collection unit registers a block subscription with the blockchain network, so it receives every new block the network produces. When sending logs, it simultaneously caches the sent logs and sets a timeout; when it receives a newly published block, it extracts the logs from the block and deletes the identical logs from the cache. Logs that remain in the cache past their timeout are resent to the blockchain and their timeout is reset.
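The cache-and-resend strategy above can be sketched roughly as follows. This is an illustrative sketch only: the class and method names, and the in-memory dict standing in for the cache, are assumptions:

```python
import time

class ResendCache:
    """Sketch of the collection unit's cache-and-resend policy: sent logs
    are cached with a deadline; logs seen in a newly published block are
    dropped from the cache; expired logs are resent and re-timed."""

    def __init__(self, timeout_s, publish):
        self.timeout_s = timeout_s
        self.publish = publish          # function that sends a log to a chain node
        self.pending = {}               # log_hash -> (log, deadline)

    def send(self, log_hash, log):
        self.publish(log)
        self.pending[log_hash] = (log, time.time() + self.timeout_s)

    def on_new_block(self, block_log_hashes):
        for h in block_log_hashes:      # these logs are now on-chain: no resend needed
            self.pending.pop(h, None)

    def resend_expired(self, now=None):
        now = time.time() if now is None else now
        for h, (log, deadline) in list(self.pending.items()):
            if now >= deadline:         # timed out: resend and reset the timer
                self.send(h, log)
```

As described later in the text, the timeout would naturally be set to the butler rotation period, since that is when cached transactions are cleaned against the chain.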
To facilitate later query and analysis, the standard log format must include information beyond the log content itself, including the log type, the application or service that produced the log, and the node that produced it. The standard format is shown in the table below.
Table 1. Log storage format
Field name | Data type | Notes |
Log hash | String | Hash of the log content |
Log type | Int | Type of the log |
Application/service | Int | Application or service that produced the log |
Node number | Int | Node that produced the log |
Creation time | String | Time the log was produced |
Log content | Object | Specific information of the log |
Here the log hash is the hash of the log after conversion to the standard format, for later query verification and for de-duplication during block generation. Each log has a one-to-one relationship with its source. The converted log content also covers all the information that any type of log may produce, so a standard log can be losslessly converted back into the corresponding log type.
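As an illustration, the hash field of a standard-format record could be computed as below. The patent does not specify the hash algorithm or the serialization, so SHA-256 over canonical JSON is an assumption, as are the field names:

```python
import hashlib
import json

def standard_log_hash(log_type, app_service, node_id, created_at, content):
    """Hash a standardized log record (sketch). Canonical JSON with sorted
    keys makes the hash deterministic regardless of field insertion order."""
    body = {
        "log_type": log_type,
        "app_service": app_service,
        "node_id": node_id,
        "created_at": created_at,
        "content": content,
    }
    canonical = json.dumps(body, sort_keys=True, ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

A deterministic hash over the standardized form is what allows both the later query verification and the de-duplication during block generation mentioned above.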
Log storage unit
A blockchain is essentially an append-only distributed storage system: data may only be added, never modified or deleted. Every node in the blockchain network holds an identical copy of the complete blockchain, so the blockchain also backs up data by introducing redundancy. The blockchain guarantees the consistency of the nodes' data through a consistency protocol, also called a consensus algorithm. The present invention chooses a voting-based algorithm, PoV (Proof of Vote), as the consistency algorithm of the blockchain network. The storage process is shown in Figure 5.
The PoV consistency protocol defines three node roles: butler candidate nodes, butler nodes, and commissioner nodes. These three special kinds of node jointly execute the consistency protocol to produce valid blocks. A blockchain network implementing the protocol is equivalent to a state machine whose state is determined by the blockchain's information: every newly produced block modifies the state of the whole system. The state maintained by the PoV protocol includes the commissioner list, the butler candidate list, the butler list, the next duty butler, and the start time of the next block, among other information.
In a blockchain network using PoV, the commissioner nodes are the highest-authority nodes; commissioners are equal among themselves and make decisions on consensus matters by joint voting. Block generation is the responsibility of the butler nodes, whose number is fixed and denoted N_b. Butler nodes are elected by the votes of the commissioner nodes, but not every node can be elected: the nodes eligible for election are called butler candidate nodes, and any non-candidate node may apply to become a butler candidate, joining after more than half of the commissioner nodes agree. In addition, other nodes may also apply to become commissioner nodes, which hold the highest decision-making power, joining after all commissioners agree. Butler candidates and commissioners may relinquish their roles at any time without commissioner approval. All node identity changes are stored in blocks in the form of special transactions; upon receiving a newly published block, each node updates the node lists it maintains according to the information in the special transactions.
Procedure for applying to become a butler candidate:
1) Apply to any commissioner node for a letter of recommendation.
2) Send the recommendation letter to all commissioner nodes and obtain agreement replies and signatures from more than half of the commissioners.
3) Send the recommendation letter and the majority of commissioner signatures as a transaction to all butler nodes.
4) Wait for the block containing the transaction to be published.
Applying to become a commissioner node:
1) Request agreement signatures from all commissioner nodes.
2) Send the commissioner-application transaction containing all commissioner signatures to all butler nodes.
3) Wait for the block containing the transaction to be published.
Voting to elect the next term of butlers:
1) Each commissioner node selects a fixed-size voting list from all butler candidates according to their scores or its own preference, signs it, and sends it to all butler nodes.
2) After receiving the votes of all commissioners, the duty butler (the butler responsible for generating the block) tallies the votes; the butler candidates with the most votes become the next term of butlers, the number of butlers being determined by a transaction in the first block. The ballots, the election result, and the ballot signatures are then placed in a transaction in the pre-block, and finally the pre-block is sent to all commissioners with a request for signatures.
3) After receiving signatures from more than half of the commissioners, the duty butler publishes the block to the blockchain network.
In a PoV blockchain network, each butler has a fixed time limit for producing a pre-block; once that time passes without a valid block being published, the butler whose number is the current butler's number plus one becomes the duty butler and produces the pre-block, and so on. Moreover, each term of butlers serves for a limited time: after a term of butlers has generated a fixed number of blocks, the butler candidates must be voted on again to elect a new term of butlers. The upper time limit for producing each block and the number of blocks each term of butlers must produce are recorded in the first block. The number of the butler responsible for producing the pre-block is determined by the commissioner signatures of the previous block; because those signatures are random, the butler who packages the next block is also random. The specific flow is shown in Figure 6.
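The random duty-butler rotation could, for example, be realized as below. The text states only that the previous block's commissioner signatures determine the butler number, so hashing their concatenation modulo the butler count is one plausible reading, not the protocol's definitive rule:

```python
import hashlib

def next_duty_butler(prev_commissioner_sigs, n_butlers):
    """Derive the duty-butler index for the next block from the (random)
    commissioner signatures of the previous block (illustrative sketch).
    `prev_commissioner_sigs` is a list of signature byte strings."""
    digest = hashlib.sha256(b"".join(prev_commissioner_sigs)).digest()
    return int.from_bytes(digest, "big") % n_butlers
```

Since every node sees the same signatures in the published block, every node derives the same duty butler, while an attacker cannot predict it before the signatures exist.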
After generating B_w ordinary blocks, each term of butlers produces one special block that contains only the transaction electing the next term of butlers and no other transactions. All functional transactions (including commissioner-application, commissioner-exit, butler-candidate-application, and butler-candidate-exit transactions) and ordinary transactions are packaged in ordinary blocks. Ordinary transactions are user-defined and may contain arbitrary content, so the blockchain can be applied to many purposes; in this scheme, the logs produced by the mimic system are stored in ordinary transactions.
The generation of ordinary blocks is similar to that of special blocks. It begins when a butler node receives a newly published block and updates its own role to duty butler; the node then waits for a delay time before producing the pre-block. This delay is likewise recorded in the first block. Setting a block-generation delay effectively slows block production and increases the number of transactions packaged per block; an appropriate delay increases the throughput of the blockchain network. The duty butler fetches transactions from the transaction cache pools and places them in the block: for an ordinary block, it places all transactions from the functional-transaction pool and a certain amount from the ordinary-transaction pool into the block; for a special block, it only tallies the votes in the voting pool and places the result into the block, producing an unsigned pre-block. The number of ordinary transactions per block has an upper limit, which can be adjusted manually to suit the system's needs; the other functional transactions are not produced in large numbers, so there is no need to limit their quantity. After generating the pre-block, the butler node sends it to all commissioners, who verify the correctness of the block information, sign the block header, and return their signatures to the duty butler. After obtaining signatures from more than half of the commissioners, the duty butler places the commissioner signatures into the block header and publishes the block, taking the time of the last commissioner signature as the block's creation time.
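The pre-block signing round can be sketched as follows. This is a minimal sketch: signature verification and networking are elided, and the function and field names are assumptions:

```python
def collect_and_publish(pre_block, commissioners, sign_fn):
    """Send the pre-block to every commissioner, collect signatures, and
    publish only if more than half signed. `sign_fn(commissioner, pre_block)`
    returns a signature, or None if the commissioner rejects the block."""
    sigs = {}
    for c in commissioners:
        sig = sign_fn(c, pre_block)
        if sig is not None:                  # commissioner approved the block
            sigs[c] = sig
    if len(sigs) > len(commissioners) // 2:  # strict majority required
        return {"block": pre_block, "signatures": sigs}  # formal block
    return None                              # abandon publication and retry
```

The strict majority (more than half, not at least half) mirrors step S5: with 2k commissioners, k signatures are not enough.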
The PoV blockchain network uses time slicing to bound the production time of each block and the butler rotation time. Exploiting this property, the present invention sets the collection unit's timeout-resend interval to the butler rotation period, because at each butler rotation the cached transactions are cleaned once, removing the data already on the chain. If the query-transaction period is set to the block production period, the system can query logs promptly whenever new data is added to the blockchain, without causing network congestion through frequent polling.
In terms of performance, PoV achieves strong fault tolerance and high throughput through partial decentralization. The communication complexity of generating a block is only O(3m), where m is the number of commissioner nodes, and scalability is strong: ordinary nodes may join or leave the blockchain network without restriction, while the joining and leaving of commissioner and butler-candidate nodes can, after certain steps, be recorded in the blockchain as special transactions.
When a node in the PoV blockchain network receives a submitted log, it forwards the log to all butler nodes, i.e., the bookkeeping nodes. The butler nodes are a subset of the network's nodes and fixed in number, which reduces the communication burden while maximizing the likelihood that the log is packaged into a block.
The blockchain network provides a block subscription service: when a node subscribes to the blockchain's service, all nodes in the blockchain network add that node's IP to their subscription lists. When a new block is published in the blockchain network, the nodes that receive the newly published block send it to every node subscribed to the service.
Because the data in a blockchain is stored inside blocks, query efficiency is low when the data volume is large. A fast-query database is therefore added to store the log data from the blocks, enabling rapid log queries. The fast-query database node subscribes to the block service of the blockchain network; when it receives a newly published block, it extracts the data from the block and stores it in the fast-query database. Logs in the fast-query database are stored in the format of Table 2 below:
Table 2. Log storage format
Field name | Data type | Notes |
Block number | Int | Height of the block containing the log |
Log hash | String | Hash of the log content |
Log type | Int | Type of the log |
Application/service | Int | Application or service that produced the log |
Node number | Int | Node that produced the log |
Creation time | String | Time the log was produced |
Log content | Object | Specific information of the log |
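The fast-query database node's behavior as a block subscriber can be sketched as follows, with an in-memory dict standing in for a real database (class and field names are assumptions):

```python
class FastQueryDB:
    """Sketch of the fast-query database node: it subscribes to newly
    published blocks and indexes each log by its hash, adding the block
    number column from Table 2 so later verification can cite the block."""

    def __init__(self):
        self.by_hash = {}

    def on_new_block(self, block_height, logs):
        # each log is a standard-format record that carries its own hash
        for log in logs:
            row = dict(log, block=block_height)  # add the block-number column
            self.by_hash[log["hash"]] = row

    def query(self, log_hash):
        return self.by_hash.get(log_hash)        # None when not yet indexed
```

Keeping the block number alongside each row is what lets the secure query later send (hash, block number) pairs to the blockchain nodes for verification.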
Node failures are unavoidable in distributed storage. To correct erroneous data promptly, every node in the blockchain network must periodically self-check and synchronize repairs.
Log query unit
Logs can be queried either quickly through the fast-query database or securely through the blockchain network. The fast-query database provides efficient, rapid query service, while querying through the blockchain network guarantees the security of the retrieved data. Although every blockchain network node stores a complete copy of the blockchain and could in theory provide query service, the data of a single node is not trustworthy: querying a single node cannot guarantee the correctness of the result. The present invention therefore ensures the correctness of the final query result by majority vote. The ordinary and secure query processes are shown in Figure 7.
Ordinary query flow:
(1) The query unit sends a query request;
(2) The fast-query database returns the query result;
(3) The consistency between the log and its hash is verified;
(4) The standard log is converted into a log of the specific type;
(5) The log is returned to the administrator.
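The ordinary query flow above can be sketched as follows. The hash check assumes the hash field is SHA-256 of the log content; the algorithm is not specified in the source, and `to_type` stands in for the standard-to-specific-type converter:

```python
import hashlib

def fast_query(db, log_hash, to_type):
    """Ordinary-query sketch: fetch the record from the fast-query database
    (a dict here), re-hash the stored content to verify log/hash
    consistency, then convert the standard log to the requested type."""
    row = db.get(log_hash)
    if row is None:
        return None
    recomputed = hashlib.sha256(row["content"].encode("utf-8")).hexdigest()
    if recomputed != log_hash:  # record tampered with or corrupt
        raise ValueError("log/hash mismatch; fall back to secure query")
    return to_type(row)
```

The mismatch case is exactly where the secure query takes over, since the fast-query database alone cannot be trusted.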
Secure query flow:
(1) The query unit sends a query request to the fast-query database;
(2) The fast-query database returns the query result;
(3) The consistency between the log and its hash is verified;
(4) The hash and the block number are sent as verification information to any blockchain network node;
(5) The node that receives this information forwards the verification information to all nodes in the network;
(6) Every node that receives the verification information checks whether a log with the specified hash exists in the corresponding block, then returns the result to the query unit;
(7) If confirmation messages from more than half of the blockchain network nodes are received before the timeout, the standard log is converted into a log of the specified type and returned to the administrator, ending the query; otherwise a query request is sent to any blockchain network node;
(8) The node that receives the query request forwards it to the blockchain network; all nodes that receive the query request look up the corresponding log in their own blockchain databases and return it to the query unit;
(9) If the query unit receives identical logs from more than half of the nodes before the timeout, it converts the log and returns it to the administrator; otherwise query failure is returned to the administrator.
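The majority decision in steps (8) and (9) can be sketched as follows (an illustrative sketch; the function name is an assumption):

```python
from collections import Counter

def majority_log(responses, n_nodes):
    """Among the logs returned by blockchain nodes before the timeout,
    accept a log only if identical copies came from more than half of
    all n_nodes nodes; otherwise the secure query fails (returns None)."""
    if not responses:
        return None
    log, count = Counter(responses).most_common(1)[0]
    return log if count > n_nodes // 2 else None
```

Note the threshold is over the total node count, not over the responses received: silent nodes count against the majority, which is what makes the result safe when up to half the nodes are failed or malicious.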
An ordinary query emphasizes efficiency and interacts directly with the fast-query database, while a secure query emphasizes the correctness of the result and proceeds in two stages. In the first stage, the log is queried from the fast-query database and confirmed through the blockchain network; if the result is incorrect, the query request is sent to every node in the blockchain network and a majority decision is made on the query results. A secure query guarantees a correct result as long as the number of failed or malicious nodes does not exceed half of the blockchain. Because the logs stored in the database are formatted standard logs, the system must convert the standard log into the required log type before returning it to the querier.
Self-check and repair of the log system
Node failures are unavoidable in distributed storage. To correct erroneous data promptly, every node in the blockchain network must periodically self-check and synchronize repairs. We designed the following synchronization algorithm and run it periodically on the blockchain network nodes:
(1) Send a genesis-block request to the blockchain network;
(2) The nodes that receive the request return the genesis block to the database node;
(3) If identical blocks from more than half of the nodes are received before the timeout, compare that block with the block in the node's own database; if they differ, replace the genesis block in the database with the correct one, and then, starting from the genesis block, verify via the hash values whether every block in the blockchain is correct;
(4) If any block proves incorrect during verification, synchronize that block from the blockchain network and continue verifying until the check is complete. If the node is also a node of the fast-query database, the database contents must be updated during the self-check synchronization.
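The self-check algorithm above can be sketched as follows. Blocks are simplified to dicts with hash and prev_hash fields, and `fetch_block` stands in for requesting the correct block from other nodes; these representations are assumptions:

```python
from collections import Counter

def self_check(chain, genesis_votes, fetch_block, n_nodes):
    """Periodic self-check sketch: adopt the genesis block agreed on by
    more than half of all nodes, then walk the chain verifying each
    block's prev-hash link, fetching a replacement when a block is wrong.
    `genesis_votes` are the genesis blocks returned by other nodes;
    `fetch_block(i)` returns the correct block at height i."""
    genesis_hash, count = Counter(g["hash"] for g in genesis_votes).most_common(1)[0]
    if count <= n_nodes // 2:
        return None                       # no majority before timeout: abort
    if chain[0]["hash"] != genesis_hash:
        chain[0] = fetch_block(0)         # replace a wrong genesis block
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != chain[i - 1]["hash"]:
            chain[i] = fetch_block(i)     # repair, then keep verifying
    return chain
```

A real implementation would also re-verify each block's own hash and commissioner signatures, and update the fast-query database rows for any repaired blocks, as the text requires.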
The blockchain serves as the log storage module of the mimic storage system, protecting the logs the system produces from tampering. The blockchain uses the PoV consensus algorithm to guarantee the consistency and security of the stored data; PoV's block-generation process has low communication complexity and good performance, along with strong scalability, making it easy to increase or decrease the number of cluster nodes.
Two query methods are adopted, fast query and secure query, to meet different query needs: fast queries interact directly with the fast-query database, while secure queries require interaction with both the blockchain network and the fast-query database.
The nodes in the blockchain network periodically self-check and repair: the correctness of the genesis block is first determined by majority vote, then every block is checked through the blockchain's chain structure, and for any erroneous block the correct block is requested from other nodes in the network for repair.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (10)
- A method for tamper-resistant log storage in a mimic storage system incorporating blockchain technology, characterized in that the method comprises the following steps: S1, collecting the logs of each heterogeneous executor in the mimic storage system and converting them into logs of a standard format; S2, sending the converted standard logs to a blockchain network node, which packages the logs as transactions; S3, sending the transactions to the butler nodes, a butler node storing the logs into a pre-block; S4, the butler node sending the pre-block to the commissioner nodes, the commissioner nodes verifying the pre-block and sending their signatures to the butler node; S5, the butler node determining whether the collected commissioner signatures exceed half; if so, publishing the formal block to all nodes and executing the next step; if not, abandoning publication and returning to step S4; S6, each node synchronizing the received new block into its blockchain.
- The method for tamper-resistant log storage in a mimic storage system incorporating blockchain technology according to claim 1, characterized in that the method further comprises the following step: S7, querying, self-checking, and repairing the logs stored in the blockchain network.
- The method for tamper-resistant log storage in a mimic storage system incorporating blockchain technology according to claim 1, characterized in that the log collection in step S1 includes two message queues, UDP and TCP, and UDP or TCP is selected for transmission according to the size of the log data.
- The method for tamper-resistant log storage in a mimic storage system incorporating blockchain technology according to claim 3, characterized in that log queries are divided into fast queries and secure queries, the fast query comprising the following steps: S711, sending a query request through the query unit; S712, the fast-query database returning the query result to the query unit; S713, the query unit verifying the consistency between the log and the hash in the query result; if consistent, executing the next step; if not, returning to step S712 and reporting the error to the query administrator; S714, converting the retrieved standard log into a log of the specific type and returning it to the administrator.
- The method for tamper-resistant log storage in a mimic storage system incorporating blockchain technology according to claim 4, characterized in that the secure query comprises the following steps: S721, sending a query request to the fast-query database through the query unit; S722, the fast-query database returning the query result and the consistency between the log and the hash in the result being checked; if consistent, executing the next step; if not, sending a log query request to any blockchain network node and jumping to S727; S723, the query unit sending the hash of the log and the block number as verification information to any blockchain network node; S724, the blockchain network node that receives this information forwarding the verification information to all nodes in the blockchain network; S725, all blockchain network nodes that receive the verification request checking whether a log with the specified hash exists in the corresponding block; if it exists, returning verification success to the query unit; if not, returning a verification error to the query unit; S726, if the query unit receives verification-success messages from more than half of the blockchain network nodes within the valid time, converting the standard log into a log of the specified type, returning it to the administrator, and ending the query; otherwise sending a query request to any blockchain node; S727, the blockchain network node that receives the query request forwarding it to the blockchain network, all nodes that receive the query request looking up the corresponding log in their own blockchain databases and returning the log to the query unit; S728, if the query unit receives identical logs from more than half of the nodes within the valid time, converting the log and returning it to the administrator; otherwise reporting query failure to the administrator.
- The method for tamper-resistant log storage in a mimic storage system incorporating blockchain technology according to claim 5, characterized in that the self-check and repair comprises the following steps: S731, the node performing the self-check sending a genesis-block request to the blockchain network; S732, the blockchain network nodes that receive the request returning the genesis block to the database node; S733, if the self-checking node receives identical blocks from more than half of the nodes within the valid time, comparing that block with the block in its own database; if they differ, replacing the genesis block in the database with the correct one, and then, starting from the genesis block, verifying via the hash values whether every block in the blockchain is correct; S734, if any block proves incorrect during verification, synchronizing that block from the blockchain network and continuing verification until the check is complete; if the node is also a node of the fast-query database, updating the database contents during the self-check synchronization.
- A system for tamper-resistant log storage in a mimic storage system incorporating blockchain technology, characterized in that the system comprises: a collection and conversion module for collecting the logs of each heterogeneous executor in the mimic storage system and converting them into logs of a standard format; a packaging module for sending the converted standard logs to a blockchain network node, which packages the logs as transactions; a storage module for sending the transactions to the butler nodes, a butler node storing the logs into a pre-block; a node verification and signature sending module, by which the butler node sends the pre-block to the commissioner nodes, and the commissioner nodes verify the pre-block and send their signatures to the butler node; a judgment module, by which the butler node determines whether the collected commissioner signatures exceed half, and if so, publishes the formal block to all nodes and executes the next step, and if not, abandons publication and returns to the node verification and signature sending module; and a synchronization module, by which each blockchain network node synchronizes the received new block into its blockchain.
- The system for tamper-resistant log storage in a mimic storage system incorporating blockchain technology according to claim 7, characterized in that the system further comprises a query, self-check, and repair module for querying, self-checking, and repairing the logs stored in the blockchain network.
- The system for tamper-resistant log storage in a mimic storage system incorporating blockchain technology according to claim 8, characterized in that the log collection in the collection and conversion module includes two message queues, UDP and TCP, and UDP or TCP is selected for transmission according to the size of the log data.
- The system for tamper-resistant log storage in a mimic storage system incorporating blockchain technology according to claim 9, characterized in that log queries are divided into fast queries and secure queries, the fast query comprising: a fast request sending unit for sending a query request through the query unit; a fast feedback unit, by which the fast-query database returns the query result to the query unit; a fast judgment unit for verifying in the blockchain the consistency between the log and the hash in the query result; if consistent, the next unit is executed; if not, control returns to the fast feedback unit and the error is reported to the query administrator; a fast conversion and return unit for converting the retrieved standard log into a log of the specific type and returning it to the administrator; the secure query comprising: a secure request unit for sending a query request to the fast-query database through the query unit; a secure judgment unit, by which the fast-query database returns the query result and the consistency between the log and the hash in the result is checked; if consistent, the secure verification unit is executed; if not, a log query request is sent to any blockchain network node and control jumps to the secure block query return unit; a secure verification unit, by which the query unit sends the hash of the log and the block number as verification information to any blockchain network node; a verification information forwarding unit, by which the blockchain network node that receives the information forwards the verification information to all nodes in the blockchain network; a secure result return unit, by which all blockchain network nodes that receive the verification request check whether a log with the specified hash exists in the corresponding block; if it exists, verification success is returned to the query unit; if not, a verification error is returned to the query unit; a secure log conversion unit, by which, if the query unit receives verification-success messages from more than half of the blockchain network nodes within the valid time, the standard log is converted into a log of the specified type and returned to the administrator, ending the query; otherwise a query request is sent to any blockchain node; a secure block query return unit, by which the blockchain network node that receives the query request forwards it to the blockchain network, and all nodes that receive the query request look up the corresponding log in their own blockchain databases and return it to the query unit; an administrator return unit, by which, if the query unit receives identical logs from more than half of the nodes within the valid time, the log is converted and returned to the administrator; otherwise query failure is returned to the administrator; the self-check and repair comprising: a genesis block request sending unit, by which the self-checking node sends a genesis-block request to the blockchain network; a node return unit, by which the blockchain network nodes that receive the request return the genesis block to the database node; a comparison unit, by which, if the self-checking node receives identical blocks from more than half of the nodes within the valid time, it compares that block with the block in its own database; if they differ, the genesis block in the database is replaced with the correct one, and then, starting from the genesis block, every block in the blockchain is verified via the hash values; and an update unit, by which, if any block proves incorrect during verification, that block is synchronized from the blockchain network and verification continues until the check is complete; if the node is also a node of the fast-query database, the database contents are updated during the self-check synchronization.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/109007 WO2020062211A1 (zh) | 2018-09-30 | 2018-09-30 | 一种融合区块链技术拟态存储防篡改日志的方法及系统 |
CN201880093162.0A CN112313916B (zh) | 2018-09-30 | 2018-09-30 | 一种融合区块链技术拟态存储防篡改日志的方法及系统 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/109007 WO2020062211A1 (zh) | 2018-09-30 | 2018-09-30 | 一种融合区块链技术拟态存储防篡改日志的方法及系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020062211A1 true WO2020062211A1 (zh) | 2020-04-02 |
Family
ID=69952739
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/109007 WO2020062211A1 (zh) | 2018-09-30 | 2018-09-30 | 一种融合区块链技术拟态存储防篡改日志的方法及系统 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112313916B (zh) |
WO (1) | WO2020062211A1 (zh) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111414433A (zh) * | 2020-05-09 | 2020-07-14 | 北京阳光欣晴健康科技有限责任公司 | 基于区块链和密文检索技术的分布式随访系统 |
CN111414431A (zh) * | 2020-04-28 | 2020-07-14 | 武汉烽火技术服务有限公司 | 基于区块链技术的网络运维数据容灾备份管理方法及系统 |
CN111597168A (zh) * | 2020-05-20 | 2020-08-28 | 北京邮电大学 | 一种基于诚信值的区块链容量回收方案 |
CN111813070A (zh) * | 2020-09-11 | 2020-10-23 | 之江实验室 | 一种拟态工业控制器主控单元之间的数据分级同步方法 |
CN112214800A (zh) * | 2020-09-17 | 2021-01-12 | 新华三信息安全技术有限公司 | 基于区块链的日志数据分选存证方法及系统、设备、介质 |
CN112256808A (zh) * | 2020-11-13 | 2021-01-22 | 泰康保险集团股份有限公司 | 一种数据处理方法、设备及存储介质 |
CN112347491A (zh) * | 2020-09-24 | 2021-02-09 | 上海对外经贸大学 | 一种用于双中台双链架构的内生性数据安全交互的方法 |
CN112383407A (zh) * | 2020-09-22 | 2021-02-19 | 法信公证云(厦门)科技有限公司 | 一种基于区块链的在线公证全流程日志处理方法及系统 |
CN112448946A (zh) * | 2020-11-09 | 2021-03-05 | 北京工业大学 | 基于区块链的日志审计方法及装置 |
CN112883106A (zh) * | 2020-12-31 | 2021-06-01 | 北京百度网讯科技有限公司 | 一种区块链的出块节点确定方法、装置、设备和介质 |
CN112948898A (zh) * | 2021-03-31 | 2021-06-11 | 北京众享比特科技有限公司 | 一种区块链中防止应用数据被篡改的方法和安全模块 |
CN113051616A (zh) * | 2021-04-09 | 2021-06-29 | 张宇翔 | 一种提升区块链安全性的方法及系统 |
CN113259552A (zh) * | 2021-04-19 | 2021-08-13 | 北京麦哲科技有限公司 | 一种防偷窥护隐私的拍摄装置和方法 |
CN113409141A (zh) * | 2021-05-27 | 2021-09-17 | 航天信息江苏有限公司 | 基于区块链技术的面向粮食收储全流程可追溯的监管方法 |
CN113469376A (zh) * | 2021-05-20 | 2021-10-01 | 杭州趣链科技有限公司 | 基于区块链的联邦学习后门攻击的防御方法和装置 |
CN113535493A (zh) * | 2021-07-23 | 2021-10-22 | 北京天融信网络安全技术有限公司 | 拟态Web服务器裁决测试方法、装置、介质和设备 |
CN113792290A (zh) * | 2021-06-02 | 2021-12-14 | 国网河南省电力公司信息通信公司 | 拟态防御的裁决方法及调度系统 |
CN113901142A (zh) * | 2021-10-13 | 2022-01-07 | 辽宁大学 | 一种面向时空数据的区块链架构及范围查询处理方法 |
CN113973008A (zh) * | 2021-09-28 | 2022-01-25 | 佳源科技股份有限公司 | 基于拟态技术和机器学习的检测系统、方法、设备及介质 |
CN114301624A (zh) * | 2021-11-24 | 2022-04-08 | 天链(宁夏)数据科技有限公司 | 一种应用于金融业务的基于区块链的防篡改系统 |
CN114595205A (zh) * | 2021-11-29 | 2022-06-07 | 国网辽宁省电力有限公司大连供电公司 | 基于区块链的电力系统日志分区存储与检索验证方法 |
CN114860807A (zh) * | 2022-05-11 | 2022-08-05 | 金蝶软件(中国)有限公司 | 区块链的数据查询方法、装置、设备和存储介质 |
CN114915657A (zh) * | 2022-04-24 | 2022-08-16 | 中国人民解放军战略支援部队信息工程大学 | 基于OpenTracing规范的拟态应用分布式追踪方法 |
CN116015978A (zh) * | 2023-02-13 | 2023-04-25 | 中国南方电网有限责任公司 | 一种基于拟态安全技术的异构冗余流量检测系统 |
CN116028990A (zh) * | 2023-03-30 | 2023-04-28 | 中国科学技术大学 | 一种基于区块链的防篡改隐私保护日志审计方法 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113269557B (zh) * | 2021-05-31 | 2024-06-21 | 中国银行股份有限公司 | 一种交易日志采集系统及其工作方法 |
CN114756901B (zh) * | 2022-04-11 | 2022-12-13 | 敏于行(北京)科技有限公司 | 操作性风险监控方法及装置 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107239953A (zh) * | 2017-06-20 | 2017-10-10 | 无锡井通网络科技有限公司 | 基于区块链的快速数据存储方法及系统 |
CN107835080A (zh) * | 2017-11-09 | 2018-03-23 | 成都国盛天丰网络科技有限公司 | 一种分布式系统数据收集方法及数据签名生成方法 |
CN108306893A (zh) * | 2018-03-05 | 2018-07-20 | 北京大学深圳研究生院 | 一种自组网络的分布式入侵检测方法和系统 |
CN108494581A (zh) * | 2018-02-09 | 2018-09-04 | 孔泽 | Sdn网络的控制器分布式日志生成方法及装置 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105404701B (zh) * | 2015-12-31 | 2018-11-13 | 浙江图讯科技股份有限公司 | 一种基于对等网络的异构数据库同步方法 |
US20170264428A1 (en) * | 2016-03-08 | 2017-09-14 | Manifold Technology, Inc. | Data storage system with blockchain technology |
US20180157700A1 (en) * | 2016-12-06 | 2018-06-07 | International Business Machines Corporation | Storing and verifying event logs in a blockchain |
CN110287259A (zh) * | 2019-06-27 | 2019-09-27 | 浪潮卓数大数据产业发展有限公司 | 一种基于区块链的审计日志防篡改方法 |
-
2018
- 2018-09-30 CN CN201880093162.0A patent/CN112313916B/zh active Active
- 2018-09-30 WO PCT/CN2018/109007 patent/WO2020062211A1/zh active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107239953A (zh) * | 2017-06-20 | 2017-10-10 | 无锡井通网络科技有限公司 | 基于区块链的快速数据存储方法及系统 |
CN107835080A (zh) * | 2017-11-09 | 2018-03-23 | 成都国盛天丰网络科技有限公司 | 一种分布式系统数据收集方法及数据签名生成方法 |
CN108494581A (zh) * | 2018-02-09 | 2018-09-04 | 孔泽 | Sdn网络的控制器分布式日志生成方法及装置 |
CN108306893A (zh) * | 2018-03-05 | 2018-07-20 | 北京大学深圳研究生院 | 一种自组网络的分布式入侵检测方法和系统 |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111414431A (zh) * | 2020-04-28 | 2020-07-14 | 武汉烽火技术服务有限公司 | 基于区块链技术的网络运维数据容灾备份管理方法及系统 |
CN111414433A (zh) * | 2020-05-09 | 2020-07-14 | 北京阳光欣晴健康科技有限责任公司 | 基于区块链和密文检索技术的分布式随访系统 |
CN111597168A (zh) * | 2020-05-20 | 2020-08-28 | 北京邮电大学 | 一种基于诚信值的区块链容量回收方案 |
CN111813070A (zh) * | 2020-09-11 | 2020-10-23 | 之江实验室 | 一种拟态工业控制器主控单元之间的数据分级同步方法 |
CN112214800A (zh) * | 2020-09-17 | 2021-01-12 | 新华三信息安全技术有限公司 | 基于区块链的日志数据分选存证方法及系统、设备、介质 |
CN112383407B (zh) * | 2020-09-22 | 2023-05-12 | 法信公证云(厦门)科技有限公司 | 一种基于区块链的在线公证全流程日志处理方法及系统 |
CN112383407A (zh) * | 2020-09-22 | 2021-02-19 | 法信公证云(厦门)科技有限公司 | 一种基于区块链的在线公证全流程日志处理方法及系统 |
CN112347491A (zh) * | 2020-09-24 | 2021-02-09 | 上海对外经贸大学 | 一种用于双中台双链架构的内生性数据安全交互的方法 |
CN112448946B (zh) * | 2020-11-09 | 2024-03-19 | 北京工业大学 | 基于区块链的日志审计方法及装置 |
CN112448946A (zh) * | 2020-11-09 | 2021-03-05 | 北京工业大学 | 基于区块链的日志审计方法及装置 |
CN112256808A (zh) * | 2020-11-13 | 2021-01-22 | 泰康保险集团股份有限公司 | 一种数据处理方法、设备及存储介质 |
CN112256808B (zh) * | 2020-11-13 | 2023-09-12 | 泰康保险集团股份有限公司 | 一种数据处理方法、设备及存储介质 |
CN112883106B (zh) * | 2020-12-31 | 2024-02-13 | 北京百度网讯科技有限公司 | 一种区块链的出块节点确定方法、装置、设备和介质 |
CN112883106A (zh) * | 2020-12-31 | 2021-06-01 | 北京百度网讯科技有限公司 | 一种区块链的出块节点确定方法、装置、设备和介质 |
CN112948898A (zh) * | 2021-03-31 | 2021-06-11 | 北京众享比特科技有限公司 | 一种区块链中防止应用数据被篡改的方法和安全模块 |
CN113051616A (zh) * | 2021-04-09 | 2021-06-29 | 张宇翔 | 一种提升区块链安全性的方法及系统 |
CN113051616B (zh) * | 2021-04-09 | 2023-12-19 | 新疆量子通信技术有限公司 | 一种提升区块链安全性的方法及系统 |
CN113259552A (zh) * | 2021-04-19 | 2021-08-13 | 北京麦哲科技有限公司 | 一种防偷窥护隐私的拍摄装置和方法 |
CN113469376A (zh) * | 2021-05-20 | 2021-10-01 | 杭州趣链科技有限公司 | 基于区块链的联邦学习后门攻击的防御方法和装置 |
CN113409141A (zh) * | 2021-05-27 | 2021-09-17 | 航天信息江苏有限公司 | 基于区块链技术的面向粮食收储全流程可追溯的监管方法 |
CN113792290A (zh) * | 2021-06-02 | 2021-12-14 | 国网河南省电力公司信息通信公司 | 拟态防御的裁决方法及调度系统 |
CN113792290B (zh) * | 2021-06-02 | 2024-02-02 | 国网河南省电力公司信息通信公司 | 拟态防御的裁决方法及调度系统 |
CN113535493B (zh) * | 2021-07-23 | 2023-08-25 | 北京天融信网络安全技术有限公司 | 拟态Web服务器裁决测试方法、装置、介质和设备 |
CN113535493A (zh) * | 2021-07-23 | 2021-10-22 | 北京天融信网络安全技术有限公司 | 拟态Web服务器裁决测试方法、装置、介质和设备 |
CN113973008A (zh) * | 2021-09-28 | 2022-01-25 | 佳源科技股份有限公司 | 基于拟态技术和机器学习的检测系统、方法、设备及介质 |
CN113973008B (zh) * | 2021-09-28 | 2023-06-02 | 佳源科技股份有限公司 | 基于拟态技术和机器学习的检测系统、方法、设备及介质 |
CN113901142A (zh) * | 2021-10-13 | 2022-01-07 | 辽宁大学 | 一种面向时空数据的区块链架构及范围查询处理方法 |
CN113901142B (zh) * | 2021-10-13 | 2024-05-07 | 辽宁大学 | 一种面向时空数据的区块链架构及范围查询处理方法 |
CN114301624A (zh) * | 2021-11-24 | 2022-04-08 | 天链(宁夏)数据科技有限公司 | 一种应用于金融业务的基于区块链的防篡改系统 |
CN114595205A (zh) * | 2021-11-29 | 2022-06-07 | 国网辽宁省电力有限公司大连供电公司 | 基于区块链的电力系统日志分区存储与检索验证方法 |
CN114915657A (zh) * | 2022-04-24 | 2022-08-16 | 中国人民解放军战略支援部队信息工程大学 | 基于OpenTracing规范的拟态应用分布式追踪方法 |
CN114915657B (zh) * | 2022-04-24 | 2024-01-26 | 中国人民解放军战略支援部队信息工程大学 | 基于OpenTracing规范的拟态应用分布式追踪方法 |
CN114860807A (zh) * | 2022-05-11 | 2022-08-05 | 金蝶软件(中国)有限公司 | 区块链的数据查询方法、装置、设备和存储介质 |
CN116015978B (zh) * | 2023-02-13 | 2023-12-05 | 中国南方电网有限责任公司 | 一种基于拟态安全技术的异构冗余流量检测系统 |
CN116015978A (zh) * | 2023-02-13 | 2023-04-25 | 中国南方电网有限责任公司 | 一种基于拟态安全技术的异构冗余流量检测系统 |
CN116028990A (zh) * | 2023-03-30 | 2023-04-28 | 中国科学技术大学 | 一种基于区块链的防篡改隐私保护日志审计方法 |
Also Published As
Publication number | Publication date |
---|---|
CN112313916B (zh) | 2023-01-17 |
CN112313916A (zh) | 2021-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020062211A1 (zh) | 一种融合区块链技术拟态存储防篡改日志的方法及系统 | |
US8412733B1 (en) | Method for distributed RDSMS | |
CN101997823B (zh) | 一种分布式文件系统及其数据访问方法 | |
Goodhope et al. | Building LinkedIn's Real-time Activity Data Pipeline. | |
Riesen et al. | Alleviating scalability issues of checkpointing protocols | |
US11892976B2 (en) | Enhanced search performance using data model summaries stored in a remote data store | |
CN102902615A (zh) | 一种Lustre并行文件系统错误报警方法及其系统 | |
Essid et al. | Combining intrusion detection datasets using MapReduce | |
EP3349416B1 (en) | Relationship chain processing method and system, and storage medium | |
Gorton et al. | The medici integration framework: A platform for high performance data streaming applications | |
Qi et al. | Blockchain based consensus checking in cloud storage | |
US20230009460A1 (en) | Trail recording system and data verification method | |
Jiang et al. | MyStore: A high available distributed storage system for unstructured data | |
Ahmad et al. | Discrepancy detection in whole network provenance | |
Chun et al. | Design Considerations for Information Planes. | |
Silalahi et al. | A survey on logging in distributed system | |
Fang et al. | A blockchain consensus mechanism for marine data management system | |
Zhang et al. | BFT Consensus Algorithms | |
Lin et al. | An optimized multi-Paxos protocol with centralized failover mechanism for cloud storage applications | |
Sulkava | Building scalable and fault-tolerant software systems with Kafka | |
Shen | Distributed storage system model design in internet of things based on hash distribution | |
Wang et al. | Blockchain-Based Multi-Cloud Data Storage System Disaster Recovery | |
Tian et al. | Overview of Storage Architecture and Strategy of HDFS | |
Carchiolo et al. | ICs Manufacturing Workflow Assessment via Multiple Logs Analysis. | |
Köstler et al. | SmartStream: towards byzantine resilient data streaming |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18935682 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18935682 Country of ref document: EP Kind code of ref document: A1 |