CN110196885B - Cloud distributed real-time database system - Google Patents


Info

Publication number: CN110196885B
Application number: CN201910508447.2A
Authority: CN (China)
Prior art keywords: model, real-time, transaction, snapshot
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN110196885A
Inventors: 于全喜, 孔海斌, 高贞彦, 唐军沛, 姜雪梅, 王飞, 常晓萌, 吕秋霞, 刘春庆, 谭军光, 周志辉
Current assignee: Dongfang Electronics Co Ltd
Original assignee: Dongfang Electronics Co Ltd
Application filed by Dongfang Electronics Co Ltd
Priority to CN201910508447.2A; published as CN110196885A; granted and published as CN110196885B

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/1458 Management of the backup or restore process (under G06F 11/00 Error detection; error correction; monitoring; G06F 11/1446 Point-in-time backing up or restoration of persistent data)
    • G06F 11/3006 Monitoring arrangements specially adapted to a distributed computing system, e.g. networked systems, clusters, multiprocessor systems
    • G06F 16/22 Indexing; data structures therefor; storage structures
    • G06F 16/25 Integrating or interfacing systems involving database management systems
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
    • G06F 16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F 2201/80 Database-specific techniques (indexing scheme relating to error detection, error correction and monitoring)
    • G06F 2201/805 Real-time

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a cloud distributed real-time database system comprising a real-time database and a real-time service bus. The real-time database comprises a storage layer, a positioning layer and an interface layer in a distributed hierarchical structure. The storage layer stores a real-time model in a distributed NoSQL database; the real-time model corresponds to the relational model held in the relational database of the power grid system. The positioning layer stores fragment memory snapshots, including monitoring-model snapshots, numerical-section snapshots and transaction snapshots. The interface layer provides transactional components that create, access and process the fragment memory snapshots. The real-time service bus includes a model loading service, a fat client service and other services. Based on the CIM (common information model) grouping and fragmentation technology of the smart grid dispatching and control system, the system realizes real-time computation in fat client services and quasi-real-time access from thin clients, adds model checking and synchronization functions, and improves the concurrency control and horizontal scalability of the system.

Description

Cloud distributed real-time database system
Technical Field
The invention relates to a database system, in particular to a cloud distributed real-time database system for an intelligent power grid.
Background
The traditional distributed real-time database paired with fourth-generation SCADA/EMS systems is centered on a shared-memory cache, with monitoring service applications built outward layer by layer. Access interfaces for each cache layer are constructed with a hash-ring hierarchical index; the application service master node collects acquired data and operation instructions, performs computation and snapshot maintenance, and multicasts changed data to the standby nodes so they can keep their snapshots current.
This traditional real-time database has poor partition tolerance, and its capacity and performance are limited by the resources of a single node, so resource islands and information islands form easily. The relational database does not support distributed deployment or highly concurrent access, and once a NoSQL database is distributed it requires cross-network access and cannot sustain high-frequency access. A cloud distributed real-time database with flexible delivery and elastic capacity is therefore an urgent problem in the field.
Disclosure of Invention
The invention provides a cloud distributed real-time database system that solves the following technical problem: supplying the smart grid with a flexibly delivered, elastically scalable database that meets the high-speed, concurrent, real-time access requirements of a smart-grid cloud SCADA system.
The technical scheme of the invention is as follows:
a cloud distributed real-time database system comprises a real-time database and a real-time service bus;
the real-time database comprises a storage layer, a positioning layer and an interface layer of a distributed hierarchical structure;
the storage layer is based on a distributed NoSQL database and is used for constructing storage service; the NoSQL database is used for storing a real-time model, the real-time model corresponds to a relational model stored in a relational database of the power grid system, and the relational model is a CIM model;
the positioning layer is used for storing fragment memory snapshots, and the fragments are grouped based on a CIM (common information model); the memory snapshot is a multi-layer MAP container sharing pointers, and a monitoring model snapshot, a numerical value section snapshot, an index snapshot and the like are established according to a real-time model and a model change notice; the monitoring model in the monitoring model snapshot is the mapping of a CIM model, and the numerical section snapshot is used for storing the numerical value of the measuring point calculated by the power grid SCADA system;
the interface layer comprises a transaction type component, and the transaction type component is used for realizing the establishment, access and processing of the partitioned memory snapshot;
the real-time service bus is used for realizing a cloud monitoring function by calling the transaction type component of the interface layer.
Further: streaming computation is implemented on an elastic message queue;
the real-time service bus comprises a model loading service, and the interface layer comprises a Mempop component; the model loading service calls the Mempop component to perform full loading of the relational model into the real-time model and incremental processing of the real-time model;
full loading means that the Mempop component loads the entire relational model into the real-time model;
incremental processing works as follows: after full loading, the model maintenance transaction of the Mempop component subscribes to the flow monitoring model in the elastic message queue and caches the transaction maintenance primitives from the flow monitoring model into a transaction snapshot; the flow monitoring model is a streaming cache model based on the elastic message queue, and a transaction maintenance primitive, generated from the incremental maintenance log of the relational model, contains the information needed to incrementally update the real-time model; the model maintenance transaction then commits to the NoSQL database according to the primitive records in the transaction snapshot, completing the update of the real-time model.
Further: the transaction maintenance source language contains the following information: maintaining database names, table names, operation types, record contents, batch execution numbers and submission marks related to the logs, wherein the operation types comprise updating, inserting and deleting; in the same transaction maintenance source language, the batch execution number and the submission mark only comprise one;
when the model maintenance affair extracts the affair maintenance source language, whether the affair maintenance source language has batch execution numbers or a submission mark is judged: if the transaction maintenance source language has the commit mark, directly executing a commit operation after the transaction maintenance source language is put into the transaction snapshot, and committing all records in the transaction snapshot to the real-time model; if the transaction maintenance source language has the batch execution serial number, checking whether the record number of the transaction snapshot exceeds a preset record upper limit value, if not, storing the transaction maintenance source language in the transaction snapshot and then extracting the next transaction maintenance source language, and if so, storing the transaction maintenance source language in the transaction snapshot after performing one-time commit operation and then continuously extracting the next transaction maintenance source language; if neither the commit marker nor the batch execution number exists, then the next transaction maintenance source is continuously fetched.
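A minimal C++ sketch of the extraction loop above; the type and method names (`MaintPrimitive`, `TxnSnapshot`) are illustrative, not from the patent, and the NoSQL commit is stubbed out:

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <vector>

// A primitive carries at most one of: a batch execution number, a commit marker.
struct MaintPrimitive {
    std::string record;            // serialized maintenance record
    std::optional<long> batch_no;  // batch execution number, if any
    bool commit_mark = false;      // commit marker, if any
};

class TxnSnapshot {
public:
    explicit TxnSnapshot(size_t limit) : limit_(limit) {}

    // Consume one primitive; returns how many commit operations it triggered.
    int consume(const MaintPrimitive& p) {
        int commits = 0;
        if (p.commit_mark) {
            buf_.push_back(p.record);
            commits += commit();          // commit everything, including p
        } else if (p.batch_no) {
            if (buf_.size() >= limit_)    // snapshot full: commit first
                commits += commit();
            buf_.push_back(p.record);     // then store and fetch the next one
        } else {
            buf_.push_back(p.record);     // plain record: just buffer it
        }
        return commits;
    }

    size_t pending() const { return buf_.size(); }

private:
    int commit() { buf_.clear(); return 1; }  // stand-in for the NoSQL commit
    size_t limit_;
    std::vector<std::string> buf_;
};
```

With a record limit of 2, two buffered records plus a third batch-numbered primitive force one flush before the third is stored; a commit-marked primitive always flushes immediately.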
Further: the NoSQL database holds a key-value table, the lock state LMS, which records which records of the real-time model are currently referenced by transaction snapshots;
when the model maintenance transaction executes a commit operation, it reads the transaction maintenance primitives from the transaction snapshot in order; for each primitive it periodically checks the lock state LMS to determine whether the real-time model record referenced by the primitive is present in the LMS: if not, the record is added to the LMS and the information in the primitive is moved into a map container awaiting batch commit; if it is present, a deadlock-management flow is entered: the LMS is polled cyclically within a preset timeout period until the record is no longer associated with any primitive in a transaction snapshot, at which point the flow exits and the primitive's information is moved into the map container awaiting batch commit; if the record is still associated after the timeout period, a hard maintenance measure is taken, namely all data in the NoSQL database is deleted and then reinserted.
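An illustrative C++ sketch of the LMS lock check; the names and the attempt-count timeout (in place of the patent's wall-clock timeout period) are our assumptions, and the LMS is modeled as a local set rather than a table in the NoSQL database:

```cpp
#include <cassert>
#include <set>
#include <string>

enum class LockResult { Acquired, HardMaintain };

// Try to register `id` in the lock-state table `lms`; if another transaction
// holds it, poll up to `max_attempts` times, then fall back to hard maintenance.
LockResult acquire_lock(std::set<std::string>& lms, const std::string& id,
                        int max_attempts) {
    for (int i = 0; i < max_attempts; ++i) {
        if (lms.count(id) == 0) {      // record not locked by another transaction
            lms.insert(id);            // lock it for this transaction
            return LockResult::Acquired;
        }
        // Deadlock-management flow: in the real system the loop sleeps here,
        // then re-polls the LMS; another transaction may release the lock.
    }
    return LockResult::HardMaintain;   // timeout: delete all data, reinsert
}
```

A free record is acquired on the first poll; a record already present in the LMS exhausts the attempt budget and signals hard maintenance.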
Further: when the records in the map container are committed to the NoSQL database in batch, each record is traversed: records whose operation type is insert are loaded into an execution block (Bulk) of the NoSQL database and submitted for execution once after the traversal; records whose operation type is update or delete are executed singly;
after each commit execution, the transaction checks whether the commit succeeded:
if the commit fails, a rollback flow is entered: insert operations are unpacked from the Bulk execution block and re-executed one by one, while update or delete operations are resubmitted singly; if the resubmission also fails, the hard maintenance measure is taken, namely all data in the NoSQL database is deleted and then reinserted;
if the commit succeeds, or after the rollback flow has run, the associated records in the lock state LMS are cleared and the transaction snapshot is emptied (the batch execution number of its last record is saved before emptying); a model change notification is then published to the elastic message queue.
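The commit/rollback flow can be sketched as follows; the database calls are stubbed out through `std::function` parameters (our assumption, so the flow can be exercised without a real NoSQL client), and hard maintenance is reported via the return value:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

enum class Op { Insert, Update, Delete };
struct Record { Op op; std::string payload; };

// Returns true if everything committed; false means the hard maintenance
// measure (delete all data, then reinsert) is required.
bool commit_batch(const std::vector<Record>& recs,
                  const std::function<bool(const std::vector<Record>&)>& bulk_exec,
                  const std::function<bool(const Record&)>& single_exec) {
    std::vector<Record> bulk;
    for (const auto& r : recs) {
        if (r.op == Op::Insert) bulk.push_back(r);   // collect inserts for Bulk
        else if (!single_exec(r)) return false;      // update/delete run singly
    }
    if (bulk.empty() || bulk_exec(bulk)) return true; // one bulk submission
    for (const auto& r : bulk)                        // rollback: unpack Bulk
        if (!single_exec(r)) return false;            // still failing -> hard maintain
    return true;
}
```

When the bulk submission fails but single re-execution succeeds, the batch still completes; only a failing single execution escalates to hard maintenance.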
Further: the real-time service bus further comprises a fat client service, and the transactional component further comprises a Paramdb component and a Valuedb component;
when the fat client service starts, it searches the NoSQL database with its service tag as the input condition and extracts the fragments it belongs to, establishing the fragment memory snapshots of the fat client service, namely the monitoring-model snapshot, numerical-section snapshot, index snapshot and cluster snapshot;
the service tag is the identifier of the micro-service, i.e. of the fat client service itself, and determines the grouping of the CIM model;
the index snapshot is used for storing secondary index information of a data table in the NoSQL database;
the cluster snapshot is used for storing node information related to the microservice;
while the fat client service runs, the Paramdb and Valuedb components retrieve and maintain the fragment memory snapshots;
the Paramdb component integrates a snapshot maintenance transaction, which subscribes to the model change notifications published by the model maintenance transaction on the elastic message queue and maintains the fat client service's fragment memory snapshots accordingly;
the Valuedb component integrates a value processing transaction, which batch-maintains the collected and processed values using sub-snapshots of a specific-time-point view, carrying out retrieval, insertion, update and deletion on the numerical-section snapshot; the value processing transaction also writes changed values back to the NoSQL database at regular intervals and publishes value change notifications through the elastic message queue.
Further: the fat client service is deployed in master-standby mode, and only the value processing transaction of the master service writes changed values back to the NoSQL database.
Further: the real-time service bus also includes a thin client service for directly accessing the NoSQL database through a proxy mode.
Further: the system also comprises a model synchronization mechanism that synchronizes the real-time model to other databases of an isomorphic system or to cascaded sub-databases. Synchronization proceeds as follows: a full model synchronization is performed at a specified interval, and the synchronization target database is maintained incrementally in real time according to the model change notifications and value change notifications on the elastic message queue.
Further: the system also comprises a model checking mechanism for checking the consistency between the real-time model and the relational model.
The technical solution is further explained as follows:
the cloud distributed real-time database system comprises a distributed hierarchical structure and a streaming computing framework, provides real-time monitoring micro-services such as standard transaction type components, assembly model loading, thick clients, thin clients and the like, and realizes a cloud monitoring platform through a real-time service bus. The method is based on the CIM (common information model) grouping and slicing technology of the intelligent power grid regulation and control system, realizes the real-time calculation processing of the fat client side service and the quasi-real-time access of the thin client side, expands the functions of model checking and synchronization, and improves the concurrency control and the extension expansibility of the system.
The distributed hierarchical structure covers the component assembly and physical layering methods used to implement the cloud technology; the streaming computing framework covers the micro-service integration and running mode and the parallel computing flows for models and values. The models are the relational model (SQL or CIM), the real-time model (NoSQL) and the monitoring model (fragment memory snapshot); the values are the acquired measurement points and alarm data.
The transactional components encapsulate the message queue and the access interface of the NoSQL database: the model loading Mempop component integrates the model maintenance transaction, loads the grid CIM model fully or incrementally, and publishes change notifications; the model library Paramdb component integrates the snapshot maintenance transaction and manages the fragment memory snapshots; the value library Valuedb component integrates the value processing transaction and realizes snapshot retrieval and maintenance.
The assembly method of the distributed hierarchical structure adopts a reactor pattern supporting event-driven development; the message queue or NoSQL database is selected through a maintained configuration file, and the transactional components are assembled accordingly. The Mempop component provides one-way loading from the relational model to the real-time model; the Paramdb and Valuedb components are assembled into a composite component, the former providing the retrieval interface for fat client services and the latter the value processing interface.
In the physical layering of the distributed hierarchical structure, the storage layer builds the storage service on the message queue, NoSQL database, distributed file system and other facilities provided by the cloud service provider; the positioning layer is distributed across the micro-services, with the monitoring model designed on a multi-layer MAP container of a shared-pointer class, i.e. the in-process fragment memory snapshots such as the numerical-section snapshot, monitoring-model snapshot and cluster/index snapshots; the interface layer provides the access interfaces (IParamdb, IValuedb) and the management interface (IMempop) of the transactional components.
Further on the interface layer: the IMempop interface completes the relational model -> real-time model mapping and keeps the view of the global data consistent. The IParamdb interface retrieves the index snapshot and monitoring-model snapshot of the monitoring model, defines methods such as get, select and list_table, and uses a short-lived scanning Iterator to construct a specific-time-point view supporting high-frequency access. The IValuedb interface retrieves and maintains the index snapshot and numerical-section snapshot of the monitoring model, using the sub-snapshot of a specific-time-point view to perform atomic maintenance operations (retrieve, insert, update, delete) on the collected and processed values.
The micro-service integration and operation mode of the streaming computing framework packs the services of a message queue, a NoSQL database, model loading, a fat client, a thin client, model checking, model synchronization and the like into a mirror image, can use mirror image warehouse management, supports micro-service label management, and can use an open source or private task scheduler.
The parallel computing flow of the streaming computing framework realizes the distributed mapping among the relational model (SQL library), the real-time model (NoSQL library) and the monitoring model (micro-service memory) by passing transaction maintenance primitives as messages. It comprises a full-load flow: relational model -> Mempop full initialization -> real-time model; and an incremental flow: relational model maintenance -> incremental maintenance log -> incremental stream model -> Mempop incremental processing -> real-time model and change notification. When a fat client service starts, it initializes the micro-service's monitoring model; at run time, it retrieves or maintains the fragment memory snapshots through the IParamdb or IValuedb interface.
The positioning layer's "multi-layer MAP container of a shared-pointer class" has the structure map<key, shared_ptr<CLASS>>, where key is the ID of the elastic monitoring object (KObject, a JSON class with an integrated access interface), value is the KObject, and CLASS is KObject or an interface implementation class.
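A minimal C++ sketch of that structure; `KObject` here is a bare stand-in for the patent's JSON-based elastic monitoring object, and `register_object` is our illustrative helper. Because the snapshot layers hold `shared_ptr`s to the same object, an update made through one layer is visible through every other:

```cpp
#include <cassert>
#include <initializer_list>
#include <map>
#include <memory>
#include <string>

// Stand-in for the elastic monitoring object (a JSON class in the patent).
struct KObject {
    std::string id;
    double value = 0.0;
};

// One layer of the multi-layer MAP container: map<key, shared_ptr<CLASS>>.
using Layer = std::map<std::string, std::shared_ptr<KObject>>;

// Register one object in several snapshot layers; all layers share the
// same instance through the pointer rather than holding copies.
std::shared_ptr<KObject> register_object(const std::string& id,
                                         std::initializer_list<Layer*> layers) {
    auto obj = std::make_shared<KObject>();
    obj->id = id;
    for (Layer* l : layers) (*l)[id] = obj;
    return obj;
}
```

Updating a value through the numerical-section layer is immediately visible through the monitoring-model layer, since both entries point at the same KObject.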
The real-time service bus adopts the intermediary technologies of distributed plug and play directory service, distributed dynamic routers and the like to realize wide-area transparent access of the service.
The distributed plug-and-play directory service uses one fragment of the distributed real-time database to create a directory database, and stores and manages registered service information.
The fat client service supports C++ integration only; on initialization it registers with the real-time service bus, generates a directory service snapshot record, records its service name in the service data offset record table of the NoSQL library, and finally commits the successfully recorded offset to guarantee consistency across master/standby service switchover.
The thin client service adopts a stateless design and can directly access the NoSQL database through a proxy mode at any time, enabling client proxy access across the C++, Python and Java programming languages.
The model maintenance transaction of the transactional Mempop component uses model transaction management to coordinate with the transaction snapshot, covering flow-monitoring-model subscription, deadlock management, commit management and rollback management.
Further on the model maintenance transaction: in commit management, for the data in the transaction snapshot container, insert-type operations are executed as one batch commit while the other types are executed singly; on success, "transaction success feedback" is returned and a model change notification is published. Deadlock management stores the key of the record referenced by a transaction maintenance primitive in the lock state LMS (a key-value table in the NoSQL library, LMS = < record. When the id of the KObject being maintained is in the LMS with value 1, the KObject holds a transaction lock and other concurrent transactions maintaining that KObject must wait; when its id is not in the LMS, or its value is 0, the KObject holds no transaction lock (or the lock is invalid) and other concurrent transactions need not wait. Rollback management handles a failed batch-insert commit by converting the batch data into single-record transactions; for a single record whose commit still errors, the error log is thrown and the NoSQL library is hard-maintained (all data deleted, then reinserted). Finally, model change notifications are published as appropriate. As a concurrency control strategy, the model loading service hosting the transaction is deployed in master-standby mode and only the master service executes commit operations; after the master fails, one standby becomes the master node, subscribes to the transaction maintenance primitives from the recorded offset, and starts Mempop incremental operation. After the commit operation, snapshot maintenance transactions subscribe, by service tag, to the model change notification published by the model maintenance transaction and reset the fragment memory snapshot pointers to the new model.
The value processing transaction likewise provides only a commit operation: value changes are written back to the snapshot, and the changed value set is written back to the NoSQL library at regular intervals.
In the transactional Paramdb component, the snapshot maintenance transaction maintains the fragment memory snapshot under the OCC-BC protocol, and model retrieval is implemented in iterator style with methods such as single-key query, multi-key retrieval and conditional query.
In the transactional Valuedb component, the value processing transaction uses the sub-snapshot of a specific-time-point view to perform atomic maintenance operations (retrieve, insert, update, delete) on the collected and processed values.
The value maintenance transaction of the fat client master service publishes changed values to the standby service for processing after maintaining the snapshot; on master/standby switchover, the standby service becomes the master directly, and the original master, once restarted, serves as the standby.
Real-time model checking verifies the consistency of the real-time model using a network-model matching method, supporting checks such as direct table conversion and extended table conversion.
Real-time model synchronization synchronizes the model to other isomorphic or cascaded real-time database systems: the full model is generally synchronized at a specified date interval, while real-time synchronization relies on the model change notifications published by the maintenance transaction. Model change notifications also take the form of transaction maintenance primitives and are published, via a wide-area message queue, to the flow-monitoring-model topics of the other real-time databases.
The technical innovation points of the invention are mainly as follows:
(1) Transactional components and micro-service technology realize the distributed hierarchical structure and streaming computing framework, supporting DevOps development, operation and maintenance.
(2) The fragment memory snapshot is realized with multi-layer containers of a shared-pointer class, establishing the key-value mapping from the NoSQL library to the snapshot; only the two access interface classes (Paramdb and Valuedb) of the JSON-based elastic monitoring object are exposed, supporting component-assembly-style secondary development.
(3) Transaction maintenance primitives realize global model subscription and publication, integrating the incremental-log technology of the relational database, the JSON processing technology of the NoSQL database, smart-pointer technology, and the management of the relational, real-time and monitoring models.
(4) The transactional components shield the lack of transaction processing in some NoSQL databases, and exception data is surfaced through transaction processing, easing operation and maintenance.
Compared with the prior art, the invention has the following positive effects:
(1) Cloud distributed real-time data processing with elastically scalable capacity is supported, as is flexible selection and deployment of the message queue and NoSQL database; once the message queue is opened up, cloud expansion is straightforward.
(2) The real-time computing module of the fat client service accesses only the fragment memory snapshot, so business operation is unaffected by failures of the relational or NoSQL database; the fragment memory snapshot of the monitoring model supports high-frequency concurrent access for real-time computation; the NoSQL database holding the real-time model supports distributed deployment and integrates well with IaaS services; the relational model is developed in SQL, reducing model management cost.
(3) After the monitoring cloud ecosystem is expanded, only one relational database cluster needs to be deployed at the monitoring center while other cloud environments deploy only the NoSQL database, and even NoSQL databases of different types can be accommodated, reducing the management cost of the cloud SCADA system.
(4) The assembly technology simplifies secondary development, and the micro-service design and maintenance supports DevOps management; strong-consistency processing along the relational model -> real-time model -> monitoring model chain is supported; the value processing interface is simple, with strong usability and partition tolerance, supporting weak-consistency value processing.
Drawings
FIG. 1 is a schematic diagram of the distributed hierarchy of the invention. The invention adopts a distributed structure and describes the cloud technology integration framework layer by layer from bottom to top: an IaaS layer, a storage layer, a positioning layer and an interface layer, which provide, through the real-time service bus, component assembly, model management and related technologies, real-time data access support for the monitoring services of distributed front-end acquisition and parallel distributed SCADA (supervisory control and data acquisition) in the monitoring service platform.
FIG. 2 is the streaming computing framework of the invention, describing the conversion from the relational model to the real-time model, the conversion from the real-time model to the fragmented memory snapshot, and the model loading and value processing of the thick client monitoring services.
FIG. 3 is a simplified flow diagram of a model maintenance transaction, depicting the processing flow of the "model maintenance transaction" in the Mempop incremental component of the model loading service of FIG. 2, integrating deadlock management and batch commit NoSQL database operations.
FIG. 4 is a flow diagram of model maintenance transaction deadlock management.
FIG. 5 is a flow diagram of model maintenance transaction commit, rollback management.
Fig. 6 is a simplified flow chart of a snapshot maintenance transaction, which subscribes to the model change notification of fig. 2 and executes the OCC-BC protocol to maintain the fragmented memory snapshot.
Fig. 7 is a schematic diagram of concurrency control of numeric processing transactions, which describes a concurrency control policy of numeric processing transactions in the primary/backup/thick client service, and implements a weakly consistent numeric synchronization.
Detailed Description
The technical scheme of the invention is explained in detail below with reference to the accompanying drawings:
1. base definition
1) Model and value
The models of the invention are the real-time monitoring equipment model and the measuring point model, including the information of a typical substation CIM model. The values are the collected measuring point and alarm data, including the integer or floating-point values of remote measurement, remote signaling, remote control and remote regulation, and the alarms of related events.
The models are classified as follows:
Relational model: the power grid CIM model stored in a relational database.
Real-time model: key-value class models and values, stored in the NoSQL database.
Monitoring model: the model stored in the in-service fragmented memory snapshot.
Flow monitoring model: the incremental maintenance records of the models in the elastic message queue.
2) Elastic monitoring object
The elastic monitoring object KObject is a JSON class supporting a simple access interface. Each NoSQL database table maps to a KObject set; KObject can be implemented with nlohmann/json.
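As an illustrative sketch, the elastic monitoring object can be reduced to a thin wrapper over a key-value map; the class name KObject follows the text, while the flat string-to-string body (in place of a full nlohmann/json document) and the accessor names set/get are assumptions of this sketch:

```cpp
#include <map>
#include <string>

// Sketch of the elastic monitoring object KObject. The patent
// implements it with nlohmann/json; here a flat string-to-string
// map stands in for the JSON body (an assumption of this sketch),
// keeping only the simple access interface the text describes.
class KObject {
public:
    void set(const std::string& key, const std::string& value) {
        fields_[key] = value;
    }
    std::string get(const std::string& key) const {
        auto it = fields_.find(key);
        return it == fields_.end() ? std::string() : it->second;
    }
    // The primary key of the mapped relational table row.
    std::string id() const { return get("id"); }
private:
    std::map<std::string, std::string> fields_;
};
```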
3) Transaction maintenance primitive
A transaction maintenance primitive record is a JSON string with a fixed format; its data structure in this embodiment is as follows:
record
--------------------------------------
key | format | description | remark
database | string | database name | kemp
table | string | table name |
type | string | operation type | DML operation: update, insert, delete
data | string | latest record | one row of the relational table as a JSON KObject
offset | long | flow monitoring model offset | positive integer
ts | long | operation time (seconds) | 1542189172
xoffset | long | batch execution number | none (commit) or positive integer (maintenance count)
commit | string | commit instruction | none or true (last record, commit transaction)
old | string | original record (JSON) | update operations only
Only one of xoffset and commit may appear in a given transaction maintenance primitive record. During batch maintenance, xoffset increments as a positive integer counting the maintained records, and commit is the submission mark carried by the last record.
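The mutual exclusion between xoffset and commit can be sketched as a small validity check; the struct fields mirror the record table above, while the representation of "absent" (zero for xoffset, false for commit) and the function name record_valid are assumptions of the sketch:

```cpp
#include <string>

// Sketch of the mutual-exclusion rule for transaction maintenance
// primitive records: a record may carry xoffset (batch maintenance,
// positive integer) or commit (last record of the transaction), but
// never both. Field values are simplified to a long and a bool.
struct PrimitiveRecord {
    std::string database;    // e.g. "kemp"
    std::string table;
    std::string type;        // "update" | "insert" | "delete"
    long        offset = 0;  // flow monitoring model offset
    long        ts = 0;      // operation time (seconds)
    long        xoffset = 0; // 0 means "absent"
    bool        commit = false;
};

bool record_valid(const PrimitiveRecord& r) {
    bool has_xoffset = r.xoffset > 0;
    // at most one of {xoffset, commit} may be present
    return !(has_xoffset && r.commit);
}
```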
4) Locked state LMS
LMS(Lock Model Record) = <record.data.id, status>;
If record.data.id exists in the transaction snapshot, status = 1; otherwise the key is absent or status = 0. A key-value table is created in the NoSQL library to store the LMS.
5) Mapping to NoSQL library with respect to relational library table
Key-value library: <id, KObject>.
Document library: table = { [id, KObject::to_json()] }, id is the primary key.
Column library: table = { [first-level key-value pairs of KObject] }, id is the primary key.
6) Fragmented memory snapshot
The model fragments are groups of CIM models of the power grid, and the minimum unit is the CIM model of one transformer substation. A Slice-Snapshot (Slice-Snapshot) is a data view of any time point designed on the basis of a multilayer container sharing pointers, caches monitoring models or real-time numerical values, and has the following structure:
map<key, shared_ptr<CLASS(map<key, value>)>>
wherein key is the ID of the elastic monitoring object (KObject, a JSON class with an integrated access interface) and the primary key of the relational table, supporting an added prefix or a composite of multiple keys; value is a KObject; shared_ptr is a shared pointer; CLASS is KObject or an interface implementation class. The inner map is wrapped in a class whose destructor releases the inner map's memory.
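A minimal sketch of this container shape, assuming a plain string where the text has a KObject and naming the wrapper class InnerTable (an illustrative name); copying the outer map shares the inner tables through the shared pointers, which is what makes point-in-time views cheap:

```cpp
#include <map>
#include <memory>
#include <string>

// Sketch of the slice-snapshot container shape from the text:
//   map<key, shared_ptr<CLASS(map<key, value>)>>
// InnerTable plays the role of CLASS, wrapping the inner map; its
// memory is released when the last shared_ptr is dropped, matching
// the destructor note above. A string stands in for the KObject value.
using Value = std::string;

class InnerTable {
public:
    std::map<std::string, Value> rows;  // key -> monitoring object
};

using SliceSnapshot = std::map<std::string, std::shared_ptr<InnerTable>>;

// Two snapshot generations can share unchanged tables: copying the
// outer map copies only pointers, not table contents.
SliceSnapshot fork(const SliceSnapshot& s) { return s; }
```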
(1) The monitoring model snapshot of the positioning layer is converted from the public equipment, front-end acquisition, SCADA and authority management tables of the NoSQL library, with a two-layer structure:
a. monitoring model table class:
ParamTable = map<KObject.id, shared_ptr<KObject>>
b. monitoring model library classes:
Paramdb = map<table, shared_ptr<ParamTable>>
wherein, table is the name of the relational database model table; any row of records may construct a KObject whose primary key indicates the id.
(2) The numerical section snapshot of the positioning layer is converted from the point value table (value) and the event table (event), with a three-layer structure:
a. numerical record table class:
ValueRecord = shared_ptr<KObject>
b. numerical section table type:
ValueTable = map<point.id, shared_ptr<ValueRecord>>
c. numerical section library type:
Valuedb = map<table, shared_ptr<ValueTable>>
wherein, point is a measuring point object; table is the name of the relational database model table; any row of records may construct a KObject whose primary key indicates the id.
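A sketch of the three-layer structure under one simplifying assumption: the text nominally nests shared_ptr twice (ValueRecord is itself a shared_ptr inside a shared_ptr map), which is flattened here so each layer contributes one level of indirection; the helper get and the ValueRecord fields are illustrative:

```cpp
#include <map>
#include <memory>
#include <string>

// Sketch of the three-layer numerical section snapshot. ValueRecord
// holds one measuring point's value; point IDs and table names are
// strings. The field names value/ts are assumptions of this sketch.
struct ValueRecord {
    double value = 0.0;
    long   ts = 0;  // sample time (seconds)
};

using ValueTable = std::map<std::string, std::shared_ptr<ValueRecord>>;
using Valuedb    = std::map<std::string, std::shared_ptr<ValueTable>>;

std::shared_ptr<ValueRecord> get(const Valuedb& db,
                                 const std::string& table,
                                 const std::string& point_id) {
    auto t = db.find(table);
    if (t == db.end()) return nullptr;
    auto r = t->second->find(point_id);
    return r == t->second->end() ? nullptr : r->second;
}
```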
(3) Positioning layer cluster snapshot, index snapshot (NoSQL database secondary index information)
The cluster snapshot is a key-value mapping of microservice and node information, offset for fast location consumption, and other slice information. The index snapshot is a secondary index key value mapping in the NoSQL database and comprises a primary key ID expansion method, an elastic monitoring object ID array method, a key value cache method and the like:
a. primary key ID extension method:
the id of a KObject is a string linking the ids of several other objects with the '&' symbol; for example, the analog limit ana_limits associates the analog point ana_point, the seasonal division search and the interval type interval_type: ana_limits.id = ana_point.id & search.id & interval_type.id.
b. Elastic monitoring object ID array method:
Key-value class index structure: key = KObject.id, value = the set of ids of the objects (KObject-fk) associated with KObject through a foreign key; the structure of the index snapshot is as follows:
<KObject.id, [KObject-fk.id]>
c. key value caching method
Key value class index structure: map < key, KObject >
(4) Transaction snapshot TS (transaction Snapshot)
Constructed using MAP and VECTOR containers:
TS = map<table, vector<shared_ptr<KObject>>>
KObject=map<record.type,vector<record>>
wherein, the table is the table name of the relational database, and the vector caches the operation record container of the same kind of monitoring object.
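The MAP-and-VECTOR construction can be sketched as follows; the Record struct and the helper names cache/cached are illustrative stand-ins for the KObject records of the text:

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Sketch of the transaction snapshot TS: per relational table, a
// vector caches the maintenance records of the same kind of
// monitoring object until they are committed in bulk.
struct Record {
    std::string type;   // "insert" | "update" | "delete"
    std::string data;   // serialized row
};

using TransactionSnapshot =
    std::map<std::string, std::vector<std::shared_ptr<Record>>>;

void cache(TransactionSnapshot& ts, const std::string& table,
           const Record& r) {
    ts[table].push_back(std::make_shared<Record>(r));
}

std::size_t cached(const TransactionSnapshot& ts,
                   const std::string& table) {
    auto it = ts.find(table);
    return it == ts.end() ? 0 : it->second.size();
}
```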
7) Directory service snapshot
(1) Category record sheet type:
KNamingRecord = shared_ptr<KObject>
(2) directory service table class:
KNamingTable = map<service.name, shared_ptr<KNamingRecord>>
(3) directory database:
KNamingDatabase = map<table, shared_ptr<KNamingTable>>
wherein, the service is a client application service; table is directory service table name; the service information may construct a KObject whose primary key indicates the id.
The modeling method is similar to the numerical section snapshot: three tables (the name service table svcnaming, the distributed dynamic routing table svcroute and the service data offset record table svcoffset) are added in the NoSQL library, and their information can be retrieved via the cluster index.
8) Service label
The service label is the label information used during microservice integration; when implemented with a docker container it is a label parameter of docker run. In model-driven mode, the service label is one or more comma-separated substation names (station.name); in value-driven mode, the substation (station) model is parsed from the first piece of data collected by the service and filled into the service label.
2. Component and access interface
1) Monitoring model library component
When the Paramdb component retrieves the monitoring model, the fragmented memory snapshot is scanned short-term in Iterator style, constructing a monitoring service's data view at a specific time point. The methods of the IParamdb interface are:
(1) Single-key query: list_table() obtains the list of monitoring model table names; get(table_name, key) queries by table name and key.
(2) Multi-key retrieval: batch_get(table_name, vector<string>& keys); multi-key retrieval is a batch query over multiple key values, and it is the access pattern the monitoring service is oriented to.
(3) Conditional query: select(table_name, KObject& condition); the query condition maps to the index snapshot and supports only direct equality queries over the single-layer structure of the service's fragmented memory snapshot.
(4) Snapshot change events: add_list(list) registers a listener, del_list(list) unregisters a listener, notify_event(event) delivers a model change notification event.
2) Numerical section library component
The Valuedb component, which may also be called the "state library component", provides the IValueDb API to access the numerical section snapshot: get_table(table_name) obtains the specified state table; commit() submits the accumulated change values.
Valuedb caches ValueTable instances of the numerical section table class. The IValueTable interface is defined as follows: get(key) obtains the specified record; batch_get(keys) obtains the specified records in batch; select(condition) queries records by condition; put(key, value) sets the specified record, covering the original record if it exists in the table; update(key, value) updates the specified record, merging with it if it exists in the table. Valuedb cooperates with the value processing transaction component to complete concurrent access and maintenance of the monitored values.
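The put/update distinction of the IValueTable interface (cover versus merge) can be sketched like this, with flat field maps standing in for KObject values, an assumption of the sketch:

```cpp
#include <map>
#include <string>

// Sketch of the put/update semantics of the IValueTable interface:
// put replaces an existing record outright, while update merges the
// new fields into it. Records are flat field maps standing in for
// KObject values.
using Fields = std::map<std::string, std::string>;
using Table  = std::map<std::string, Fields>;

void put(Table& t, const std::string& key, const Fields& value) {
    t[key] = value;               // cover the original record
}

void update(Table& t, const std::string& key, const Fields& value) {
    Fields& rec = t[key];         // created empty if absent
    for (const auto& kv : value)  // merge: overwrite field by field
        rec[kv.first] = kv.second;
}
```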
3) Real-time model loading class component
The Mempop component interface IMempop defines the external API of the component:
a. pop(table_catalog): initial import of the relevant tables' data by table category.
b. add_pop(records): import of incremental-change transaction maintenance primitives.
c. The back-end agent interface implements single-key query, multi-key retrieval, conditional query and the other NoSQL access methods per the IParamdb interface, and adds NoSQL maintenance operations: put(key, value) sets the specified record, covering the original record if it exists in the table; update(key, value) updates the specified record, merging with it if it exists in the table; erase(key) deletes the record for the given key.
3. Transaction processing and concurrency control
1) Model maintenance transactions
The model maintenance transaction is the core module of Mempop incremental processing; its main flow is shown in fig. 3, extended by the deadlock management flow of fig. 4 and the transaction commit and rollback flow of fig. 5.
The model maintenance transaction subscribes to transaction maintenance primitives from the flow monitoring model of the elastic message queue and loads them into the transaction snapshot; a commit operation is performed once more than 1000 records of the same kind accumulate in the transaction snapshot, and is executed directly whenever the subscribed message contains a commit mark.
The commit operation relies on deadlock management and NoSQL library execution feedback, as follows:
a. take a record from the transaction snapshot;
b. periodically check the locked state LMS: if record.data.id is in the LMS, enter deadlock management; if not, store the record into the LMS and transfer it into the new map and vector containers;
c. check whether all records of the transaction snapshot have been processed; if not, return to step a;
d. submit the new maps in bulk to the NoSQL library.
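Steps a-d can be sketched as a single pass over the transaction snapshot; the LMS is reduced to a set of locked ids, and the actual bulk NoSQL submission is left out (the returned batches are what step d would submit):

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// Sketch of the commit pass: walk the transaction snapshot, route
// records whose id is already locked in the LMS to deadlock
// management, lock the rest, and collect per-type batches for one
// bulk submission to the NoSQL library (the NoSQL call is stubbed).
struct Rec { std::string id, type, data; };

struct CommitResult {
    std::map<std::string, std::vector<Rec>> batches; // type -> records
    std::vector<Rec> deadlocked;                     // to deadlock mgmt
};

CommitResult commit_pass(const std::vector<Rec>& snapshot,
                         std::set<std::string>& lms) {
    CommitResult out;
    for (const Rec& r : snapshot) {
        if (lms.count(r.id)) {
            out.deadlocked.push_back(r);  // step b: enter deadlock mgmt
        } else {
            lms.insert(r.id);             // step b: store into LMS
            out.batches[r.type].push_back(r);
        }
    }
    return out;                           // step d: bulk-commit batches
}
```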
Stale data left in the locked state LMS can make a transaction wait fruitlessly. Deadlock management detects this waiting state and performs timeout unlocking to avoid parallel-maintenance exceptions between the monitoring and real-time models; the flow of fig. 4 is:
a. deadlock detection checks whether the value for record.data.id exists in the LMS and equals 1; if the value is 0, the record is deleted; if 1, a timeout check follows;
b. without a timeout, go directly to step d;
c. on timeout, unlock the record and hard-maintain the NoSQL library (delete, then insert all data); then delete the record from the LMS;
d. jump back to the periodic check of the locked state LMS.
The transaction commit of fig. 5 pays attention to insert performance: inserts are bound into one execution block (Bulk) and committed at once; updates and deletes are committed as single executions. If a commit fails, the rollback flow is entered: a batch insert is converted into single executions; if the NoSQL commit still fails, the NoSQL library is hard-maintained (delete, then insert all data) and a maintenance log is recorded. On a successful batch or single commit, the associated records in the LMS are cleared, the records in the transaction snapshot TS are cleared, the offset of the last record is saved, and transaction success is reported after the change notification is published.
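The rollback path, batch insert degraded to single execution and then hard maintenance, can be sketched with the database calls stubbed as callbacks; this assumes inserts behave as idempotent puts (the text's put "covers the original record"), so re-running a partially successful batch record by record is safe:

```cpp
#include <functional>
#include <string>
#include <vector>

// Sketch of the commit/rollback flow: a failed bulk insert is
// retried as single executions; a record that still fails triggers
// the hard maintenance step (delete, then insert all data), stubbed
// here as a callback. try_insert models the NoSQL insert and
// reports success per record.
int commit_with_rollback(
        const std::vector<std::string>& batch,
        const std::function<bool(const std::string&)>& try_insert,
        const std::function<void(const std::string&)>& hard_maintain) {
    bool bulk_ok = true;
    for (const auto& r : batch)            // bulk execution block
        if (!try_insert(r)) { bulk_ok = false; break; }
    if (bulk_ok) return 0;                 // bulk commit succeeded
    int hard = 0;
    for (const auto& r : batch)            // rollback: single execution
        if (!try_insert(r)) { hard_maintain(r); ++hard; }
    return hard;                           // records needing hard repair
}
```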
The model loading service is deployed in a one-primary, multi-standby mode, and only the primary service may execute NoSQL commit operations. After a primary failure, one standby service becomes the primary node, subscribes to and assembles transaction maintenance primitives starting from the recorded offset of the flow monitoring model, and resumes Mempop incremental operation.
2) Snapshot maintenance transactions
The snapshot maintenance transaction is driven by the "model change notification" published by the model maintenance transaction of fig. 5 and maintains the fragmented memory snapshot of the thick client service. The transaction flow of fig. 6 focuses on the maintenance and reconstruction of the index snapshot and adopts the broadcast-commit optimistic concurrency control protocol OCC-BC (Broadcast Commit) to handle conflicts.
The thick client monitoring business service is deployed in service-drift mode or one-primary/one-standby mode. On initialization, the service subscribes to "model change notification" from the beginning; on restart, it loads the cluster snapshot, takes the offset recorded under its microservice name, and resumes maintaining its monitoring model snapshot; i.e., the primary and standby services each perform snapshot maintenance transactions independently.
The objects of the snapshot maintenance transaction are the entries of the monitoring model table class ParamTable: the monitoring model objects pointed to by the shared pointers are reset by key. Batch maintenance takes extremely little time, so actual conflicts are rare. The snapshot maintenance transaction uses the OCC protocol to check the query condition of the select method during model retrieval, throwing an exception when a concurrency conflict is detected in the validation phase; the conflict is resolved by a Reload method, i.e., the select abandons the currently retrieved model and reloads it from the NoSQL database.
The Reload() function sends a reload signal to the model retrieval thread, which restarts and re-invokes the select method. Thus, once a snapshot maintenance transaction reaches the validation phase, its commit is guaranteed.
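The validate-then-Reload loop can be sketched with a version counter standing in for the snapshot generation; the counter itself is an assumption of the sketch (the text detects conflicts on the select query condition), but the retry shape is the same:

```cpp
#include <functional>

// Sketch of OCC validation with Reload: a retrieval remembers the
// snapshot version it read; at validation, a changed version means a
// maintenance transaction committed in between, and the Reload path
// simply re-runs the select against the fresh snapshot.
struct Snapshot { long version = 0; int data = 0; };

int select_with_occ(Snapshot& s,
                    const std::function<void(Snapshot&)>& concurrent) {
    for (;;) {
        long seen = s.version;   // read phase
        int result = s.data;
        concurrent(s);           // maintenance may commit here
        if (s.version == seen)   // validation phase
            return result;
        // conflict: abandon the retrieved model and reload (retry)
    }
}
```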
3) Numerical processing transactions
The value processing transaction must support long-running operation. A "sub-snapshot" method, a view at a specific time point, is used for retrieval, insertion, update and deletion of the collected and processed values; the sub-snapshot is treated as an atomic whole and supports batched commit and rollback, reducing the consistency differences of concurrent access.
Thick client services deployed in one-primary, multi-standby mode call the value processing transaction for highly concurrent retrieval and maintenance of the numerical section snapshot; only the value processing transaction of the primary service writes back to the NoSQL library. Consistency between the numerical section snapshots of the services, and between each snapshot and the NoSQL library, cannot be guaranteed. Per the CAP theorem, the value processing transaction chooses AP: the key-value structure meets the requirements of availability and partition tolerance, while consistency requires only that the value delay of each service stay within a range imperceptible to the naked eye.
As shown in fig. 7, when the primary thick client service starts, the measuring point set is retrieved from the NoSQL library according to the service label and the numerical section snapshot is initialized; messages are processed by the "value processing transaction", which records the "change values" and performs the following: periodically maintain the "numerical section snapshot", publish it to the elastic message queue, and write it back to the NoSQL library. While the primary service runs, the value processing transaction records change values in a sub-snapshot; the sub-snapshot structure is a value table of the numerical section table class, providing a commit operation but no rollback mechanism.
The value processing transaction divides into maintenance transactions and value retrieval. While the primary client service runs a maintenance transaction, the value of any measuring point obtained by a retrieval transaction on a standby service must be consistent with the value in the primary service's fragmented memory snapshot.
(1) Maintaining transactions
The global availability flag of NoSQL is nosql_flag. When a maintenance transaction reaches the value processing transaction, the steps are:
a. initialize nosql_flag to 1 and initialize the memory cache memtable (key-value structure: map<measuring point ID, measuring point value object>);
b. write the change value into memtable;
c. build the "sub-snapshot" from memtable (iterate over memtable and put the entries into the ValueTable class);
d. a processor calls the commit operation of the ValueTable class every second: call ValueTable.batch_get with the key set of memtable to obtain the ValueRecord object set, then check nosql_flag. When nosql_flag is 1, the ValueRecord set covers the values for the keys in the client snapshot, maintains the NoSQL library (covering the values directly by key), publishes the change values, and commit success is returned. When nosql_flag is 0, meaning NoSQL operations are blocked or access has failed, the ValueRecord set covers the values in the client snapshot and publishes the change values, and commit failure is returned.
e. on commit failure, a real-time library operation failure alarm is raised and execution returns to step b; on commit success, memtable is cleared and execution returns to step b.
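Steps b-e can be sketched as a memtable-backed commit gated by nosql_flag; the maps standing in for the client snapshot and the NoSQL library, and the double used as a point value, are simplifications of this sketch:

```cpp
#include <map>
#include <string>

// Sketch of the per-second maintenance commit: changed values
// accumulate in memtable; when the NoSQL library is available
// (nosql_flag == 1) they are written back and the cache is cleared,
// otherwise they keep accumulating so the primary and standby
// snapshots stay consistent until access recovers.
struct ValueStore {
    std::map<std::string, double> snapshot;  // client slice snapshot
    std::map<std::string, double> nosql;     // stand-in NoSQL library
    std::map<std::string, double> memtable;  // cached change values
    int nosql_flag = 1;

    void write(const std::string& point, double v) { memtable[point] = v; }

    bool commit() {
        for (const auto& kv : memtable)
            snapshot[kv.first] = kv.second;  // always fix local snapshot
        if (nosql_flag != 1) return false;   // blocked: keep memtable
        for (const auto& kv : memtable)
            nosql[kv.first] = kv.second;     // write back by key
        memtable.clear();
        return true;
    }
};
```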
Real-time concurrency control of the maintenance transaction must solve two consistency problems:
a. the sub-snapshot ValueTable commits, corrects the service's own snapshot, and then the NoSQL library maintenance fails; this raises the consistency problem between the primary/standby snapshots and the records in the NoSQL library. It is solved by the memtable cache of change values inside the transaction: while NoSQL access is failing, memtable accumulates the changes and the values in the primary and standby snapshots remain consistent; when NoSQL access recovers, the data cached in memtable is written into the NoSQL library.
b. on primary/standby switchover, the consistency problem between the standby service's fragmented memory snapshot and the primary's snapshot is solved as follows: the first client service to start becomes the primary, initializes the numerical section snapshot from the NoSQL library, and starts the maintenance transaction of the value processing transaction.
Startup of a standby thick client service:
a. set the global variable nosql_flag to 0, blocking the standby service's maintenance transaction;
b. obtain the latest offset on the elastic message queue and write it back to the service data offset record table of the NoSQL library;
c. load the numerical section snapshot from the NoSQL library;
d. set the global variable nosql_flag to 1, write the primary service's memtable into the NoSQL library, and publish the change values to update the standby service's snapshot.
In one-primary/one-standby mode, migrating the standby service is equivalent to closing the standby client service on one node and starting it on another. When the primary service fails, or the primary/standby switchover function is executed, the standby service becomes the primary: the standby service receives the task scheduling command to switch to primary, blocks the change-value consumption thread on the standby service, and releases the standby service's maintenance transaction.
(2) Numerical value retrieval
The sub-snapshot method obtains the current numerical section of the models specified by a client service and can be viewed as conditional value retrieval (for example, retrieving the numerical sections of the related monitoring models by conditions such as bay or station). The main flow is:
a) query the numerical section snapshot with the given service label and condition parameters, start a timeout timer, and wait for the result;
b) if a correct response returns within the timer's timeout, return the client's corresponding sub-snapshot (numerical section table class ValueTable: the set of monitoring objects corresponding to the key set);
c) if no correct response returns within the timer's timeout, reinitialize the numerical section snapshot corresponding to the key set from the NoSQL database and return a correct response with the corresponding sub-snapshot (possibly empty).
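The timeout fallback of steps a)-c) can be sketched with the timer modeled as the boolean outcome of the snapshot query, a simplification of this sketch (the text uses a real timeout timer):

```cpp
#include <functional>
#include <map>
#include <string>

// Sketch of the value retrieval fallback: query the local section
// snapshot first; if no correct response arrives in time,
// reinitialize the requested keys from the NoSQL library.
using Section = std::map<std::string, double>;

Section retrieve(const std::function<bool(Section&)>& query_snapshot,
                 const std::function<Section()>& reload_from_nosql) {
    Section result;
    if (query_snapshot(result))  // correct response within the timeout
        return result;
    return reload_from_nosql();  // timeout: rebuild from NoSQL
}
```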
4. Real-time model checking
The relational model is checked by syntax rules and manual methods, and the real-time model check is performed on that basis. The real-time model consistency check verifies consistency between the real-time model and the relational model. The relational model converts to NoSQL in two modes: "direct table conversion" and "extended table conversion".
In direct-table-conversion checking, the full model is retrieved table by table from the relational model and processed into monitoring model table A; the full model of the corresponding table is then retrieved from the NoSQL library and processed into monitoring model table B; records traversed from A are matched by value against B: on a successful match the record is deleted from B, and on a failed match the record is published to the corresponding topic of the flow monitoring model.
The extended table is designed from the foreign-key associations of the relational tables, to simplify implementation of the front-end processor and the cloud SCADA within the monitoring service. The extended table can serve as the ID or query condition of other tables to quickly establish associations with them. Extended-table-conversion checking simplifies the record matching of direct-table-conversion checking to testing whether the key of a record from A is present in B; the other steps are unchanged.
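The record-matching procedure of the direct-conversion check can be sketched as follows; reducing records to id and serialized value, and returning the id list that would be published to the flow monitoring model topic, are assumptions of the sketch:

```cpp
#include <map>
#include <string>
#include <vector>

// Sketch of the direct-conversion consistency check: A is the full
// model rebuilt from the relational database, B the full model read
// back from the NoSQL library. Matching records are deleted from B;
// mismatched or missing records, plus anything left over in B, form
// the difference set to publish.
using Model = std::map<std::string, std::string>;  // id -> record value

std::vector<std::string> check(const Model& a, Model b) {
    std::vector<std::string> diffs;
    for (const auto& kv : a) {
        auto it = b.find(kv.first);
        if (it != b.end()) {
            if (it->second != kv.second)
                diffs.push_back(kv.first);  // value mismatch: publish
            b.erase(it);                    // handled either way
        } else {
            diffs.push_back(kv.first);      // missing from NoSQL
        }
    }
    for (const auto& kv : b)                // stale records only in B
        diffs.push_back(kv.first);
    return diffs;
}
```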
The openness principle of the real-time model is that the model is defined and used through the NoSQL library: the model change notification topic that the parent system's Mempop publishes to the wide-area message queue is used directly as the subsystem's "flow monitoring model". The openness check of the real-time model runs as follows:
a. using the simulated subsystem name as the client service label, extract a fragmented memory snapshot from the parent system's NoSQL library;
b. dump the data of the fragmented memory snapshot, record by record, into a JSON file;
c. publish the JSON file to the subsystem;
d. the subsystem parses the JSON file, checks the records against its NoSQL library, and obtains the difference item information;
e. check whether the difference items appear in the recent wide-area message queue; for the items that do not, generate a check log and publish it to the subsystem's flow monitoring model topic.
5. Real-time model synchronization
Model synchronization is the synchronization of the real-time model to the real-time databases of homogeneous systems (e.g., a backup system or a system across security zones) or to cascaded sub-real-time databases. The model synchronization of fig. 2 combines synchronizing the full model at a specified date interval with synchronizing the incremental maintenance models in real time.
Full backup files use the backup tool of the corresponding NoSQL database and can therefore only be used between identical NoSQL databases. The full JSON model of the sub-real-time library is a JSON document of the stored transaction maintenance primitives, extracted by domain-name label slices; it is compatible across different NoSQL databases and achieves full synchronization of an open model. The model loading service publishes model change notifications to the wide-area message queue, synchronizing the change models directly to the flow monitoring model topic of the target system.

Claims (9)

1. A cloud distributed real-time database system, characterized by: the system comprises a real-time database and a real-time service bus;
the real-time database comprises a storage layer, a positioning layer and an interface layer of a distributed hierarchical structure;
the storage layer builds storage service based on a distributed NoSQL database, a stream monitoring model and a distributed log cache file; the NoSQL database is used for storing a real-time model, the real-time model corresponds to a relational model stored in a relational database of the power grid system, and the relational model is a CIM model;
the positioning layer is used for storing fragment memory snapshots, and the fragments are grouped based on a CIM (common information model); the partitioned memory snapshot is a multilayer MAP container sharing pointers, and comprises a monitoring model snapshot and a numerical section snapshot which are established according to a real-time model and a model change notification, wherein the model change notification is a notification issued according to the change of a relational model; the monitoring model in the monitoring model snapshot is the mapping of a CIM model, and the numerical section snapshot is used for storing the numerical value of the measuring point calculated by the power grid SCADA system;
the interface layer comprises a transaction type component, and the transaction type component is used for realizing the establishment, access and processing of the partitioned memory snapshot;
the real-time service bus is used for realizing a cloud monitoring function by calling a transaction type component of the interface layer;
the cloud distributed real-time database system realizes stream computing based on an elastic message queue;
the real-time service bus comprises a model loading service, the interface layer comprises a Mempop component, and the model loading service calls the Mempop component to realize full loading of the relation model to the real-time model and incremental processing of the real-time model;
the full loading refers to that the Mempop component loads the relation model into the real-time model in full;
the incremental processing is as follows: after full loading, the model maintenance transaction of the Mempop component subscribes to the flow monitoring model in the elastic message queue, processes the transaction maintenance primitives in the flow monitoring model, and caches the results in a transaction snapshot; the flow monitoring model is a streaming cache model based on the elastic message queue, and the transaction maintenance primitive, generated from the incremental maintenance log of the relational model, contains the information for incrementally processing the real-time model; the model maintenance transaction commits to the NoSQL database according to the transaction maintenance primitive records in the transaction snapshot to complete the processing of the real-time model.
2. The cloud distributed real-time database system of claim 1, wherein: the transaction maintenance primitive contains the following information: the database name, table name, operation type, record content, batch execution number and commit mark related to the maintenance log, wherein the operation types comprise update, insert and delete; a given transaction maintenance primitive includes at most one of the batch execution number and the commit mark;
when the model maintenance transaction extracts a transaction maintenance primitive, it judges whether the primitive carries a batch execution number or a commit mark: if the primitive carries a commit mark, a commit operation is executed directly after the primitive is put into the transaction snapshot, committing all records in the transaction snapshot to the real-time model; if the primitive carries a batch execution number, it is checked whether the number of records in the transaction snapshot exceeds a preset upper limit: if not, the primitive is stored into the transaction snapshot and the next primitive is extracted; if so, one commit operation is performed first, then the primitive is stored into the transaction snapshot and extraction continues; if the primitive carries neither a commit mark nor a batch execution number, the next primitive is extracted directly.
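The dispatch rule of claim 2 can be sketched as below; the field names (`commit_mark`, `batch_no`), the record limit, and the list-based snapshot are assumed stand-ins for the patent's structures.

```python
# Illustrative sketch of the primitive dispatch in claim 2: a primitive carries
# at most one of a batch execution number or a commit mark.

MAX_SNAPSHOT_RECORDS = 3  # assumed preset upper limit on snapshot records

def process_primitive(primitive, snapshot, committed):
    def commit():
        committed.extend(snapshot)   # commit all snapshot records to the real-time model
        snapshot.clear()

    if primitive.get("commit_mark"):
        snapshot.append(primitive)
        commit()                                   # commit mark: commit immediately
    elif primitive.get("batch_no") is not None:
        if len(snapshot) >= MAX_SNAPSHOT_RECORDS:
            commit()                               # over the limit: commit first
        snapshot.append(primitive)
    # neither mark present: nothing to cache, just fetch the next primitive

snap, done = [], []
for i in range(4):
    process_primitive({"batch_no": i, "op": "update"}, snap, done)
process_primitive({"commit_mark": True, "op": "insert"}, snap, done)
```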
3. The cloud distributed real-time database system of claim 2, wherein: the NoSQL database is provided with a key-value table, namely the locking state LMS, used for recording which records in the real-time model are associated with a transaction snapshot;
when the model maintenance transaction executes a commit operation, the transaction maintenance primitives are read from the transaction snapshot in sequence; after each primitive is taken, the locking state LMS is checked to judge whether the real-time model record related to the primitive is already in the LMS: if not, the real-time model record is added to the LMS and the information in the primitive is transferred into a map container to wait for batch commit; if it is, a deadlock management flow is entered: the LMS is polled periodically within a preset timeout limit until the real-time model record is no longer associated with a transaction maintenance primitive in a transaction snapshot, at which point the management flow exits and the information in the primitive is transferred into the map container to wait for batch commit; if the record is still associated when the timeout limit expires, a hard maintenance measure is taken, namely all data in the NoSQL database are deleted and then reinserted.
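A minimal sketch of the LMS check and deadlock-management flow might look like this; a Python set stands in for the NoSQL key-value table, and the timeout is simulated with a bounded retry loop. All names and parameters are illustrative assumptions.

```python
# Hypothetical sketch of the locking state LMS handling in claim 3.

def acquire_with_lms(record_key, lms, pending, primitive,
                     max_retries=3, release_after=None):
    """Try to move a primitive into the batch-commit map container, honoring the LMS."""
    if record_key not in lms:
        lms.add(record_key)            # lock the real-time model record
        pending.append(primitive)      # wait for batch commit
        return True
    # Deadlock-management flow: poll the LMS until the lock clears or we "time out".
    for attempt in range(max_retries):
        if release_after is not None and attempt >= release_after:
            lms.discard(record_key)    # simulate the competing transaction finishing
        if record_key not in lms:
            lms.add(record_key)
            pending.append(primitive)
            return True
    return False                       # timeout expired: caller performs hard maintenance

lms, pending = set(), []
ok1 = acquire_with_lms("breaker:1", lms, pending, {"op": "update"})
ok2 = acquire_with_lms("breaker:1", lms, pending, {"op": "delete"}, release_after=1)
```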
4. The cloud distributed real-time database system of claim 3, wherein: when the records in the map container are committed to the NoSQL database in batch, each record is traversed to judge whether its operation type is insert; records whose operation type is insert are loaded into an execution block Bulk of the NoSQL database and submitted for execution once after the traversal, while records whose operation type is update or delete are executed individually;
after each commit is executed, it is judged whether the commit succeeded:
if the commit fails, a rollback flow is entered: insert operations are unpacked from the execution block Bulk and re-executed individually, while update or delete operations are resubmitted individually; if the resubmission also fails, a hard maintenance measure is taken, namely all data in the NoSQL database are deleted and then reinserted;
if the commit succeeds, or after the rollback flow has been executed, the records in the locking state LMS associated with this commit are cleared and the transaction snapshot is emptied, the batch execution number of the last record in the transaction snapshot being recorded before it is emptied; a model change notification is then issued to the elastic message queue.
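The commit-and-rollback path of claim 4 might look like the following sketch; `FlakyStore`, its methods, and the injected bulk failure are invented stand-ins for the NoSQL database and its execution block Bulk, not a real driver API.

```python
# Illustrative sketch of the batch commit in claim 4: insert-type records are
# grouped into one bulk execution; update/delete records run individually; on
# failure the bulk is unpacked and retried record by record.

class FlakyStore:
    """Stand-in for the NoSQL database; fails the first bulk submit on request."""
    def __init__(self, fail_first_bulk=False):
        self.rows, self._fail = {}, fail_first_bulk
    def bulk_insert(self, records):
        if self._fail:
            self._fail = False
            raise RuntimeError("bulk submit failed")
        self.rows.update(records)
    def apply(self, op, key, value=None):
        if op == "update":
            self.rows[key] = value
        elif op == "delete":
            self.rows.pop(key, None)

def batch_commit(records, store):
    bulk = {r["key"]: r["value"] for r in records if r["op"] == "insert"}
    try:
        store.bulk_insert(bulk)                  # one bulk submit for all inserts
    except RuntimeError:
        for key, value in bulk.items():          # rollback flow: retry one by one
            store.bulk_insert({key: value})
    for r in records:
        if r["op"] != "insert":                  # update/delete run individually
            store.apply(r["op"], r["key"], r.get("value"))

store = FlakyStore(fail_first_bulk=True)
batch_commit([{"op": "insert", "key": "a", "value": 1},
              {"op": "insert", "key": "b", "value": 2},
              {"op": "update", "key": "a", "value": 9}], store)
```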
5. The cloud distributed real-time database system of claim 4, wherein: the real-time service bus further comprises a fat client service, and the transactional components further comprise a Paramdb component and a Valuedb component;
when the fat client service starts, the NoSQL database is retrieved with the service tag as the input condition, and the fragments belonging to the service are extracted to establish the fragment memory snapshot of the fat client service, namely a monitoring model snapshot, a value section snapshot, an index snapshot and a cluster snapshot;
the service tag is the identifier of the microservice, i.e. of the fat client service itself, and is used for determining the grouping of the CIM model;
the index snapshot is used for storing secondary index information of a data table in the NoSQL database;
the cluster snapshot is used for storing node information related to the microservice;
when the fat client service runs, the Paramdb component and the Valuedb component complete the retrieval and maintenance of the fragment memory snapshot;
the Paramdb component integrates a snapshot maintenance transaction, which subscribes to the model change notifications issued by the model maintenance transaction in the elastic message queue and maintains the fragment memory snapshot of the fat client service according to them;
the Valuedb component integrates a value processing transaction, which uses point-in-time sub-snapshots (views) to maintain the collected and processed values in batch, completing retrieval, insert, update and delete operations on the value section snapshot; the value processing transaction also writes changed values back to the NoSQL database at regular intervals and issues value change notifications through the elastic message queue.
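A minimal sketch of such a value processing transaction, assuming invented class and method names, could be:

```python
import copy

# Sketch of claim 5's value processing: take a point-in-time sub-snapshot
# (view) of the value section, maintain values in batch, then write changed
# values back to a stand-in NoSQL store. All names are hypothetical.

class ValueSection:
    def __init__(self):
        self.values = {}       # measuring point -> value
        self.dirty = set()     # points changed since the last write-back

    def sub_snapshot(self):
        return copy.deepcopy(self.values)   # fixed view at this point in time

    def batch_update(self, changes):
        for point, value in changes.items():
            self.values[point] = value
            self.dirty.add(point)

    def write_back(self, nosql):
        changed = sorted(self.dirty)
        for point in changed:
            nosql[point] = self.values[point]   # timed write-back to the NoSQL store
        self.dirty.clear()
        return changed   # in the patent this would also trigger a value change notification

section = ValueSection()
section.batch_update({"P.bus1": 10.5, "Q.bus1": 3.2})
view = section.sub_snapshot()                 # later updates do not disturb this view
section.batch_update({"P.bus1": 11.0})
nosql_db = {}
written = section.write_back(nosql_db)
```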
6. The cloud distributed real-time database system of claim 5, wherein: the fat client service is deployed in a master-standby mode, and only the value processing transaction of the master service writes back to the NoSQL database.
7. The cloud distributed real-time database system of claim 5, wherein: the real-time service bus further comprises a thin client service for directly accessing the NoSQL database in a proxy mode.
8. The cloud distributed real-time database system of claim 5, wherein: the system further comprises a model synchronization mechanism that synchronizes the real-time model to other databases of a homogeneous system or to cascaded sub-databases; the synchronization mode is as follows: a full model synchronization is performed at a specified interval, and the synchronization target database is maintained incrementally in real time according to the model change notifications and value change notifications in the elastic message queue.
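The two-part synchronization mode of claim 8 can be sketched as follows; the dictionary models and notification format are assumptions for illustration only.

```python
# Hedged sketch of claim 8: a periodic full model sync plus real-time
# incremental maintenance driven by change notifications from the (stand-in)
# elastic message queue.

def full_sync(source, target):
    """Replace the target with a full copy of the source model."""
    target.clear()
    target.update({table: dict(rows) for table, rows in source.items()})

def apply_notification(target, note):
    """Incrementally maintain the target from one change notification."""
    table = target.setdefault(note["table"], {})
    if note["op"] == "delete":
        table.pop(note["key"], None)
    else:                                   # insert / update
        table[note["key"]] = note["value"]

source = {"breaker": {1: "closed"}}
target = {}
full_sync(source, target)                   # the interval-driven full pass
apply_notification(target, {"table": "breaker", "op": "update", "key": 1, "value": "open"})
apply_notification(target, {"table": "line", "op": "insert", "key": 7, "value": "energized"})
```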
9. The cloud distributed real-time database system according to any one of claims 1 to 8, wherein: the system further comprises a model checking mechanism for checking the consistency of the real-time model and the relational model.
CN201910508447.2A 2019-06-13 2019-06-13 Cloud distributed real-time database system Active CN110196885B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910508447.2A CN110196885B (en) 2019-06-13 2019-06-13 Cloud distributed real-time database system

Publications (2)

Publication Number Publication Date
CN110196885A CN110196885A (en) 2019-09-03
CN110196885B true CN110196885B (en) 2021-02-02

Family

ID=67754405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910508447.2A Active CN110196885B (en) 2019-06-13 2019-06-13 Cloud distributed real-time database system

Country Status (1)

Country Link
CN (1) CN110196885B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111061724B (en) * 2019-11-08 2023-11-14 珠海许继芝电网自动化有限公司 High-speed real-time database management method and device for power distribution automation system
CN111026805A (en) * 2019-11-08 2020-04-17 许昌许继软件技术有限公司 Substation master station system and telemetry data cross-region transmission method
US11509619B2 (en) * 2020-01-14 2022-11-22 Capital One Services, Llc Techniques to provide streaming data resiliency utilizing a distributed message queue system
CN111427964A (en) * 2020-04-15 2020-07-17 南京核新数码科技有限公司 Industrial cloud data storage model for running timestamp
CN111858001B (en) * 2020-07-15 2021-02-26 武汉众邦银行股份有限公司 Workflow processing method based on micro-service architecture system
CN112597242B (en) * 2020-12-16 2023-06-06 四川新网银行股份有限公司 Extraction method based on application system data slices related to batch tasks
CN114296649B (en) * 2021-12-27 2024-01-02 天翼云科技有限公司 Inter-cloud service migration system
CN116756162B (en) * 2023-06-28 2024-03-12 蝉鸣科技(西安)有限公司 Method and system for guaranteeing data consistency
CN116567007B (en) * 2023-07-10 2023-10-13 长江信达软件技术(武汉)有限责任公司 Task segmentation-based micro-service water conservancy data sharing and exchanging method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354758A (en) * 2007-07-25 2009-01-28 中国科学院软件研究所 System and method for integrating real-time data and relationship data
CN106294888A (en) * 2016-10-24 2017-01-04 北京亚控科技发展有限公司 A kind of method for subscribing of object data based on space-time database
CN107229639A (en) * 2016-03-24 2017-10-03 上海宝信软件股份有限公司 The storage system of distributing real-time data bank

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751426A (en) * 2008-12-11 2010-06-23 北京市电力公司 Method and device for realizing information sharing between SCADA and GIS
US20120303901A1 (en) * 2011-05-28 2012-11-29 Qiming Chen Distributed caching and analysis system and method
CN102737086B (en) * 2012-01-13 2015-06-03 冶金自动化研究设计院 Iron and steel enterprise information integration platform based on CIM model
CN103577938A (en) * 2013-11-15 2014-02-12 国家电网公司 Power grid dispatching automation main-and-standby system model synchronizing method and synchronizing system thereof
KR101664701B1 (en) * 2015-06-12 2016-10-11 한국전력공사 Apparatus and method for verifying validity of cim-xml file
CN105574640A (en) * 2015-09-25 2016-05-11 国网浙江省电力公司 Method for constructing unified and comprehensive management platform of application
CN108052634B (en) * 2017-12-20 2021-11-12 江苏瑞中数据股份有限公司 Integration method of multi-information system of power grid production control large area and asset management large area
CN109816161A (en) * 2019-01-14 2019-05-28 中国电力科学研究院有限公司 A kind of power distribution network operation computer-aided decision support System and its application method

Similar Documents

Publication Publication Date Title
CN110196885B (en) Cloud distributed real-time database system
US11704290B2 (en) Methods, devices and systems for maintaining consistency of metadata and data across data centers
KR102307371B1 (en) Data replication and data failover within the database system
US11182356B2 (en) Indexing for evolving large-scale datasets in multi-master hybrid transactional and analytical processing systems
CN109739935B (en) Data reading method and device, electronic equipment and storage medium
US9002805B1 (en) Conditional storage object deletion
CN102779185B (en) High-availability distribution type full-text index method
US9052942B1 (en) Storage object deletion job management
US9218383B2 (en) Differentiated secondary index maintenance in log structured NoSQL data stores
US9740582B2 (en) System and method of failover recovery
CN111881223B (en) Data management method, device, system and storage medium
US20120254249A1 (en) Database Management System
KR20180021679A (en) Backup and restore from a distributed database using consistent database snapshots
US20200301942A1 (en) Transferring Connections in a Multiple Deployment Database
GB2472620A (en) Distributed transaction processing and committal by a transaction manager
CN109446395A (en) A kind of method and system of the raising based on Hadoop big data comprehensive inquiry engine efficiency
CN112269781B (en) Data life cycle management method, device, medium and electronic equipment
Xiong et al. Data vitalization: a new paradigm for large-scale dataset analysis
CN111966692A (en) Data processing method, medium, device and computing equipment for data warehouse
CN112328702A (en) Data synchronization method and system
CN111752945B (en) Time sequence database data interaction method and system based on container and hierarchical model
US11921704B2 (en) Version control interface for accessing data lakes
US9870402B2 (en) Distributed storage device, storage node, data providing method, and medium
US20230385265A1 (en) Data lake with transactional semantics
Lev-Ari et al. Quick: a queuing system in cloudkit

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
PE01: Entry into force of the registration of the contract for pledge of patent right
    Denomination of invention: A cloud distributed real-time database system
    Effective date of registration: 20211213
    Granted publication date: 20210202
    Pledgee: Yantai financing guarantee Group Co.,Ltd.
    Pledgor: DONGFANG ELECTRONICS Co.,Ltd.
    Registration number: Y2021980014783
PC01: Cancellation of the registration of the contract for pledge of patent right
    Date of cancellation: 20220725
    Granted publication date: 20210202
    Pledgee: Yantai financing guarantee Group Co.,Ltd.
    Pledgor: DONGFANG ELECTRONICS Co.,Ltd.
    Registration number: Y2021980014783