CN105391790A - Database high-availability method similar to RAC One Node - Google Patents

Database high-availability method similar to RAC One Node

Info

Publication number
CN105391790A
Authority
CN
China
Prior art keywords
resource
sid
database
items
raconenode
Prior art date
Legal status
Pending
Application number
CN201510837438.XA
Other languages
Chinese (zh)
Inventor
郭加鹏
朱广新
郑磊
李东辉
俞俊
滕家雨
王渊
宋文
张天宇
石浩瀚
张旭刚
王小刚
Current Assignee
Integration Of Information System Branch Office Of Nanjing Nanrui Group Co ltd
State Grid Corp of China SGCC
Nanjing NARI Group Corp
Information and Telecommunication Branch of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
Integration Of Information System Branch Office Of Nanjing Nanrui Group Co ltd
State Grid Corp of China SGCC
Nanjing NARI Group Corp
Priority date
Filing date
Publication date
Application filed by Integration Of Information System Branch Office Of Nanjing Nanrui Group Co ltd, State Grid Corp of China SGCC, Nanjing NARI Group Corp filed Critical Integration Of Information System Branch Office Of Nanjing Nanrui Group Co ltd
Priority to CN201510837438.XA priority Critical patent/CN105391790A/en
Publication of CN105391790A publication Critical patent/CN105391790A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0893 Assignment of logical groups to network elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a database high-availability method similar to RAC One Node. The resources of each physical machine are integrated to form one large cluster, and each database on the different physical hosts is connected to shared storage. The method overcomes the limitation of RAC One Node, which supports only Oracle 11g, fully supports both Oracle 10g and 11g, and achieves high availability for a single Oracle instance in a non-RAC environment.

Description

Database high-availability method similar to RAC One Node
Technical field
The present invention relates to a database high-availability method, and in particular to a database high-availability method similar to RAC One Node.
Background technology
High availability (HA): the reliability of a computer system is measured by its mean time to failure (MTTF), that is, how long the system can on average run normally before a failure occurs. The higher the reliability, the longer the MTTF. High availability is generally implemented in three ways:
Active-standby (asymmetric) mode. Principle: the primary host carries the workload while the standby host monitors it; when the primary host goes down, the standby host takes over all of its work, and after the primary recovers, the service is switched back to it automatically or manually according to the user's settings. Data consistency is ensured by a shared storage system.
Dual-active mode (mutual standby). Principle: two hosts each run their own services while monitoring each other; when either host goes down, the other immediately takes over all of its work so that operation continues without interruption. The critical data of the application systems is kept on a shared storage system.
Cluster mode (multiple servers acting as mutual standbys). Principle: multiple hosts work together, each running one or several services, and one or more backup hosts are defined for each service; when a host fails, the services running on it are taken over by other hosts.
Oracle RAC: short for Oracle Real Application Clusters, a technology first adopted in the Oracle 9i database and the core technology by which Oracle databases support grid-computing environments. It resolves a major contradiction faced by traditional database applications: high performance and high scalability versus low cost.
The Oracle 11g feature RAC One Node (RON) works on a different principle from earlier HA databases: RAC One Node is based on a RAC database and is managed by the Oracle cluster software (Grid Infrastructure, GI). Only one instance of the RAC database needs to be started, and when the node running that instance has to be shut down for maintenance, the instance can be switched to another node in the cluster by means of online database relocation.
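For context, the online database relocation mentioned above is normally driven by a single srvctl command in 11g; the sketch below is illustrative only, and the database name orcl and target host host02 are assumed examples.
    # Relocate the running RAC One Node instance of database "orcl" to node "host02";
    # existing sessions are given the stated number of minutes to drain before the switch.
    srvctl relocate database -d orcl -n host02 -w 15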
Summary of the invention
In view of the problems existing in the prior art described above, the technical problem to be solved by the present invention is to provide a database high-availability method similar to RAC One Node.
In order to solve the above technical problem, the invention provides the following technical scheme:
In the database high-availability method similar to RAC One Node, the resources of each physical machine are integrated into one large cluster, and each database on the different physical hosts is connected to shared storage.
The management commands of GI (Grid Infrastructure) are used to manage the databases in a unified way.
The integrated resources are organized as resource items; the resource items are: the $SID resource, the $SID.vip resource, the $SID.lsnr resource, the $SID.db resource and the $SID.head resource.
$SID resource: the basis of the whole resource item. By defining this resource, all other resources in the resource item are made to depend on it, which achieves unified management of the resource item; stopping, restarting or switching the $SID resource affects all other resources in the resource item.
$SID.vip resource: a VIP resource controlled internally by Oracle Clusterware; the $SID.vip resource depends on the network resource and on the $SID resource.
$SID.lsnr resource: the listener resource in the resource item, protected by the shell script act_lsnr.ksh; the $SID.lsnr resource depends on the $SID.vip resource.
$SID.db resource: the single-instance database resource in the resource item, protected by the shell script act_db.ksh; the $SID.db resource depends on the $SID resource.
$SID.head resource: echoes the $SID resource from the other end of the dependency chain and encapsulates the whole resource item so that it can be managed and maintained as a unit; the $SID.head resource is protected by the shell script act_rgh.ksh.
The beneficial effects of the invention are as follows. The invention saves a large amount of server resources by building a server cluster; moving the databases of small and medium-sized applications into the cluster saves a large number of servers. Statistics from several State Grid data centers show that application systems occupying less than 10% of a server's resources can be deployed into the cluster; with each cluster node's resource utilization kept at 70%, one node can run 7 such applications. Taking a four-node cluster with one node reserved as standby, 21 applications can be run, saving 38 servers compared with per-database HA deployments. Each additional node further raises resource utilization and reduces the number of servers.
A large amount of license cost is saved. After the database resource pool is built, existing databases can be migrated directly into the pool without incurring new license fees; for newly launched applications, only a single-instance license needs to be purchased instead of the expensive RAC licenses.
The cluster topology is flexible and extensible. Availability is provided by GI, and cluster nodes can be added or removed flexibly without affecting normal business operation. Failover and online relocation are implemented, effectively reducing both unplanned outages and planned downtime.
RAC One Node-like behaviour is achieved on existing Oracle 10g databases, saving upgrade and development cost. Many data centers still run Oracle 10g databases; based on the ERON technique, the RAC One Node capability that is otherwise only available from 11g can be obtained without a dedicated shutdown and upgrade, improving the availability of the business systems. There are considerable differences between 11g and 10g, and an upgrade would require many parameters to be set again and application code to be adapted.
The level of automated operation and maintenance is raised and O&M cost is reduced. When a database host goes down, migration is performed automatically, improving automated O&M; by using ERON clusters, fewer servers need to be purchased, power and machine-room costs are saved, and the skills of the operation staff are improved.
Brief description of the drawings
The accompanying drawings are provided to give a further understanding of the invention and form a part of the specification; together with the embodiments they serve to explain the invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a schematic diagram of the database deployment of the present invention;
Fig. 2 is a schematic diagram of dynamic migration in the database deployment of the present invention;
Fig. 3 is the resource scheduling diagram of the present invention;
Fig. 4 is the resource association diagram of the present invention;
Fig. 5 is the resource switching flow chart of the present invention;
Fig. 6 is the GI agent working flow chart of the present invention;
Fig. 7 is a schematic diagram of an Oracle database deployment scheme based on the conventional method;
Fig. 8 is a schematic diagram of an Oracle database deployment scheme based on the ERON method;
Fig. 9 is the first schematic diagram of the results of the present invention;
Fig. 10 is the second schematic diagram of the results of the present invention.
Detailed description of the embodiments
As shown in Figs. 1-10, a preferred embodiment of the present invention is disclosed.
The present invention, ERON (Extended RAC One Node), integrates the resources of each physical machine into one large cluster. It consolidates servers, improves failover capability and provides load-balancing capability; in addition it virtualizes database storage, standardizes the database environment and avoids the downtime of upgrades.
As shown in Fig. 1, suppose there are 3 physical hosts, Server A, Server B and Server C, which form a single cluster. On these 3 physical hosts there may be 5 different single-instance databases: DB1 and DB2 are located on Server A, DB3 on Server B, and DB4 and DB5 on Server C. Each database is connected to shared storage.
As shown in Fig. 2, when the node Server B hosting DB3 fails and can no longer provide service, the failure is detected by the GI agent process, which calls the OMotion routine to move the database instance of DB3 to another server, without any manual intervention.
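At the GI level such a relocation amounts to moving the registered resource item to another node. The command below is a minimal sketch of that operation; the resource name db3.head and the node name ServerA are hypothetical names chosen to match the example above.
    # Move the whole resource item (addressed through its top-level head resource)
    # to the target node; -f also relocates the resources that depend on it.
    crsctl relocate resource db3.head -n ServerA -f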
The ERON technique is based on using the GI management commands under the Grid Infrastructure platform to manage the databases in a unified way. For example, the database and its listener service are registered in GI in the form of a "resource item", so that managing the resource item (starting and stopping it) manages the database and the listener service.
In the ERON technique the resource items are: the $SID resource, the $SID.vip resource, the $SID.lsnr resource, the $SID.db resource and the $SID.head resource, five resource items in all ($SID refers to the instance name of the database). Except for $SID.vip, each of the other four resources has its own script written to implement the status functions and state protection of the resource item. The association among the five resources is shown in Fig. 3.
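As a minimal sketch of how such a user-defined resource item is registered with GI (the resource name ORCL.db, the script path and the attribute values are assumptions for illustration, not values prescribed by the invention):
    # Register the database resource of instance ORCL with its protection script,
    # then bring it online through Clusterware; GI will call the script's entry points.
    crsctl add resource ORCL.db -type cluster_resource \
      -attr "ACTION_SCRIPT=/opt/eron/act_db.ksh,CHECK_INTERVAL=30,RESTART_ATTEMPTS=2"
    crsctl start resource ORCL.db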
As shown in Fig. 4, a single-instance database protected by GI is in fact a resource item, that is, a set of related resources; the resources interact through the dependencies between them to protect the single-instance database together.
$SID resource:
This resource is the basis of the whole resource item. By defining this resource, all other resources in the resource item are made to depend on it, which achieves unified management of the resource item. Stopping, restarting or switching this resource affects all other resources in the resource item.
$SID.vip resource:
This is a VIP resource controlled internally by Oracle Clusterware, so no shell script needs to be written to protect it. It depends on the network resource and on the $SID resource described above. In general, the network resource is ora.net1.network.
$SID.lsnr resource:
This is the listener resource in the resource item, protected by the shell script act_lsnr.ksh. It depends on the $SID.vip resource above. In addition, because a shared Oracle HOME is used in this scheme, with ACFS providing the sharing mechanism, the listener resource also depends on the ACFS resource of the corresponding Oracle HOME.
$SID.db resource:
This is the single-instance database resource in the resource item, protected by the shell script act_db.ksh. It depends on the $SID resource above. Because a shared Oracle HOME is used in this scheme, with ACFS providing the sharing mechanism, the database resource also depends on the ACFS resource of the corresponding Oracle HOME. In addition, because the data files of the database are placed in the two disk groups DATA_DG and FRA_DG, the database resource also depends on these two resources.
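These dependencies can be expressed through the start and stop dependency attributes of the resource. The commands below are a sketch under assumed names: the disk-group resource names ora.DATA_DG.dg and ora.FRA_DG.dg and the anchor resource ORCL follow the naming used in this description, and the ACFS resource of the shared Oracle HOME would be added in the same way.
    # ORCL.db may only start after its anchor resource and the two disk groups are up,
    # and must be stopped before any of them is stopped.
    crsctl modify resource ORCL.db \
      -attr "START_DEPENDENCIES='hard(ORCL,ora.DATA_DG.dg,ora.FRA_DG.dg) pullup(ORCL)'"
    crsctl modify resource ORCL.db \
      -attr "STOP_DEPENDENCIES='hard(ORCL,ora.DATA_DG.dg,ora.FRA_DG.dg)'"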
$SID.head resource:
This resource echoes the $SID resource above from the other end of the dependency chain and encapsulates the whole resource item so that it can be managed and maintained as a unit. It is protected by the shell script act_rgh.ksh. This script is similar in function to the script protecting the $SID resource: both create a local temporary file and check the state of the resource according to the content of that file.
This resource sits at the top of the dependency chain of the resource item, so setting it OFFLINE does not set the other resources in the resource item to OFFLINE; switching it, however, causes the whole resource item to switch. Therefore the same switching policy as for the listener resource is adopted for this resource, namely neither restart nor switch, so that a switch caused by an exception of this resource alone is avoided. Because this resource uses the same mechanism as the $SID resource, if it has a problem the $SID resource is likely to have a problem at the same time, and it is more appropriate to hand control of the whole resource item to the $SID resource at the bottom of the dependency chain.
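A minimal sketch of such a protection script is given below; the temporary-file check follows the mechanism described above, the state-file path is an assumption, and a real act_rgh.ksh would add logging and error handling.
    #!/bin/ksh
    # Skeleton of a GI action script in the style of act_rgh.ksh: Clusterware calls it
    # with start|stop|check|clean|abort and judges success by the exit code.
    STATE_FILE=/tmp/${_CRS_NAME:-ORCL.head}.state    # local temporary state file
    case "$1" in
      start) echo ONLINE  > "$STATE_FILE"; exit 0 ;;          # mark the resource online
      stop)  echo OFFLINE > "$STATE_FILE"; exit 0 ;;          # mark the resource offline
      check) [ -f "$STATE_FILE" ] && grep -q ONLINE "$STATE_FILE" && exit 0
             exit 1 ;;                                        # non-zero exit = check failed
      clean|abort) rm -f "$STATE_FILE"; exit 0 ;;             # clean up after a failure
      *) exit 1 ;;
    esac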
As shown in Fig. 5, the agent working routine of GI checks the state of the resource item, and according to the resource parameters decides from the change of resource state whether the resource item is restarted or failed over.
Fig. 5 applies only to user-defined resources and only considers the effect of a single resource; switching caused by the complicated dependencies between resources is not taken into account.
When an Oracle Clusterware resource (hereinafter simply "resource") becomes abnormal, Clusterware fails its check of that resource. In this case Clusterware first judges, from the RESTART_ATTEMPTS attribute defined on the resource, whether the resource is allowed and needs to be restarted. The judgement is based on the dynamic attribute RESTART_COUNT and the static attribute RESTART_ATTEMPTS of the current resource. For example, if the current RESTART_COUNT is 2 (meaning the resource has already been restarted twice) and RESTART_ATTEMPTS is defined as 3, Clusterware considers that the resource may still be restarted on the local node, attempts to restart it, and increments RESTART_COUNT by 1.
Note: RESTART_COUNT is increased on the local node whether or not the restart succeeds; unless the resource starts successfully or stays up for its UPTIME_THRESHOLD, RESTART_COUNT is not reset to 0.
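The restart counters discussed here can be inspected with the standard Clusterware status commands; the resource name is again an assumed example.
    # -v shows run-time values such as RESTART_COUNT, FAILURE_COUNT and STATE_DETAILS;
    # -f lists all attributes, including RESTART_ATTEMPTS and UPTIME_THRESHOLD.
    crsctl status resource ORCL.db -v
    crsctl status resource ORCL.db -f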
If the restart of the resource on the local node fails, then, according to the resource definition, if the resource is allowed to run on multiple nodes in the cluster, Clusterware switches it to another node and runs it there, following the node order defined in HOSTING_MEMBERS or SERVER_POOLS. If the resource cannot be started successfully on the other nodes either, Clusterware keeps trying to switch it until all nodes defined in HOSTING_MEMBERS or SERVER_POOLS have been tried; at that point Oracle Clusterware sets the state of the resource to OFFLINE and stops monitoring it.
If during the switching process the resource is successfully started on any node, Clusterware resets its RESTART_COUNT to 0.
Note: a resource switch caused by a failed restart has no relation to the dynamic resource attribute FAILURE_COUNT; Oracle Clusterware does not increase FAILURE_COUNT because of such a switch.
When a resource check fails, if the number of restarts of the resource has reached the defined RESTART_ATTEMPTS value, or the resource is not allowed to attempt a restart on the local node (RESTART_ATTEMPTS=0), Clusterware decides whether the resource can be switched. Whether it can be switched is determined jointly by the dynamic attribute FAILURE_COUNT and the static attribute FAILURE_THRESHOLD of the resource. If FAILURE_COUNT has reached the defined FAILURE_THRESHOLD, the resource is not allowed to switch; in that case Clusterware resets FAILURE_COUNT to 0, sets the resource state to OFFLINE and stops monitoring the resource. If FAILURE_COUNT has not yet reached the defined FAILURE_THRESHOLD, Clusterware performs the switch according to the node order defined in the resource attribute HOSTING_MEMBERS or SERVER_POOLS. If the switch succeeds, RESTART_COUNT is reset to 0; if it fails, the switching attempts continue as described in the previous step.
Note: the dynamic attribute FAILURE_COUNT is increased only when the RESTART_ATTEMPTS value of the resource has been reached and a switch takes place; in other words, only in this case does Oracle Clusterware consider the resource to have failed on the local node.
Unlike RESTART_COUNT, the dynamic attribute FAILURE_COUNT is not reset to 0 after the resource starts successfully or is started successfully with a crsctl start resource command; it is reset only when it reaches the FAILURE_THRESHOLD defined on the resource or when the defined FAILURE_INTERVAL has elapsed. In the first case, i.e. when FAILURE_THRESHOLD is reached, Clusterware sets the resource state to OFFLINE and stops monitoring the resource; in the second case, Clusterware continues to monitor the resource and performs switch operations when needed.
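A sketch of how these thresholds are set on a resource follows; the values and the resource name are illustrative only, and suitable values depend on the environment.
    # Allow two local restarts; after that the resource counts as failed on the node.
    # Give up switching after 3 failures within a 900-second failure window, and only
    # reset RESTART_COUNT once the resource has stayed up for 10 minutes.
    crsctl modify resource ORCL.db \
      -attr "RESTART_ATTEMPTS=2,FAILURE_THRESHOLD=3,FAILURE_INTERVAL=900,UPTIME_THRESHOLD=10m"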
Fig. 6 shows the working flow of the GI agent routine. Fig. 6 applies only to user-defined resources and only considers the effect of a single resource; resource starts, stops and switches caused by the complicated dependencies between resources are not taken into account. Fig. 6 describes how the Oracle Clusterware Agent Framework manages a resource and in which situations it calls which entry point of the resource.
The Clusterware agent periodically calls the CHECK routine of the resource, at the interval given by the resource attribute CHECK_INTERVAL, to check the resource. If the check times out (this is controlled by SCRIPT_TIMEOUT), the agent calls the ABORT routine if one exists; if not, the agent process exits. Because the state of the resource cannot be detected successfully and its health therefore cannot be judged, Clusterware sets the resource state to INTERMEDIATE with the state detail CHECK TIMED OUT.
If the call to the CHECK routine of the resource returns an error, the check of the resource has failed and the resource is abnormal; in this case Clusterware handles the resource according to the resource switching flow described above.
When a resource is to be restarted, the agent starts it by calling the START routine of the resource. If the START routine times out (this is controlled by START_TIMEOUT or SCRIPT_TIMEOUT), the agent calls the ABORT routine if one exists; otherwise the agent process exits.
After calling the START routine, the agent immediately calls the CHECK routine to check the state of the resource. If the check succeeds, the resource has started successfully and is running well; if the check fails, the start of the resource has failed, the agent calls the CLEAN routine of the resource to clean up on the local node, and afterwards decides, depending on the circumstances, whether to switch the resource.
When the STOP routine is called to stop a resource and it returns success, the agent calls the CHECK routine to check the resource; if CHECK returns failure, the resource has indeed been stopped, which is the normal situation, and Clusterware sets the resource state to OFFLINE.
If a timeout occurs while the STOP routine is being called (this is controlled by STOP_TIMEOUT or SCRIPT_TIMEOUT), the agent calls the ABORT routine if one exists; otherwise the agent process exits. Whether STOP returns failure or times out, the agent calls the CHECK routine to check the resource state, and regardless of whether CHECK returns success or failure, the agent then calls the CLEAN and CHECK routines to clean up the resource, and the resource state is finally set to OFFLINE.
As shown in Table 1, nine major functional tests, covering host exceptions, application exceptions, GI suite exceptions and so on, were carried out on the present invention; with the cluster structure, the failover function and the online relocation function of the ERON technique were achieved.
Table 1
There are 5 different business systems in total, whose Oracle databases are deployed as shown in Fig. 7:
Business systems 1 and 2 have high availability requirements, so each uses 2 servers to build a RAC cluster; business systems 3, 4 and 5 each use 1 server. In total, 7 servers are needed.
As shown in Fig. 8, with the ERON method of the present invention, servers A, B and C are first formed into a cluster; business systems 2 and 3 are then deployed on server A, systems 4 and 5 on server B, and system 1 on server C. Only 3 servers are used in total, and high availability of the business systems is achieved. This not only saves hardware and software cost, but also reduces machine-room space and energy consumption, and facilitates centralized deployment and centralized management.
The actual test data are shown in Figs. 9 and 10. After the 5 business systems were migrated into the resource pool, the key performance indicators such as CPU, I/O and memory show that:
the CPU load stays below 10%, I/O is around 200 IOPS, and the performance of the resource pool is very stable;
the resource pool still has abundant resources and more business systems can be migrated in.
Some AWR indicators of the databases are shown in Table 2:
Table 2
Indicators such as parse count, logical reads and physical reads show that the databases run stably.
During implementation, the databases of the 5 systems, originally of different versions, were uniformly upgraded to 10.2.0.5, which improved database performance and stability.
In the cluster each database has two other host nodes available as standby nodes, which improves the high availability of the original single-instance database.
The above are only preferred embodiments of the present invention and are not intended to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, a person skilled in the art may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (8)

1. A database high-availability method similar to RAC One Node, characterized in that the resources of each physical machine are integrated into one large cluster, and each database on the different physical hosts is connected to shared storage.
2. The database high-availability method similar to RAC One Node according to claim 1, characterized in that the management commands of GI are used for unified management.
3. The database high-availability method similar to RAC One Node according to claim 1 or 2, characterized in that the integrated resources are resource items, the resource items being: a $SID resource, a $SID.vip resource, a $SID.lsnr resource, a $SID.db resource and a $SID.head resource.
4. The database high-availability method similar to RAC One Node according to claim 3, characterized in that the $SID resource is the basis of the whole resource item; by defining this resource, all other resources in the resource item are made to depend on it, achieving unified management of the resource item, and stopping, restarting or switching the $SID resource affects all other resources in the resource item.
5. The database high-availability method similar to RAC One Node according to claim 3, characterized in that the $SID.vip resource is a VIP resource controlled internally by Oracle Clusterware, and the $SID.vip resource depends on the network resource and the $SID resource.
6. The database high-availability method similar to RAC One Node according to claim 3, characterized in that the $SID.lsnr resource is the listener resource in the resource item, protected by the shell script act_lsnr.ksh, and the $SID.lsnr resource depends on the $SID.vip resource.
7. The database high-availability method similar to RAC One Node according to claim 3, characterized in that the $SID.db resource is the single-instance database resource in the resource item, protected by the shell script act_db.ksh, and the $SID.db resource depends on the $SID resource.
8. The database high-availability method similar to RAC One Node according to claim 3, characterized in that the $SID.head resource echoes the $SID resource from the other end of the dependency chain and encapsulates the whole resource item so that it is managed and maintained as a unit, the $SID.head resource being protected by the shell script act_rgh.ksh.
CN201510837438.XA 2015-11-26 2015-11-26 Database high-availability method similar to RAC One Node Pending CN105391790A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510837438.XA CN105391790A (en) 2015-11-26 2015-11-26 Database high-availability method similar to RAC One Node

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510837438.XA CN105391790A (en) 2015-11-26 2015-11-26 Database high-availability method similar to RAC One Node

Publications (1)

Publication Number Publication Date
CN105391790A true CN105391790A (en) 2016-03-09

Family

ID=55423620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510837438.XA Pending CN105391790A (en) 2015-11-26 2015-11-26 Database high-availability method similar to RAC One Node

Country Status (1)

Country Link
CN (1) CN105391790A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106130763A (en) * 2016-06-24 2016-11-16 平安科技(深圳)有限公司 Server cluster and be applicable to the database resource group method for handover control of this cluster
CN109543365A (en) * 2018-11-26 2019-03-29 新华三技术有限公司 A kind of authorization method and device
CN109543365B (en) * 2018-11-26 2020-11-06 新华三技术有限公司 Authorization method and device
CN113572862A (en) * 2021-09-27 2021-10-29 武汉四通信息服务有限公司 Cluster deployment method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US9747179B2 (en) Data management agent for selective storage re-caching
US9098454B2 (en) Speculative recovery using storage snapshot in a clustered database
CA2861257C (en) Fault tolerance for complex distributed computing operations
CN100426247C (en) Data recovery method
CN101689114B (en) Dynamic cli mapping for clustered software entities
CN103929500A (en) Method for data fragmentation of distributed storage system
CN111935244B (en) Service request processing system and super-integration all-in-one machine
CN111181780A (en) HA cluster-based host pool switching method, system, terminal and storage medium
CN105391790A (en) Database high-availability method similar to RAC One Node
CN113515316A (en) Novel edge cloud operating system
CN106612314A (en) System for realizing software-defined storage based on virtual machine
WO2020233001A1 (en) Distributed storage system comprising dual-control architecture, data reading method and device, and storage medium
US8621260B1 (en) Site-level sub-cluster dependencies
CN101686261A (en) RAC-based redundant server system
CN111488247B (en) High availability method and equipment for managing and controlling multiple fault tolerance of nodes
CN112540873B (en) Disaster tolerance method and device, electronic equipment and disaster tolerance system
CN104683131A (en) Application stage virtualization high-reliability method and device
CN114416301A (en) Data collection service container management method
US10365934B1 (en) Determining and reporting impaired conditions in a multi-tenant web services environment
US9143410B1 (en) Techniques for monitoring guest domains configured with alternate I/O domains
CN102662702B (en) Equipment management system, device, substrate management devices and method
WO2024103902A1 (en) Database access method, apparatus and system, and device and readable storage medium
CN107741966A (en) A kind of node administration method and device
Wen et al. Research on key technologies for database containerized deployment
CN105162873A (en) High available method and system of K1 servers

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20170303

Address after: 100031 Xicheng District West Chang'an Avenue, No. 86, Beijing

Applicant after: State Grid Corporation of China

Applicant after: Nanjing Nari Co., Ltd.

Applicant after: Integration of information system branch office of Nanjing NanRui Group Co.,Ltd

Applicant after: INFORMATION COMMUNICATION BRANCH, STATE GRID JIANGSU ELECTRIC POWER COMPANY

Address before: 100031 Xicheng District West Chang'an Avenue, No. 86, Beijing

Applicant before: State Grid Corporation of China

Applicant before: Nanjing Nari Co., Ltd.

Applicant before: Integration of information system branch office of Nanjing NanRui Group Co.,Ltd

RJ01 Rejection of invention patent application after publication

Application publication date: 20160309

RJ01 Rejection of invention patent application after publication