CN110532096B - System and method for multi-node grouping parallel deployment - Google Patents


Info

Publication number
CN110532096B
CN110532096B
Authority
CN
China
Prior art keywords
deployment
application servers
parallel
server
description
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910799070.0A
Other languages
Chinese (zh)
Other versions
CN110532096A (en)
Inventor
黄贤钹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yuncunbao Technology Co ltd
Original Assignee
Shenzhen Yuncunbao Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yuncunbao Technology Co ltd filed Critical Shenzhen Yuncunbao Technology Co ltd
Priority to CN201910799070.0A priority Critical patent/CN110532096B/en
Publication of CN110532096A publication Critical patent/CN110532096A/en
Application granted granted Critical
Publication of CN110532096B publication Critical patent/CN110532096B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F 11/0706 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F 11/0709 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F 11/0706 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F 11/0715 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a system implementing multitasking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F 11/0793 Remedial or corrective actions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system

Abstract

The invention relates to a system and method for multi-node grouped parallel deployment. The system comprises: a deployment storage server, provided with a judging unit; a deployment task server, which groups the application servers in parallel according to instructions from the deployment storage server; a plurality of application servers, distributed into parallel routes by the deployment task server; and a deployment description file, held in the deployment storage server, which sets the access addresses and passwords of all application servers in the system, records the number of currently online application servers according to feedback from the deployment storage server, and gives a parallel grouping description of the online application servers.

Description

System and method for multi-node grouped parallel deployment
Technical Field
The invention relates to a system and a method for parallel deployment of servers, in particular to a system and a method for automatic grouping and parallel deployment of multiple nodes.
Background
The current server deployment scheme mostly adopts serial deployment.
CN201310077244.5 discloses a multi-node-oriented cloud deployment method, which describes deployment of a cloud service host. CN200710093947.1 discloses a server deployment system and method in which servers are deployed through a proxy deployment machine; it does not provide a parallel deployment solution. CN201110065647.9 discloses a deployment method for a cluster parallel computing environment; it is a method for processing among processors that relies mainly on the Lustre file system and on a host allocating work to processors, and it does not involve deployment of the servers themselves.
Disclosure of Invention
In view of this, the present invention provides a parallel deployment method for current server deployment. The method yields a more reasonable resource configuration, and it handles the case in the prior art where a server among the target servers is powered off, restarted, or even fails outright.
The specific technical scheme of the invention is as follows:
A system for multi-node grouped parallel deployment, the system comprising:
a deployment storage server, provided with a judging unit;
a deployment task server, which groups the application servers in parallel according to instructions from the deployment storage server;
a plurality of application servers, distributed into parallel routes by the deployment task server;
a deployment description file, held in the deployment storage server, which sets the access addresses and passwords of all application servers in the system, records the number of currently online application servers according to feedback from the deployment storage server, and gives a parallel grouping description of the online application servers;
the judging unit starts the deployment description file based on the instruction of a deployment task event; the deployment task server pulls the deployment description file to each application server and groups the application servers in parallel according to the parallel grouping description in the deployment description file.
Further, after the deployment description file records the number of currently online application servers, password confirmation is performed on each online application server, and only the password-confirmed online application servers are given a parallel grouping description.
Further, the parallel grouping description uses an equalization algorithm, which comprises:
obtaining the number of online application servers, or the number M of password-confirmed online application servers, to be divided into N groups of K servers each;
trying successive natural numbers X for the group count until M - X^2 < 0, then taking N = X - 1 groups and randomly distributing the remaining M - N^2 application servers among the N groups, at most one extra server per group, where X is the trial value of the group count N;
the K application servers in each group are deployed serially in address order.
Further, the remaining M - N^2 application servers may instead be distributed to the groups in turn rather than randomly.
A method of multi-node grouped parallel deployment, the method comprising:
reading the number of online application servers;
setting a deployment description file, which sets the access addresses and passwords of all application servers in the system, records the number of currently online application servers, and gives a parallel grouping description of the online application servers;
judging whether a deployment task event has been triggered; if so, transmitting the deployment description file to all currently online application servers and grouping them in parallel according to the parallel grouping description.
Further, after the deployment description file records the number of currently online application servers, password confirmation is performed on each online application server, and only the password-confirmed online application servers are given a parallel grouping description.
Further, the parallel grouping description uses an equalization algorithm, which comprises:
obtaining the number of online application servers, or the number M of password-confirmed online application servers, to be divided into N groups of K servers each;
trying successive natural numbers X for the group count until M - X^2 < 0, then taking N = X - 1 groups and randomly distributing the remaining M - N^2 application servers among the N groups, at most one extra server per group, where X is the trial value of the group count N;
the K application servers in each group are deployed serially in address order.
Further, the remaining M - N^2 application servers may instead be distributed to the groups in turn rather than randomly.
Further, in the serial deployment process, if an application server fails, it is skipped after its failure is reported, and deployment continues according to the parallel grouping description in the deployment description file, without recalculation.
Further, if every serially deployed server in one parallel route fails, the parallel route is retained, and deployment continues according to the parallel grouping description in the deployment description file, without recalculation.
Further, when the remaining M - N^2 application servers are distributed, the serial routes containing failed application servers are topped up first, in descending order of the number of reported failures; any application servers still remaining are then distributed randomly or in turn.
This technical scheme avoids the failure of an entire deployment when one application server is defective in serial deployment; parallel deployment avoids deployment interruption; and grouped deployment avoids the service interruption caused by restarting all nodes simultaneously. It thus reduces the difficulty and intensity of the work of deployment personnel, saves deployment time, and allows the servers of a massive online system to be deployed and updated more stably and effectively.
Drawings
FIG. 1 is a deployment diagram of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are a part of the embodiments of the present invention, but not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the embodiments of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise, and "a plurality of" generally means at least two, though at least one is not excluded.
It should be understood that the term "and/or" as used herein merely describes an association between objects, meaning that three relationships may exist; for example, "A and/or B" may represent: A alone, A and B together, or B alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
It should be understood that, although the terms first, second, third, etc. may be used in the embodiments of the present invention to describe various elements, these elements are not limited by those terms, which serve only to distinguish one element from another. For example, a first element could also be termed a second element, and similarly a second element could be termed a first element, without departing from the scope of the embodiments of the invention.
The word "if" as used herein may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a product or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such product or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of additional like elements in a product or system comprising that element.
As shown in fig. 1, one of the inventive points of the present invention is to set up a deployment description file (e.g. a .yml file) that records the number M of target server nodes and configures each machine's access address and SSH user password for the remote file transfer and command execution in the subsequent deployment process. The M nodes are then given a parallel grouping description.
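The patent does not reproduce the file's contents, but a deployment description file in this spirit might look as follows; every field name here is an illustrative assumption, not taken from the source:

```yaml
# Hypothetical deployment description file; all field names are illustrative.
online_count: 11             # M, recorded from deployment storage server feedback
servers:
  - address: 10.0.0.1        # access address of an application server
    ssh_user: deploy
    ssh_password: "<secret>" # SSH user password for remote transfer and commands
  - address: 10.0.0.2
    ssh_user: deploy
    ssh_password: "<secret>"
groups:                      # parallel grouping description
  - [10.0.0.1, 10.0.0.2]     # route 1: deployed serially in address order
  - [10.0.0.3]               # route 2
```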
Preferably, since a target server may be offline and an offline server cannot participate in subsequent deployment operations, the number of currently online application servers is recorded, according to feedback from the deployment storage server, when the deployment description file is edited.
More preferably, after the online application servers are recorded, password confirmation is performed on each of them to prevent unauthorized or insecure application servers from being mixed in, and only the password-confirmed online application servers are given a parallel grouping description.
One preferred form of the parallel grouping description is: given M target nodes to be divided into N groups of K each, and requiring a balanced allocation in which N is close or equal to K, assume N equals K and try group counts X = 1, 2, 3, ... until M - X^2 < 0; the nodes are then divided into X - 1 groups. For example, with 11 servers, X reaches 4 when 11 - 4^2 < 0, so the servers are divided into 4 - 1 = 3 groups of 3 each, and the remaining 2 servers are assigned to the groups in turn, giving groups 1 and 2 with 4 servers each and group 3 with 3 servers.
Alternatively, instead of assigning in turn, a random allocation may be used; in random allocation, each group receives at most one extra application server.
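The grouping rule described above can be sketched in Python. This is a minimal illustration of the trial-and-remainder scheme with round-robin assignment of the leftovers; the function name is ours, not the patent's:

```python
def group_servers(servers):
    """Split M servers into N groups of roughly K each, with N close to K.

    Try X = 1, 2, 3, ... until M - X**2 < 0, then take N = X - 1 groups
    of K = N servers, and hand the remaining M - N**2 servers to the
    groups in turn (round-robin).
    """
    m = len(servers)
    x = 1
    while m - x ** 2 >= 0:
        x += 1
    n = x - 1
    # N groups of K = N servers each, in address order.
    groups = [servers[i * n:(i + 1) * n] for i in range(n)]
    # Distribute the remaining M - N**2 servers in turn, one per group.
    for i, extra in enumerate(servers[n * n:]):
        groups[i % n].append(extra)
    return groups
```

With 11 servers this yields group sizes of 4, 4 and 3, matching the worked example in the text.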
Then, deployment personnel trigger a deployment task, which generates a deployment event and notifies the task processing process in the deployment task server. The task processing process pulls the deployment file from the deployment storage server, applies the equalization algorithm described above, starts N deployment task processes for the N groups, and distributes the deployment file to each of them; all deployment task processes execute in parallel.
Each deployment task process deploys serially, following the target servers produced by the equalization algorithm and the machine address order configured in the deployment description file, so that the service never becomes unavailable because all machines restart at once. The deployment task process remotely transmits the deployment file to the target server using the SSH user password configured in the deployment file, then restarts the service process on the target server through a remote command, completing the multi-node grouped parallel deployment.
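The "parallel across groups, serial within a group" execution model can be sketched as follows; `deploy_one` stands in for the real SSH transfer and service restart, which the patent performs with the credentials in the deployment description file:

```python
import concurrent.futures

def deploy_group(group, deploy_one):
    """Deploy one parallel route: servers in the group are handled
    serially, in the configured address order, so members of the same
    route never restart at the same time."""
    results = []
    for server in group:
        results.append((server, deploy_one(server)))
    return results

def deploy_all(groups, deploy_one):
    """Run one deployment task process per group; groups run in parallel."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(groups)) as pool:
        futures = [pool.submit(deploy_group, g, deploy_one) for g in groups]
        return [f.result() for f in futures]
```

Threads stand in here for the deployment task processes; a real implementation would use separate processes and remote SSH sessions.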
Each deployment task process keeps a connection to the task processing process during deployment and reports the result after each machine finishes. If a target server being deployed fails, deployment of the subsequent target machines continues after the failure is reported. If a deployment task process itself fails and loses its connection to the task processing process, the task processing process takes over that group's undeployed target machines and starts a new deployment task process to continue the deployment. The faults concerned here are faults that occur after the server can boot and has passed password verification, such as failure of key software, an antivirus alert, or a hardware alarm. In particular:
in the serial deployment process, if a failed application server occurs, the application server is skipped after the reporting fails, and the deployment is continued without recalculation according to the parallel packet description in the deployment description file.
If every serially deployed server in one parallel route fails, the parallel route is retained, and deployment continues according to the parallel grouping description in the deployment description file, without recalculation.
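The skip-and-continue behaviour on a single serial route can be sketched as follows; the reporting callback is a placeholder for the result reporting back to the task processing process:

```python
def deploy_route(route, deploy_one, report):
    """Serial deployment along one parallel route, skipping failures.

    A server whose deployment raises is reported and skipped; the route
    itself is never recomputed, matching the 'continue without
    recalculation' behaviour. Returns the servers that failed."""
    failed = []
    for server in route:
        try:
            deploy_one(server)
        except Exception as exc:
            report(server, exc)   # report the failure, then move on
            failed.append(server)
    return failed
```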
When failed application servers exist and the remaining M - N^2 application servers are distributed, the serial routes containing the failed servers are topped up first, in descending order of the number of reported failures; any application servers still remaining are then distributed randomly or in turn.
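One reading of this supplementing rule, with routes topped up in descending order of reported failures before any round-robin handout, is sketched below; the exact bookkeeping is our assumption:

```python
def assign_leftovers(groups, failures, leftovers):
    """Assign the remaining M - N**2 servers.

    failures maps a route index to its reported failure count. Routes
    with more failures are topped up first, one replacement server per
    failure; any servers still left are handed out in turn."""
    order = sorted(range(len(groups)), key=lambda i: -failures.get(i, 0))
    leftovers = list(leftovers)
    # First, one replacement per reported failure, worst routes first.
    for i in order:
        for _ in range(failures.get(i, 0)):
            if not leftovers:
                return groups
            groups[i].append(leftovers.pop(0))
    # Then distribute the rest in turn.
    for j, server in enumerate(leftovers):
        groups[j % len(groups)].append(server)
    return groups
```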
This technical scheme avoids the failure of an entire deployment when one application server is defective in serial deployment; parallel deployment avoids deployment interruption; and grouped deployment avoids the service interruption caused by restarting all nodes simultaneously. It thus reduces the difficulty and intensity of the work of deployment personnel, saves deployment time, and allows the servers of a massive online system to be deployed and updated more stably and effectively. In addition, the method defines three tiers, namely online servers, verified servers, and servers found faulty after the first and second screenings, which guarantees to the greatest extent the validity of the parallel routes and of the serially deployed application servers within each route.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A system for multi-node grouped parallel deployment, the system comprising:
a deployment storage server, provided with a judging unit;
a deployment task server, which groups the application servers in parallel according to instructions from the deployment storage server;
a plurality of application servers, distributed into parallel routes by the deployment task server;
a deployment description file, held in the deployment storage server, which sets the access addresses and passwords of all application servers in the system, records the number of currently online application servers according to feedback from the deployment storage server, and gives a parallel grouping description of the online application servers;
the judging unit starts the deployment description file based on the instruction of a deployment task event; the deployment task server pulls the deployment description file to each application server and groups the application servers in parallel according to the parallel grouping description in the deployment description file;
the parallel grouping description uses an equalization algorithm, which comprises:
obtaining the number of online application servers, or the number M of password-confirmed online application servers, to be divided into N groups of K servers each;
trying successive natural numbers X for the group count until M - X^2 < 0, then taking N = X - 1 groups and randomly distributing the remaining M - N^2 application servers among the N groups, at most one extra server per group, where X is the trial value of the group count N;
the K application servers in each group are deployed serially in address order.
2. The system according to claim 1, wherein after the deployment description file records the number of currently online application servers, password confirmation is performed on each online application server, and only the password-confirmed online application servers are given a parallel grouping description.
3. The system of claim 1 or 2, wherein the remaining M - N^2 application servers are allocated not randomly but to the groups in turn.
4. A method of multi-node grouped parallel deployment, the method comprising:
reading the number of online application servers;
setting a deployment description file, which sets the access addresses and passwords of all application servers in the system, records the number of currently online application servers, and gives a parallel grouping description of the online application servers;
judging whether a deployment task event has been triggered; if so, transmitting the deployment description file to all currently online application servers and grouping them in parallel according to the parallel grouping description;
the parallel grouping description uses an equalization algorithm, which comprises:
obtaining the number of online application servers, or the number M of password-confirmed online application servers, to be divided into N groups of K servers each;
trying successive natural numbers X for the group count until M - X^2 < 0, then taking N = X - 1 groups and randomly distributing the remaining M - N^2 application servers among the N groups, at most one extra server per group, where X is the trial value of the group count N;
the K application servers in each group are deployed serially in address order.
5. The method according to claim 4, wherein after the deployment description file records the number of currently online application servers, password confirmation is performed on each online application server, and only the password-confirmed online application servers are given a parallel grouping description.
6. The method of claim 4 or 5, wherein the remaining M - N^2 application servers are allocated not randomly but to the groups in turn.
7. The method according to claim 4 or 5, wherein in the serial deployment process, if an application server fails, it is skipped after its failure is reported, and deployment continues according to the parallel grouping description in the deployment description file, without recalculation;
if every serially deployed server in one parallel route fails, the parallel route is retained, and deployment continues according to the parallel grouping description in the deployment description file, without recalculation.
8. The method of claim 7, wherein when the remaining M - N^2 application servers are allocated, the serial routes containing failed application servers are topped up first, in descending order of the number of reported failures; any application servers still remaining are then allocated randomly or in turn.
CN201910799070.0A 2019-08-28 2019-08-28 System and method for multi-node grouping parallel deployment Active CN110532096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910799070.0A CN110532096B (en) 2019-08-28 2019-08-28 System and method for multi-node grouping parallel deployment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910799070.0A CN110532096B (en) 2019-08-28 2019-08-28 System and method for multi-node grouping parallel deployment

Publications (2)

Publication Number Publication Date
CN110532096A CN110532096A (en) 2019-12-03
CN110532096B true CN110532096B (en) 2022-12-30

Family

ID=68664596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910799070.0A Active CN110532096B (en) 2019-08-28 2019-08-28 System and method for multi-node grouping parallel deployment

Country Status (1)

Country Link
CN (1) CN110532096B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327842A (en) * 2020-09-29 2022-04-12 华为技术有限公司 Multitask deployment method and device
CN112615916A (en) * 2020-12-14 2021-04-06 微医云(杭州)控股有限公司 File deployment method and device, electronic equipment and storage medium
CN113766047B (en) * 2021-09-16 2024-03-22 北京恒安嘉新安全技术有限公司 Task grouping method and device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102882900A (en) * 2011-07-11 2013-01-16 阿里巴巴集团控股有限公司 Application and deployment method for large-scale server cluster and large-scale server cluster
CN103765851A (en) * 2011-06-30 2014-04-30 思杰系统有限公司 Systems and methods for transparent layer 2 redirection to any service
CN105743680A (en) * 2014-12-11 2016-07-06 深圳云之家网络有限公司 Cluster disposition method and disposition device
WO2018099067A1 (en) * 2016-11-29 2018-06-07 上海壹账通金融科技有限公司 Distributed task scheduling method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7657887B2 (en) * 2000-05-17 2010-02-02 Interwoven, Inc. System for transactionally deploying content across multiple machines

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103765851A (en) * 2011-06-30 2014-04-30 思杰系统有限公司 Systems and methods for transparent layer 2 redirection to any service
CN102882900A (en) * 2011-07-11 2013-01-16 阿里巴巴集团控股有限公司 Application and deployment method for large-scale server cluster and large-scale server cluster
CN105743680A (en) * 2014-12-11 2016-07-06 深圳云之家网络有限公司 Cluster disposition method and disposition device
WO2018099067A1 (en) * 2016-11-29 2018-06-07 上海壹账通金融科技有限公司 Distributed task scheduling method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Using parallel distributed reasoning for monitoring computing networks; S. Musman et al.; 2010 MILCOM Military Communications Conference; 2011-01-06; pp. 417-422 *
Design and implementation of an encrypted file system based on a group key server; Xiao Da et al.; Chinese Journal of Computers (《计算机学报》); April 2008 (No. 04); pp. 600-610 *
A high-performance distributed storage system for massive high-definition video data; Cao Shunde et al.; Journal of Software (《软件学报》); August 2017; Vol. 28, No. 8; pp. 1999-2009 *

Also Published As

Publication number Publication date
CN110532096A (en) 2019-12-03

Similar Documents

Publication Publication Date Title
CN110532096B (en) System and method for multi-node grouping parallel deployment
CN109586952B (en) Server capacity expansion method and device
CN106170971B (en) Arbitration process method, arbitration storage device and system after a kind of cluster fissure
CN111901422B (en) Method, system and device for managing nodes in cluster
EP3142011B9 (en) Anomaly recovery method for virtual machine in distributed environment
US20100223609A1 (en) Systems and methods for automatic discovery of network software relationships
US9087005B2 (en) Increasing resiliency of a distributed computing system through lifeboat monitoring
CN108173911B (en) Micro-service fault detection processing method and device
CN110134518B (en) Method and system for improving high availability of multi-node application of big data cluster
EP2614436A2 (en) Controlled automatic healing of data-center services
CN106068626B (en) Load balancing in a distributed network management architecture
CN107729185B (en) Fault processing method and device
CN107453932B (en) Distributed storage system management method and device
CN110990115A (en) Containerized deployment management system and method for honeypots
US9280741B2 (en) Automated alerting rules recommendation and selection
JPWO2022255247A5 (en)
CN106254312A (en) A kind of method and device being realized server attack protection by virtual machine isomery
JP6607572B2 (en) Recovery control system and method
CN107656847A (en) Node administration method, system, device and storage medium based on distributed type assemblies
CN109379223A (en) A kind of method and apparatus for realizing network interface card automated setting
CN109039781B (en) Network equipment fault diagnosis method, execution node, server and system
CN106406963B (en) Initialization method and device of Linux system
CN106209561A (en) The sending method of loop detection message and device
JP2018026709A (en) Fault recovery system and method
JP6269199B2 (en) Management server, failure recovery method, and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221215

Address after: 518000 502, Building 5, Fanshen Industrial Zone, Yangguang Second Road, Yangguang Community, Xili Street, Nanshan District, Shenzhen, Guangdong

Applicant after: Shenzhen Yuncunbao Technology Co.,Ltd.

Address before: Room 193, No. 333, jiufo Jianshe Road, Zhongxin Guangzhou Knowledge City, Guangzhou, Guangdong 510663

Applicant before: GUANGDONG LEZHIKANG MEDICAL TECHNOLOGY Co.,Ltd.

GR01 Patent grant