CN114338363A - Continuous integration method, apparatus, device and storage medium
- Publication number
- CN114338363A (Application No. CN202111552876.3A)
- Authority
- CN
- China
- Prior art keywords
- service
- node
- cluster
- distributed
- slave
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Landscapes
- Debugging And Monitoring (AREA)
Abstract
The application discloses a continuous integration method, apparatus, device and storage medium. The method includes: creating a distributed service cluster comprising a master service node and a first preset number of slave service nodes; acquiring state information of all service nodes in the distributed service cluster in real time to obtain service node state information; and monitoring all service nodes in the distributed service cluster, determining a target slave service node from the slave service nodes according to the service node state information when an abnormality of the master service node is detected, and acquiring the target data from the target slave service node so as to perform integration in the continuous integration system by using the target data. By creating the distributed service cluster and constructing a candidate queue from the state information of each node in the cluster, the application ensures that a new node can be elected to take over the task when a single node fails, which guarantees high reliability of the whole continuous integration process, accelerates iterative delivery of products, and ensures data security.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a continuous integration method, apparatus, device, and storage medium.
Background
Currently, with the explosive growth of market demand and the spread of agile development, high-frequency delivery has become one of the essential elements of product maturity. In a conventional Continuous Integration (CI) system, most of the core processing modules, such as the code repository, compiler, continuous integration tool server, automated tester, and mirror repository, are stand-alone and operate independently. Once the continuous integration system encounters an abnormality, a natural disaster, or the like, a power failure or downtime is easily caused, so that products cannot be delivered, and serious consequences such as data loss may even follow.
Therefore, how to continue delivering the product and ensure data security when the continuous integration system fails remains a problem to be solved.
Disclosure of Invention
In view of this, an object of the present application is to provide a continuous integration method, apparatus, device and storage medium, which can accelerate iterative delivery of products and ensure data security. The specific scheme is as follows:
in a first aspect, the present application discloses a continuous integration method, comprising:
creating a distributed service cluster comprising a master service node and a first preset number of slave service nodes; target data stored in the master service node is synchronized to the slave service nodes according to a preset synchronization rule;
acquiring state information of all service nodes in the distributed service cluster in real time to obtain service node state information;
and monitoring all service nodes in the distributed service cluster, determining a target slave service node from the slave service nodes according to the service node state information when an abnormality of the master service node is detected, and acquiring the target data from the target slave service node so as to perform integration in the continuous integration system by using the target data.
Optionally, the determining a target slave service node from the slave service nodes according to the service node state information includes:
calculating a variance for each slave service node by using the service node state information to obtain slave service node variances;
and sorting the slave service nodes in ascending order of their variances to obtain a target candidate queue, and determining the target slave service node from the target candidate queue.
Optionally, the continuous integration method further includes:
constructing a distributed cluster management system for managing the distributed service cluster; the distributed cluster management system comprises a master management node and a second preset number of slave management nodes;
and judging whether the master management node in the distributed cluster management system is normal or not, if so, managing the distributed service cluster through the master management node, if not, determining a target slave management node from the slave management nodes, and managing the distributed service cluster through the target slave management node.
Optionally, the continuous integration method further includes:
and backing up the node cache data in the distributed service cluster and the distributed cluster management system when a preset time slice threshold elapses and/or a preset cache water level threshold is reached.
Optionally, the performing integration in the continuous integration system by using the target data includes:
and merging the target data into a preset version control system, so that the version control system automatically runs the build and test cases by using the target data to obtain automated test case results, and releases the application program in a mirror repository.
Optionally, the continuous integration method further includes:
judging whether the integration is successful according to the automated test case results, and if the integration fails, performing exception analysis to obtain an exception analysis result;
and judging whether the exception analysis result is caused by a business code problem; if not, re-triggering the step of performing integration in the continuous integration system by using the target data; if so, generating corresponding exception alarm information and sending it to a developer terminal according to a preset prompting mode.
Optionally, the monitoring the service nodes in the distributed service cluster includes:
acquiring log information of an operating system corresponding to a service node in the distributed service cluster, and judging whether an abnormal service node exists in the distributed service cluster according to the log information;
and if an abnormal service node exists in the distributed service cluster, marking the abnormal service node according to a preset marking rule to obtain a marked abnormal service node, and deleting the marked abnormal service node from the distributed service cluster.
In a second aspect, the present application discloses a continuous integration apparatus, comprising:
the service cluster creating module is used for creating a distributed service cluster comprising a master service node and a first preset number of slave service nodes; target data stored in the master service node is synchronized to the slave service nodes according to a preset synchronization rule;
the state information acquisition module is used for acquiring the state information of all the service nodes in the distributed service cluster in real time to obtain the state information of the service nodes;
and the integration module is used for monitoring all service nodes in the distributed service cluster, determining a target slave service node from the slave service nodes according to the service node state information when an abnormality of the master service node is detected, and acquiring the target data from the target slave service node so as to perform integration in the continuous integration system by using the target data.
In a third aspect, the present application discloses an electronic device comprising a processor and a memory; wherein the processor implements the aforementioned continuous integration method when executing the computer program stored in the memory.
In a fourth aspect, the present application discloses a computer readable storage medium for storing a computer program; wherein the computer program, when executed by a processor, implements the aforementioned continuous integration method.
It can be seen that, in the present application, a distributed service cluster including a master service node and a first preset number of slave service nodes is created; state information of all service nodes in the distributed service cluster is then obtained in real time to obtain service node state information; all service nodes in the distributed service cluster are monitored; when an abnormality of the master service node is detected, a target slave service node is determined from the slave service nodes according to the service node state information, and target data is obtained from the target slave service node so as to perform integration in the continuous integration system by using the target data. Thus, by creating the distributed service cluster and constructing a candidate queue from the state information of each node in the cluster, it is ensured that a new node can be elected to take over the task when a single node fails, which guarantees high reliability of the whole continuous integration process, accelerates iterative delivery of products, and ensures data security.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application, and those skilled in the art can obtain other drawings from the provided drawings without creative effort.
FIG. 1 is a flow chart of a continuous integration method disclosed herein;
FIG. 2 is a flow chart of a specific continuous integration business process disclosed herein;
FIG. 3 is a flow chart of a specific continuous integration method disclosed herein;
FIG. 4 is a schematic diagram of a continuous integration apparatus disclosed in the present application;
fig. 5 is a block diagram of an electronic device disclosed in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The embodiment of the application discloses a continuous integration method, and as shown in fig. 1, the method comprises the following steps:
step S11: creating a distributed service cluster comprising a main service node and a first preset number of auxiliary service nodes; and the target data stored in the main service node is synchronized to the slave service node according to a preset synchronization rule.
In this embodiment, first, a distributed service cluster including a master service node and a preset number of slave service nodes needs to be created for each independently operating service module in the persistent integration system; target data stored in the master service node is synchronized to the slave service node according to a preset synchronization rule; each independently operating business module in the continuous integration system comprises but is not limited to a code library, a compiler, a continuous integration tool server, an automation tester, a mirror library and the like. Accordingly, the distributed service cluster includes, but is not limited to, a code library cluster, a compiling cluster, a persistent integration service cluster, an automation cluster, a mirror library cluster, and the like.
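As an illustration of this cluster layout, the following minimal Python sketch builds one master/slave cluster per business module; all class, function, and module names here (ServiceNode, create_cluster, the module labels, and the slave count of 3) are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceNode:
    """One node in a distributed service cluster."""
    node_id: str
    role: str                                    # "master" or "slave"
    status: dict = field(default_factory=dict)   # state info: config, services, marks, ...

@dataclass
class ServiceCluster:
    """A distributed service cluster for one independently operating CI business module."""
    module: str
    master: ServiceNode
    slaves: list

def create_cluster(module: str, slave_count: int) -> ServiceCluster:
    """Create one cluster with a single master and a preset number of slaves."""
    master = ServiceNode(node_id=f"{module}-master", role="master")
    slaves = [ServiceNode(node_id=f"{module}-slave-{i}", role="slave")
              for i in range(slave_count)]
    return ServiceCluster(module=module, master=master, slaves=slaves)

# One cluster per business module of the CI system (names are illustrative).
clusters = {m: create_cluster(m, slave_count=3)
            for m in ("code_repo", "compiler", "ci_server", "autotest", "mirror_repo")}
```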
In this embodiment, before creating the distributed service cluster including one master service node and a first preset number of slave service nodes, the method specifically further includes: constructing a distributed cluster management system for managing the distributed service cluster, the distributed cluster management system comprising a master management node and a second preset number of slave management nodes; and judging whether the master management node in the distributed cluster management system is normal; if so, managing the distributed service cluster through the master management node; if not, determining a target slave management node from the slave management nodes and managing the distributed service cluster through the target slave management node. That is, a distributed cluster management system is pre-constructed, which includes one master management node and a preset number of slave management nodes and is used to manage the distributed service cluster. It can be understood that under normal conditions, i.e., when the master management node of the distributed cluster management system is not abnormal, the distributed service cluster is managed by the master management node; when the master management node is abnormal, a target slave management node is determined from the slave management nodes in the distributed cluster management system, and the distributed service cluster is managed by the target slave management node.
Meanwhile, a synchronization mechanism and a fault tolerance mechanism are configured in the distributed cluster management system. The synchronization mechanism synchronizes target data stored in the master management node to the slave management nodes according to a customizable synchronization period, and when an inconsistency of the target data between the master management node and a slave management node is detected, the target data stored in the master management node is automatically synchronized to and overwrites the copy on the slave management node. The synchronization period ranges from a minimum of 1 second to a maximum of 60 minutes. It should be noted that the automatic synchronization overwrite affects neither the operation of each management node in the distributed cluster management system nor the normal operation of the distributed service clusters with which communication connections have been established.
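A minimal sketch of how such a synchronization mechanism could look, reusing the ServiceNode type from the previous sketch; the clamping constants mirror the 1-second-to-60-minute bounds stated above, while the timer-based loop and the target_data key are assumptions for illustration:

```python
import copy
import threading

MIN_PERIOD_S = 1            # customizable lower bound: 1 second
MAX_PERIOD_S = 60 * 60      # customizable upper bound: 60 minutes

def start_sync(master: ServiceNode, slaves: list, period_s: int) -> None:
    """Periodically push the master's target data to every slave; if a slave's
    copy has diverged, the master's copy automatically overwrites it."""
    period_s = max(MIN_PERIOD_S, min(period_s, MAX_PERIOD_S))  # clamp to [1 s, 60 min]

    def tick():
        master_data = master.status.get("target_data")
        for slave in slaves:
            if slave.status.get("target_data") != master_data:
                # automatic synchronization coverage: master overwrites the slave
                slave.status["target_data"] = copy.deepcopy(master_data)
        timer = threading.Timer(period_s, tick)
        timer.daemon = True      # do not block process exit
        timer.start()

    tick()
```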
In this embodiment, the continuous integration method may specifically further include: backing up the node cache data in the distributed service cluster and the distributed cluster management system when a preset time slice threshold elapses and/or a preset cache water level threshold is reached. In this embodiment, in order to prevent node cache data loss caused by extreme conditions, such as an earthquake or a fire, a cache processing mechanism is added to both the distributed service cluster and the distributed cluster management system; the backup of node cache data is jointly controlled by the time slice threshold and the cache water level threshold, that is, node cache data in the distributed service cluster and the distributed cluster management system is backed up and stored when the preset time slice threshold elapses and/or the preset cache water level threshold is reached.
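The joint control of the two thresholds can be expressed as a small predicate; a sketch under the assumption that the cache water level is measured in bytes and the time slice in seconds, since the patent fixes neither unit:

```python
import time

def should_backup(last_backup_ts: float, cache_bytes: int,
                  time_slice_s: float, water_level_bytes: int) -> bool:
    """Joint control: trigger a backup of node cache data when the time-slice
    threshold has elapsed and/or the cache water-level threshold is reached."""
    time_due = (time.time() - last_backup_ts) >= time_slice_s
    level_due = cache_bytes >= water_level_bytes
    return time_due or level_due
```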
Step S12: and acquiring the state information of all service nodes in the distributed service cluster in real time to obtain the state information of the service nodes.
In this embodiment, after the distributed service cluster including one master service node and a first preset number of slave service nodes is created, the state information of all the service nodes in the distributed service cluster is obtained in real time to obtain the service node state information. The state information of a node includes, but is not limited to, the configuration information and configured-service information of each service node.
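A sketch of this real-time acquisition step, polling every node of every cluster from the earlier sketches; the config and services field names are placeholders for the configuration information and configured-service information mentioned above:

```python
def collect_state(clusters: dict) -> dict:
    """Poll every service node in every cluster and return its state information
    (configuration information and configured-service information)."""
    state = {}
    for module, cluster in clusters.items():
        for node in [cluster.master, *cluster.slaves]:
            state[node.node_id] = {
                "module": module,
                "role": node.role,
                "config": node.status.get("config", {}),
                "services": node.status.get("services", {}),
            }
    return state
```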
Step S13: and monitoring all service nodes in the distributed service cluster, determining a target slave service node from the slave service nodes according to the service node state information when an abnormality of the master service node is detected, and acquiring the target data from the target slave service node so as to perform integration in the continuous integration system by using the target data.
In this embodiment, after the state information of all service nodes in the distributed service cluster is obtained in real time, each service node in the distributed service cluster is monitored in real time. When an abnormality of the master service node is detected, a target slave service node is determined from the slave service nodes according to the service node state information, the target data is obtained from the target slave service node, and the obtained target data is then used to perform the integration operation for the entire continuous integration system.
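Putting the monitoring and the election together, a hedged sketch of the failover path; build_candidate_queue is the variance-based helper sketched later (step S26 of the second embodiment), and the abnormal flag in node_state is an assumed representation of the monitoring result:

```python
def failover_if_needed(cluster: ServiceCluster, node_state: dict):
    """If the master is flagged abnormal, promote the head of the variance-based
    candidate queue to master and return its copy of the target data."""
    if node_state.get(cluster.master.node_id, {}).get("abnormal"):
        queue = build_candidate_queue(cluster.slaves, node_state)  # see step S26
        target = queue[0]                 # smallest variance heads the queue
        cluster.slaves.remove(target)
        target.role = "master"
        cluster.master = target
        return target.status.get("target_data")
    return None
```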
In this embodiment, the performing integration in the continuous integration system by using the target data may specifically include: merging the target data into a preset version control system, so that the version control system automatically runs the build and test cases by using the target data to obtain automated test case results, and releases the application program in a mirror repository. It can be understood that after the target data is obtained from the target slave service node, the target data may be merged into a preset version control system; after the version control system obtains the target data, it uses the target data to automatically run the build and test cases to obtain automated test case results, analyzes those results, and, if the results are correct, releases the application program in the mirror repository.
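The integration step itself can be pictured as a short pipeline; the four stage functions below are placeholders standing in for the real version control system, build system, test runner, and mirror repository, which the patent does not name beyond the Git/SVN and Jenkins examples:

```python
# Placeholder stage functions; in a real deployment these would call the
# version control system (e.g. Git/SVN), the build system, the test runner,
# and the mirror repository respectively.
def merge_into_vcs(data) -> bool: return True
def run_build(data) -> bool: return True
def run_test_cases(data) -> list: return [True]
def publish_to_mirror_repo(data) -> None: pass

def integrate(target_data) -> bool:
    """Merge the target data into the VCS, auto-run the build and test cases,
    and release the application image in the mirror repository on success."""
    if not merge_into_vcs(target_data):
        return False
    if not run_build(target_data):
        return False
    case_results = run_test_cases(target_data)   # automated test case results
    if all(case_results):
        publish_to_mirror_repo(target_data)      # release the application
        return True
    return False
```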
In this embodiment, after the automated test case results are obtained, the method specifically includes: judging whether the integration is successful according to the automated test case results, and if the integration fails, performing exception analysis to obtain an exception analysis result; and judging whether the exception analysis result is caused by a business code problem; if not, re-triggering the step of performing integration in the continuous integration system by using the target data; if so, generating corresponding exception alarm information and sending it to a developer terminal according to a preset prompting mode. It can be understood that whether the integration is successful can be judged by analyzing the automated test case results. If the integration fails, a corresponding exception analysis is performed to obtain an exception analysis result, and it is further judged from that result whether the failure was caused by a business code problem. If it was not caused by a business code problem, the step of performing integration in the continuous integration system by using the target data is re-executed; if it was, corresponding exception alarm information is generated and sent to the developer terminal according to a preset prompting mode, for example, pushed to the developer terminal in real time by mail and displayed on a large-screen interface.
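The failure branch then reduces to a retry-or-alert decision; a sketch with the analysis, notification, and re-run steps injected as callables, all of which are hypothetical names rather than patent terminology:

```python
def handle_result(success: bool, analyze, caused_by_business_code,
                  alert_developer, rerun_integration) -> None:
    """On integration failure, run exception analysis; re-trigger the integration
    if the cause is not a business-code problem, otherwise alert the developer."""
    if success:
        return
    result = analyze()                      # exception analysis result
    if caused_by_business_code(result):
        alert_developer(result)             # e.g. mail push plus large-screen display
    else:
        rerun_integration()                 # re-trigger the integration step
```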
Specifically, referring to fig. 2, fig. 2 shows the overall business flow of the continuous integration method. First, a developer submits code to the code repository corresponding to the application, for example, Git (an open-source distributed version control system) or SVN (Subversion, an open-source code version control system). Then, according to a predefined build relationship, different slave nodes in a Jenkins (a continuous integration tool developed in Java) cluster are designated to build the application. After the build is completed, the image is pushed to a test environment, and whether the push succeeded is judged. If the push succeeded, the automated test case set corresponding to the application is started, the automated test case results are pushed, and the application program is then released in the mirror repository. If the push failed, an exception analysis is performed on the failure cause to judge whether it was caused by the business code. If not, the corresponding automated test case set is triggered, the automated test case results are pushed, and the application program is released in the mirror repository once the push succeeds; if that push fails, exception analysis is performed again. When the exception analysis result indicates a business code cause, corresponding exception alarm information is directly generated, sent to the developer terminal by mail, and displayed on a large screen. If the cause is not a business code problem, the Jenkins image build is triggered again; if the failure stems from other problems or the automated test results fail to display, the Jenkins log package is pushed to the code submitter (developer) by mail and displayed synchronously on the large screen.
It can be seen that, in the embodiment of the present application, a distributed service cluster including a master service node and a first preset number of slave service nodes is created; state information of all service nodes in the distributed service cluster is then obtained in real time to obtain service node state information; all service nodes in the distributed service cluster are monitored; when an abnormality of the master service node is detected, a target slave service node is determined from the slave service nodes according to the service node state information, and target data is obtained from the target slave service node so as to perform integration in the continuous integration system by using the target data. Thus, by creating the distributed service cluster and constructing a candidate queue from the state information of each node in the cluster, it is ensured that a new node can be elected to take over the task when a single node fails, which guarantees high reliability of the whole continuous integration process, accelerates iterative delivery of products, and ensures data security.
The embodiment of the application discloses a specific continuous integration method, and as shown in fig. 3, the method includes:
step S21: creating a distributed service cluster comprising a main service node and a first preset number of auxiliary service nodes; and the target data stored in the main service node is synchronized to the slave service node according to a preset synchronization rule.
Step S22: and acquiring the state information of all service nodes in the distributed service cluster in real time to obtain the state information of the service nodes.
Step S23: and acquiring log information of an operating system corresponding to the service node in the distributed service cluster, and judging whether an abnormal service node exists in the distributed service cluster according to the log information.
In this embodiment, after the state information of all service nodes in the distributed service cluster is obtained in real time, the log information of the operating system (OS) corresponding to each service node in the distributed service cluster is collected, such as the log information recorded in the dmesg file, a dump file, or the messages file, and whether an abnormal service node exists in the distributed service cluster is judged according to the collected log information.
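A sketch of such a log-based check; the log file paths and fault keywords are assumptions for illustration (the patent only names dmesg-, dump-, and messages-style OS logs), and read_log stands in for however the log text is fetched from the node:

```python
LOG_FILES = ("/var/log/dmesg", "/var/log/messages")   # assumed log locations
FAULT_MARKERS = ("panic", "oom", "i/o error")         # assumed fault keywords

def node_is_abnormal(read_log) -> bool:
    """Scan a node's OS log text for fault markers; any hit flags the node as
    a candidate abnormal service node. read_log(path) fetches the log text,
    e.g. over SSH from the node."""
    for path in LOG_FILES:
        try:
            text = read_log(path)
        except OSError:
            continue                       # log unavailable: skip this file
        if any(marker in text.lower() for marker in FAULT_MARKERS):
            return True
    return False
```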
Step S24: and if the distributed service cluster has abnormal service nodes, marking the abnormal service nodes according to a preset marking rule to obtain abnormal marked service nodes, and deleting the abnormal service marked nodes from the distributed service cluster.
In this embodiment, after obtaining log information of an operating system corresponding to a service node in the distributed service cluster and determining whether an abnormal service node exists in the distributed service cluster according to the log information, if an abnormal service node exists in the distributed service cluster, marking the abnormal service node according to a preset marking rule to obtain an abnormal marked service node, and deleting the abnormal service marked node from the distributed service cluster. For example, the abnormal service node is assigned to 100 according to a preset assignment rule, the normal service node is assigned to 0, and the service node assigned to 100 is deleted from the distributed service cluster.
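The 100/0 example marking rule and the pruning step could look like this sketch, reusing the cluster and node_state shapes from the earlier sketches:

```python
ABNORMAL_MARK, NORMAL_MARK = 100, 0   # values from the example marking rule

def mark_and_prune(cluster: ServiceCluster, node_state: dict) -> None:
    """Assign 100 to abnormal nodes and 0 to normal ones, then delete slave
    nodes marked 100 from the cluster. An abnormal master is only marked here;
    its replacement is handled by the failover path above."""
    for node in [cluster.master, *cluster.slaves]:
        abnormal = node_state.get(node.node_id, {}).get("abnormal", False)
        node_state.setdefault(node.node_id, {})["mark"] = (
            ABNORMAL_MARK if abnormal else NORMAL_MARK)
    cluster.slaves = [s for s in cluster.slaves
                      if node_state[s.node_id]["mark"] != ABNORMAL_MARK]
```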
Step S25: and monitoring all service nodes in the distributed service cluster, and when the master service node is monitored to be abnormal, calculating the variance of the slave service node by using the service node state information to obtain the slave service node variance.
In this embodiment, if an abnormal service node exists in the distributed service cluster, the abnormal service node is marked according to a preset marking rule to obtain an abnormal marked service node, and after the abnormal service marked node is deleted from the distributed service cluster, if it is monitored that the main service node is abnormal, for example, it is monitored that the main service node is assigned as 100, the variance of each slave service node is calculated by using the service node state information to obtain a slave service node variance. It should be noted that the slave service node is a service node deleted by an abnormal service node, that is, the slave service node which is not abnormal in the distributed service cluster currently exists.
Step S26: and sequencing the slave service nodes corresponding to the slave service node variance according to the sequence from small to large of the slave service node variance to obtain a target candidate queue, determining a target slave service node from the target candidate queue, and acquiring the target data from the target slave service node so as to integrate a continuous integration system by using the target data.
In this embodiment, when it is monitored that the master service node is abnormal, the variance of the slave service node is calculated by using the service node state information, after the variance of the slave service node is obtained, the variances of the slave service nodes are sorted according to a sequence from small to large to obtain a sorted queue, that is, the target candidate queue, and then a target slave service node is determined from the target candidate queue, so as to obtain the target data from the target slave service node, and further, the obtained target data may be used to perform an integration operation on the persistent integration system.
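Steps S25 and S26 combined in one sketch: the patent does not specify which state quantities feed the variance, so the numeric metrics samples per node are an assumption; the candidate queue is simply the surviving slaves sorted by that variance, ascending:

```python
from statistics import pvariance

def build_candidate_queue(slaves: list, node_state: dict) -> list:
    """Compute a variance over each surviving slave's numeric state samples and
    sort the slaves by that variance in ascending order; the steadiest node
    heads the target candidate queue."""
    def node_variance(node) -> float:
        samples = node_state.get(node.node_id, {}).get("metrics", [0.0])
        return pvariance(samples) if samples else 0.0
    return sorted(slaves, key=node_variance)

# The target slave service node is the head of the queue:
# target = build_candidate_queue(cluster.slaves, state)[0]
```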
For more specific processing procedures of the steps S21 and S22, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
It can be seen that, in the embodiment of the present application, after the state information of all service nodes in the distributed service cluster is obtained in real time, the log information of the operating system corresponding to the service nodes is obtained, and whether an abnormal service node exists in the distributed service cluster is judged according to the log information. If an abnormal service node exists, it is marked according to the preset marking rule and deleted from the distributed service cluster. When an abnormality of the master service node is detected, the variance of each slave service node is calculated by using the service node state information, the slave service nodes are sorted in ascending order of variance to obtain the target candidate queue, a target slave service node is determined from the target candidate queue, and the target data is acquired from the target slave service node so as to perform integration in the continuous integration system by using the target data. Thus, by calculating the variance of the service nodes in each distributed service cluster in real time and constructing the candidate queue of the distributed service cluster, the embodiment of the present application ensures that when the master service node in the distributed service cluster is abnormal, a new node can be effectively elected from the slave service nodes to take over the task, thereby avoiding single points of failure and accelerating and safeguarding product delivery.
Correspondingly, the embodiment of the present application further discloses a continuous integration apparatus, as shown in fig. 4, the apparatus includes:
a service cluster creating module 11, configured to create a distributed service cluster including a master service node and a first preset number of slave service nodes, wherein target data stored in the master service node is synchronized to the slave service nodes according to a preset synchronization rule;
a status information obtaining module 12, configured to obtain status information of all service nodes in the distributed service cluster in real time to obtain status information of the service nodes;
an integration module 13, configured to monitor all service nodes in the distributed service cluster, determine a target slave service node from the slave service nodes according to the service node state information when an abnormality of the master service node is detected, and obtain the target data from the target slave service node, so as to perform integration in the continuous integration system by using the target data.
For the specific work flow of each module, reference may be made to corresponding content disclosed in the foregoing embodiments, and details are not repeated here.
It can be seen that, in the embodiment of the present application, a distributed service cluster including a master service node and a first preset number of slave service nodes is created; state information of all service nodes in the distributed service cluster is then obtained in real time to obtain service node state information; all service nodes in the distributed service cluster are monitored; when an abnormality of the master service node is detected, a target slave service node is determined from the slave service nodes according to the service node state information, and target data is obtained from the target slave service node so as to perform integration in the continuous integration system by using the target data. Thus, by creating the distributed service cluster and constructing a candidate queue from the state information of each node in the cluster, a new node can be elected to take over the task when a single node fails, which guarantees high reliability of the whole continuous integration process, accelerates iterative delivery of products, and ensures data security.
In some specific embodiments, the integration module 13 may specifically include:
the calculating unit is used for calculating a variance for each slave service node by using the service node state information to obtain the slave service node variances;
and the sorting unit is used for sorting the slave service nodes in ascending order of their variances to obtain a target candidate queue, and determining the target slave service node from the target candidate queue.
In some embodiments, the continuous integration apparatus may further include:
the distributed cluster management system building unit is used for building a distributed cluster management system for managing the distributed service cluster; the distributed cluster management system comprises a master management node and a second preset number of slave management nodes;
the first judging unit is used for judging whether the master management node in the distributed cluster management system is normal;
the first management unit is used for managing the distributed service cluster through the master management node if the master management node is normal;
and the second management unit is used for determining a target slave management node from the slave management nodes if the master management node is abnormal, and managing the distributed service cluster through the target slave management node.
In some embodiments, the continuous integration apparatus may further include:
and the cache data backup unit is used for backing up the node cache data in the distributed service cluster and the distributed cluster management system when a preset time slice threshold elapses and/or a preset cache water level threshold is reached.
In some specific embodiments, the integration module 13 may specifically include:
and the integration unit is used for merging the target data into a preset version control system, so that the version control system automatically runs the build and test cases by using the target data to obtain automated test case results, and releases the application program in a mirror repository.
In some embodiments, the continuous integration apparatus may further include:
the second judging unit is used for judging whether the integration is successful according to the automated test case results;
the exception analysis unit is used for performing exception analysis if the integration fails, to obtain an exception analysis result;
and the third judging unit is used for judging whether the exception analysis result is caused by a business code problem; if not, re-triggering the step of performing integration in the continuous integration system by using the target data; if so, generating corresponding exception alarm information and sending it to the developer terminal according to a preset prompting mode.
In some specific embodiments, the integration module 13 may specifically include:
the information acquisition unit is used for acquiring log information of an operating system corresponding to the service node in the distributed service cluster;
a fourth judging unit, configured to judge whether an abnormal service node exists in the distributed service cluster according to the log information;
a service node marking unit, configured to mark, if an abnormal service node exists in the distributed service cluster, the abnormal service node according to a preset marking rule, so as to obtain a marked abnormal service node;
and the deleting unit is used for deleting the marked abnormal service node from the distributed service cluster.
Further, an electronic device is disclosed in the embodiments of the present application, and fig. 5 is a block diagram of the electronic device 20 according to an exemplary embodiment, which should not be construed as limiting the scope of the application.
Fig. 5 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present disclosure. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input/output interface 25, and a communication bus 26. The memory 22 is used for storing a computer program, which is loaded and executed by the processor 21 to implement the relevant steps in the continuous integration method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in this embodiment may specifically be an electronic computer.
In this embodiment, the power supply 23 is configured to provide a working voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and a communication protocol followed by the communication interface is any communication protocol applicable to the technical solution of the present application, and is not specifically limited herein; the input/output interface 25 is configured to obtain external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
In addition, the memory 22, as a carrier for resource storage, may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like; the resources stored thereon may include an operating system 221, a computer program 222, and the like, and the storage manner may be transient or persistent.
The operating system 221 is used for managing and controlling each hardware device on the electronic device 20 and the computer program 222, and may be Windows Server, Netware, Unix, Linux, or the like. In addition to the computer program that can be used to perform the continuous integration method disclosed in any of the foregoing embodiments and executed by the electronic device 20, the computer programs 222 may further include computer programs that can be used to perform other specific tasks.
Further, the present application also discloses a computer-readable storage medium for storing a computer program; wherein the computer program, when executed by a processor, implements the continuous integration method disclosed above. For the specific steps of the method, reference may be made to the corresponding contents disclosed in the foregoing embodiments, which are not described herein again.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above continuous integration method, apparatus, device and storage medium provided by the present application are described in detail, and specific examples are applied herein to illustrate the principles and embodiments of the present application, and the description of the above embodiments is only used to help understand the method and core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (10)
1. A continuous integration method, comprising:
creating a distributed service cluster comprising a master service node and a first preset number of slave service nodes; target data stored in the master service node is synchronized to the slave service nodes according to a preset synchronization rule;
acquiring state information of all service nodes in the distributed service cluster in real time to obtain service node state information;
and monitoring all service nodes in the distributed service cluster, determining a target slave service node from the slave service nodes according to the service node state information when an abnormality of the master service node is detected, and acquiring the target data from the target slave service node so as to perform integration in the continuous integration system by using the target data.
2. The continuous integration method of claim 1, wherein the determining a target slave service node from the slave service nodes according to the service node state information comprises:
calculating a variance for each slave service node by using the service node state information to obtain slave service node variances;
and sorting the slave service nodes in ascending order of their variances to obtain a target candidate queue, and determining the target slave service node from the target candidate queue.
3. The continuous integration method of claim 1, further comprising:
constructing a distributed cluster management system for managing the distributed service cluster; the distributed cluster management system comprises a master management node and a second preset number of slave management nodes;
and judging whether the master management node in the distributed cluster management system is normal or not, if so, managing the distributed service cluster through the master management node, if not, determining a target slave management node from the slave management nodes, and managing the distributed service cluster through the target slave management node.
4. The continuous integration method of claim 3, further comprising:
and backing up the node cache data in the distributed service cluster and the distributed cluster management system when a preset time slice threshold elapses and/or a preset cache water level threshold is reached.
5. The continuous integration method of claim 1, wherein the performing integration in the continuous integration system by using the target data comprises:
and merging the target data into a preset version control system, so that the version control system automatically runs the build and test cases by using the target data to obtain automated test case results, and releases the application program in a mirror repository.
6. The continuous integration method of claim 5, further comprising:
judging whether the integration is successful according to the automated test case results, and if the integration fails, performing exception analysis to obtain an exception analysis result;
and judging whether the exception analysis result is caused by a business code problem; if not, re-triggering the step of performing integration in the continuous integration system by using the target data; if so, generating corresponding exception alarm information and sending it to a developer terminal according to a preset prompting mode.
7. The continuous integration method according to any one of claims 1 to 6, wherein the monitoring the service nodes in the distributed service cluster comprises:
acquiring log information of an operating system corresponding to a service node in the distributed service cluster, and judging whether an abnormal service node exists in the distributed service cluster according to the log information;
and if an abnormal service node exists in the distributed service cluster, marking the abnormal service node according to a preset marking rule to obtain a marked abnormal service node, and deleting the marked abnormal service node from the distributed service cluster.
8. A continuous integration apparatus, comprising:
the service cluster creating module is used for creating a distributed service cluster comprising a master service node and a first preset number of slave service nodes; target data stored in the master service node is synchronized to the slave service nodes according to a preset synchronization rule;
the state information acquisition module is used for acquiring the state information of all the service nodes in the distributed service cluster in real time to obtain the service node state information;
and the integration module is used for monitoring all service nodes in the distributed service cluster, determining a target slave service node from the slave service nodes according to the service node state information when an abnormality of the master service node is detected, and acquiring the target data from the target slave service node so as to perform integration in the continuous integration system by using the target data.
9. An electronic device comprising a processor and a memory; wherein the processor, when executing the computer program stored in the memory, implements the continuous integration method of any of claims 1 to 7.
10. A computer-readable storage medium for storing a computer program; wherein the computer program, when executed by a processor, implements the continuous integration method of any of claims 1 to 7.
Priority Application
- CN202111552876.3A, filed 2021-12-17: Continuous integration method, apparatus, device and storage medium
Publication
- CN114338363A, published 2022-04-12
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- WW01: Invention patent application withdrawn after publication (application publication date: 2022-04-12)