CN114281362A - Cloud computing product installation method, device, equipment, medium and program product


Info

Publication number
CN114281362A
CN114281362A (application CN202111585058.3A)
Authority
CN
China
Prior art keywords
queue
implementation
installation
node
environment
Prior art date
Legal status
Pending
Application number
CN202111585058.3A
Other languages
Chinese (zh)
Inventor
李波
张程
张胡颖逸
Current Assignee
China Construction Bank Corp
Original Assignee
China Construction Bank Corp
Priority date
Filing date
Publication date
Application filed by China Construction Bank Corp filed Critical China Construction Bank Corp
Priority to CN202111585058.3A priority Critical patent/CN114281362A/en
Publication of CN114281362A publication Critical patent/CN114281362A/en
Pending legal-status Critical Current


Abstract

The present disclosure provides a method for installing a cloud computing product, applicable to the technical field of big data. The method comprises the following steps: generating, in a preprocessing queue, installation component nodes corresponding to the cloud computing product; storing the installation component nodes from the preprocessing queue into an implementation queue; reading, in parallel, the system component nodes and the environment component nodes of the installation component nodes in the implementation queue; and executing the system component implementation parameters corresponding to the system component nodes and the environment component implementation parameters corresponding to the environment component nodes to complete installation of the cloud computing product. The present disclosure also provides an installation apparatus, a device, a storage medium, and a program product for the cloud computing product.

Description

Cloud computing product installation method, device, equipment, medium and program product
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, a medium, and a program product for installing a cloud computing product.
Background
With the rapid development of cloud computing services, more and more cloud service providers offer PaaS (Platform as a Service) to users. PaaS environment installation is the most critical link in cloud-platform PaaS resource provisioning: it installs the OS and middleware specified by the user on a physical or virtual machine supplied by the platform.
Disclosure of Invention
In view of at least one of the above technical problems in the PaaS environment installation process, the present disclosure provides a cloud computing product installation method, apparatus, device, medium, and program product that enable parallel installation to improve installation efficiency.
According to a first aspect of the present disclosure, there is provided a method for installing a cloud computing product, comprising: generating, in a preprocessing queue, an installation component node corresponding to the cloud computing product; storing the installation component node from the preprocessing queue into an implementation queue; reading, in parallel, the system component node and the environment component node of the installation component node in the implementation queue; and executing the system component implementation parameters corresponding to the system component node and the environment component implementation parameters corresponding to the environment component node to complete installation of the cloud computing product.
According to an embodiment of the present disclosure, before generating the installation component node corresponding to the cloud computing product in the preprocessing queue, the method further comprises: parsing a user resource request form corresponding to the cloud computing product to determine a system component and an environment component; and, when the system component matches the environment component, determining the system component implementation parameters corresponding to the system component and the environment component implementation parameters corresponding to the environment component.
According to an embodiment of the present disclosure, generating the installation component node corresponding to the cloud computing product in the preprocessing queue comprises: storing the system component cache identifier of the system component implementation parameters into the preprocessing queue to generate the system component node of the installation component node; and storing the environment component cache identifier of the environment component implementation parameters into the preprocessing queue to generate the environment component node of the installation component node.
According to an embodiment of the present disclosure, before storing the environment component cache identifier of the environment component implementation parameters into the preprocessing queue to generate the environment component node of the installation component node, the method further comprises: generating a check node in the preprocessing queue according to the system component nodes, wherein the system component nodes and the environment component nodes are arranged sequentially in the preprocessing queue with the check node as a boundary node.
According to an embodiment of the present disclosure, before storing the environment component cache identifier of the environment component implementation parameters into the preprocessing queue to generate the environment component node of the installation component node, the method further comprises: storing the environment component implementation parameters into a cache region according to the environment component type to generate the environment component cache identifier.
According to an embodiment of the present disclosure, storing the installation component nodes from the preprocessing queue into the implementation queue comprises: reading a queue identifier of an implementation queue from an implementation master queue; and, when the implementation queue corresponding to the queue identifier and the preprocessing queue satisfy a node matching condition, storing the check node, the system component nodes, and the environment component nodes of the installation component nodes from the preprocessing queue into the implementation queue.
According to an embodiment of the present disclosure, reading in parallel the system component nodes and the environment component nodes of the installation component nodes in the implementation queue comprises: reading the head element of the implementation queue according to the queue state of the implementation queue; when the head element read from the implementation queue is a system component node or an environment component node, calling the corresponding system component implementation parameters or environment component implementation parameters in the cache region; and, when the head element read from the implementation queue is a check node, repeatedly reading another implementation queue according to the reading completion degree of the system component nodes corresponding to the check node.
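As a hedged illustration of this head-element dispatch (the function, tuple layout, and the counter standing in for the "reading completion degree" are hypothetical sketch choices, not defined by this disclosure):

```python
from collections import deque

def read_head(impl_queue, cache, pending_system_reads):
    """Process the head element of one implementation queue.

    Returns the cached implementation parameters for a system/environment
    component node, or None when a check node defers processing (simplified
    here as: defer while any system component node reads are still pending).
    """
    if not impl_queue:
        return None
    node = impl_queue.popleft()
    kind, cache_id = node
    if kind in ("system", "environment"):
        # Call the corresponding implementation parameters in the cache region.
        return cache[cache_id]
    if kind == "check":
        if pending_system_reads > 0:
            # System component nodes not fully read yet: put the check node
            # back so the reader moves on to another implementation queue.
            impl_queue.appendleft(node)
        return None
```

A reader would loop over several such queues, skipping to the next queue whenever `read_head` returns `None` for a check node.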
According to an embodiment of the present disclosure, executing the system component implementation parameters corresponding to the system component nodes and the environment component implementation parameters corresponding to the environment component nodes comprises: executing installation in parallel according to the system component implementation parameters and the environment component implementation parameters; and completing installation of the cloud computing product according to the result of that installation.
A second aspect of the present disclosure provides an apparatus for installing a cloud computing product, comprising a node generation module, a node storage module, a node reading module, and a parameter execution module. The node generation module generates, in a preprocessing queue, the installation component nodes corresponding to the cloud computing product; the node storage module stores the installation component nodes from the preprocessing queue into an implementation queue; the node reading module reads, in parallel, the system component nodes and the environment component nodes of the installation component nodes in the implementation queue; and the parameter execution module executes, in parallel, the system component implementation parameters corresponding to the system component nodes and the environment component implementation parameters corresponding to the environment component nodes to complete installation of the cloud computing product.
A third aspect of the present disclosure provides an electronic device, comprising: one or more processors; a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the installation method of the cloud computing product described above.
A fourth aspect of the present disclosure also provides a computer-readable storage medium having executable instructions stored thereon, which, when executed by a processor, cause the processor to perform the installation method of the cloud computing product described above.
A fifth aspect of the present disclosure also provides a computer program product comprising a computer program that, when executed by a processor, implements the installation method of the cloud computing product described above.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following description of embodiments of the disclosure, which proceeds with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an application scenario diagram of an installation method, apparatus, device, medium, and program product of a cloud computing product according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a method of installing a cloud computing product according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a process flow diagram for the preprocessing queue in a method of installing a cloud computing product according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a process flow diagram of storing installation component nodes into an implementation queue in a method of installing a cloud computing product according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a process flow diagram of performing installation according to the read installation component nodes in a method of installing a cloud computing product according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a structural block diagram of an installation apparatus of a cloud computing product according to an embodiment of the present disclosure; and
FIG. 7 schematically illustrates a block diagram of an electronic device suitable for implementing a method of installing a cloud computing product according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
The existing cloud computing product installation method treats the cloud computing product as a unit and installs it sequentially. Specifically, in the existing installation process, the OS and the middleware are installed sequentially as one overall flow. On the one hand, the total number of concurrently executing installations cannot be controlled from the perspective of the installation server; on the other hand, when a clustered cloud computing product must be provided, the OS-plus-middleware whole must be installed machine by machine in the cluster, so the concurrency of installation cannot be further improved. In other words, the traditional sequential installation method can realize neither rate limiting nor parallelism at the underlying implementation layer. During cluster installation, each machine in the cluster waits in a queue for the previous machine to complete the full installation flow, so the performance of the installation server cannot be fully exploited; and during batch cluster installation, the installation server cannot limit the load, placing enormous pressure on the resource implementation layer.
Therefore, the existing installation method can only install an OS and middleware sharing the same business service requirement as a single installation whole. Under a huge installation volume, installing the OS and middleware sequentially and wholesale for each business service requirement greatly limits installation efficiency, prevents the installation server from reaching its potential, and wastes installation resources.
In view of at least one of the above technical problems in the PaaS environment installation process, the present disclosure provides a cloud computing product installation method, apparatus, device, medium, and program product that enable parallel installation to improve installation efficiency.
It should be noted that the cloud computing product installation method and apparatus of the present disclosure may be used in the technical fields of big data and artificial intelligence, and may also be used in the financial field or in any field other than the financial field.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and application of data, including users' personal information, comply with the relevant laws and regulations, adopt the necessary confidentiality measures, and do not violate public order and good morals. The user's authorization or consent is obtained before any personal information of the user is acquired or collected.
An embodiment of the present disclosure provides a method for installing a cloud computing product, comprising: generating, in a preprocessing queue, installation component nodes corresponding to the cloud computing product; storing the installation component nodes from the preprocessing queue into an implementation queue; reading, in parallel, the system component nodes and the environment component nodes of the installation component nodes in the implementation queue; and executing, in parallel, the system component implementation parameters corresponding to the system component nodes and the environment component implementation parameters corresponding to the environment component nodes to complete installation of the cloud computing product.
Fig. 1 schematically illustrates an application scenario diagram of an installation method, apparatus, device, medium, and program product of a cloud computing product according to an embodiment of the present disclosure.
As shown in fig. 1, the application scenario 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the installation method of the cloud computing product provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the installation apparatus of the cloud computing product provided by the embodiment of the present disclosure may be generally disposed in the server 105. The installation method of the cloud computing product provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the installation apparatus of the cloud computing product provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The installation method of the cloud computing product of the disclosed embodiment will be described in detail below with reference to fig. 2 to 7 based on the scenario described in fig. 1.
Fig. 2 schematically illustrates a flow chart of a method of installing a cloud computing product according to an embodiment of the present disclosure.
As shown in fig. 2, the installation method of the cloud computing product of this embodiment includes operations S201 to S204.
In operation S201, installation component nodes corresponding to the cloud computing product are generated in a preprocessing queue.
In operation S202, the installation component nodes in the preprocessing queue are stored into an implementation queue.
In operation S203, the system component nodes and the environment component nodes of the installation component nodes in the implementation queue are read in parallel.
In operation S204, the system component implementation parameters corresponding to the system component nodes and the environment component implementation parameters corresponding to the environment component nodes are executed to complete installation of the cloud computing product.
In the embodiment of the present disclosure, the cloud computing product may be a business service that provides a user, by way of cloud computing, with a virtual or physical machine comprising system components, environment components, and other installation components, so as to meet the user's requirements. The cloud computing product of the embodiment may be implemented on PaaS: the installation of user-specified programs such as the OS and middleware is executed on a designated physical or virtual machine, thereby accomplishing installation of the cloud computing product.
The preprocessing queue may be a linear table used to generate and store the installation component nodes corresponding to cloud computing products, so that the nodes are arranged in order as queue data; for each preprocessing queue, the head element can be read and new elements can be written at the tail. The element data occupying a specific position of the linear table may be understood as a node; the element data at each position has a unique node identity, such as a node ID, and each element may define a specific implementation flow. An installation component node is the implementation flow node, in the preprocessing queue, of the element data of an installation component such as a system component or an environment component.
The implementation queue is a double-ended queue for storing the nodes of the implementation flows to be executed; it may likewise be embodied as a linear table and used to read and write node element data. The installation component nodes in the preprocessing queue are called one by one, in order, and stored into the implementation queue. The implementation queue can thereby accelerate the processing of nodes.
An installation component node may include a system component node and an environment component node. The system component may be a computer program, such as an OS, that manages and controls the computer's hardware and software resources; the system component node is the implementation flow node of the system component implementation parameters corresponding to that system component. The environment component may be system software or a service program, such as middleware for the OS, that realizes the installation or usage environment of the operating system; the environment component node is the implementation flow node of the environment component implementation parameters corresponding to that environment component. The implementation queue can be read in parallel by different reading tools: multiple reading tools can each read one installation component node at the same time, so the node reading speed can be several times, or even tens or hundreds of times, higher, with the system component nodes and the environment component nodes in the implementation queue each read in parallel. In addition, when there are multiple parallel implementation queues, the corresponding nodes can be read from all of the implementation queues simultaneously; on top of the batch reading of multiple nodes within one implementation queue, batch parallel reading across multiple implementation queues further accelerates the parallel reading of node element data.
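A minimal sketch of such parallel reading, assuming Python worker threads as the "reading tools" (the worker count, queue module, and handler callback are assumptions of this sketch, not details from the disclosure):

```python
import queue
import threading

def parallel_read(implementation_queue, handle_node, workers=4):
    """Drain an implementation queue with several reader threads in parallel."""
    def reader():
        while True:
            try:
                node = implementation_queue.get_nowait()  # one node per read
            except queue.Empty:
                return  # queue drained; this reading tool is done
            handle_node(node)
            implementation_queue.task_done()

    threads = [threading.Thread(target=reader) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Usage: enqueue system/environment component nodes, then read concurrently.
q = queue.Queue()
for node in [("system", "sys-1"), ("environment", "env-1")]:
    q.put(node)
seen = []
lock = threading.Lock()
parallel_read(q, lambda n: (lock.acquire(), seen.append(n), lock.release()))
```

Scaling this across several `queue.Queue` instances gives the batched, multi-queue parallel reading the paragraph describes.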
Through a read system component node, the corresponding system component implementation parameters can be called; through a read environment component node, the corresponding environment component implementation parameters can be called. The system component implementation parameters are the installation data, parameters, files, scripts, and so on required to install the system component; the environment component implementation parameters are those required to install the environment component. Executing the installation operations of these parameters installs the system component and the environment component respectively. Therefore, when the installation of a system component and of its corresponding environment component completes, the installation of the corresponding cloud computing product is complete.
Thus, the implementation-queue-based installation method nodularizes the installation implementation parameters of the installation components, stores the resulting nodes in the implementation queue, and then, through execution reading tools such as execution threads, reads the system component nodes and environment component nodes corresponding to the installation implementation parameters from the implementation queue in batches and executes the installation, thereby achieving controllable batch installation of cloud computing products.
Therefore, compared with the traditional installation flow that installs the OS and middleware integrally and sequentially, the method of the disclosed embodiment splits the unified flow into independent system component installation and environment component installation. This raises the concurrency granularity of product installation, enables the implementation parameters of the installation components to be executed in parallel, allows installation to be rate-limited, and ensures high availability of the system. The installation method of the disclosed embodiment can thus greatly improve installation efficiency, exploit the installation server's performance as fully as possible, save installation resources, and, while guaranteeing installation accuracy, effectively increase installation speed even under a huge installation volume.
Fig. 3 schematically illustrates a process flow diagram for a pre-processing queue in an installation method of a cloud computing product according to an embodiment of the present disclosure.
As shown in figs. 2-3, according to an embodiment of the present disclosure, before operation S201 generates the installation component nodes corresponding to the cloud computing product in the preprocessing queue, the method further includes:
parsing the user resource request form corresponding to the cloud computing product to determine the system component and the environment component; and,
when the system component matches the environment component, determining the system component implementation parameters corresponding to the system component and the environment component implementation parameters corresponding to the environment component.
As shown in fig. 3, a user resource request form provided by the user is obtained. The user resource request form is a resource data file, supplied by the user, that expresses the user's business service requirements; it may take the form of a data table or data file representing the installation resources corresponding to those requirements. Specifically, in operation S301, the user resource request form is parsed to obtain the installation resources matching it, such as the OS and installation components such as middleware. The installation components comprise the system components and the environment components. The same user resource request form may correspond to multiple business services, each business service corresponding to one cloud computing product. In other words, if multiple business services in the user resource request form require installation, multiple cloud computing products must be installed in parallel.
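An illustrative parse of such a request form — the field names (`services`, `os`, `middleware`) are assumptions of this sketch; the disclosure does not define the form's schema:

```python
def parse_resource_request(form):
    """Split a user resource request form into per-service pairs of
    (system component, environment component) — one pair per business
    service, i.e. one per cloud computing product to install."""
    products = []
    for service in form["services"]:
        products.append({
            "service": service["name"],
            "system_component": service["os"],          # e.g. an OS image name
            "environment_component": service["middleware"],
        })
    return products

# Usage: two business services in one form -> two products to install in parallel.
form = {"services": [
    {"name": "web", "os": "linux-os", "middleware": "nginx"},
    {"name": "cache", "os": "linux-os", "middleware": "redis"},
]}
```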
After the installation components matching the user's service requirements are determined, it is determined, as in operation S302, whether installation parameters — such as installation files and implementation flows — satisfying the corresponding system components (such as OS image files) and environment components (such as middleware) exist in the configured database. The installation parameters corresponding to a system component are its system component implementation parameters, and those corresponding to an environment component are its environment component implementation parameters.
Further, since multiple environment components may be suitable for one system component, and multiple system components may be suitable for one environment component, it is necessary to check whether the determined system component and environment component match each other. In the installation method of the disclosed embodiment, multiple cloud computing products can be installed in parallel at the same time. Accordingly, by consulting a preset validity list of system component/environment component combinations, it can be confirmed that the system component selected by the user is applicable to the currently selected environment component, or vice versa, as in operation S303.
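This compatibility check can be sketched as a lookup in a validity list; the table contents below are invented examples for illustration, not combinations taken from the disclosure:

```python
# Hypothetical preset validity list: which environment components (middleware)
# each system component (OS) is permitted to combine with.
VALIDITY = {
    "linux-os": {"nginx", "redis"},
    "windows-os": {"iis"},
}

def combination_is_valid(system_component, environment_component):
    """Return True if the user-selected OS/middleware pair is permitted."""
    return environment_component in VALIDITY.get(system_component, set())
```

An installation request whose pair fails this check would be rejected before any node is generated in the preprocessing queue.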
According to the system components determined to meet the user's service requirements, the corresponding system component implementation parameters — such as the image, the installation flow, and the implementation parameters — are obtained from the preset database. Likewise, the environment component implementation parameters — the corresponding image, installation flow, implementation parameters, and so on — may be obtained according to the environment components selected to meet the user's service requirements, as in operations S304, S309, and S311.
In this way, the implementation parameters of the corresponding system components and environment components can be determined according to the user's service requirements, so that the subsequently installed cloud computing product accurately meets those requirements.
As shown in fig. 2-3, according to an embodiment of the present disclosure, generating an installation component node corresponding to a cloud computing product in the preprocessing queue in operation S201 includes:
storing the system component cache identifier of the system component implementation parameters into the preprocessing queue to generate a system component node of the installation component node; and
storing the environment component cache identifier of the environment component implementation parameters into the preprocessing queue to generate an environment component node of the installation component node.
The determined system component implementation parameters are stored in a set cache region, which may be a cache server, a database, or a data table. Meanwhile, the system component cache identifier (i.e., the cache ID) corresponding to the system component implementation parameters in the cache region is stored as node element data in the preprocessing queue, and a system component node for that node element data is generated, as in operation S305. When a plurality of system component cache identifiers of a plurality of system component implementation parameters corresponding to a plurality of user service requirements are stored in the preprocessing queue as node element data, system component nodes can be generated in batches, where each system component node corresponds to a node ID that identifies the node.
Similarly, the determined environment component implementation parameters are stored in a set cache region, which may be a cache server, a database, or a data table. At the same time, the environment component cache identifier corresponding to the environment component implementation parameters in the cache region is stored as node element data in the preprocessing queue, and an environment component node for that node element data is generated, as in operations S309 and S311. When a plurality of environment component cache identifiers of a plurality of environment component implementation parameters corresponding to a plurality of user service requirements are stored in the preprocessing queue as node element data, environment component nodes can be generated in batches, where each environment component node corresponds to a node ID that identifies the node.
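The cache-and-enqueue step above can be sketched as follows. This is a simplified illustration under stated assumptions: the cache region is modeled as an in-memory dict, the preprocessing queue as a `deque`, and node/cache IDs as UUIDs; the real system would use a cache server or database as described.

```python
import uuid
from collections import deque

cache = {}                  # stand-in for the cache region (server / DB / table)
preprocess_queue = deque()  # the preprocessing queue

def store_and_enqueue(implementation_params: dict, kind: str) -> str:
    """Store implementation parameters in the cache region and store the
    resulting cache ID as node element data in the preprocessing queue,
    generating a node with its own node ID (operations S305/S309/S311)."""
    cache_id = str(uuid.uuid4())           # cache identifier for the parameters
    cache[cache_id] = implementation_params
    node = {"node_id": str(uuid.uuid4()),  # node ID identifying this node
            "kind": kind,                  # "system" or "environment"
            "cache_id": cache_id}
    preprocess_queue.append(node)          # generate the node in the queue
    return node["node_id"]
```

Batch generation is then just a loop over the implementation parameters determined for each user service requirement.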
In this way, the user resource request form is converted through preprocessing into the implementation parameters of the installation components, such as the installation flows and the related files and scripts required by resource implementation. The implementation parameters are then stored in the cache region, and their cache IDs are stored in the preprocessing queue in batches as nodes. The implementation parameters corresponding to the installation components can thus be confirmed, and their identifiers stored into the preprocessing queue as nodes, realizing parameter marking so that the implementation parameters can be accurately called later.
As shown in fig. 2-3, according to an embodiment of the present disclosure, before storing the environment component cache identifier of the environment component implementation parameters into the preprocessing queue and generating the environment component node of the installation component node, the method further includes:
generating a check node in the preprocessing queue according to the system component nodes; and
arranging the system component nodes and the environment component nodes in sequence in the preprocessing queue, with the check node as the boundary node between them.
As shown in fig. 3, for the plurality of system components corresponding to the plurality of business services of the user resource request form, the flow returns to operation S303 to continuously determine whether all of the system components have completed operations such as parameter determination and node storage. Once they have all completed, there are no system components (such as an OS) left to process in the current batch, and operation S306 needs to be executed next.
The check node is a node set over all system component nodes, used for subsequently checking operations such as parameter determination and node storage. It contains the node IDs of all the system component nodes in the preprocessing queue, and is itself assigned a fixed check node ID corresponding to this set of node IDs, so as to uniquely identify the set.
Thus, after all the system component nodes (such as OS node IDs) of the current batch have been stored, a check node is placed in the preprocessing queue. The check node stores the node IDs of all the system component nodes of the current batch, so that the execution status of those nodes can be checked later.
After all system component nodes of the current batch have been stored in the preprocessing queue, the check node is stored next in sequence, and only then are the environment component nodes stored, so that the check node serves as the boundary node between the system component nodes and the environment component nodes. When the check node is encountered while reading the preprocessing queue in order, it therefore means that the system component nodes have all been read, and reading can proceed to the other nodes, such as the environment component nodes.
In this way, a check node is added as a check layer for the preprocessing queue and the subsequent implementation queue, which allows each environment component type to be judged independently of the system components and satisfies requirements such as clustered installation of environment components.
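The boundary-node step above can be sketched as follows, assuming the same dict-based node representation as a plain `deque`; the check node ID format here is hypothetical.

```python
from collections import deque

def append_check_node(queue: deque) -> dict:
    """After all system component nodes of the batch are stored, place a
    check node holding their node IDs as the boundary, before any
    environment component nodes are appended."""
    system_ids = [n["node_id"] for n in queue if n.get("kind") == "system"]
    check_node = {"node_id": "check-" + str(len(system_ids)),  # hypothetical ID scheme
                  "kind": "check",
                  "member_ids": system_ids}   # set of system node IDs to verify later
    queue.append(check_node)
    return check_node
```

Environment component nodes appended after this call automatically sit behind the boundary, giving the ordering the method requires.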
Further, when it is determined that there is no system component left to be installed in the current batch (as in operation S303), it may be further determined whether there is an environment component (such as middleware) to be installed in the current batch, as in operation S307. For some cloud computing products, installation can be completed by installing only a system component, so it is necessary to judge whether environment component installation is required for the cloud computing product.
As shown in fig. 2-3, according to an embodiment of the present disclosure, before storing the environment component cache identifier of the environment component implementation parameters into the preprocessing queue and generating the environment component node of the installation component node, the method further includes:
and storing the environment piece implementation parameters into a cache region according to the environment piece type to generate an environment piece cache identifier.
The environment component types mainly include two kinds: cluster-installed environment components and non-cluster-installed environment components. Non-cluster installation generally means that a single environment component is installed for a single system component, while cluster installation generally means that an environment component group consisting of a plurality of environment components is installed in parallel.
Processing therefore proceeds according to the determined environment component type. For cluster installation, the implementation information, such as the installation package, patch package, installation flow, and implementation parameters, corresponding to the environment component group to be cluster-installed is acquired, as in operation S309; the implementation information is then stored in the cache region, and the cache ID is stored as a node in the preprocessing queue, as in operation S310. For non-cluster installation, the implementation information corresponding to each environment component (such as each middleware) is acquired in turn, as in operation S311; the implementation information is then stored in the cache region, and the cache IDs are stored in the preprocessing queue in sequence as nodes, as in operation S312.
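The branching on environment component type can be sketched as follows. This is an illustrative simplification: component dicts, field names, and flow labels are hypothetical, and the caching/enqueuing step is omitted to show only the grouping logic.

```python
def preprocess_environment(components: list, clustered: bool) -> list:
    """Cluster installation yields one cache entry for the whole environment
    component group (operations S309-S310); non-cluster installation yields
    one cache entry per component, in sequence (operations S311-S312)."""
    entries = []
    if clustered:
        # One parameter set covering the entire group, installed in parallel.
        entries.append({"group": [c["name"] for c in components],
                        "flow": "cluster-install"})
    else:
        # One parameter set per component, each paired with a single system component.
        for c in components:
            entries.append({"name": c["name"], "flow": "single-install"})
    return entries
```

Each returned entry would then be cached and its cache ID enqueued as a node, exactly as for the system components.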
In this way, the identifiers corresponding to the system component implementation parameters and the environment component implementation parameters of the cloud computing product can be written into the preprocessing queue, completing the preprocessing of the implementation parameters. Specifically, the user's cloud computing resource request form is converted, through the preprocessing queue, into the installation flows and related implementation parameters, such as the files and scripts, required by resource implementation. These preprocessing results are stored in the cache region, and the cache IDs of the preprocessing results generated for the current resource request form are stored in order as nodes in the preprocessing queue. This helps split the originally integrated installation of system components such as the OS and environment components such as middleware into independent system component installation and independent environment component installation, further improving the concurrency granularity of the installation. In addition, adding check layers such as check nodes satisfies the clustered installation requirements of different environment components.
Fig. 4 schematically illustrates a process flow diagram of storing installation component nodes into the implementation queue in an installation method of a cloud computing product according to an embodiment of the present disclosure.
As shown in fig. 2-4, according to an embodiment of the present disclosure, storing the installation component nodes in the preprocessing queue into the implementation queue in operation S202 includes:
reading the queue identifier of an implementation queue in the implementation main queue; and
when the implementation queue corresponding to the queue identifier and the preprocessing queue satisfy the node matching condition, storing the check node in the preprocessing queue, together with the system component nodes and environment component nodes of the installation component nodes, into the implementation queue.
The implementation main queue is a component of the implementation total queue, which it forms together with the implementation queue group. The implementation main queue stores the IDs of all queues in the implementation queue group and polls them in order. The implementation queue group consists of a plurality of implementation queues. Each implementation queue is a bidirectional queue that stores the specific implementation flow nodes to be executed; these nodes are added from the preprocessing queue in sequence, according to the implementation batches specified by the business services of the user resource request form.
First, the implementation queue ID at the tail of the implementation main queue is read as the queue identifier of the implementation queue, as in operation S401. The queue identifier may be the node ID of the implementation queue within the implementation main queue.
Based on this implementation queue ID, the implementation queue to be used may be determined, as in operation S402.
The node matching condition is a matching rule between the number of nodes in the implementation queue determined in operation S402 and the number in the preprocessing queue: during the installation of the cloud computing product, the number of preprocessing-queue nodes that need to be written into the implementation queue must match the number of vacant node slots in the implementation queue, so as to ensure that all required nodes in the preprocessing queue can be written into the implementation queue.
It is therefore necessary to check the implementation queue against the current number of nodes in the preprocessing queue. The specific check is to compare the number of occupied nodes in the implementation queue plus the number of nodes to be written from the preprocessing queue with the length threshold of the implementation queue, where the length threshold is the maximum number of nodes the implementation queue can store.
If the sum of the number of nodes in the implementation queue and the number of nodes in the preprocessing queue is greater than the implementation queue length threshold, the system waits for one polling cycle, reads another implementation queue, and repeats operations S401-S403, as in operation S431.
If the sum of the number of nodes in the implementation queue and the number of nodes in the preprocessing queue is less than or equal to the length threshold, the current implementation queue can be locked, since it is determined that the implementation queue can hold all the nodes to be written from the preprocessing queue. The write permission of the implementation queue is then obtained, so that the nodes of the preprocessing queue can subsequently be written into it in sequence, as in operation S404.
Further, each node in the preprocessing queue is read in sequence, so that the nodes in the preprocessing queue (e.g., the cache IDs corresponding to the system components and environment components) are dequeued one by one and written into the implementation queue, as in operation S405, until all the system component nodes and environment component nodes of the preprocessing queue have been written. Finally, the write permission of the current implementation queue is released and the queue is unlocked, as in operation S406.
An implementation total queue comprising the implementation main queue and the implementation queue group is thus established, and the implementation information is transferred from the preprocessing queue into it. The implementation queues in the implementation queue group are bidirectional queues that store all the node IDs requiring installation and checking. Meanwhile, the IDs of all queues in the implementation queue group are stored in order in the main queue, forming the implementation main queue. The implementation main queue may be a one-way queue: its head element is the implementation queue ID that can currently be read (dequeued), i.e., the implementation queue currently being read, while its tail element, stored as a separate node, is the implementation queue ID currently available for logging. When the implementation information in the preprocessing queue is written into the implementation total queue through the current implementation queue, the system acquires and verifies the implementation queue corresponding to the tail element of the implementation main queue, and writes the identifiers related to the implementation parameters into that queue. In this way, the creation and monitoring of the implementation main queue and the implementation queue group can be realized, and the installation component nodes in the preprocessing queue can be stored into and read from the implementation total queue. Because multiple implementation queues in the total queue can accept written nodes, node reading parallelism is further improved, which further increases node execution speed while guaranteeing execution accuracy.
Fig. 5 schematically shows a process flow diagram of performing installation according to the read installation component nodes in an installation method of a cloud computing product according to an embodiment of the present disclosure.
As shown in fig. 2-5, according to an embodiment of the present disclosure, reading in parallel the system component nodes and the environment component nodes of the installation component nodes in the implementation queue in operation S203 includes:
reading the head element of the implementation queue according to the queue state of the implementation queue;
when the head element read from the implementation queue is a system component node or an environment component node, calling the corresponding system component implementation parameters or environment component implementation parameters in the cache region; and
when the head element read from the implementation queue is a check node, reading another implementation queue according to the reading completion degree of the system component nodes corresponding to the check node.
A read is performed on the implementation main queue by an execution tool, such as an execution thread, and the head element of the implementation main queue is obtained, as in operation S501. The head element may be the implementation queue ID currently at the head of the implementation main queue.
Based on the implementation queue ID given by the acquired head element, an implementation queue A may be determined, as in operation S502.
The queue state of implementation queue A is determined (e.g., whether it is empty), and different processing is performed according to that state, as in operation S503. The queue state reflects the nodes of the corresponding implementation queue: when no node can be read, the queue state is empty; when a node can be read, the queue state is non-empty. The non-empty state can be further distinguished: when a check node is read, the queue is in the check state; when an implementation node is read, the queue is in the execution state.
When the queue state of implementation queue A is empty, meaning there may be no executable node in it, the tail element of the implementation main queue is updated to the queue ID of implementation queue A, as in operation S531. The queue ID of implementation queue A is then placed back into the implementation main queue from the tail, as in operation S532. Finally, the current head element of the implementation main queue is read again to determine another implementation queue, and operations S501-S503 are repeated.
When the queue state of implementation queue A is not empty, the read and write permissions of the current implementation queue A may be obtained, as in operation S504.
For the case where the queue state of implementation queue A is not empty, when the head element read from implementation queue A is a check node, implementation queue A is in the check state, and it is checked whether all the system component node IDs stored in the check node have been completely read, as in operation S506. If so, the node following the check node, such as an environment component node, is read, and operations S505-S506 are repeated. Otherwise, the check node is placed back at the head of implementation queue A, the tail element of the implementation main queue is updated to the queue ID of implementation queue A, that queue ID is written into the implementation main queue from the tail, operations S501-S506 are restarted, and the next implementation queue of the implementation main queue is read, as in operations S561-S565.
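The check-node handling during reads can be sketched as follows, reusing the dict-based node model from earlier sketches; `executed_ids` stands in for whatever record of completed system component nodes the system keeps, which is an assumption of this sketch.

```python
from collections import deque

def read_next(impl_queue: deque, executed_ids: set):
    """Pop the head of the implementation queue. A check node passes only
    when every system component node ID it holds has been executed
    (operation S506); otherwise it is put back at the head and the caller
    moves on to the next implementation queue (operations S561-S565)."""
    if not impl_queue:
        return None
    node = impl_queue.popleft()
    if node.get("kind") == "check":
        if set(node["member_ids"]) <= executed_ids:
            # All system component nodes done: read the node after the check node.
            return read_next(impl_queue, executed_ids)
        impl_queue.appendleft(node)   # put the check node back, defer this queue
        return None
    return node
```

Returning `None` signals the caller to re-enqueue this implementation queue's ID at the tail of the main queue and poll the next queue, as in the flow of fig. 5.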
When the head element of implementation queue A is non-empty and is an implementation node other than a check node, the queue read and write permissions of the current implementation queue A are released, as in operation S507, and a read operation is performed on implementation queue A.
The head element m of implementation queue A is then read; it may be the node ID of an implementation flow node, which may be an environment component node or a system component node.
The tail element of the implementation main queue is updated to the queue ID of implementation queue A, and the queue ID of implementation queue A is placed back into the implementation main queue from the tail, as in operations S508-S509.
According to the node ID corresponding to the head element m of the implementation queue, the corresponding installation implementation parameters, such as the image, installation flow, and implementation parameters, may be called from the cache region by the execution thread, as in operation S510. When the node ID read is an environment component node ID, the environment component implementation parameters are called accordingly.
Thus, regardless of the number of implementation queues storing installation component nodes, or the number of nodes written into each implementation queue, the reads performed by the execution threads can accurately call the corresponding installation implementation parameters, such as the system component implementation parameters or environment component implementation parameters, in parallel. In other words, compared with the traditional mode of installing and executing the system components and environment components as a whole, the originally integrated installation of system components such as the OS and environment components such as middleware can be split into an independent system component installation process and an independent environment component installation process. This further improves the concurrency granularity of the installation and increases node execution speed while guaranteeing execution accuracy, improving the overall installation execution efficiency of the cloud computing product, making better use of the installation server's capacity, and saving installation resources.
An execution thread is the smallest execution unit that can be scheduled in the installation server, and may be used to execute the implementation flow nodes, such as the system component nodes and environment component nodes of the installation component nodes. A considerable number of execution threads can be pooled together to form an execution pool. When the queue state of the read implementation total queue or implementation queue is not empty, an execution thread in the execution pool acquires the corresponding implementation parameter node from the implementation total queue and executes it; after execution is complete, the thread is released back into the execution pool. The number of execution threads in the execution pool is limited by a thread threshold.
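A bounded execution pool of this kind maps directly onto a standard thread pool. The sketch below uses Python's `concurrent.futures.ThreadPoolExecutor` as a stand-in for the disclosure's execution pool; the thread threshold value and the `execute` callback are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

THREAD_THRESHOLD = 4   # hypothetical limit on execution threads in the pool

def run_nodes(nodes: list, execute) -> list:
    """Execute implementation flow nodes with a bounded pool of execution
    threads; each thread is released back to the pool when its node finishes,
    and at most THREAD_THRESHOLD nodes run concurrently."""
    with ThreadPoolExecutor(max_workers=THREAD_THRESHOLD) as pool:
        return list(pool.map(execute, nodes))
```

The `max_workers` bound plays the role of the thread threshold: it rate-limits installation implementation however many nodes are queued.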
By using an execution pool that can control the number of execution threads, the installation implementation is rate-limited and high availability of the system is ensured. Moreover, adding the check layer satisfies the cluster installation requirements of different middleware.
As shown in fig. 2-5, according to an embodiment of the present disclosure, executing the system component implementation parameters corresponding to the system component nodes and the environment component implementation parameters corresponding to the environment component nodes in operation S204 includes:
executing the installation in parallel according to the system component implementation parameters and the environment component implementation parameters; and
completing the installation of the cloud computing product according to the implementation results of the installation.
Based on the retrieved installation implementation information, such as the system component implementation parameters and environment component implementation parameters, the execution thread executes the installation, waits for the implementation result, and puts the result into the implementation result set, as in operation S512. The installation of the corresponding installation components can thus be completed. Because installation is performed in parallel, the system components and environment components can be installed at the same time, completing the installation of the cloud computing product, so the user's service requirements are realized more quickly and user experience is greatly improved.
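The parallel execution and result collection of operation S512 can be sketched as follows. The `install` callback and the string-based parameters are placeholders for the real installation of a system or environment component; only the submit-wait-collect pattern is taken from the text.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def install_all(system_params: list, environment_params: list, install) -> set:
    """Install system and environment components in parallel and gather
    each outcome into the implementation result set (operation S512)."""
    results = set()
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(install, p)
                   for p in system_params + environment_params]
        for f in as_completed(futures):   # wait for each implementation result
            results.add(f.result())
    return results
```

Collecting via `as_completed` records each result as soon as its thread finishes, so a slow environment component does not block recording the already-finished system component results.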
Further, after the installation for the corresponding implementation parameters is completed, the corresponding execution threads are released back into the execution pool.
The installation implementation is carried out by a considerable number of execution threads installing simultaneously in parallel, matched with the parallel reading of installation component nodes such as the system component nodes and environment component nodes, which further increases the parallel installation speed for the installation component implementation parameters, such as the system component implementation parameters and environment component implementation parameters. The installation capacity of the installation server can therefore be fully exploited and the utilization of installation resources improved.
Thus, an execution pool is built, and the implementation information in the implementation total queue is read and executed using the execution threads in the pool, which gathers together, in pooled form, the execution threads capable of executing implementation flow nodes. When the implementation total queue is not empty, an execution thread in the pool acquires implementation information from the total queue and executes it; when execution completes, the thread is released back into the pool. In this way, the implementation information in the implementation total queue is concurrently acquired and executed according to the configured parallel execution capacity, and the corresponding result is recorded upon completion. By controlling the number of execution threads via the execution pool, installation implementation is rate-limited and high availability of the system is ensured.
Thus, the method of the embodiment of the present disclosure splits the originally integrated installation flow of components such as the OS and middleware into independent OS installation and middleware installation, increasing the concurrency granularity of installation; adding a check layer further satisfies the cluster installation requirements of different middleware; and using an execution pool that controls the number of execution threads rate-limits the installation and ensures high availability of the system.
Further, through cloud computing resource application preprocessing, the installation flow for the cloud computing resources applied for by the user is split into a flow queue comprising an OS installation flow and a middleware installation flow; establishing the implementation total queue separates reads from writes and gives overall control of the installation flow; and the execution pool makes the concurrency of the installation flow controllable.
Based on the above installation method of the cloud computing product, the present disclosure further provides an installation apparatus of the cloud computing product, which will be described in detail below with reference to fig. 6.
Fig. 6 schematically shows a block diagram of a structure of an installation apparatus of a cloud computing product according to an embodiment of the present disclosure.
As shown in fig. 6, the installation apparatus 600 of the cloud computing product of this embodiment includes a node generation module 610, a node storing module 620, a node reading module 630, and a parameter execution module 640.
The node generation module 610 is configured to generate an installation component node corresponding to the cloud computing product in the preprocessing queue. In an embodiment, the node generation module 610 may be configured to perform the operation S201 described above, which is not described herein again.
The node storing module 620 is configured to store the installation component nodes in the preprocessing queue into the implementation queue. In an embodiment, the node storing module 620 may be configured to perform the operation S202 described above, which is not described herein again.
The node reading module 630 is configured to read in parallel the system component nodes and environment component nodes of the installation component nodes in the implementation queue. In an embodiment, the node reading module 630 may be configured to perform the operation S203 described above, which is not described herein again.
The parameter execution module 640 is configured to execute the system component implementation parameters corresponding to the system component nodes and the environment component implementation parameters corresponding to the environment component nodes in parallel, so as to complete installation of the cloud computing product. In an embodiment, the parameter execution module 640 may be configured to execute the operation S204 described above, which is not described herein again.
According to the embodiment of the present disclosure, any plurality of the node generation module 610, the node storing module 620, the node reading module 630, and the parameter execution module 640 may be combined into one module, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the node generation module 610, the node storing module 620, the node reading module 630, and the parameter execution module 640 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, or an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or by any one of, or any suitable combination of, software, hardware, and firmware. Alternatively, at least one of the node generation module 610, the node storing module 620, the node reading module 630, and the parameter execution module 640 may be at least partially implemented as a computer program module which, when executed, performs the corresponding function.
Fig. 7 schematically illustrates a block diagram of an electronic device suitable for implementing an installation method of a cloud computing product according to an embodiment of the present disclosure.
As shown in fig. 7, an electronic device 700 according to an embodiment of the present disclosure includes a processor 701, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section 708 into a random access memory (RAM) 703. The processor 701 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an application-specific integrated circuit (ASIC)), among others. The processor 701 may also include on-board memory for caching purposes. The processor 701 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 703, various programs and data necessary for the operation of the electronic device 700 are stored. The processor 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. The processor 701 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 702 and/or the RAM 703. It is noted that the programs may also be stored in one or more memories other than the ROM 702 and the RAM 703. The processor 701 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 700 may also include an input/output (I/O) interface 705, which is also connected to the bus 704. The electronic device 700 may further include one or more of the following components connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read out therefrom is installed into the storage section 708 as needed.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 702 and/or the RAM 703 and/or one or more memories other than the ROM 702 and the RAM 703 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method illustrated in the flowchart. When the computer program product runs on a computer system, the program code causes the computer system to carry out the installation method of a cloud computing product provided by embodiments of the present disclosure.
The computer program performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure when executed by the processor 701. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted in the form of a signal on a network medium, distributed, downloaded and installed via the communication section 709, and/or installed from the removable medium 711. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, program code for carrying out the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. The programming languages include, but are not limited to, Java, C++, Python, the "C" language, and the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (12)

1. An installation method of a cloud computing product, comprising:
generating an installation component node corresponding to the cloud computing product in a preprocessing queue;
storing the installation component node in the preprocessing queue into an implementation queue;
reading, in parallel, a system component node of the installation component node and an environment component node of the installation component node in the implementation queue; and
executing system component implementation parameters corresponding to the system component node and environment component implementation parameters corresponding to the environment component node, to complete the installation of the cloud computing product.
2. The method of claim 1, further comprising, before generating the installation component node corresponding to the cloud computing product in the preprocessing queue:
parsing a user resource request form corresponding to the cloud computing product to determine a system component and an environment component; and
when the system component matches the installation component, determining the system component implementation parameters corresponding to the system component and the environment component implementation parameters corresponding to the environment component.
3. The method of claim 1, wherein generating the installation component node corresponding to the cloud computing product in the preprocessing queue comprises:
storing a system component cache identification of the system component implementation parameters into the preprocessing queue to generate the system component node of the installation component node; and
storing an environment component cache identification of the environment component implementation parameters into the preprocessing queue to generate the environment component node of the installation component node.
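The cache-identification scheme of claim 3 (and the cache region of claim 5) can be sketched as follows. The key format, the tuple layout of a node, and the function name are illustrative assumptions; the point is that only the identification, not the parameters themselves, enters the queue.

```python
from collections import deque


def cache_and_enqueue(params, kind, cache_region, pre_queue):
    """Store implementation parameters in a cache region and push only
    their cache identification into the preprocessing queue as a node."""
    cache_id = f"{kind}-{len(cache_region)}"   # illustrative cache identification
    cache_region[cache_id] = params
    pre_queue.append((kind, cache_id))         # the node carries the identification
    return cache_id
```

Keeping the queue lightweight this way means the (possibly large) implementation parameters are fetched from the cache region only when a node is actually read.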
4. The method of claim 3, further comprising, before storing the environment component cache identification of the environment component implementation parameters into the preprocessing queue to generate the environment component node of the installation component node:
generating a check node in the preprocessing queue according to the system component node;
wherein the system component node and the environment component node are arranged in order in the preprocessing queue with the check node as a boundary node.
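The queue layout of claim 4 — system component nodes first, then a check node as the boundary, then environment component nodes — can be sketched like this. The tuple representation of nodes and the payload stored in the check node are assumptions for illustration.

```python
from collections import deque


def build_preprocessing_queue(system_nodes, environment_nodes):
    """Arrange system nodes, then a check node as boundary, then
    environment nodes, in a single preprocessing queue."""
    queue = deque(("system", n) for n in system_nodes)
    queue.append(("check", len(system_nodes)))  # boundary between the two groups
    queue.extend(("environment", n) for n in environment_nodes)
    return queue
```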
5. The method of claim 3, further comprising, before storing the environment component cache identification of the environment component implementation parameters into the preprocessing queue to generate the environment component node of the installation component node:
storing the environment component implementation parameters into a cache region according to an environment component type, to generate the environment component cache identification.
6. The method of claim 1, wherein storing the installation component node in the preprocessing queue into the implementation queue comprises:
reading a queue identification of the implementation queue in a master implementation queue; and
when the implementation queue corresponding to the queue identification and the preprocessing queue satisfy a node matching condition, storing the check node and the system component node and environment component node of the installation component node in the preprocessing queue into the implementation queue.
7. The method of claim 1, wherein reading, in parallel, the system component node and the environment component node of the installation component node in the implementation queue comprises:
reading a queue head element of the implementation queue according to a queue state of the implementation queue;
when the queue head element read from the implementation queue is the system component node or the environment component node, calling the corresponding system component implementation parameters or environment component implementation parameters in the cache region; and
when the queue head element read from the implementation queue is a check node, repeatedly reading another implementation queue according to a reading completion degree of the system component node corresponding to the check node.
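The head-element dispatch of claim 7 can be sketched as a single reader loop. The callback standing in for "reading another implementation queue", the tuple layout, and the cache dictionary are illustrative assumptions; the completion-degree bookkeeping is abstracted into that callback.

```python
from collections import deque


def read_queue_heads(impl_queue, cache_region, read_other_queue):
    """Pop head elements: for a system or environment node, call its
    cached implementation parameters; for a check node, switch to
    reading another implementation queue."""
    parameters = []
    while impl_queue:
        kind, key = impl_queue.popleft()            # queue head element
        if kind in ("system", "environment"):
            parameters.append(cache_region[key])    # cached implementation parameters
        elif kind == "check":
            parameters.extend(read_other_queue())   # continue on another queue
    return parameters
```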
8. The method of claim 1, wherein executing the system component implementation parameters corresponding to the system component node and the environment component implementation parameters corresponding to the environment component node comprises:
executing installation implementation in parallel according to the system component implementation parameters and the environment component implementation parameters; and
completing the installation of the cloud computing product according to an implementation result of the installation implementation.
9. An installation apparatus of a cloud computing product, comprising:
a node generation module configured to generate an installation component node corresponding to the cloud computing product in a preprocessing queue;
a node storing module configured to store the installation component node in the preprocessing queue into an implementation queue;
a node reading module configured to read, in parallel, a system component node of the installation component node and an environment component node of the installation component node in the implementation queue; and
a parameter execution module configured to execute, in parallel, system component implementation parameters corresponding to the system component node and environment component implementation parameters corresponding to the environment component node, to complete the installation of the cloud computing product.
10. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-8.
11. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 8.
12. A computer program product comprising a computer program which, when executed by a processor, implements a method according to any one of claims 1 to 8.
CN202111585058.3A 2021-12-22 2021-12-22 Cloud computing product installation method, device, equipment, medium and program product Pending CN114281362A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111585058.3A CN114281362A (en) 2021-12-22 2021-12-22 Cloud computing product installation method, device, equipment, medium and program product


Publications (1)

Publication Number Publication Date
CN114281362A true CN114281362A (en) 2022-04-05

Family

ID=80874084



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination