CN111736952A - Cloud platform resource pool deployment method, device, equipment and readable medium - Google Patents


Info

Publication number
CN111736952A
Authority
CN
China
Prior art keywords
cloud platform
docker
index
mirror
resource pool
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010570085.2A
Other languages
Chinese (zh)
Inventor
李红卫
袁东海
胡玉鹏
刘进源
魏传程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202010570085.2A priority Critical patent/CN111736952A/en
Publication of CN111736952A publication Critical patent/CN111736952A/en
Withdrawn legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The invention discloses a method for deploying a cloud platform resource pool, comprising the following steps: acquiring docker images for a plurality of CPU architectures and generating corresponding sub-repositories; in each sub-repository, creating an index based on the image name corresponding to the application, and deploying the index to the cloud platform; and in response to receiving a client request to run the application, finding the corresponding docker image through the index according to the client's CPU architecture and the application's image name, and sending the docker image to the client. The invention also discloses a device for cloud platform resource pool deployment, a computer device, and a readable storage medium. By storing the docker images for different CPU architectures in separate image repositories and creating an index over the docker images that belong to the same application but target different CPU architectures, the invention hides the docker images and the physical machines' CPU architectures from the deployment tool while deploying a cloud platform in a mixed resource pool with multiple CPU architectures.

Description

Cloud platform resource pool deployment method, device, equipment and readable medium
Technical Field
The invention relates to the technical field of cloud platform management, and in particular to a method, device, equipment, and readable medium for cloud platform resource pool deployment.
Background
In the cloud computing era, numerous applications are deployed on virtual machines of the OpenStack platform. OpenStack is currently the most popular open-source cloud platform; its main purpose is to hide the details of the underlying physical hardware through virtualization and to provide users with resource isolation and resource-usage limits for computing, storage, networking, and so on, so that each user appears to have exclusive use of a machine. A large number of developers worldwide now contribute to OpenStack, providing a strong guarantee for its rapid development. After years of development, OpenStack's technology has become mature and stable; it offers high availability together with flexible scalability, and is therefore widely used across many industries.
One of the main functions of the OpenStack cloud platform is to virtualize computing resources (i.e., CPU resources) for users, and many CPU architecture types exist on the market today, such as amd64 and arm64.
To achieve diversity of computing resources, autonomous controllability, security, and similar goals, many vendors select machines of different CPU architecture types when purchasing physical machines for the OpenStack cloud platform, for example purchasing both amd64 and arm64 physical machines. This greatly increases equipment cost, and when multiple types of physical machines operate together, operations such as switching and selection are involved, which is time-consuming and inefficient.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, device, equipment, and readable medium for cloud platform resource pool deployment, in which docker images for different CPU architectures are stored in different image repositories and an index is created over the docker images that belong to the same application but target different CPU architectures, so that the docker images and the physical machines' CPU architectures are hidden from the deployment tool while a cloud platform is deployed in a mixed resource pool with multiple CPU architectures.
Based on the above object, one aspect of the embodiments of the present invention provides a method for deploying a cloud platform resource pool, including the following steps: acquiring docker images for a plurality of CPU architectures and generating corresponding sub-repositories; in each sub-repository, creating an index based on the image name corresponding to the application, and deploying the index to the cloud platform; and in response to receiving a client request to run the application, finding the corresponding docker image through the index according to the client's CPU architecture and the application's image name, and sending the docker image to the client.
In some embodiments, the method is applied to an OpenStack open-source cloud platform.
In some embodiments, acquiring docker images for a plurality of CPU architectures and generating corresponding sub-repositories comprises: enabling the experimental features of the docker client and the docker server, and pushing the docker images for different CPU architectures to different sub-repositories.
In some embodiments, creating an index based on the same image name in each sub-repository comprises: traversing the image names in each sub-repository and generating an image-name-to-CPU-architecture index; and pushing the index to the corresponding sub-repository.
In some embodiments, deploying the index onto the cloud platform comprises: configuring the index into the image repository settings of a cloud platform deployment tool; and deploying the cloud platform mixed resource pool with the deployment tool.
In another aspect of the embodiments of the present invention, a device for cloud platform resource pool deployment is also provided, comprising: an image acquisition module configured to acquire docker images for a plurality of CPU architectures and generate corresponding sub-repositories; an index generation module configured to create, in each sub-repository, an index based on the image name corresponding to the application and to deploy the index to the cloud platform; and an operation module configured to, in response to a client request to run the application, find the corresponding docker image through the index according to the client's CPU architecture and the application's image name, and send the docker image to the client.
In some embodiments, the image acquisition module is further configured to enable the experimental features of the docker client and the docker server, and to push the docker images for different CPU architectures to different sub-repositories.
In some embodiments, the index generation module is further configured to traverse the image names in each sub-repository, generate an image-name-to-CPU-architecture index, and push the index to the corresponding sub-repository.
In some embodiments, the index generation module is further configured to configure the index into the image repository settings of a cloud platform deployment tool and to deploy the cloud platform mixed resource pool with the deployment tool.
In another aspect of the embodiments of the present invention, there is also provided a computer device, including: at least one processor; and a memory storing computer instructions executable on the processor, the instructions when executed by the processor implementing the steps of the method.
In a further aspect of the embodiments of the present invention, a computer-readable storage medium is also provided, which stores a computer program that, when executed by a processor, implements the above method steps.
The invention has the following beneficial technical effects: by storing the docker images for different CPU architectures in separate image repositories and creating an index over the docker images that belong to the same application but target different CPU architectures, the docker images and the physical machines' CPU architectures are hidden from the deployment tool while a cloud platform is deployed in a mixed resource pool with multiple CPU architectures.
Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other embodiments from these drawings without creative effort.
Fig. 1 is a schematic diagram of an embodiment of a cloud platform resource pool deployment method provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that share the same name. "First" and "second" are merely for convenience of description and should not be construed as limiting the embodiments of the present invention; this is not repeated in the following embodiments.
Based on the above purpose, a first aspect of the embodiments of the present invention provides an embodiment of a method for deploying a cloud platform resource pool. Fig. 1 is a schematic diagram illustrating an embodiment of a method for cloud platform resource pool deployment provided by the present invention. As shown in fig. 1, the embodiment of the present invention includes the following steps:
S1, acquiring docker images for a plurality of CPU architectures, and generating corresponding sub-repositories;
S2, in each sub-repository, creating an index based on the image name corresponding to the application, and deploying the index to the cloud platform; and
S3, in response to receiving a client request to run the application, finding the corresponding docker image through the index according to the client's CPU architecture and the application's image name, and sending the docker image to the client.
In this embodiment, the docker images for different CPU architectures are stored in different image repositories, and the experimental feature provided by docker is then used to create an index over the docker images that belong to the same application but target different CPU architectures. The index gives these images a single abstraction by assigning them the same image name and tag. When OpenStack is deployed in a mixed resource pool with multiple CPU architectures, the created index information is configured in the relevant configuration file of the deployment tool; when a container is started, the docker daemon running on each client's CPU architecture looks up the index and automatically downloads the docker image matching that client's CPU architecture. This achieves the goal of hiding the docker images and the physical machines' CPU architecture details from the deployment tool while deploying an OpenStack cloud platform in a mixed resource pool with multiple CPU architectures.
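The lookup described above can be modeled in a few lines. The following is an illustrative sketch, not docker's actual implementation: the index maps one architecture-neutral image name to per-architecture image digests, and the client side selects the entry matching its own CPU architecture. All registry names and digests are hypothetical.

```python
# Illustrative model of the index described above: one image name/tag
# maps to per-CPU-architecture image digests, so a client only ever
# refers to the name, never to an architecture-specific image.
# All names and digests below are hypothetical.

def build_index(entries):
    """entries: iterable of (image_ref, cpu_arch, digest) tuples."""
    index = {}
    for image_ref, cpu_arch, digest in entries:
        index.setdefault(image_ref, {})[cpu_arch] = digest
    return index

def resolve(index, image_ref, client_arch):
    """Pick the digest matching the client's CPU architecture."""
    archs = index.get(image_ref)
    if archs is None:
        raise KeyError(f"no index entry for {image_ref}")
    if client_arch not in archs:
        raise KeyError(f"{image_ref} has no image for {client_arch}")
    return archs[client_arch]

index = build_index([
    ("registry.local/nova-compute:v1", "amd64", "sha256:aaa"),
    ("registry.local/nova-compute:v1", "arm64", "sha256:bbb"),
])

# An amd64 client and an arm64 client use the same image name but
# receive different architecture-specific images.
assert resolve(index, "registry.local/nova-compute:v1", "amd64") == "sha256:aaa"
assert resolve(index, "registry.local/nova-compute:v1", "arm64") == "sha256:bbb"
```

This is the abstraction the patent relies on: the deployment tool and the image name stay architecture-neutral, and the architecture choice happens only at resolution time.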
In some embodiments of the invention, the method is applied to an OpenStack open-source cloud platform. One of the main functions of the OpenStack cloud platform is to virtualize computing resources (i.e., CPU resources) for users, and many CPU architecture types exist on the market today, such as amd64 and arm64.
In some embodiments of the present invention, acquiring docker images for a plurality of CPU architectures and generating corresponding sub-repositories comprises: enabling the experimental features of the docker client and the docker server, and pushing the docker images for different CPU architectures to different sub-repositories.
Modify the /root/.docker/config.json configuration file of the docker client and add the configuration item "experimental": "enabled" to enable the experimental feature of the docker client; modify the /etc/docker/daemon.json configuration file of the docker daemon and add the configuration item "experimental": true to enable the experimental feature of the docker server; then create a docker image repository and push the docker images for different CPU architectures to different sub-repositories (repos) of the image repository.
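The two configuration changes above can be summarized as follows. This sketch writes the same JSON shapes to a temporary directory rather than the real /root/.docker/config.json and /etc/docker/daemon.json paths:

```python
import json
import pathlib
import tempfile

# Sketch of the two configuration items described above. The key
# names match docker's documented experimental switches; the files
# are written to a temporary directory here for illustration only.
tmp = pathlib.Path(tempfile.mkdtemp())

client_cfg = {"experimental": "enabled"}  # stands in for /root/.docker/config.json
daemon_cfg = {"experimental": True}       # stands in for /etc/docker/daemon.json

(tmp / "config.json").write_text(json.dumps(client_cfg, indent=2))
(tmp / "daemon.json").write_text(json.dumps(daemon_cfg, indent=2))

# Read back to confirm the expected shapes: the CLI config uses the
# string "enabled", while the daemon config uses a JSON boolean.
assert json.loads((tmp / "config.json").read_text())["experimental"] == "enabled"
assert json.loads((tmp / "daemon.json").read_text())["experimental"] is True
```

Note the asymmetry, which is easy to get wrong: the client config takes the string "enabled", while daemon.json takes the boolean true.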
In some embodiments of the invention, creating an index based on the same image name in each sub-repository comprises: traversing the image names in each sub-repository and generating an image-name-to-CPU-architecture index; and pushing the index to the corresponding sub-repository.
Using the same image name and tag, create an index for the docker images that belong to the same application but target different CPU architectures, and push the created index to a sub-repository (repo) of the image repository with the docker push command.
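The index-generation step can be sketched as follows. This is a simplified model of what a manifest-list-style index contains; the sub-repository names, and the naming convention that encodes the CPU architecture in them, are assumptions made for illustration:

```python
# Minimal sketch of index generation: traverse each architecture-
# specific sub-repository and, for every application image name,
# emit one shared index entry listing the architectures that provide
# it (analogous to a docker manifest list). Repository names and the
# "repo-<arch>" naming convention are hypothetical.

sub_repos = {
    "repo-amd64": {"nova-compute:v1", "keystone:v1"},
    "repo-arm64": {"nova-compute:v1", "keystone:v1"},
}

def generate_index(sub_repos):
    index = {}
    for repo, images in sub_repos.items():
        arch = repo.rsplit("-", 1)[-1]  # assumed naming convention
        for image in images:
            index.setdefault(image, set()).add(arch)
    return index

index = generate_index(sub_repos)

# Each application ends up with one entry covering both architectures.
assert index["nova-compute:v1"] == {"amd64", "arm64"}
assert index["keystone:v1"] == {"amd64", "arm64"}
```

In a real registry this grouping is what `docker manifest` tooling produces; the sketch only shows the name-to-architectures mapping the patent's index provides.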
In some embodiments of the invention, deploying the index onto the cloud platform comprises: configuring the index into the image repository settings of a cloud platform deployment tool; and deploying the cloud platform mixed resource pool with the deployment tool.
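A deployment tool configured this way references only the architecture-neutral index names. The sketch below illustrates the idea with a hypothetical configuration structure; it does not reflect any particular deployment tool's configuration keys:

```python
# Hypothetical deployment-tool configuration: every service points
# at an index name with no CPU architecture in it, so one and the
# same configuration can deploy a mixed amd64/arm64 resource pool.
deploy_config = {
    "image_registry": "registry.local",
    "services": {
        "nova-compute": "nova-compute:v1",  # index name, no arch suffix
        "keystone": "keystone:v1",
    },
}

# The configuration contains no architecture-specific references.
assert "amd64" not in str(deploy_config)
assert "arm64" not in str(deploy_config)
```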
It should be particularly noted that, in the embodiments of the cloud platform resource pool deployment method, the steps may be interchanged, replaced, added, or deleted. Methods for cloud platform resource pool deployment obtained through such reasonable permutations and combinations therefore also belong to the scope of the present invention, and the scope of the invention is not limited to the described embodiments.
Based on the above object, a second aspect of the embodiments of the present invention provides a device for cloud platform resource pool deployment, comprising: an image acquisition module configured to acquire docker images for a plurality of CPU architectures and generate corresponding sub-repositories; an index generation module configured to create, in each sub-repository, an index based on the image name corresponding to the application and to deploy the index to the cloud platform; and an operation module configured to, in response to a client request to run the application, find the corresponding docker image through the index according to the client's CPU architecture and the application's image name, and send the docker image to the client.
In some embodiments of the invention, the image acquisition module is further configured to enable the experimental features of the docker client and the docker server, and to push the docker images for different CPU architectures to different sub-repositories.
In some embodiments of the invention, the index generation module is further configured to traverse the image names in each sub-repository, generate an image-name-to-CPU-architecture index, and push the index to the corresponding sub-repository.
In some embodiments of the invention, the index generation module is further configured to configure the index into the image repository settings of a cloud platform deployment tool and to deploy the cloud platform mixed resource pool with the deployment tool.
In view of the above object, a third aspect of the embodiments of the present invention provides a computer device, including: at least one processor; and a memory storing computer instructions executable on the processor, the instructions when executed by the processor implementing the steps of the above method.
The invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, performs the method as above.
Finally, it should be noted that, as one of ordinary skill in the art can appreciate that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program to instruct related hardware, and the program of the method for cloud platform resource pool deployment can be stored in a computer readable storage medium, and when executed, the program can include the processes of the embodiments of the methods described above. The storage medium of the program may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like. The embodiments of the computer program may achieve the same or similar effects as any of the above-described method embodiments.
Furthermore, the methods disclosed according to embodiments of the present invention may also be implemented as a computer program executed by a processor, which may be stored in a computer-readable storage medium. Which when executed by a processor performs the above-described functions defined in the methods disclosed in embodiments of the invention.
Further, the above method steps and system elements may also be implemented using a controller and a computer readable storage medium for storing a computer program for causing the controller to implement the functions of the above steps or elements.
Further, it should be appreciated that the computer-readable storage media (e.g., memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of example, and not limitation, nonvolatile memory can include Read Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which can act as external cache memory. By way of example and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The storage devices of the disclosed aspects are intended to comprise, without being limited to, these and other suitable types of memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with the following components designed to perform the functions herein: a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, and/or any other such configuration.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary designs, the functions may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer or processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, of embodiments of the invention is limited to these examples; within the idea of an embodiment of the invention, also technical features in the above embodiment or in different embodiments may be combined and there are many other variations of the different aspects of the embodiments of the invention as described above, which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.

Claims (10)

1. A method for deploying a cloud platform resource pool, characterized by comprising the following steps:
acquiring docker images for a plurality of CPU architectures, and generating corresponding sub-repositories;
in each sub-repository, creating an index based on the image name corresponding to the application, and deploying the index to a cloud platform; and
in response to receiving a client request to run the application, finding the corresponding docker image through the index according to the client's CPU architecture and the application's image name, and sending the docker image to the client.
2. The method for cloud platform resource pool deployment according to claim 1, wherein the method is applied to an OpenStack open-source cloud platform.
3. The method for cloud platform resource pool deployment according to claim 1, wherein acquiring docker images for a plurality of CPU architectures and generating corresponding sub-repositories comprises:
enabling the experimental features of the docker client and the docker server, and pushing the docker images for different CPU architectures to different sub-repositories.
4. The method for cloud platform resource pool deployment according to claim 1, wherein creating an index based on the same image name in each of the sub-repositories comprises:
traversing the image names in each sub-repository and generating an image-name-to-CPU-architecture index; and
pushing the index to the corresponding sub-repository.
5. The method for cloud platform resource pool deployment according to claim 1, wherein deploying the index onto a cloud platform comprises:
configuring the index into the image repository settings of a cloud platform deployment tool; and
deploying the cloud platform mixed resource pool with the deployment tool.
6. A device for cloud platform resource pool deployment, comprising:
an image acquisition module configured to acquire docker images for a plurality of CPU architectures and generate corresponding sub-repositories;
an index generation module configured to create, in each sub-repository, an index based on the image name corresponding to the application and to deploy the index to a cloud platform; and
an operation module configured to, in response to a client request to run an application, find the corresponding docker image through the index according to the client's CPU architecture and the application's image name, and send the docker image to the client.
7. The device for cloud platform resource pool deployment according to claim 6, wherein the image acquisition module is further configured to:
enable the experimental features of the docker client and the docker server, and push the docker images for different CPU architectures to different sub-repositories.
8. The device for cloud platform resource pool deployment according to claim 6, wherein the index generation module is further configured to:
traverse the image names in each sub-repository and generate an image-name-to-CPU-architecture index; and
push the index to the corresponding sub-repository.
9. A computer device, comprising:
at least one processor; and
a memory storing computer instructions executable on the processor, wherein the instructions, when executed by the processor, implement the steps of the method according to any one of claims 1 to 5.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
CN202010570085.2A 2020-06-21 2020-06-21 Cloud platform resource pool deployment method, device, equipment and readable medium Withdrawn CN111736952A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010570085.2A CN111736952A (en) 2020-06-21 2020-06-21 Cloud platform resource pool deployment method, device, equipment and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010570085.2A CN111736952A (en) 2020-06-21 2020-06-21 Cloud platform resource pool deployment method, device, equipment and readable medium

Publications (1)

Publication Number Publication Date
CN111736952A true CN111736952A (en) 2020-10-02

Family

ID=72651973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010570085.2A Withdrawn CN111736952A (en) 2020-06-21 2020-06-21 Cloud platform resource pool deployment method, device, equipment and readable medium

Country Status (1)

Country Link
CN (1) CN111736952A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113268312A (en) * 2021-05-14 2021-08-17 济南浪潮数据技术有限公司 Application migration method and system

Similar Documents

Publication Publication Date Title
US20170154017A1 (en) Web Application Management
US20170010673A1 (en) Gesture based sharing of user interface portion
US10878032B2 (en) Labeled graph isomorphism allowing for false positive
US9535949B2 (en) Dynamic rules to optimize common information model queries
US11582285B2 (en) Asynchronous workflow and task api for cloud based processing
US11089000B1 (en) Automated source code log generation
US20230216895A1 (en) Network-based media processing (nbmp) workflow management through 5g framework for live uplink streaming (flus) control
US11121942B2 (en) Orchestration engine facilitating management of dynamic connection components
CN113495797A (en) Message queue and consumer dynamic creation method and system
CN111736952A (en) Cloud platform resource pool deployment method, device, equipment and readable medium
US20200142761A1 (en) Detecting co-resident services in a container cloud
US11221923B2 (en) Performing selective backup operations
US20230164210A1 (en) Asynchronous workflow and task api for cloud based processing
Dhouib et al. Surveying collaborative and content management platforms for enterprise
US9858355B1 (en) Search engine optimization based upon most popular search history
CN115525396A (en) Application management method and device based on cloud protogenesis
CN115904407A (en) Mirror image construction method, system and computer readable storage medium
WO2022177631A1 (en) Structure self-aware model for discourse parsing on multi-party dialogues
US11438398B2 (en) 3rd generation partnership project (3gpp) framework for live uplink streaming (flus) sink capabilities determination
CN114296651A (en) Method and equipment for storing user-defined data information
CN114064209A (en) Method, device and equipment for creating cloud host through mirror image and readable medium
CN104714959A (en) Application query method and application query device
US11765236B2 (en) Efficient and extensive function groups with multi-instance function support for cloud based processing
US11409501B1 (en) Detecting infrastructure as code compliance inconsistency in a multi-hybrid-cloud environment
CN113805958B (en) Third party service access method and system based on OSB API specification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201002