CN115145692A - Container creation method and device, and computer-readable storage medium - Google Patents

Container creation method and device, and computer-readable storage medium

Info

Publication number
CN115145692A
Authority
CN
China
Prior art keywords
user
container
cpu architecture
specified
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210821259.7A
Other languages
Chinese (zh)
Inventor
武宇亭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202210821259.7A priority Critical patent/CN115145692A/en
Publication of CN115145692A publication Critical patent/CN115145692A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45562 Creating, deleting, cloning virtual machine instances
    • G06F2009/45583 Memory management, e.g. access or allocation
    • G06F8/00 Arrangements for software engineering
    • G06F8/60 Software deployment
    • G06F8/61 Installation
    • G06F8/63 Image based installation; Cloning; Build to order

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure relates to a container creation method and device and a computer-readable storage medium, and relates to the technical field of cloud computing. The container creation method comprises the following steps: receiving container creation information generated by a master node, wherein the container creation information comprises information of an image specified by a user; determining the CPU architecture type specified by the user according to the information of the user-specified image; selecting, from the CPU architecture simulators of the worker node, a simulator corresponding to the user-specified CPU architecture type; and creating the container using the simulator corresponding to the user-specified CPU architecture type. The present disclosure improves the flexibility of container creation.

Description

Container creation method and device, and computer-readable storage medium
Technical Field
The present disclosure relates to the field of cloud computing technologies, and in particular, to a container creation method and apparatus, and a computer-readable storage medium.
Background
In cloud service technology, containers are a lightweight virtualization technology: compared with physical machines and virtual machines, more container instances can be created from the same amount of resources. Containers need to be managed and given orderly access to the external environment so that tasks such as scheduling, load balancing, and distribution can be performed. Kubernetes is a production-grade container orchestration platform introduced by Google; it supports automated deployment, management, and scaling of containerized applications and has become the de facto standard for container orchestration.
With the continued development of CPU architectures such as ARM and the open-source RISC-V, and the research and development of domestic CPUs, more and more applications need to be adapted to CPUs of multiple architectures at the same time. If an application is required to support multiple CPU architectures, it should be tested on each of those architectures during development, which in turn requires preparing server resources for the different CPU architectures. As applications become containerized, containers can be created on Kubernetes nodes to support application development and testing.
Disclosure of Invention
According to a first aspect of the present disclosure, there is provided a container creation method, performed by a worker node, comprising: receiving container creation information generated by a master node, wherein the container creation information comprises information of an image specified by a user; determining the CPU architecture type specified by the user according to the information of the user-specified image; selecting, from the CPU architecture simulators of the worker node, a simulator corresponding to the user-specified CPU architecture type; and creating the container using the simulator corresponding to the user-specified CPU architecture type.
In some embodiments, the container creation method further comprises: sending the type of the CPU architecture simulator of the worker node to the master node.
In some embodiments, the container creation method further comprises: determining the type of the CPU architecture simulator of the worker node according to the file of the CPU architecture simulator under the file directory of the worker node.
In some embodiments, the determining the user-specified CPU architecture type according to the information of the user-specified image comprises:
acquiring metadata of the user-specified image according to the information of the user-specified image;
and determining the user-specified CPU architecture type according to the metadata of the user-specified image.
In some embodiments, the creating the container using the simulator corresponding to the user-specified CPU architecture type comprises:
mapping the binary file of the simulator under the file directory of the worker node into the file directory of the container.
In some embodiments, the creating the container using the simulator corresponding to the user-specified CPU architecture type comprises:
generating a container-start parameter list according to the container creation information and the simulator corresponding to the user-specified CPU architecture type, wherein the container-start parameter list comprises the name of the simulator corresponding to the user-specified CPU architecture type;
and creating the container according to the container-start parameter list.
In some embodiments, the information of the user-specified image comprises an address of the user-specified image.
According to a second aspect of the present disclosure, there is provided a container creation method, performed by a master node, comprising:
receiving a container creation request of a user, wherein the container creation request comprises information of an image specified by the user;
determining the CPU architecture type specified by the user according to the information of the user-specified image;
receiving the types of the CPU architecture simulators of a plurality of worker nodes;
selecting a target worker node from the plurality of worker nodes according to the user-specified CPU architecture type and the types of the CPU architecture simulators of the plurality of worker nodes;
and generating container creation information according to the container creation request of the user, wherein the container creation information instructs the target worker node to create the container.
In some embodiments, the selecting a target worker node from the plurality of worker nodes according to the user-specified CPU architecture type and the types of the CPU architecture simulators that the plurality of worker nodes have comprises:
selecting the target worker node from worker nodes having a CPU architecture simulator corresponding to the user-specified CPU architecture type.
In some embodiments, the selecting the target worker node from the worker nodes having the CPU architecture simulator corresponding to the user-specified CPU architecture type comprises:
selecting the target worker node according to a preset filtering condition.
In some embodiments, the preset filtering condition comprises at least one of a CPU model, a memory size, and the remaining resources of a worker node.
According to a third aspect of the present disclosure, there is provided a container creation apparatus, deployed on a worker node, comprising:
a receiving module configured to receive container creation information generated by a master node, wherein the container creation information comprises information of an image specified by a user;
a determining module configured to determine the CPU architecture type specified by the user according to the information of the user-specified image;
a selection module configured to select, from the CPU architecture simulators of the worker node, a simulator corresponding to the user-specified CPU architecture type;
and a creation module configured to create a container using the simulator corresponding to the user-specified CPU architecture type.
According to a fourth aspect of the present disclosure, there is provided a container creation apparatus, deployed on a master node, comprising:
a first receiving module configured to receive a container creation request of a user, wherein the container creation request comprises information of an image specified by the user;
a determining module configured to determine the CPU architecture type specified by the user according to the information of the user-specified image;
a second receiving module configured to receive the types of the CPU architecture simulators of a plurality of worker nodes;
a selection module configured to select a target worker node from the plurality of worker nodes according to the user-specified CPU architecture type and the types of the CPU architecture simulators of the plurality of worker nodes;
and a generating module configured to generate container creation information according to the container creation request of the user, wherein the container creation information instructs the target worker node to create the container.
According to a fifth aspect of the present disclosure, there is provided a cluster system comprising a worker node and a master node according to any one of the embodiments of the present disclosure.
According to a sixth aspect of the present disclosure, there is provided a container creation apparatus comprising:
a memory; and
a processor coupled to the memory, the processor being configured to perform the container creation method according to any of the embodiments of the present disclosure based on instructions stored in the memory.
According to a seventh aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the container creation method according to any of the embodiments of the present disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
FIG. 1 illustrates a flow diagram of a container creation method according to some embodiments of the present disclosure;
FIG. 2 illustrates a flow diagram of a container creation method according to further embodiments of the disclosure;
FIG. 3 shows a schematic view of a container creation method according to further embodiments of the present disclosure;
FIG. 4 illustrates a block diagram of a container creation apparatus, in accordance with some embodiments of the present disclosure;
FIG. 5 shows a block diagram of a container creation apparatus, according to further embodiments of the present disclosure;
FIG. 6 illustrates a block diagram of a cluster system, according to some embodiments of the present disclosure;
FIG. 7 shows a block diagram of a container creation apparatus, according to further embodiments of the present disclosure;
FIG. 8 illustrates a block diagram of a computer system for implementing some embodiments of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of parts and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
Meanwhile, it should be understood that, for convenience of description, the sizes of the various parts shown in the drawings are not drawn to scale.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate such techniques, methods, and apparatus should be considered part of the specification.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In the related art, Kubernetes can only create containers of the same CPU architecture on a Worker node: the CPU architecture of a container must be consistent with the physical CPU architecture of the Worker node. If an application is required to be adaptable to multiple CPU architectures, hardware resources for each of those CPU architectures must be prepared during the application's development and testing, which causes considerable cost pressure and resource waste.
The present disclosure provides a container creation method and device and a computer-readable storage medium, which make it possible to create, on a single Kubernetes Worker node, containers whose CPU architecture differs from the physical architecture of that Worker node.
FIG. 1 illustrates a flow diagram of a container creation method according to some embodiments of the present disclosure.
As shown in FIG. 1, the container creation method includes steps S110-S140, which are performed by a worker node.
In step S110, container creation information generated by the master node is received, wherein the container creation information includes information of an image specified by a user.
For example, the database that holds container creation information is monitored; when container creation information is added or changed in the database, the worker node obtains the new container creation information, which indicates that this worker node is to create the container.
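The text above does not fix a concrete monitoring mechanism, so the following Go sketch only illustrates the behaviour under stated assumptions: CreationStore is a hypothetical interface over the database written by the master node, and the worker-node agent simply polls it and reacts to records addressed to this node.

package agent

import "time"

// CreationInfo carries the container creation information generated by the
// master node: at minimum the user-specified image address and the target node.
type CreationInfo struct {
    ImageAddress string
    TargetNode   string
}

// CreationStore is a hypothetical view of the database written by the master node.
type CreationStore interface {
    ChangedSince(revision int64) (infos []CreationInfo, next int64, err error)
}

// watchCreations hands every new or changed record addressed to this worker node to handle.
func watchCreations(s CreationStore, nodeName string, handle func(CreationInfo)) {
    var rev int64
    for {
        infos, next, err := s.ChangedSince(rev)
        if err == nil {
            for _, ci := range infos {
                if ci.TargetNode == nodeName { // only act on records scheduled to this node
                    handle(ci)
                }
            }
            rev = next
        }
        time.Sleep(2 * time.Second) // simple polling stands in for a real watch
    }
}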
In some embodiments, the information of the user-specified image includes the address of the user-specified image.
In some embodiments, the container creation method further comprises sending the types of the CPU architecture simulators that this worker node has to the master node.
For example, simulators of multiple CPU architecture types are deployed on the worker node. In addition to reporting its physical CPU architecture type, the worker node also reports the types of the CPU architecture simulators it has.
Each worker node uses CPU architecture simulators provided by QEMU (Quick Emulator) to start containers whose CPU architecture differs from that of the Worker node, with one simulator per CPU architecture. So that the scheduler of the master node knows which CPU architecture types a worker node supports, each worker node reports its supported CPU architecture types (i.e., the types of the CPU architecture simulators it has) to the master node.
Based on the types of the CPU architecture simulators each worker node has and the information of the user-specified image, the master node determines which worker node is the target node for creating the container.
In some embodiments, the container creation method further comprises determining the types of the CPU architecture simulators that the worker node has from the files of the CPU architecture simulators under the worker node's file directory.
For example, each CPU architecture simulator is saved as a binary file under a standard directory of the worker node. The worker node identifies which CPU architecture simulators it contains by querying that file directory (e.g., /usr/bin/).
Deploying CPU architecture simulators on the worker node decouples the Kubernetes Worker node agent from the CPU architectures of the containers it can create. The worker node reports the types of its CPU architecture simulators rather than only its physical CPU architecture type, which gives the master node more choices when scheduling.
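As a concrete illustration of the detection and reporting described above, the following Go sketch scans the worker node's file directory for statically linked QEMU binaries and sends the derived architecture list to the master node. The qemu-<arch>-static naming and the /usr/bin/ directory follow the example given later in this description, while the /node-cpu-capabilities endpoint and the JSON payload shape are assumptions made only for this sketch.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
    "path/filepath"
    "strings"
)

// detectEmulators returns the CPU architectures this worker node can emulate,
// derived from the emulator binaries found under dir (e.g. /usr/bin/).
func detectEmulators(dir string) ([]string, error) {
    matches, err := filepath.Glob(filepath.Join(dir, "qemu-*-static"))
    if err != nil {
        return nil, err
    }
    var archs []string
    for _, m := range matches {
        name := filepath.Base(m) // e.g. "qemu-arm-static"
        archs = append(archs, strings.TrimSuffix(strings.TrimPrefix(name, "qemu-"), "-static"))
    }
    return archs, nil
}

// reportToMaster sends the detected architecture list to the master node.
// The URL path and payload shape are illustrative assumptions only.
func reportToMaster(masterURL, nodeName string, archs []string) error {
    body, err := json.Marshal(map[string]interface{}{
        "node":          nodeName,
        "emulatorArchs": archs,
    })
    if err != nil {
        return err
    }
    resp, err := http.Post(masterURL+"/node-cpu-capabilities", "application/json", bytes.NewReader(body))
    if err != nil {
        return err
    }
    return resp.Body.Close()
}

func main() {
    archs, err := detectEmulators("/usr/bin")
    if err != nil {
        panic(err)
    }
    fmt.Println("emulated CPU architectures on this worker node:", archs)
    // Reporting is not invoked here because the master endpoint is assumed:
    // _ = reportToMaster("http://master:8080", "worker-1", archs)
}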
In step S120, the CPU architecture type specified by the user is determined from the information of the user-specified image.
For example, the worker node agent parses out the type of CPU architecture corresponding to the user-specified container image.
In some embodiments, determining the user-specified CPU architecture type from the information of the user-specified image comprises: acquiring metadata of the user-specified image according to the information of the user-specified image; and determining the user-specified CPU architecture type according to the metadata of the user-specified image.
For example, the worker node downloads the metadata of the user-specified image from an image registry according to the address of the user-specified image, and then examines that metadata with the docker inspect command to determine the user-specified CPU architecture type and, therefore, which of the worker node's simulators should be used to simulate the CPU architecture corresponding to the image.
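A minimal Go sketch of this step, assuming the Docker CLI is available on the worker node and the image metadata has already been obtained locally (for example by pulling the image): it reads the Architecture field ("amd64", "arm64", and so on) via docker image inspect. The exact command used by the agent is not prescribed by the text, so this is only one way to realize it.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// imageArchitecture returns the CPU architecture recorded in the metadata of
// the user-specified image, as reported by the Docker CLI.
func imageArchitecture(image string) (string, error) {
    out, err := exec.Command(
        "docker", "image", "inspect",
        "--format", "{{.Architecture}}",
        image,
    ).Output()
    if err != nil {
        return "", fmt.Errorf("inspect %s: %w", image, err)
    }
    return strings.TrimSpace(string(out)), nil
}

func main() {
    // The image name reuses the example image that appears later in this description.
    arch, err := imageArchitecture("ioft/armhf-ubuntu:trusty")
    if err != nil {
        panic(err)
    }
    fmt.Println("CPU architecture of the user-specified image:", arch)
}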
In step S130, a simulator corresponding to the user-specified CPU architecture type is selected from the CPU architecture simulators that the worker node has.
For example, if the user-specified CPU architecture type is x86, the simulator of the x86 architecture is selected to create the container.
By analyzing the CPU architecture requirement of the container image selected by the user, the corresponding CPU architecture simulator is selected, and that simulator can then be used to create a container of the corresponding CPU architecture.
In step S140, a container is created using the simulator corresponding to the user-specified CPU architecture type.
In some embodiments, creating the container with the simulator corresponding to the user-specified CPU architecture type includes mapping the simulator's binary file under the worker node's file directory into the container's file directory.
For example, if the CPU architecture type of the container to be created differs from the physical CPU architecture type of the Worker node, a simulator is used. When the container is started, the corresponding simulator is mapped into the container by file mapping, which changes the container's start parameters. To create a container of the arm architecture on a worker node whose physical architecture is x86, the following command is used:
docker run -itd --privileged -v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static ioft/armhf-ubuntu:trusty /bin/bash
-v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static is the newly added content: on top of the original container creation command, the binary file of the simulator under the worker node's file directory is mapped into the container's file directory.
By creating containers with CPU architecture simulators, the present disclosure decouples the binding between the physical CPU architecture of a worker node and the CPU architectures of the containers that node can create.
In some embodiments, creating the container with the simulator corresponding to the user-specified CPU architecture type comprises: generating a container-start parameter list according to the container creation information and the simulator corresponding to the user-specified CPU architecture type, wherein the container-start parameter list includes the name of the simulator corresponding to the user-specified CPU architecture type; and creating the container according to the container-start parameter list.
For example, the name of the simulator corresponding to the user-specified CPU architecture type is added to the container-start parameter list, and the modified container start parameters are passed to the container runtime interface. Based on these parameters, including the name of the simulator corresponding to the user-specified CPU architecture type, the container runtime engine (Container Engine) is called through the Container Runtime Interface (CRI) to create the container.
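The following Go sketch shows one way to realize this step. It generates the container-start parameter list, including the -v mapping that places the emulator binary into the container, and starts the container through the Docker CLI as a stand-in for the call through the container runtime interface; the helper names are illustrative, and the -itd and --privileged flags are carried over from the documented example command above.

package main

import (
    "fmt"
    "os/exec"
)

// emulatorBinary maps a CPU architecture type to the emulator binary on the
// worker node, following the qemu-<arch>-static naming convention.
func emulatorBinary(arch string) string {
    return "/usr/bin/qemu-" + arch + "-static"
}

// startArgs generates the container-start parameter list from the creation
// information (image address) and the selected emulator.
func startArgs(image, arch string) []string {
    emu := emulatorBinary(arch)
    return []string{
        "run", "-itd", "--privileged",
        // Map the emulator binary under the worker node's file directory
        // into the container's file directory.
        "-v", emu + ":" + emu,
        image,
        "/bin/bash",
    }
}

func main() {
    args := startArgs("ioft/armhf-ubuntu:trusty", "arm")
    out, err := exec.Command("docker", args...).CombinedOutput()
    if err != nil {
        panic(fmt.Sprintf("create container: %v: %s", err, out))
    }
    fmt.Println("created container:", string(out))
}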
Because containers are scheduled and created using CPU architecture simulators, a worker node supports not only creating containers whose CPU architecture matches its own physical CPU architecture, but also containers whose CPU architecture differs from it. One worker node can therefore support creating containers of several different CPU architecture types without being limited by its physical CPU architecture, which improves the flexibility of container creation.
In addition, because the simulators decouple the binding between the physical CPU architecture of a worker node and the CPU architectures of the containers that node can create, containers of several different CPU architecture types can be created on a single worker node without preparing hardware resources for each of those CPU architectures, which reduces the cost of application development and testing.
FIG. 2 illustrates a flow diagram of a container creation method according to further embodiments of the present disclosure.
As shown in FIG. 2, the container creation method includes steps S210-S250, which are performed by the master node.
In step S210, a container creation request of a user is received, wherein the container creation request includes information of a user-specified image, and the information of the user-specified image includes the address of the user-specified image.
For example, the user selects an image, sends a container creation request to the master node, and specifies the required CPU architecture type.
In step S220, the CPU architecture type specified by the user is determined according to the information of the user-specified image.
For example, according to the information of the user-specified image, the address of the user-specified image is looked up in the image registry, and the CPU architecture type corresponding to the user-specified image is obtained.
In step S230, the types of the CPU architecture simulators that the plurality of worker nodes have are received.
For example, the types of the CPU architecture simulators that each worker node reports after querying its file directory are received.
In step S240, a target worker node is selected from the plurality of worker nodes according to the user-specified CPU architecture type and the types of the CPU architecture simulators that the plurality of worker nodes have.
For example, if a worker node has a simulator of the same CPU architecture type as the one specified by the user, that worker node can be selected as the target worker node.
In some embodiments, selecting a target worker node from the plurality of worker nodes based on the user-specified CPU architecture type and the types of the CPU architecture simulators the plurality of worker nodes have includes: selecting the target worker node from the worker nodes having a CPU architecture simulator corresponding to the user-specified CPU architecture type.
For example, if several nodes each have a simulator corresponding to the user-specified CPU architecture type, one of them is selected as the target worker node.
In some embodiments, selecting the target worker node from the worker nodes having a CPU architecture simulator corresponding to the user-specified CPU architecture type comprises: selecting the target worker node according to a preset filtering condition.
For example, if several nodes all have simulators corresponding to the user-specified CPU architecture type, the target worker node is selected from those nodes according to a preset policy.
In some embodiments, the preset filtering condition includes at least one of a CPU model, a memory size, and the remaining resources of a worker node.
For example, the scheduling of the worker node is performed in the following two stages; a brief sketch follows the list.
1. Filtering stage: select the Worker nodes that meet the image's CPU architecture requirement, and from those, filter out the candidate worker nodes according to the user's requirements on the container's CPU model and memory size.
2. Preferred stage: further screen the candidate worker nodes, for example by selecting the candidate node with the most remaining resources.
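The following Go sketch illustrates the two stages under stated assumptions: the Node fields and the "most remaining resources" score (free memory, then free CPU as a tie-break) are illustrative, since the text only names the filter conditions and the preference for remaining resources.

package main

import "fmt"

type Node struct {
    Name          string
    EmulatorArchs []string // architectures reported by the worker node
    CPUModel      string
    MemoryGiB     int
    FreeMilliCPU  int64
    FreeMemoryMiB int64
}

type Filter struct {
    Arch         string // CPU architecture required by the user-specified image
    CPUModel     string // optional required CPU model ("" means any)
    MinMemoryGiB int
}

func hasArch(n Node, arch string) bool {
    for _, a := range n.EmulatorArchs {
        if a == arch {
            return true
        }
    }
    return false
}

// schedule filters the candidate worker nodes, then prefers the node with the
// most remaining resources among the candidates that passed the filter.
func schedule(nodes []Node, f Filter) (Node, bool) {
    var best Node
    found := false
    for _, n := range nodes {
        // Filtering stage: emulated architecture, memory size, optional CPU model.
        if !hasArch(n, f.Arch) || n.MemoryGiB < f.MinMemoryGiB {
            continue
        }
        if f.CPUModel != "" && n.CPUModel != f.CPUModel {
            continue
        }
        // Preferred stage: keep the candidate with the most remaining resources.
        if !found ||
            n.FreeMemoryMiB > best.FreeMemoryMiB ||
            (n.FreeMemoryMiB == best.FreeMemoryMiB && n.FreeMilliCPU > best.FreeMilliCPU) {
            best, found = n, true
        }
    }
    return best, found
}

func main() {
    nodes := []Node{
        {Name: "worker-1", EmulatorArchs: []string{"arm", "riscv64"}, MemoryGiB: 64, FreeMemoryMiB: 20000, FreeMilliCPU: 8000},
        {Name: "worker-2", EmulatorArchs: []string{"x86_64"}, MemoryGiB: 32, FreeMemoryMiB: 30000, FreeMilliCPU: 4000},
    }
    if n, ok := schedule(nodes, Filter{Arch: "arm", MinMemoryGiB: 16}); ok {
        fmt.Println("target worker node:", n.Name)
    }
}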
In step S250, container creation information is generated according to the user's container creation request, wherein the container creation information instructs the target worker node to create the container.
For example, container creation information including the user-specified image address is generated and stored in the database, and the target worker node is scheduled to create the container.
When selecting the target worker node, the present disclosure relies on the types of the CPU architecture simulators of each worker node, not only on each worker node's physical CPU architecture type. By using CPU architecture simulators, the master node is no longer limited by the worker nodes' physical CPU architecture types when selecting the target node: more worker nodes become selectable, a more suitable target node can be found among them without the physical-architecture restriction, scheduling flexibility is improved, and cost is reduced.
FIG. 3 shows a schematic diagram of a container creation method according to further embodiments of the present disclosure.
As shown in FIG. 3, each Worker node first detects the type or types of simulators present on the node and then reports the detected simulator types to the master node;
the user then selects an image and initiates a container creation request through the k8s API Server.
The scheduler parses the CPU architecture type required by the image from the container creation request, selects nodes that support that architecture according to the supported CPU architecture types reported by each Worker node, and completes scheduling in combination with a preset scheduling strategy. The supported CPU architecture types reported by each Worker node are provided by the 'node CPU architecture capability reporting module' of each computing node.
The Worker node agent on the scheduled node receives the container creation request information and parses out the CPU architecture corresponding to the container image; then, according to requirements such as the container creation specification and the CPU architecture indicated by the image, the Worker node agent generates the container-start parameter list for the corresponding simulator and maps the binary file of the simulator for that CPU architecture into the container via the -v parameter.
Finally, according to the generated parameters, the Worker node agent calls the container runtime engine through the container runtime interface to complete the creation of the container.
Fig. 4 illustrates a block diagram of a container creation apparatus, according to some embodiments of the present disclosure.
As shown in FIG. 4, the container creation apparatus 40 is deployed on the worker node and includes a receiving module 401, a determining module 402, a selecting module 403, and a creating module 404.
The receiving module 401 is configured to receive container creation information generated by the master node, wherein the container creation information includes information of an image specified by a user, for example, to execute step S110 shown in FIG. 1;
the determining module 402 is configured to determine the CPU architecture type specified by the user according to the information of the user-specified image, for example, to execute step S120 shown in FIG. 1;
the selecting module 403 is configured to select, from the CPU architecture simulators that the worker node has, a simulator corresponding to the user-specified CPU architecture type, for example, to execute step S130 shown in FIG. 1;
the creating module 404 is configured to create a container using the simulator corresponding to the user-specified CPU architecture type, for example, to execute step S140 shown in FIG. 1.
FIG. 5 illustrates a block diagram of a container creation apparatus, according to further embodiments of the present disclosure.
As shown in FIG. 5, the container creation apparatus 50 is deployed on the master node and includes a first receiving module 501, a determining module 502, a second receiving module 503, a selecting module 504, and a generating module 505.
The first receiving module 501 is configured to receive a container creation request of a user, wherein the container creation request includes information of an image specified by the user, for example, to execute step S210 shown in FIG. 2.
The determining module 502 is configured to determine the CPU architecture type specified by the user according to the information of the user-specified image, for example, to execute step S220 shown in FIG. 2.
The second receiving module 503 is configured to receive the types of the CPU architecture simulators that the plurality of worker nodes have, for example, to execute step S230 shown in FIG. 2.
The selecting module 504 is configured to select a target worker node from the plurality of worker nodes according to the user-specified CPU architecture type and the types of the CPU architecture simulators that the plurality of worker nodes have, for example, to execute step S240 shown in FIG. 2.
The generating module 505 is configured to generate container creation information according to the container creation request of the user, wherein the container creation information instructs the target worker node to create a container, for example, to execute step S250 shown in FIG. 2.
FIG. 6 illustrates a block diagram of a cluster system, according to some embodiments of the disclosure.
As shown in FIG. 6, the cluster system 6 includes a worker node 4, which includes a container creation apparatus according to some embodiments of the present disclosure, and a master node 5, which includes a container creation apparatus according to some embodiments of the present disclosure.
FIG. 7 illustrates a block diagram of a container creation apparatus, according to further embodiments of the present disclosure.
As shown in FIG. 7, the container creation apparatus 7 includes a memory 71 and a processor 72 coupled to the memory 71. The memory 71 is configured to store instructions for performing the container creation method. The processor 72 is configured to perform the container creation method in any of the embodiments of the present disclosure based on the instructions stored in the memory 71.
FIG. 8 illustrates a block diagram of a computer system for implementing some embodiments of the present disclosure.
As shown in FIG. 8, computer system 80 may take the form of a general purpose computing device. Computer system 80 includes a memory 810, a processor 820, and a bus 800 that connects the various system components.
The memory 810 may include, for example, system memory and non-volatile storage media. The system memory stores, for example, an operating system, application programs, a boot loader (Boot Loader), and other programs. The system memory may include volatile storage media such as random access memory (RAM) and/or cache memory. The non-volatile storage medium stores, for example, instructions for performing the container creation method in any of the embodiments of the present disclosure. Non-volatile storage media include, but are not limited to, magnetic disk storage, optical storage, flash memory, and the like.
The processor 820 may be implemented as discrete hardware components such as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gates, or transistors. Accordingly, each module, such as the receiving module and the determining module, may be implemented by a central processing unit (CPU) executing instructions in a memory that perform the corresponding step, or by a dedicated circuit that performs the corresponding step.
The bus 800 may use any of a variety of bus architectures. For example, bus architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, and the Peripheral Component Interconnect (PCI) bus.
The computer system 80 may also include an input/output interface 830, a network interface 840, a storage interface 850, and the like. These interfaces 830, 840, 850, the memory 810, and the processor 820 may be connected through the bus 800. The input/output interface 830 provides a connection interface for input/output devices such as a display, a mouse, and a keyboard. The network interface 840 provides a connection interface for various networking devices. The storage interface 850 provides a connection interface for external storage devices such as a floppy disk, a USB flash drive, and an SD card.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable apparatus to produce a machine, such that the instructions, when executed by the processor, produce an apparatus that implements the functions specified in one or more blocks of the flowcharts and/or block diagrams.
These computer-readable program instructions may also be stored in a computer-readable memory that can direct a computer to operate in a particular manner, such that the stored instructions produce an article of manufacture including instructions which implement the functions specified in one or more blocks of the flowcharts and/or block diagrams.
The present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects.
By the container creation method and device and the computer-readable storage medium in the embodiments, the flexibility of container creation is improved, and the cost of development and testing is reduced.
Thus far, a container creation method and apparatus, a computer readable storage medium according to the present disclosure have been described in detail. Some details well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.

Claims (16)

1. A container creation method, performed by a worker node, comprising:
receiving container creation information generated by a master node, wherein the container creation information comprises information of an image specified by a user;
determining the CPU architecture type specified by the user according to the information of the user-specified image;
selecting, from CPU architecture simulators of the worker node, a simulator corresponding to the user-specified CPU architecture type;
and creating the container using the simulator corresponding to the user-specified CPU architecture type.
2. The container creation method of claim 1, further comprising:
sending the type of the CPU architecture simulator of the worker node to the master node.
3. The container creation method of claim 2, further comprising:
determining the type of the CPU architecture simulator of the worker node according to the file of the CPU architecture simulator under the file directory of the worker node.
4. The container creation method of claim 1, wherein said determining the user-specified CPU architecture type from the information of the user-specified image comprises:
acquiring metadata of the user-specified image according to the information of the user-specified image;
and determining the user-specified CPU architecture type according to the metadata of the user-specified image.
5. The container creation method of claim 1, wherein creating the container using the simulator corresponding to the user-specified CPU architecture type comprises:
mapping the binary file of the simulator under the file directory of the worker node into the file directory of the container.
6. The container creation method of claim 1, wherein creating the container using the simulator corresponding to the user-specified CPU architecture type comprises:
generating a container-start parameter list according to the container creation information and the simulator corresponding to the user-specified CPU architecture type, wherein the container-start parameter list comprises the name of the simulator corresponding to the user-specified CPU architecture type;
and creating the container according to the container-start parameter list.
7. The container creation method of claim 1, wherein the information of the user-specified image comprises an address of the user-specified image.
8. A container creation method, performed by a master node, comprising:
receiving a container creation request of a user, wherein the container creation request comprises information of an image specified by the user;
determining the CPU architecture type specified by the user according to the information of the user-specified image;
receiving the types of the CPU architecture simulators of a plurality of worker nodes;
selecting a target worker node from the plurality of worker nodes according to the user-specified CPU architecture type and the types of the CPU architecture simulators of the plurality of worker nodes;
and generating container creation information according to the container creation request of the user, wherein the container creation information instructs the target worker node to create the container.
9. The container creation method of claim 8, wherein the selecting a target worker node from the plurality of worker nodes according to the user-specified CPU architecture type and the types of the CPU architecture simulators that the plurality of worker nodes have comprises:
selecting the target worker node from worker nodes having a CPU architecture simulator corresponding to the user-specified CPU architecture type.
10. The container creation method of claim 8, wherein the selecting the target worker node from worker nodes having a CPU architecture simulator corresponding to the user-specified CPU architecture type comprises:
selecting the target worker node according to a preset filtering condition.
11. The container creation method of claim 10, wherein the preset filtering condition comprises at least one of a CPU model, a memory size, and remaining resources of a worker node.
12. A container creation apparatus, deployed on a worker node, comprising:
a receiving module configured to receive container creation information generated by a master node, wherein the container creation information comprises information of an image specified by a user;
a determining module configured to determine the CPU architecture type specified by the user according to the information of the user-specified image;
a selection module configured to select, from CPU architecture simulators of the worker node, a simulator corresponding to the user-specified CPU architecture type;
and a creation module configured to create a container using the simulator corresponding to the user-specified CPU architecture type.
13. A container creation apparatus, deployed on a master node, comprising:
a first receiving module configured to receive a container creation request of a user, wherein the container creation request comprises information of an image specified by the user;
a determining module configured to determine the CPU architecture type specified by the user according to the information of the user-specified image;
a second receiving module configured to receive the types of the CPU architecture simulators of a plurality of worker nodes;
a selection module configured to select a target worker node from the plurality of worker nodes according to the user-specified CPU architecture type and the types of the CPU architecture simulators of the plurality of worker nodes;
and a generating module configured to generate container creation information according to the container creation request of the user, wherein the container creation information instructs the target worker node to create the container.
14. A cluster system comprising a worker node comprising the container creation apparatus according to claim 12 and a master node comprising the container creation apparatus according to claim 13.
15. A container creation apparatus, comprising:
a memory; and
a processor coupled to the memory, the processor being configured to perform the container creation method of any one of claims 1 to 11 based on instructions stored in the memory.
16. A computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the container creation method of any one of claims 1 to 11.
CN202210821259.7A 2022-07-13 2022-07-13 Container creation method and device, and computer-readable storage medium Pending CN115145692A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210821259.7A CN115145692A (en) 2022-07-13 2022-07-13 Container creation method and device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210821259.7A CN115145692A (en) 2022-07-13 2022-07-13 Container creation method and device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN115145692A true CN115145692A (en) 2022-10-04

Family

ID=83412633

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210821259.7A Pending CN115145692A (en) 2022-07-13 2022-07-13 Container creation method and device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN115145692A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination