CN114296933A - Implementation method of lightweight container under terminal edge cloud architecture and data processing system - Google Patents


Info

Publication number: CN114296933A
Authority: CN (China)
Legal status: Pending
Application number: CN202111649189.3A
Original language: Chinese (zh)
Inventors: 牛思杰, 庞涛, 崔思静, 潘碧莹, 陈梓荣
Assignee (current and original): China Telecom Corp Ltd
Application filed by China Telecom Corp Ltd; priority to application CN202111649189.3A; published as CN114296933A. Current legal status: Pending.

Landscapes: Stored Programmes (AREA)
Abstract

The disclosure relates to the field of computer technology, and in particular to a method for implementing a lightweight container under a terminal-edge-cloud architecture, a data processing system, and a storage medium. The method comprises the following steps: a cloud management platform sends a container creation instruction to an edge server; the edge server determines an image pull policy according to the container creation instruction and sends a control instruction to each node according to the image pull policy, the control instruction comprising an image pull instruction and a container creation instruction; and each node pulls image layers according to the control instruction and jointly mounts them according to the hierarchical relationship between the image layers to generate a container layer, so as to start the container. Under a terminal-edge-cloud architecture, this scheme effectively avoids the problem that resource-constrained terminals cannot run containers smoothly, makes reasonable use of limited resources, and provides a feasible scheme for terminal-edge-cloud container collaboration.

Description

Implementation method of lightweight container under terminal edge cloud architecture and data processing system
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a method for implementing a lightweight container under a terminal-edge-cloud architecture, a data processing system, and a storage medium.
Background
The terminal-edge-cloud architecture refers to an integrated architecture formed by the cooperation of terminal devices, edge servers, and cloud servers. In current mainstream container schemes, such as Docker, a container image must be pulled before the container is created, and the container is then created from that image, which is equivalent to instantiating the image. As for the size of container images, an ordinary nginx image, for example, is 127 MB; many IoT (Internet of Things) terminal devices have very limited disk space (a camera, for example, typically has only 64 MB of flash or even less) and cannot hold most images, so current container schemes are not feasible for such devices.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide a method for implementing a lightweight container under a terminal-edge-cloud architecture, a data processing system, and a storage medium, thereby overcoming, at least to some extent, the problems caused by the limitations and drawbacks of the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a method for implementing a lightweight container under a terminal-edge-cloud architecture, the method comprising:
a cloud management platform sending a container creation instruction to an edge server;
the edge server determining an image pull policy according to the container creation instruction and sending a control instruction to each node according to the image pull policy, the control instruction comprising an image pull instruction and a container creation instruction; and
each node pulling image layers according to the control instruction and jointly mounting them according to the hierarchical relationship between the image layers to generate a container layer, so as to start the container.
In an exemplary embodiment of the present disclosure, the edge server comprises a master node and a container deployment policy module;
the edge server determining an image pull policy according to the container creation instruction comprises:
the master node receiving the container creation instruction, parsing it to obtain container creation parameters, and sending the container creation parameters to the container deployment policy module; and
the container deployment policy module formulating an image pull policy according to the container creation parameters and preset rules and returning the image pull policy to the master node; the image pull policy comprises a master-node image pull policy and a node image pull policy.
In an exemplary embodiment of the present disclosure, the container creation parameters include: container image configuration information, node resource states, the number of containers to be created, and the image name of the container to be created.
In an exemplary embodiment of the present disclosure, the container deployment policy module formulating an image pull policy according to the container creation parameters and preset rules comprises:
prioritizing the nodes according to the resource state of each node;
allocating the containers to be created to the corresponding nodes in sequence, according to their number and the priority order, to obtain a container allocation result;
evaluating each node based on the container allocation result and the node resource states, and determining the container image pull information corresponding to each node when the evaluation passes; and
generating an image pull policy according to the container allocation result and the container image pull information, and sending the image pull policy to the master node.
In an exemplary embodiment of the present disclosure, sending a control instruction to each node according to the image pull policy comprises:
the master node sending a control instruction to each node according to the image pull policy.
In an exemplary embodiment of the present disclosure, a node pulling image layers according to the control instruction and jointly mounting them according to the hierarchical relationship between the image layers to generate a container layer comprises:
the node remotely mounting the image layers held on the master node and jointly mounting them with its local image layers, according to the inheritance relationship between the image layers, to generate the container layer.
In an exemplary embodiment of the present disclosure, the method further comprises:
the cloud management platform, in response to a container service request, creating a container creation task and executing the container creation task so as to send a container creation instruction to the edge server.
According to a second aspect of the present disclosure, there is provided a data processing system, the system comprising:
a cloud management platform configured to send a container creation instruction to an edge server;
the edge server, configured to determine an image pull policy according to the container creation instruction and to send a control instruction to each node according to the image pull policy, the control instruction comprising an image pull instruction and a container creation instruction; and
the nodes, each configured to pull image layers according to the control instruction and to jointly mount them according to the hierarchical relationship between the image layers to generate a container layer, so as to start the container.
In an exemplary embodiment of the present disclosure, the edge server comprises a master node and a container deployment policy module;
the master node is configured to receive the container creation instruction, parse it to obtain container creation parameters, and send the container creation parameters to the container deployment policy module; and
the container deployment policy module is configured to formulate an image pull policy according to the container creation parameters and preset rules and to return the image pull policy to the master node; the image pull policy comprises a master-node image pull policy and a node image pull policy.
According to a third aspect of the present disclosure, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the above method for implementing a lightweight container under a terminal-edge-cloud architecture.
In the method for implementing a lightweight container under a terminal-edge-cloud architecture provided by the present disclosure, a cloud management platform sends a container creation instruction to an edge server; the edge server determines an image pull policy according to the container creation instruction and sends a control instruction to each node according to the image pull policy; and each node pulls image layers according to the control instruction and jointly mounts them according to the hierarchical relationship between the image layers to generate a container layer. The container image is thus pulled in layers: the image that a conventional container scheme would pull entirely to the local device is instead stored in a distributed manner across the edge server and the local terminal, which solves the problem of running containers when terminal resources are limited.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 schematically illustrates a method for implementing a lightweight container under a terminal-edge-cloud architecture in an exemplary embodiment of the disclosure;
Fig. 2 schematically illustrates a terminal-edge-cloud system architecture in an exemplary embodiment of the present disclosure;
Fig. 3 schematically illustrates a method of determining an image pull policy in an exemplary embodiment of the present disclosure;
Fig. 4 schematically illustrates the flow of formulating an image pull policy in an exemplary embodiment of the present disclosure;
Fig. 5 schematically illustrates a data processing system in an exemplary embodiment of the present disclosure;
Fig. 6 schematically illustrates a storage medium in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In the related art, mainstream container schemes (for example, Docker) require the container image to be pulled before the container is created; the container is then created from that image, which is equivalent to instantiating it. A container image is the set of binary files and dependency packages required for the container to run. It is stored on the system in layers, and the files and configuration information of each layer are overlaid together to form the image. The bottom layer of an image is the base image, usually the filesystem of a Linux operating system; because it provides the dependency packages and common instruction set for the executables of the other layers, the base image is generally large. The other layers of the image are typically modifications based on the layers beneath them and are usually relatively small. Taking an ordinary nginx image as an example, its size is 127 MB; many IoT terminal devices with limited disk space (a camera, for example, typically has only 64 MB of flash or even less) cannot hold most images, so current container schemes are not feasible for them. Container-based virtualization of terminals is an important way to bring terminals into an integrated terminal-edge-cloud collaborative architecture. With the development of 5G networks, more IoT devices besides mobile phones, such as cameras and home routers, are being connected, and the hardware resources of these terminal devices are relatively limited and cannot meet the operating requirements of general-purpose container schemes.
In view of the above shortcomings of the prior art, the present exemplary embodiment provides a method for implementing a lightweight container under a terminal-edge-cloud architecture. Referring to fig. 1, the method may include the following steps:
step S11, a cloud management platform sends a container creation instruction to an edge server;
step S12, the edge server determines an image pull policy according to the container creation instruction and sends a control instruction to each node according to the image pull policy; the control instruction comprises an image pull instruction and a container creation instruction; and
step S13, each node pulls image layers according to the control instruction and jointly mounts them according to the hierarchical relationship between the image layers to generate a container layer, so as to start the container.
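The three steps above can be sketched as the following toy message flow. All class names, message fields, and the "every node pulls every layer" policy are invented for illustration; the patent does not prescribe this interface.

```python
# Toy sketch of the S11 -> S12 -> S13 flow (illustrative names only).

def cloud_send_create(edge, image_name, count):
    # Step S11: the cloud management platform sends a container creation
    # instruction to the edge server.
    return edge.handle_create({"image": image_name, "count": count})

class EdgeServer:
    def __init__(self, nodes):
        self.nodes = nodes

    def handle_create(self, instruction):
        # Step S12: determine an image pull policy, then send each node a
        # control instruction (image pull instruction + creation instruction).
        policy = self.make_pull_policy(instruction)
        return [node.handle_control(ctrl)
                for node, ctrl in zip(self.nodes, policy)]

    def make_pull_policy(self, instruction):
        # Placeholder policy: every node pulls all three layers locally.
        layers = [f"{instruction['image']}:layer{i}" for i in range(3)]
        return [{"pull": layers, "create": instruction["image"]}
                for _ in range(instruction["count"])]

class Node:
    def handle_control(self, ctrl):
        # Step S13: pull the assigned layers, then "mount" them in order to
        # form the container layer and start the container.
        mounted = "+".join(ctrl["pull"])
        return f"container({ctrl['create']}) on [{mounted}]"
```

The distributed-storage refinement described below replaces `make_pull_policy` with a policy that splits layers between the master node and each slave node.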
The method provided by the present exemplary embodiment exploits the fact that container images are stored and pulled in layers and that image layers are read-only: the container image that a conventional container scheme would pull entirely to the local device is instead stored in a distributed manner across the edge server and the local terminal, which effectively solves the problem of running containers when terminal resources are limited.
Hereinafter, the steps of the method for implementing a lightweight container under a terminal-edge-cloud architecture in the present exemplary embodiment will be described in more detail with reference to the drawings and examples.
In step S11, the cloud management platform sends a container creation instruction to the edge server.
In this example embodiment, referring to the terminal-edge-cloud system architecture shown in fig. 2, a management platform 211 and an image repository 212 may be deployed in the cloud 21. The cloud management platform can interact with the edge 22 devices and the terminals 23. The cloud 21 may be a cloud server, and the edge 22 may be a deployed edge server; the edge server may include a master node 221 and a container deployment policy module 222. The terminal 23 may include a plurality of terminal devices, each of which may serve as one node 231 (slave node). For example, a terminal may be a smart device on the user side, such as a mobile phone or a tablet computer.
In this example embodiment, specifically, the cloud management platform creates a container creation task in response to a container service request, and then executes the container creation task to send a container creation instruction to the edge server.
For example, when an application in a user terminal needs to create a container, it may generate a container service request and send it to the cloud management platform. The container service request may include the number of containers the application needs to create, the application name, configuration information, and the like. After receiving the request, the cloud management platform can create the corresponding container creation task on the cloud server and send the task data to the edge server. The container creation instruction may include information such as the image name of the container to be created and the number of containers required. Specifically, the container creation instruction may be sent to the master node in the edge server.
In step S12, the edge server determines an image pull policy according to the container creation instruction and sends a control instruction to each node according to the image pull policy; the control instruction comprises an image pull instruction and a container creation instruction.
In this example embodiment, the edge server includes a master node and a container deployment policy module. Referring to fig. 3, the edge server determining the image pull policy according to the container creation instruction may include:
step S121, the master node receives the container creation instruction, parses it to obtain container creation parameters, and sends the container creation parameters to the container deployment policy module; and
step S122, the container deployment policy module formulates an image pull policy according to the container creation parameters and preset rules and returns the image pull policy to the master node; the image pull policy comprises a master-node image pull policy and a node image pull policy.
Specifically, the container deployment policy module may interact with the master node via an HTTP RESTful API, with the exchanged information recorded in JSON form. The recorded information mainly includes: 1) the container image configuration information to be pulled and the number of containers to be started; and 2) the resource state information of each node. The container deployment policy module can return to the master node the image layers and other configuration information required by the master node and the nodes. For example, the data sent by the master node to the container deployment policy module may include node resource state information such as each node's IP address, memory usage, and disk usage; the information returned by the container deployment policy module to the master node may include the IP address of the node or master node and the identifiers of the image layers.
In this example embodiment, the container creation parameters may include container image configuration information, node resource states, the number of containers to be created, the image name of the container to be created, and the like. A node resource state may include, for example, CPU utilization, memory usage, memory configuration, disk usage, and disk configuration.
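As a hypothetical illustration of the JSON exchange described above (the patent fixes no schema, so every field name here is invented for the sketch):

```python
import json

# Hypothetical request from the master node to the container deployment
# policy module: image configuration, container count, per-node resources.
request = {
    "image_config": {"name": "nginx", "layers": 5},
    "containers_to_create": 3,
    "nodes": [
        {"ip": "192.168.1.11", "mem_used_pct": 40, "disk_free_mb": 48},
        {"ip": "192.168.1.12", "mem_used_pct": 85, "disk_free_mb": 12},
    ],
}

# Hypothetical response: per node, which layer identifiers to pull locally
# and which layers stay on the master node to be remote-mounted.
response = {
    "policy": [
        {"ip": "192.168.1.11", "pull_layers": [2, 3, 4], "master_layers": [0, 1]},
    ]
}

payload = json.dumps(request)          # carried over the HTTP RESTful API
assert json.loads(payload) == request  # the record round-trips losslessly
```

The split in `response` mirrors the policy described later: the large base layers (0 and 1) remain on the edge server while the small upper layers go to the terminal.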
In this example embodiment, referring to fig. 4, the container deployment policy module formulating the image pull policy according to the container creation parameters and preset rules may specifically include:
step S21, prioritizing the nodes according to the resource state of each node;
step S22, allocating the containers to be created to the corresponding nodes in sequence, according to their number and the priority order, to obtain a container allocation result;
step S23, evaluating each node based on the container allocation result and the node resource states, and determining the container image pull information corresponding to each node when the evaluation passes; and
step S24, generating an image pull policy according to the container allocation result and the container image pull information, and sending the image pull policy to the master node.
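Steps S21 and S22 can be sketched as follows. The free-resource "score" and the floor-divide-plus-remainder allocation are assumptions, chosen to match the worked allocation examples given in this disclosure:

```python
# Minimal sketch (not the patent's exact algorithm) of S21-S22:
# rank nodes by spare resources, then divide the containers by priority.

def divide_containers(nodes, n_containers):
    """nodes: list of (name, free_resource_score) tuples.
    Returns {node_name: allocated_container_count}."""
    # S21: more spare resources -> higher priority.
    ordered = sorted(nodes, key=lambda n: n[1], reverse=True)
    # S22: floor-divide over all nodes, then hand the remainder to the
    # highest-priority nodes first.
    base, rem = divmod(n_containers, len(ordered))
    return {name: base + (1 if i < rem else 0)
            for i, (name, _) in enumerate(ordered)}
```

With 9 containers and 4 nodes this yields 3 for the top-priority node and 2 for each of the rest; with 5 containers and 8 nodes, the top five nodes each get 1.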
Specifically, the formulation of the image pull policy may proceed as follows:
3.1. The nodes are prioritized according to the resource state of each node; a node with more spare resources has a higher priority.
3.2. The containers to be created are allocated to the nodes in sequence according to their number and the priority order. For example, if 5 containers are to be created and there are 8 nodes in total, the nodes with priorities 1-5 are each allocated 1 container.
3.3. If the number of containers is larger than the number of nodes, each node is first allocated [number of containers / number of nodes] containers ([·] denotes rounding down), and step 3.2 is then repeated for the remainder. For example, if 9 containers are to be created and there are 4 nodes in total, the node with priority 1 is allocated 3 containers and the remaining nodes are each allocated 2.
3.4. Each node is evaluated according to the allocation result and its resource state. If the evaluation passes, the node is deployable and its container image pull information is generated; this information specifies which image layers the node needs to pull and which image layers the master node needs to pull. If a node fails the evaluation, its container allocation is reduced by 1 and it is re-evaluated, until it passes or its allocation reaches 0.
3.5. The containers removed in step 3.4 because of failed evaluations are reallocated, following steps 3.2 and 3.3, to the nodes that have not failed any evaluation.
3.6. Steps 3.4 and 3.5 are repeated until all nodes pass the evaluation. If this is still not satisfied after 10 repetitions, the container deployment policy module returns to the master node a result indicating that no policy can be formulated.
3.7. The container allocation results of the nodes and the container image pull information are compiled into an image pull policy, which is returned to the master node.
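The retry loop of steps 3.4-3.6 can be sketched as below. The per-container disk cost and the deployability check are invented stand-ins for whatever evaluation rule a real deployment would use:

```python
def evaluate(free_mb, n_containers, per_container_mb=16):
    # Toy deployability check (an assumption): a node passes if it has
    # enough free disk for all containers allocated to it.
    return free_mb >= n_containers * per_container_mb

def settle(counts, free_mb, rounds=10):
    """counts: {node: allocated containers}; free_mb: {node: free disk, MB}.
    Returns the final allocation, or None when no policy can be
    formulated within `rounds` repetitions (step 3.6)."""
    failed_once = set()
    for _ in range(rounds):
        shed = 0
        for node in counts:
            # 3.4: shrink a failing node's allocation one container at a time.
            while counts[node] > 0 and not evaluate(free_mb[node], counts[node]):
                counts[node] -= 1
                shed += 1
                failed_once.add(node)
        # 3.5: give the shed containers to nodes that have never failed.
        healthy = [n for n in counts if n not in failed_once]
        for i in range(shed):
            if not healthy:
                return None
            counts[healthy[i % len(healthy)]] += 1
        if all(evaluate(free_mb[n], counts[n]) for n in counts):
            return counts
    return None
```

For instance, with two nodes allocated 2 containers each, where one node has 64 MB free and the other only 16 MB, the constrained node sheds one container and the healthy node absorbs it.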
In this exemplary embodiment, sending a control instruction to each node according to the image pull policy means that the master node sends a control instruction to each node according to the image pull policy.
In step S13, each node pulls image layers according to the control instruction and jointly mounts them according to the hierarchical relationship between the image layers to generate a container layer, so as to start the container.
In this exemplary embodiment, specifically, the control instruction may include the image layers the node needs to pull and the configuration parameters; each node pulls its assigned image layers from the image repository in the cloud. The node then remotely mounts, via RPC, the image layers held on the master node, and jointly mounts them with its local image layers, according to the inheritance relationship between the layers, to generate the container layer.
For example, a base image may be deployed on the master node, and the nodes may be selected based on their resource states. Suppose there are 3 nodes: if the memory occupancy of Node2 is too high, the containers may be created on Node1 and Node3; if the disk usage of Node1 is high, Node1 may download three image layers locally and mount the two remaining layers on the master node; if the disk usage of Node3 is relatively low, Node3 may download four image layers locally and mount only the one remaining layer (the base image) on the master node. Note that because Node1 mounts more remote image layers than Node3, its memory consumption will be higher.
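The joint mount described above resembles an overlayfs union mount. The sketch below composes the mount command a node might issue, with the remotely mounted master layers at the bottom of the read-only stack and a writable container layer on top; the paths, and the use of overlayfs itself, are assumptions (the patent specifies only RPC remote mounting plus a joint mount):

```python
def overlay_mount_cmd(remote_layers, local_layers, container_id):
    """remote_layers: mount points of layers remote-mounted from the master
    (base image last); local_layers: locally pulled layers, top-most first."""
    # overlayfs lists lowerdir from top-most to bottom-most layer, so the
    # base image (remote-mounted from the master node) goes last.
    lower = ":".join(local_layers + remote_layers)
    upper = f"/var/lib/containers/{container_id}/diff"     # writable container layer
    work = f"/var/lib/containers/{container_id}/work"      # overlayfs scratch dir
    merged = f"/var/lib/containers/{container_id}/merged"  # container rootfs
    return (f"mount -t overlay overlay "
            f"-o lowerdir={lower},upperdir={upper},workdir={work} {merged}")
```

Because all lower layers are read-only, several containers (and several nodes) can share the same master-held base layer while each keeps its own private `upperdir`.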
The method for implementing a lightweight container under a terminal-edge-cloud architecture provided by the present disclosure exploits the layered storage, layered pulling, and read-only nature of container image layers: the container image that a conventional container scheme would pull entirely to the local device is stored in a distributed manner across the edge server and the local terminal. Through the container deployment policy module deployed on the edge server, when the edge receives a terminal container start instruction from the cloud management platform, the module determines an image pull and storage policy according to the resource state of the terminal nodes and the configuration information of the target container image; for example, the larger base layer is pulled to the edge server, while the smaller image layers are pulled to the terminal. When the terminal starts the container, the base layer on the edge server is designated, by RPC remote mounting, as part of the container's rootfs and is jointly mounted with the other image layers on the terminal to generate a writable container layer, completing the container start. If there are multiple nodes in the cluster, the container deployment policy module decides on which node or nodes to start the container, and the number of image layers pulled by each node may differ. This scheme achieves intelligent and flexible deployment of cluster containers through the container deployment policy module; in addition, multiple nodes in the cluster can reuse the image layers on the master, saving space; furthermore, the configuration of the container deployment policy module can be modified according to the actual situation and needs of the cluster.
Mainstream container schemes in the industry focus mainly on operation in the cloud and at the edge and have many limitations in supporting resource-constrained devices such as mobile and smart terminals. By contrast, the technical scheme of the present disclosure effectively avoids the problem that resource-constrained terminals cannot run containers smoothly, makes reasonable use of limited resources, and provides a feasible basis for terminal-edge-cloud container collaboration.
It is to be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the method according to an exemplary embodiment of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Further, referring to fig. 5, the data processing system 50 according to the present exemplary embodiment may include a cloud server 501, an edge server 502, and a terminal 503, wherein:
the cloud server 501 is configured to host a cloud management platform 5011 and to send a container creation instruction to the edge server 502;
the edge server 502 is configured to determine an image pull policy according to the container creation instruction and to send a control instruction to each node 5031 according to the image pull policy, the control instruction comprising an image pull instruction and a container creation instruction; and
the terminal 503 is configured to provide the nodes 5031, each of which pulls image layers according to the control instruction and jointly mounts them according to the hierarchical relationship between the image layers to generate a container layer, so as to start the container.
In some example embodiments, the edge server 502 may include a master node 5022, a container deployment policy module 5021.
The master node 5022 may be configured to receive the container creation instruction, parse the container creation instruction to obtain container creation parameters, and send the container creation parameters to a container deployment policy module.
The container deployment policy module 5021 may be configured to formulate a mirror image pull policy according to the container creation parameter and a preset rule, and return the mirror image pull policy to the master node; the mirror image pulling strategy comprises a master node mirror image pulling strategy and a node mirror image pulling strategy.
In some exemplary embodiments, the cloud server 501, including the mirror repository 5012, is configured to pull a mirror from the node 5031 to the mirror repository 5012.
The method for implementing a lightweight container under the end edge cloud architecture is applied to the data processing system 50; the specific details of each module in the data processing system 50 have been described in detail in the corresponding method for implementing a lightweight container under the end edge cloud architecture, and therefore are not repeated here.
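The end-to-end flow through the three tiers can be sketched as a small simulation: the cloud issues a container creation instruction, the edge derives per-node control instructions (pull instruction plus create instruction), and each node executes its instruction. All message shapes and function names here are illustrative assumptions.

```python
# Minimal end-to-end sketch of data processing system 50's three tiers.

def cloud_create_instruction(image, count):
    # cloud management platform: container creation instruction
    return {"image": image, "count": count}

def edge_make_control_instructions(instr, node_names):
    # edge server: one control instruction per node, combining an image
    # pull instruction and a container creation instruction
    per_node = instr["count"] // len(node_names)
    extra = instr["count"] % len(node_names)
    out = {}
    for i, name in enumerate(node_names):
        n = per_node + (1 if i < extra else 0)
        if n:
            out[name] = {"pull": {"image": instr["image"]},
                         "create": {"replicas": n}}
    return out

def node_execute(ctrl):
    # node: pull image layers, union-mount, start containers (simulated)
    return f"started {ctrl['create']['replicas']} x {ctrl['pull']['image']}"
```

A usage example: `edge_make_control_instructions(cloud_create_instruction("nginx:alpine", 3), ["n1", "n2"])` yields two control instructions, with the first-ranked node receiving the extra replica.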
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into and embodied by a plurality of modules or units.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
Referring to fig. 6, a program product 600 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. A method for implementing a lightweight container under a terminal edge cloud architecture, characterized by comprising the following steps:
the cloud management platform sends a container creation instruction to the edge server;
the edge server determines an image pull policy according to the container creation instruction, and sends a control instruction to each node according to the image pull policy; the control instruction comprises an image pull instruction and a container creation instruction;
and each node pulls image layers according to the control instruction, and union-mounts them according to the hierarchical relationship between the image layers to generate a container layer, so as to start the container.
2. The method for implementing the lightweight container under the end edge cloud architecture of claim 1, wherein the edge server comprises a master node and a container deployment policy module;
the edge server determining an image pull policy according to the container creation instruction comprises the following steps:
the master node receives the container creation instruction, parses it to obtain container creation parameters, and sends the container creation parameters to the container deployment policy module;
the container deployment policy module formulates an image pull policy according to the container creation parameters and preset rules, and returns the image pull policy to the master node; the image pull policy comprises a master-node image pull policy and a node image pull policy.
3. The method for implementing the lightweight container under the end edge cloud architecture according to claim 2, wherein the container creation parameters comprise: container image configuration information, node resource states, the number of containers to be created, and the image names of the containers to be created.
4. The method for implementing the lightweight container under the end edge cloud architecture according to claim 2 or 3, wherein the container deployment policy module formulating an image pull policy according to the container creation parameters and preset rules comprises the following steps:
sorting the nodes by priority according to the resource state of each node;
allocating the containers to the corresponding nodes in priority order according to the number of containers to be created, to obtain a container allocation result;
evaluating each node based on the container allocation result and the node resource state, and determining the container image pull information corresponding to each node when the evaluation passes;
and generating an image pull policy according to the container allocation result and the container image pull information, and sending the image pull policy to the master node.
5. The method for implementing a lightweight container under an end edge cloud architecture according to claim 2, wherein the sending of a control instruction to each node according to the image pull policy comprises:
the master node sends a control instruction to each node according to the image pull policy.
6. The method for implementing the lightweight container under the end edge cloud architecture according to claim 2, wherein each node pulling image layers according to the control instruction and union-mounting them according to the hierarchical relationship between the image layers to generate a container layer comprises:
the node remotely mounts the image layers of the master node, and union-mounts them with the local image layers according to the inheritance relationship between the image layers to generate a container layer.
7. The method for implementing a lightweight container under an end edge cloud architecture according to claim 1, further comprising:
the cloud management platform responds to a container service request, creates a container creation task, and executes the container creation task to send the container creation instruction to the edge server.
8. A data processing system, characterized in that the system comprises:
the cloud server is used for hosting a cloud management platform and sending a container creation instruction to the edge server;
the edge server is used for determining an image pull policy according to the container creation instruction and sending a control instruction to each node according to the image pull policy; the control instruction comprises an image pull instruction and a container creation instruction;
and the terminal is used for providing a node, pulling image layers according to the control instruction, and union-mounting them according to the hierarchical relationship between the image layers to generate a container layer so as to start the container.
9. The data processing system of claim 8, wherein the edge server comprises a master node, a container deployment policy module;
the master node is used for receiving the container creation instruction, parsing it to obtain container creation parameters, and sending the container creation parameters to the container deployment policy module;
the container deployment policy module is used for formulating an image pull policy according to the container creation parameters and preset rules and returning the image pull policy to the master node; the image pull policy comprises a master-node image pull policy and a node image pull policy.
10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the method for implementing a lightweight container under an end edge cloud architecture according to any one of claims 1 to 7.
CN202111649189.3A 2021-12-30 2021-12-30 Implementation method of lightweight container under terminal edge cloud architecture and data processing system Pending CN114296933A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111649189.3A CN114296933A (en) 2021-12-30 2021-12-30 Implementation method of lightweight container under terminal edge cloud architecture and data processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111649189.3A CN114296933A (en) 2021-12-30 2021-12-30 Implementation method of lightweight container under terminal edge cloud architecture and data processing system

Publications (1)

Publication Number Publication Date
CN114296933A true CN114296933A (en) 2022-04-08

Family

ID=80973942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111649189.3A Pending CN114296933A (en) 2021-12-30 2021-12-30 Implementation method of lightweight container under terminal edge cloud architecture and data processing system

Country Status (1)

Country Link
CN (1) CN114296933A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115587394A (en) * 2022-08-24 2023-01-10 广州红海云计算股份有限公司 Cloud native architecture human resource data processing method and device
CN115587394B (en) * 2022-08-24 2023-08-08 广州红海云计算股份有限公司 Human resource data processing method and device of cloud native architecture
CN115665172A (en) * 2022-10-31 2023-01-31 北京凯思昊鹏软件工程技术有限公司 Management system and management method of embedded terminal equipment
CN115617006A (en) * 2022-12-16 2023-01-17 广州翼辉信息技术有限公司 Industrial robot controller design method based on distributed safety container architecture

Similar Documents

Publication Publication Date Title
US10614117B2 (en) Sharing container images between mulitple hosts through container orchestration
US11056107B2 (en) Conversational framework
CN114296933A (en) Implementation method of lightweight container under terminal edge cloud architecture and data processing system
US20190108067A1 (en) Decomposing monolithic application into microservices
US9851933B2 (en) Capability-based abstraction of software-defined infrastructure
CN107733977A (en) A kind of cluster management method and device based on Docker
CN108196915A (en) Code process method, equipment and storage medium based on application container engine
US11204840B2 (en) Efficient container based application recovery
CN112036577B (en) Method and device for applying machine learning based on data form and electronic equipment
US11755926B2 (en) Prioritization and prediction of jobs using cognitive rules engine
WO2014128597A1 (en) Method and system for providing high availability for state-aware applications
CN110740194A (en) Micro-service combination method based on cloud edge fusion and application
CN116414518A (en) Data locality of big data on Kubernetes
Czarnul A model, design, and implementation of an efficient multithreaded workflow execution engine with data streaming, caching, and storage constraints
US11573770B2 (en) Container file creation based on classified non-functional requirements
WO2023066053A1 (en) Service request processing method, network device and computer-readable storage medium
CN116755799A (en) Service arrangement system and method
CN112286622A (en) Virtual machine migration processing and strategy generating method, device, equipment and storage medium
CN115437647A (en) Multi-frame-adaptive micro-service deployment method, device, terminal and storage medium
CN114514730B (en) Method and system for filtering group messages
CN117859309A (en) Automatically selecting a node on which to perform a task
CN113010428B (en) Method, device, medium and electronic equipment for testing server cluster
EP3958606A1 (en) Methods and devices for pushing and requesting model, storage medium and electronic device
KR102642396B1 (en) Batch scheduling device for deep learning inference model using limited gpu resources
US20240004698A1 (en) Distributed process engine in a distributed computing environment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination