CN117931097A - Information providing method and device applied to servers of edge computing cluster - Google Patents


Info

Publication number
CN117931097A
Authority
CN
China
Prior art keywords
target
directory
virtual machine
container
file corresponding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410330236.5A
Other languages
Chinese (zh)
Inventor
赵吉壮 (Zhao Jizhuang)
王剑 (Wang Jian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Volcano Engine Technology Co Ltd
Original Assignee
Beijing Volcano Engine Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Volcano Engine Technology Co Ltd filed Critical Beijing Volcano Engine Technology Co Ltd
Priority to CN202410330236.5A priority Critical patent/CN117931097A/en
Publication of CN117931097A publication Critical patent/CN117931097A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure relates to the technical field of the Internet, and discloses an information providing method and device applied to servers of an edge computing cluster. The method includes the following steps: creating a file corresponding to a target directory, where the target directory is on a host and the format of the file corresponding to the target directory is related to a copy-on-write data writing mode; hot-plugging the file corresponding to the target directory into a target virtual machine through a protocol based on block granularity, where the protocol based on block granularity is a protocol for block devices; and formatting the file corresponding to the target directory into a target file system directory in the target virtual machine, and mounting the target file system directory to a mount point in a target container in the target virtual machine, where the target container performs data reading and data writing for the target file system directory through the protocol based on block granularity.

Description

Information providing method and device applied to servers of edge computing cluster
Technical Field
The disclosure relates to the technical field of internet, and in particular relates to an information providing method, an information providing device, computer equipment and a storage medium of a server applied to an edge computing cluster.
Background
Containers running in virtual machines in an edge computing cluster, for example containers of the Kata Containers project in an edge Kubernetes (k8s) cluster, need to perform reads and writes related to directories on hosts in the edge computing cluster when running. The virtual machines and the hosts in the edge computing cluster may each be deployed on servers of the edge computing cluster. At present, a directory on a host is generally passed through to a container in a virtual machine, and the container performs the reads and writes related to that host directory, so the central processing unit in the container exhibits high input/output wait (CPU iowait) and the read/write efficiency related to the directory is low, which adversely affects the overall operating efficiency of the edge computing cluster. How to improve the read/write efficiency related to the directory has become a problem to be solved.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide an information providing method, apparatus, computer device and storage medium applied to a server of an edge computing cluster, so as to solve the problem of how to improve directory-related read/write efficiency.
In a first aspect, an embodiment of the present disclosure provides an information providing method applied to a server of an edge computing cluster, including:
creating a file corresponding to a target directory, where the target directory is on a host, and the format of the file corresponding to the target directory is related to a copy-on-write data writing mode;
hot-plugging the file corresponding to the target directory into a target virtual machine through a protocol based on block granularity, where the protocol based on block granularity is a protocol for block devices; and
formatting the file corresponding to the target directory into a target file system directory in the target virtual machine, and mounting the target file system directory to a mount point in a target container in the target virtual machine, where the target container performs data reading and data writing for the target file system directory through the protocol based on block granularity.
In a second aspect, an embodiment of the present disclosure provides an information providing apparatus applied to a server of an edge computing cluster, including:
a creating unit, configured to create a file corresponding to a target directory, where the target directory is on a host, and the format of the file corresponding to the target directory is related to a copy-on-write data writing mode;
a hot-plug unit, configured to hot-plug the file corresponding to the target directory into a target virtual machine through a protocol based on block granularity, where the protocol based on block granularity is a protocol for block devices; and
a mounting unit, configured to format the file corresponding to the target directory into a target file system directory in the target virtual machine and mount the target file system directory to a mount point in a target container in the target virtual machine, where the target container performs data reading and data writing for the target file system directory through the protocol based on block granularity.
In a third aspect, embodiments of the present disclosure provide a computer device including a memory and a processor that are communicatively connected, the memory storing computer instructions, and the processor executing the computer instructions to perform the method of the first aspect or any implementation corresponding to the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of the first aspect or any of its corresponding embodiments.
The information providing method applied to the servers of an edge computing cluster provided by the embodiment of the disclosure creates a file corresponding to a target directory; hot-plugs the file corresponding to the target directory into a target virtual machine through a protocol based on block granularity; and formats the file corresponding to the target directory into a target file system directory in the target virtual machine and mounts the target file system directory to a mount point in a target container in the target virtual machine. The target container can read and write data for the target file system directory through the protocol based on block granularity, that is, at block granularity, so the read/write efficiency related to the target directory is improved. Meanwhile, the file corresponding to the target directory is related to a copy-on-write data writing mode, so the target file system directory obtained by formatting that file is also related to copy-on-write, and storage space is allocated only when data is written into the target file system directory, which saves storage space.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the prior art, the drawings required in the detailed description or the prior art are briefly described below. It is apparent that the drawings in the following description are some embodiments of the present disclosure, and that other drawings may be obtained from these drawings without inventive effort by a person of ordinary skill in the art.
FIG. 1 is a schematic diagram of an information providing method applied to servers of an edge computing cluster according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of an information providing method applied to servers of an edge computing cluster according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another information providing method applied to servers of an edge computing cluster according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of another information providing method applied to servers of an edge computing cluster according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of an information providing apparatus applied to servers of an edge computing cluster according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of the hardware structure of a computer device according to an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are some, but not all, embodiments of the present disclosure. Based on the embodiments in this disclosure, all other embodiments that a person skilled in the art would obtain without inventive effort fall within the scope of protection of this disclosure.
Referring to fig. 1, a schematic diagram of an information providing method applied to a server of an edge computing cluster according to an embodiment of the present disclosure is shown.
The information providing method applied to the servers of an edge computing cluster provided by the embodiment of the disclosure can be applied to a Kubernetes (k8s) cluster. A unit for performing the method, for example the information providing apparatus applied to servers of an edge computing cluster provided by the embodiment of the present disclosure, may be deployed on each node of the k8s cluster. One kubelet runs on each node in the k8s cluster. The kubelet is responsible for monitoring the containers on the node and managing the resources on the node.
FIG. 1 shows container 1 and container 2 in a target virtual machine. Both container 1 and container 2 are target containers. FIG. 1 shows the pod to which containers 1 and 2 belong. The pod is the smallest resource management component in Kubernetes and is also the smallest resource object for running a containerized application. Containers 1 and 2 are Kata Containers containers. Kata Containers is a container project hosted by the OpenStack Foundation but independent of the OpenStack project. Kata Containers is a runtime tool that can create containers in the form of ultra-lightweight virtual machines using container images. FIG. 1 shows containerd-shim-kata-v2 as a daemon. The format of the file corresponding to the target directory is qcow2 (QEMU copy-on-write 2). A qcow2 file is a block device file provided for QEMU. The qcow2 file corresponding to the target directory may be created by containerd-shim-kata-v2. The protocol based on block granularity is the virtio-blk protocol. The virtio-blk protocol is a protocol for block devices and is used to transfer data between the virtual machine and the physical machine. The virtio-blk protocol supports operations such as reading disk data, writing disk data, querying disk information, and disk management. QEMU is started by containerd-shim-kata-v2, and after QEMU has started, the qcow2 file corresponding to the target directory is hot-plugged into the target virtual machine by containerd-shim-kata-v2 through the virtio-blk protocol. After being hot-plugged, the qcow2 file corresponding to the target directory appears as a block device in the target virtual machine.
containerd-shim-kata-v2 may send an instruction to the container agent in the target virtual machine, kata-agent, indicating the mount points in container 1 and container 2 to which the target file system directory is to be mounted. kata-agent formats the qcow2 file corresponding to the target directory into the target file system directory in the target virtual machine, and kata-agent mounts the target file system directory to the mount point in container 1 and the mount point in container 2. Thus, both containers 1 and 2 can perform data reading and data writing to the target file system directory at block granularity. Data reading and data writing by container 1 and container 2 to the target file system directory through the virtio-blk protocol are equivalent to data reading and data writing to the target directory. Therefore, the target container can read and write data for the target file system directory at block granularity, and the read/write efficiency related to the directory is improved. Meanwhile, the qcow2 file corresponding to the target directory is related to the copy-on-write data writing mode, so the target file system directory obtained by formatting it is also related to copy-on-write, and storage space is allocated only when data is written into the target file system directory, which saves storage space.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for providing information of a server applied to an edge computing cluster, which may be performed by a computer device such as a server, according to an embodiment of the present disclosure.
In step S201, a file corresponding to a target directory is created, where the target directory is on a host, and a format of the file corresponding to the target directory is related to a copy-on-write data writing manner. That is, the format of the file corresponding to the target directory is a format supporting copy-on-write.
The target directory may be any directory that needs to be provided to a container in the target virtual machine, and the target virtual machine may be any virtual machine that is obtained by virtualizing using resources of the host machine. As an example, the format of the file corresponding to the target directory is qcow2 format, and the qcow2 file corresponding to the target directory may be created.
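As a sketch of this creation step, the helper below builds the standard `qemu-img create -f qcow2 FILE SIZE` invocation. The helper name and the file path are illustrative assumptions, not taken from the disclosure:

```python
def qemu_img_create_cmd(qcow2_path, size):
    # Build the qemu-img command that creates a qcow2 (copy-on-write)
    # file; disk space inside the file is allocated only as data is written.
    # qcow2_path is a hypothetical location used here for illustration.
    return ["qemu-img", "create", "-f", "qcow2", qcow2_path, size]

cmd = qemu_img_create_cmd("/var/lib/kata/emptydir-vol.qcow2", "2G")
```

The returned list can be passed to a process launcher such as `subprocess.run`; the disclosure itself leaves the creation mechanism to the daemon.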
As one example, the target directory is a temporary storage directory shared by multiple containers in the target virtual machine. The temporary storage directory shared by multiple containers may be referred to as emptydir.
In step S202, the file corresponding to the target directory is hot-plugged into the target virtual machine through the protocol based on the block granularity, where the protocol based on the block granularity is the protocol for the block device.
The block device may be a virtual disk block device. The block granularity based protocol supports data reading and data writing to block devices at block granularity.
As an example, the protocol based on block granularity is the virtio-blk protocol. The file corresponding to the target directory is hot-plugged (hotplug) into the target virtual machine through the virtio-blk protocol.
After the file corresponding to the target directory is hot-plugged into the target virtual machine, the file corresponding to the target directory appears as a block device in the target virtual machine.
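For illustration, hot-plugging a qcow2 file as a virtio-blk device into a running QEMU guest can be expressed as two QMP commands, `blockdev-add` followed by `device_add`. The node and device names below are illustrative assumptions:

```python
def qmp_hotplug_messages(qcow2_path, node_name, dev_id):
    # blockdev-add registers the qcow2 file as a named block node;
    # device_add then plugs a virtio-blk PCI device backed by that node
    # into the running guest, where it appears as a new block device.
    return [
        {"execute": "blockdev-add",
         "arguments": {"driver": "qcow2", "node-name": node_name,
                       "file": {"driver": "file", "filename": qcow2_path}}},
        {"execute": "device_add",
         "arguments": {"driver": "virtio-blk-pci", "id": dev_id,
                       "drive": node_name}},
    ]

msgs = qmp_hotplug_messages("/var/lib/kata/emptydir-vol.qcow2",
                            "emptydir0", "vblk0")
```

Each dictionary would be serialized as JSON and sent over the VM's QMP socket; this is a sketch of the mechanism, not the daemon's actual code path.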
In step S203, the file corresponding to the target directory is formatted into the target file system directory in the target virtual machine, and the target file system directory is mounted to the mounting point in the target container in the target virtual machine.
Wherein the target container performs data reading and data writing to the target file system directory through a protocol based on block granularity. The target file system directory obtained by formatting the file corresponding to the target directory is also related to the copy-on-write data writing manner, that is, the target file system directory supports copy-on-write.
The target virtual machine may be any virtual machine obtained by virtualization using resources of the host. The target container may be any container in the target virtual machine that needs to read and write with respect to the target directory. As one example, a bind mount (Bind Mount) may be employed to mount the target file system directory in the target virtual machine to a mount point in a target container in the target virtual machine. The target container performing data reading and data writing to the target file system directory via the protocol based on block granularity is equivalent to the target container performing data reading and data writing to the target directory. Because the target container reads and writes the target file system directory through the protocol based on block granularity, it can do so at block granularity, which improves the read/write efficiency related to the directory.
As an example, the protocol based on block granularity is the virtio-blk protocol, and the file corresponding to the target directory is the qcow2 file corresponding to the target directory. The qcow2 file corresponding to the target directory is formatted into the target file system directory in the target virtual machine. The target container performs data reading and data writing to the target file system directory via the virtio-blk protocol.
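The in-guest steps can be sketched as the command sequence a guest agent might run once the hot-plugged device appears: make a filesystem on the block device, then mount it at a staging directory. The device path and staging directory are illustrative assumptions:

```python
def guest_format_and_mount_cmds(block_dev, staging_dir, fs="ext4"):
    # Format the hot-plugged block device with the chosen filesystem,
    # create a staging directory inside the VM, and mount the device there.
    return [
        ["mkfs." + fs, block_dev],
        ["mkdir", "-p", staging_dir],
        ["mount", block_dev, staging_dir],
    ]

cmds = guest_format_and_mount_cmds("/dev/vda", "/run/emptydir-stage")
```

After the mount, reads and writes under the staging directory travel to the host-side qcow2 file at block granularity via virtio-blk.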
Referring to fig. 3, fig. 3 is a flowchart illustrating another information providing method applied to a server of an edge computing cluster according to an embodiment of the present disclosure.
In step S301, when the directory information of the target directory includes a configured volume size, a file corresponding to the target directory and related to the configured volume size is created; when the directory information of the target directory does not include a configured volume size, a file corresponding to the target directory and related to a default volume size is created.
Wherein when the directory information of the target directory includes a configured volume size, the configured volume size may be configured by a user, such as an operation and maintenance engineer.
Relating the file to the configured volume size may indicate that the file corresponding to the target directory can store at most data of the configured volume size. This is equivalent to limiting the amount of data stored in the file corresponding to the target directory, and equivalently limits the amount of data stored in the target file system directory obtained by formatting that file.
In the embodiment of the present disclosure, the file corresponding to the target directory may be related to the configured volume size. This accounts for the case in which the pod to which a container in a virtual machine belongs is evicted by the kubelet because the amount of data used by the pod, or the amount of data stored in the directory targeted by the container, exceeds the defined amount. When the directory information of the target directory includes a configured volume size, creating a file corresponding to the target directory and related to the configured volume size limits the data stored in the target file system directory, obtained by formatting that file, to at most the configured volume size. This avoids the situation in which the pod to which the container belongs in the target virtual machine is evicted by the kubelet because the data used by the pod, or the data stored in the directory targeted by the container, exceeds the defined amount. The configured volume size is less than or equal to the defined amount of data.
When the target directory is a temporary storage directory shared by a plurality of containers in the target virtual machine, if the directory information of that temporary storage directory includes a configured volume size, a file corresponding to the shared temporary storage directory and related to the configured volume size is created. This limits the file, and thereby the shared temporary storage directory in the target virtual machine, to storing at most data of the configured volume size, and likewise limits the amount of data stored in the target file system directory obtained by formatting that file. If the directory information of the shared temporary storage directory does not include a configured volume size, a file corresponding to the shared temporary storage directory and related to a default volume size is created, which limits the file, and thereby the shared temporary storage directory in the target virtual machine, to storing at most data of the default volume size. As one example, the default volume size is 2G. The default volume size is less than or equal to the defined amount of data.
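The size-selection rule above reduces to a simple fallback, sketched below; the dictionary key name is an illustrative assumption, while the 2G default is the value cited in the text:

```python
DEFAULT_VOLUME_SIZE = "2G"  # default volume size cited in the text

def effective_volume_size(directory_info):
    # Prefer the configured volume size when the directory information
    # carries one; otherwise fall back to the default volume size.
    configured = directory_info.get("configured_volume_size")
    return configured if configured else DEFAULT_VOLUME_SIZE
```

The result would then be passed as the size argument when creating the qcow2 file for the shared temporary storage directory.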
When the directory information of the target directory includes the configured volume size, the container configuration information is parsed by a daemon applied to the container in the target virtual machine, for example containerd-shim-kata-v2, to obtain the directory information of the target directory. The configured volume size of the target directory is obtained from the directory information of the target directory. The container configuration information is a config.json file. The config.json file is a container profile defined by the OCI (Open Container Initiative) specification. The config.json file defines all configuration information that the container runtime needs to know and use.
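As an illustration of parsing config.json, the helper below looks up the mount entry for a given destination; `mounts`, `destination` and `options` are fields of the OCI runtime specification, while the sample document and helper name are assumptions for this sketch:

```python
import json

def mount_options(config_json_text, destination):
    # Return the options list of the mount whose destination matches,
    # or None if config.json declares no such mount.
    cfg = json.loads(config_json_text)
    for mount in cfg.get("mounts", []):
        if mount.get("destination") == destination:
            return mount.get("options", [])
    return None

sample = ('{"mounts": [{"destination": "/cache", '
          '"type": "bind", "options": ["rbind", "rw"]}]}')
```

A daemon could apply the same lookup pattern to recover per-directory settings such as a configured volume size, however that information is encoded.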
In step S302, the file corresponding to the target directory is hot-plugged into the target virtual machine through the protocol based on the block granularity, where the protocol based on the block granularity is the protocol for the block device.
The block device may specifically be a virtual disk block device. The protocol based on block granularity supports data reading and data writing to block devices at block granularity.
As one example, the protocol based on block granularity is the virtio-blk protocol. The file corresponding to the target directory is hot-plugged (hotplug) into the target virtual machine through the virtio-blk protocol.
After the file corresponding to the target directory is hot-plugged into the target virtual machine, the file corresponding to the target directory appears as a block device in the target virtual machine.
Step S303, the file corresponding to the target directory is formatted into the target file system directory in the target virtual machine, and the target file system directory is mounted to the mounting point in the target container in the target virtual machine.
Wherein the target container performs data reading and data writing to the target file system directory through a protocol based on block granularity.
As one example, a file corresponding to a target directory is formatted in a target virtual machine as a target file system directory in ext4 format.
In step S303, formatting the file corresponding to the target directory into the target file system directory in the target virtual machine may include: formatting, by a container agent in the target virtual machine, the file corresponding to the target directory into the target file system directory. The container agent in the target virtual machine may be kata-agent.
In step S303, mounting the target file system directory to the mount point in the target container in the target virtual machine includes: mounting the target file system directory to a directory corresponding to the identifier of the target directory; and mounting the directory corresponding to the identifier of the target directory to the mount point in the target container. As one example, the directory corresponding to the identifier of the target directory is a directory with the path /run/kata-containers/<sandbox id>/rootfs/local/<volume name>, where <sandbox id> is the id of the pause container in the Pod to which the target container belongs and <volume name> is the identifier of the target directory in the Pod declaration. The target file system directory is mounted to the directory with the path /run/kata-containers/<sandbox id>/rootfs/local/<volume name>. After the target file system directory has been mounted to the directory corresponding to the identifier of the target directory, that directory may be mounted to the mount point in the target container using a bind mount.
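The two-stage mount above can be sketched as building the staging path quoted in the text and then issuing a bind mount. The sample sandbox id and volume name are illustrative assumptions:

```python
def staging_dir(sandbox_id, volume_name):
    # Path pattern quoted in the text:
    # /run/kata-containers/<sandbox id>/rootfs/local/<volume name>
    return "/run/kata-containers/%s/rootfs/local/%s" % (sandbox_id, volume_name)

def bind_mount_cmd(src, dst):
    # A bind mount exposes the staged filesystem directory at the
    # container's mount point without copying any data.
    return ["mount", "--bind", src, dst]

path = staging_dir("sbx123", "cache-vol")
```

The bind mount is what makes the same ext4 directory, backed by the hot-plugged qcow2 file, visible inside each target container.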
Referring to fig. 4, a schematic diagram of another information providing method applied to a server of an edge computing cluster according to an embodiment of the present disclosure is shown.
The target directory is a temporary storage directory shared by a plurality of containers in the target virtual machine. The temporary storage directory shared by multiple containers in the target virtual machine may be referred to as emptydir.
FIG. 4 shows container 1 and container 2 in a target virtual machine. Both container 1 and container 2 are target containers. Containers 1 and 2 are Kata Containers containers. FIG. 4 shows containerd-shim-kata-v2 as a daemon. The file corresponding to the target directory has the format qcow2 (QEMU copy-on-write 2). The qcow2 file corresponding to emptydir and related to the configured volume size may be created by containerd-shim-kata-v2. Relating the qcow2 file corresponding to emptydir to the configured volume size limits the file, and thereby emptydir and the target file system directory obtained by formatting that file, to storing at most data of the configured volume size. Limiting the data that the qcow2 file corresponding to emptydir can store avoids the situation in which the pod to which the container belongs in the target virtual machine is evicted by the kubelet because the data used by the pod, or the data stored in the temporary storage directory shared by the plurality of containers, exceeds the defined amount. The protocol based on block granularity is the virtio-blk protocol. QEMU is started by containerd-shim-kata-v2, and after QEMU has started, the qcow2 file corresponding to the target directory is hot-plugged into the target virtual machine by containerd-shim-kata-v2 through the virtio-blk protocol. After being hot-plugged, the qcow2 file corresponding to the target directory appears as a block device in the target virtual machine.
containerd-shim-kata-v2 may send an instruction to the container agent kata-agent in the target virtual machine indicating that the target file system directory is to be mounted to the mount point of container 1 and the mount point of container 2 in the target virtual machine. kata-agent formats the qcow2 file corresponding to emptydir in the target virtual machine to obtain the target file system directory. kata-agent mounts the target file system directory to the directory corresponding to the identifier of the target directory. After that, kata-agent mounts the directory corresponding to the identifier of the target directory to the mount point of container 1 and the mount point of container 2. Thus, both containers 1 and 2 can perform data reading and data writing to the target file system directory at block granularity. Data reading and data writing by container 1 and container 2 to the target file system directory through the virtio-blk protocol are equivalent to data reading and data writing to emptydir, and the read/write efficiency related to emptydir is improved.
The embodiment of the present disclosure further provides an information providing device applied to a server of an edge computing cluster, where the device is used to implement the foregoing embodiment and the preferred implementation, and the description is omitted herein. As used below, the term "unit" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Referring to fig. 5, which is a block diagram illustrating a structure of an information providing apparatus of a server applied to an edge computing cluster according to an embodiment of the present disclosure, the information providing apparatus of a server applied to an edge computing cluster according to an embodiment of the present disclosure includes:
a creating unit 501, configured to create a file corresponding to a target directory, where the target directory is on a host, and a format of the file corresponding to the target directory is related to a copy-on-write data writing manner;
a hot-plug unit 502, configured to hot-plug, through a protocol based on block granularity, a file corresponding to the target directory into a target virtual machine, where the protocol based on block granularity is a protocol for a block device;
and a mounting unit 503, configured to format, in the target virtual machine, a file corresponding to the target directory into a target file system directory, and mount the target file system directory to a mounting point in a target container in the target virtual machine, where the target container performs data reading and data writing for the target file system directory through the protocol based on the block granularity.
In an alternative embodiment, the creating unit 501 is further configured to, when the directory information of the target directory includes a configured volume size, create a file corresponding to the target directory that is related to the configured volume size.
In an alternative embodiment, the information providing apparatus applied to the servers of the edge computing cluster further includes:
and the analysis unit is used for analyzing the container configuration information by utilizing the daemon applied to the container in the target virtual machine to obtain the directory information of the target directory.
In an alternative embodiment, the creating unit 501 is further configured to create a file corresponding to the target directory, which is related to a default volume size, when the directory information of the target directory does not include the configured volume size.
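The size-selection logic in the two embodiments above can be sketched as follows; note that the key name "sizeLimit" and the 1 GiB default are assumptions for illustration, not values from the disclosure:

```python
# Hedged sketch of the volume-size selection: use the configured volume size
# from the directory information when present, otherwise fall back to a
# default. The field name and the default value are hypothetical.

DEFAULT_VOLUME_BYTES = 1 << 30  # hypothetical default volume size (1 GiB)

def resolve_volume_size(directory_info: dict) -> int:
    """Return the configured volume size from the directory information when it
    is present; otherwise fall back to the default volume size."""
    configured = directory_info.get("sizeLimit")
    return configured if configured is not None else DEFAULT_VOLUME_BYTES

size_configured = resolve_volume_size({"sizeLimit": 64 << 20})  # 64 MiB configured
size_default = resolve_volume_size({})                          # falls back to the default
```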
In an alternative embodiment, the target directory is a temporary storage directory shared by a plurality of containers in the target virtual machine.
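In Kubernetes terms, such a shared temporary storage directory corresponds to an emptyDir volume mounted by several containers of one pod. The following minimal pod specification is a hypothetical illustration: the pod name, image names, mount paths, and the `kata` runtime class are placeholders, and `sizeLimit` stands in for the configured volume size.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo            # hypothetical pod name
spec:
  runtimeClassName: kata         # run the pod's containers inside a virtual machine (assumed class name)
  containers:
    - name: container-1
      image: example/app-1       # placeholder image
      volumeMounts:
        - name: shared-tmp
          mountPath: /mnt/shared # mount point of container 1
    - name: container-2
      image: example/app-2       # placeholder image
      volumeMounts:
        - name: shared-tmp
          mountPath: /mnt/shared # mount point of container 2
  volumes:
    - name: shared-tmp
      emptyDir:
        sizeLimit: 1Gi           # configured volume size for the target directory
```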
In an alternative embodiment, the mounting unit 503 is further configured to format, in the target virtual machine, a file corresponding to the target directory into the target file system directory by using a container agent in the target virtual machine.
In an alternative embodiment, the mounting unit 503 is further configured to mount the target file system directory to a directory corresponding to the identifier of the target directory; and mounting the directory corresponding to the identification of the target directory to a mounting point in the target container.
The apparatus in this embodiment is presented in the form of functional units, where a unit refers to an application-specific integrated circuit (ASIC), a processor and memory executing one or more software or firmware programs, and/or another device that can provide the above functions.
Further functional descriptions of the above units are the same as those of the above corresponding embodiments, and are not repeated here.
Referring to fig. 6, fig. 6 is a schematic hardware structure of a computer device provided by an embodiment of the disclosure, where the computer device has the above apparatus, and the computer device includes: one or more processors 10, a memory 20, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the computer device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In some alternative embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple computer devices may be connected, with each device providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system).
The processor 10 may be a central processor, a network processor, or a combination thereof. The processor 10 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
Wherein the memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform the methods shown in the above embodiments.
The memory 20 may include a storage program area that may store an operating system, at least one application program required for functions, and a storage data area; the storage data area may store data created according to the use of the computer device, etc. In addition, the memory 20 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, memory 20 may optionally include memory located remotely from processor 10, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Memory 20 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as flash memory, hard disk, or solid state disk; the memory 20 may also comprise a combination of the above types of memories.
The computer device further comprises input means 30 and output means 40. The processor 10, memory 20, input device 30, and output device 40 may be connected by a bus or other means.
The input device 30 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the computer device, for example a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointer stick, one or more mouse buttons, a trackball, a joystick, and the like. The output device 40 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. Such display devices include, but are not limited to, liquid crystal displays, light-emitting diode displays, and plasma displays. In some alternative implementations, the display device may be a touch screen.
The embodiments of the present disclosure further provide a computer-readable storage medium. The methods according to the embodiments of the present disclosure described above may be implemented in hardware or firmware, or realized as computer code that is stored in a recordable storage medium, or that is downloaded over a network after originally being stored in a remote storage medium or a non-transitory machine-readable storage medium and is then stored in a local storage medium, so that the methods described herein may be processed by software stored on a storage medium using a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, a flash memory, a hard disk, a solid state disk, or the like; further, the storage medium may also comprise a combination of the above types of memories. It will be appreciated that a computer, processor, microprocessor controller, or programmable hardware includes a storage element that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the methods illustrated in the above embodiments.
Although embodiments of the present disclosure have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the disclosure, and such modifications and variations are within the scope defined by the appended claims.

Claims (10)

1. An information providing method applied to servers of an edge computing cluster, the method comprising:
creating a file corresponding to a target directory, wherein the target directory is arranged on a host, and the format of the file corresponding to the target directory is related to a copy-on-write data writing mode;
hot-plugging a file corresponding to the target directory into a target virtual machine through a protocol based on block granularity, wherein the protocol based on block granularity is a protocol for a block device;
and formatting a file corresponding to the target directory into a target file system directory in the target virtual machine, and mounting the target file system directory to a mounting point in a target container in the target virtual machine, wherein the target container performs data reading and data writing for the target file system directory through the block granularity-based protocol.
2. The method of claim 1, wherein creating a file corresponding to the target directory comprises:
when the directory information of the target directory includes the configured volume size, a file corresponding to the target directory is created in association with the configured volume size.
3. The method according to claim 2, wherein the method further comprises:
analyzing the container configuration information by using a daemon applied to the container in the target virtual machine to obtain the directory information of the target directory.
4. The method of claim 1, wherein creating a file corresponding to the target directory comprises:
when the directory information of the target directory does not comprise the configured volume size, creating a file corresponding to the target directory that is related to a default volume size.
5. The method of claim 1, wherein formatting, in the target virtual machine, a file corresponding to the target directory as a target file system directory comprises:
formatting a file corresponding to the target directory into the target file system directory in the target virtual machine by using a container agent in the target virtual machine.
6. The method of claim 1, wherein mounting the target file system directory to a mount point in a target container in the target virtual machine comprises:
mounting the target file system directory to a directory corresponding to an identification of the target directory;
and mounting the directory corresponding to the identification of the target directory to a mounting point in the target container.
7. The method of any of claims 1-6, wherein the target directory is a temporary storage directory shared by a plurality of containers in the target virtual machine.
8. An information providing apparatus applied to a server of an edge computing cluster, the apparatus comprising:
the creating unit is used for creating a file corresponding to a target directory, wherein the target directory is arranged on a host, and the format of the file corresponding to the target directory is related to a copy-on-write data writing mode;
the hot-plug unit, configured to hot-plug the file corresponding to the target directory into the target virtual machine through a protocol based on block granularity, wherein the protocol based on block granularity is a protocol for a block device;
the mounting unit is used for formatting a file corresponding to the target directory into a target file system directory in the target virtual machine and mounting the target file system directory to a mounting point in a target container in the target virtual machine, wherein the target container performs data reading and data writing aiming at the target file system directory through the protocol based on the block granularity.
9. A computer device, comprising:
A memory and a processor in communication with each other, the memory having stored therein computer instructions which, upon execution, cause the processor to perform the method of any of claims 1 to 7.
10. A computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 7.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410330236.5A CN117931097A (en) 2024-03-21 2024-03-21 Information providing method and device applied to servers of edge computing cluster


Publications (1)

Publication Number Publication Date
CN117931097A true CN117931097A (en) 2024-04-26

Family

ID=90751111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410330236.5A Pending CN117931097A (en) 2024-03-21 2024-03-21 Information providing method and device applied to servers of edge computing cluster

Country Status (1)

Country Link
CN (1) CN117931097A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100070544A1 (en) * 2008-09-12 2010-03-18 Microsoft Corporation Virtual block-level storage over a file system
CN107193504A (en) * 2017-06-02 2017-09-22 郑州云海信息技术有限公司 A kind of method and system of automation distribution and establishment application memory based on Kubernetes
CN116483514A (en) * 2023-04-23 2023-07-25 北京有竹居网络技术有限公司 Container starting method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
US9043776B2 (en) Transferring files to a baseboard management controller (‘BMC’) in a computing system
CN110780890B (en) System upgrading method, device, electronic equipment and medium
EP2344953B1 (en) Provisioning virtual resources using name resolution
US8560686B2 (en) Communicating with an in-band management application through an out-of-band communications channel
US8863109B2 (en) Updating secure pre-boot firmware in a computing system in real-time
US9792240B2 (en) Method for dynamic configuration of a PCIE slot device for single or multi root ability
US9104818B2 (en) Accelerator management device, accelerator management method, and input-output device
US9342336B2 (en) Memory page de-duplication in a computer system that includes a plurality of virtual machines
US10768827B2 (en) Performance throttling of virtual drives
US20130086571A1 (en) Dynamically Updating Firmware In A Computing System
US9122793B2 (en) Distributed debugging of an application in a distributed computing environment
US10606677B2 (en) Method of retrieving debugging data in UEFI and computer system thereof
US10394711B2 (en) Managing lowest point of coherency (LPC) memory using a service layer adapter
CN103942088B (en) A kind of method for obtaining virtual machine USB storage device service condition
US11144326B2 (en) System and method of initiating multiple adaptors in parallel
US20150082014A1 (en) Virtual Storage Devices Formed by Selected Partitions of a Physical Storage Device
CN117931097A (en) Information providing method and device applied to servers of edge computing cluster
US9529759B1 (en) Multipath I/O in a computer system
CN114385537A (en) Page slot number dynamic allocation method, device, equipment and medium
US8645600B2 (en) Configuring expansion component interconnect (‘ECI’) physical functions on an ECI device in a computing system
CN117931096A (en) Information providing method and device applied to servers of edge computing cluster
CN106886373B (en) Physical machine and magnetic disk operation method and device thereof
CN116661951B (en) Mirror image file processing method and device, electronic equipment and storage medium
US9270635B2 (en) Loading an operating system of a diskless compute node using a single virtual protocol interconnect (‘VPI’) adapter
CN117234437B (en) Storage device, and method and device for controlling restarting of magnetic disk

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination