CN113535429A - Data processing method, apparatus, device, medium, and program product - Google Patents

Data processing method, apparatus, device, medium, and program product

Info

Publication number
CN113535429A
Authority
CN
China
Prior art keywords
data
node
target
cluster
address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110796724.1A
Other languages
Chinese (zh)
Inventor
张维杰
贾冬冬
姚星星
李伟
孟海秀
王克刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Haier Digital Technology Qingdao Co Ltd
Haier Caos IoT Ecological Technology Co Ltd
Qingdao Haier Industrial Intelligence Research Institute Co Ltd
Original Assignee
Haier Digital Technology Qingdao Co Ltd
Haier Caos IoT Ecological Technology Co Ltd
Qingdao Haier Industrial Intelligence Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haier Digital Technology Qingdao Co Ltd, Haier Caos IoT Ecological Technology Co Ltd, Qingdao Haier Industrial Intelligence Research Institute Co Ltd filed Critical Haier Digital Technology Qingdao Co Ltd
Priority to CN202110796724.1A priority Critical patent/CN113535429A/en
Publication of CN113535429A publication Critical patent/CN113535429A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/547 Remote procedure calls [RPC]; Web services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/955 Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F 16/9566 URL specific, e.g. using aliases, detecting broken or misspelled links
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45587 Isolation or security of virtual machine instances
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45595 Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Bioethics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a data processing method, apparatus, device, medium, and program product. A data request in which at least one first computing node in a computing node cluster applies to call target data is obtained, where the target data is composed of a plurality of data units; the call addresses of the corresponding data units are then located in the computing node cluster and the storage node cluster according to the data request, and all the data units are obtained from those call addresses and combined into the target data. This solves the technical problem of how to distribute the data call requests made to a database: because the target data is gathered from multiple locations rather than being transmitted only by the database in the storage node cluster, the data transmission pressure on the database is reduced when a large-scale cluster transmits data concurrently.

Description

Data processing method, apparatus, device, medium, and program product
Technical Field
The present application relates to the field of computer applications, and in particular, to a data processing method, apparatus, device, medium, and program product.
Background
With the continuous development of big data and deep learning technologies, large amounts of labeled or unlabeled data are used to train models through deep learning, finally yielding relatively accurate cognitive models. A trained deep learning model can reveal the complex and rich information carried in big data and can predict future or unknown events more accurately.
However, training such models consumes a large amount of computing resources. In a distributed system, each computing node carries its own computing task, and these tasks need to call pre-stored data from a database during execution. In a large-scale computing node cluster, when a large number of computing nodes concurrently apply to call data from the database, the database is subjected to extremely high bandwidth and data transmission pressure; in severe cases information transfer becomes blocked, so that model training tasks cannot be completed in time.
Therefore, how to distribute the data call requests made to the database has become an urgent technical problem.
Disclosure of Invention
The application provides a data processing method, a data processing device, data processing equipment, a data processing medium and a program product, and aims to solve the technical problem of how to perform data distribution on a data call request of a database.
In a first aspect, the present application provides a data processing method, including:
the method comprises the steps of obtaining a data request, wherein the data request is used for applying for calling target data for at least one first computing node in a computing node cluster, and the target data is composed of a plurality of data units;
determining a calling address of each data unit according to the data request, wherein the calling address comprises: a first address, which is an address of at least one second computing node in the cluster of computing nodes, and/or a second address, which is a storage address in a database in at least one target storage node in the cluster of storage nodes;
and calling all the data units according to the calling address, and combining all the data units into the target data.
In one possible design, when the method is applied to a management node in a computing node cluster or a storage node cluster, the obtaining the data request includes:
receiving, by a management node in the cluster of computing nodes, the data request sent by at least one of the first computing nodes;
correspondingly, the determining the call address of each data unit according to the data request includes:
sending the data request to at least one target storage node through the management node, so that the target storage node determines the calling address according to the target data;
receiving the call address fed back by the target storage node through the management node;
correspondingly, the calling all the data units according to the calling address and combining all the data units into the target data includes:
and combining all the data units into the target data according to the calling address through the management node, and sending the target data to the first computing node.
In one possible design, when the method is applied to a compute node in a compute node cluster, the obtaining the data request includes:
responding to a trigger instruction of a preset task in the first computing node, and determining the data request, wherein the data request is used for enabling the first computing node to call the target data to execute the preset task;
correspondingly, the determining the call address of each data unit according to the data request includes:
sending a second data request to at least one other computing node in the computing node cluster through the first computing node according to the target data and a preset segmentation mode, wherein the second data request is used for acquiring the data unit from each data node;
receiving response results returned by the other computing nodes, and judging whether all the data units are received according to the response results;
if so, combining all the data units into the target data;
and if not, sending a third data request to at least one target storage unit, wherein the third data request is used for acquiring the rest data units from the target storage unit.
In one possible design, before the sending the data request to at least one of the target storage nodes, the method further includes:
acquiring working state information of each storage node in the storage node cluster;
and screening out at least one target storage node meeting preset requirements from the storage nodes according to the working state information.
In one possible design, when the method is applied to a storage node in a storage node cluster, the obtaining the data request includes:
receiving, by the target storage node, the data request;
correspondingly, determining the calling address of each data unit according to the data request includes:
determining, by the target storage node, each of the data units according to the target data in the data request;
determining, by the target storage node, the first address corresponding to part or all of the data units in each compute node of the compute node cluster;
and if the second computing node does not contain all the data units, determining the second addresses corresponding to the rest of the data units in a database.
In one possible design, the target data includes a Docker image, and the Docker image is used for completing construction of a target virtual environment on a host, where the target virtual environment corresponds to a preset user.
In one possible design, the storage node cluster includes a plurality of storage nodes, and each of the storage nodes includes: the system comprises a Docker Registry component and an interface component, wherein various types of Docker images are stored in a Registry image library in the Docker Registry component.
Optionally, the interface component includes a URL uniform resource locator interface based on an Nginx service platform.
In one possible design, the functions of the interface component include: caching the Docker mirror image and authenticating the identity information of the user.
In a second aspect, the present application provides a data processing apparatus comprising:
the system comprises an acquisition module, a data processing module and a data processing module, wherein the acquisition module is used for acquiring a data request, the data request is used for applying for calling target data for at least one first computing node in a computing node cluster, and the target data consists of a plurality of data units;
a processing module, configured to determine, according to the data request, a call address of each data unit, where the call address includes: a first address, which is an address of at least one second computing node in the cluster of computing nodes, and/or a second address, which is a storage address in a database in at least one target storage node in the cluster of storage nodes;
and the processing module is also used for calling all the data units according to the calling address and combining all the data units into the target data.
In a possible design, when the apparatus is disposed on a management node in a computing node cluster or a storage node cluster, the obtaining module is configured to receive, by the management node in the computing node cluster, the data request sent by at least one first computing node;
the processing module is configured to:
sending the data request to at least one target storage node through the management node, so that the target storage node determines the calling address according to the target data; receiving the call address fed back by the target storage node through the management node;
the processing module is further configured to combine, by the management node, all the data units into the target data according to the call address, and send the target data to the first computing node.
In a possible design, when the apparatus is configured on a computing node in a computing node cluster, the obtaining module is configured to determine, in the first computing node, the data request in response to a trigger instruction of a preset task, where the data request is used for causing the first computing node to call the target data to execute the preset task;
the processing module is configured to send, by the first computing node, a second data request to at least one other computing node in the computing node cluster according to the target data and a preset partition manner, where the second data request is used to obtain the data unit from each data node;
the acquisition module is used for receiving response results returned by the other computing nodes;
the processing module is used for judging whether all the data units are received according to the response result; if so, combining all the data units into the target data; and if not, sending a third data request to at least one target storage unit, wherein the third data request is used for acquiring the rest data units from the target storage unit.
In a possible design, the obtaining module is further configured to obtain working state information of each storage node in the storage node cluster;
the processing module is further configured to screen out at least one target storage node that meets a preset requirement from the storage nodes according to the working state information.
In one possible design, the operating state information includes: workload and availability status.
In one possible design, when the apparatus is configured on a storage node in a storage node cluster, the obtaining module is configured to receive the data request through the target storage node;
the processing module is configured to determine, by the target storage node, each data unit according to the target data in the data request; determining, by the target storage node, the first address corresponding to part or all of the data units in each compute node of the compute node cluster; and if the second computing node does not contain all the data units, determining the second addresses corresponding to the rest of the data units in a database.
In one possible design, the target data includes a Docker image, and the Docker image is used for completing construction of a target virtual environment on a host, where the target virtual environment corresponds to a preset user.
In one possible design, the storage node cluster includes a plurality of storage nodes, and each of the storage nodes includes: the system comprises a Docker Registry component and an interface component, wherein various types of Docker images are stored in a Registry image library in the Docker Registry component.
Optionally, the interface component includes a URL uniform resource locator interface based on an Nginx service platform.
In one possible design, the functions of the interface component include: caching the Docker mirror image and authenticating the identity information of the user.
In a third aspect, the present application provides a cluster system, including: a computing node cluster and a storage node cluster, both based on a preset application container engine; wherein:
the computing node cluster comprises a plurality of computing nodes and at least one management node, the computing nodes are used for executing preset tasks, and the management node is used for processing data interaction between the computing node cluster and the storage node cluster;
the storage node cluster comprises a plurality of storage nodes, and each storage node comprises an image library component and an interface component, wherein various types of image files are stored in an image library in the image library component, and the interface component comprises a uniform resource locator interface based on a preset service platform, the interface component being configured to: cache the image files and authenticate the identity information of the user;
the cluster system is used for realizing the data processing method of any one of claims 1 to 5.
Optionally, the preset application container engine includes: a Docker application container engine, the image file comprising: a Docker mirror, the mirror library component comprising: the Docker Registry component.
In one possible design, the pre-set task includes: performing logic calculation on a deep learning model.
In one possible design, the interface assembly includes: a URL uniform resource locator interface based on the Nginx service platform.
In one possible design, the storage node cluster further includes at least one storage management node.
In a fourth aspect, the present application provides an electronic device comprising:
a processor; and the number of the first and second groups,
a memory for storing executable instructions of the processor;
wherein the processor is configured to execute any one of the possible data processing methods provided by the first aspect via execution of the executable instructions.
In a fifth aspect, the present application provides an air conditioner comprising: a display element, an energy storage element, and any one of the possible electronic devices provided by the fourth aspect.
In a sixth aspect, the present application further provides a storage medium, where a computer program is stored in the storage medium, where the computer program is used to execute any one of the possible data processing methods provided in the first aspect.
In a seventh aspect, the present application further provides a computer program product comprising a computer program, which when executed by a processor, implements any one of the possible data processing methods provided in the first aspect.
The application provides a data processing method, apparatus, device, medium, and program product. A data request in which at least one first computing node in a computing node cluster applies to call target data is obtained, where the target data is composed of a plurality of data units; the call addresses of the corresponding data units are then located in the computing node cluster and the storage node cluster according to the data request, and all the data units are obtained from those call addresses and combined into the target data. This solves the technical problem of how to distribute the data call requests made to a database: because the target data is gathered from multiple locations rather than being transmitted only by the database in the storage node cluster, the data transmission pressure on the database is reduced when a large-scale cluster transmits data concurrently.
Drawings
In order to more clearly illustrate the technical solutions in the present application or the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
Fig. 1 is a schematic structural diagram of a cluster system provided in the present application;
fig. 2 is a schematic flowchart of a first data processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a second data processing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a third data processing method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a fourth data processing method according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a data processing apparatus provided in the present application;
fig. 7 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, including but not limited to combinations of embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any inventive step are within the scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The terms referred to in this application are defined and explained below:
Docker is an open-source application container engine that allows developers to package an application and its dependencies (i.e., the application's running environment) into a portable container and then distribute that container to any popular Linux machine; it can also implement virtualization. Containers use a complete sandbox mechanism and have no interfaces to one another. It should be noted that Docker is not a container itself, but a tool for creating containers.
The container concept was developed so that applications of various types or versions, built on different platforms or relying on different execution environments, can run in the same operating system at the same time without conflict. If the operating system is compared to the sea, the container is a ship at sea and the image is a cargo container on the ship. Containers are the intermediary between applications and the operating system.
The three core concepts of the Docker technology are: Image, Container, and Repository.
A Docker image may be understood as a duplicate of a developed application and its dependencies.
The Docker warehouse is a database used to store Docker images.
A Docker container is used to carry Docker images, so that applications that are isolated from each other and cannot communicate with each other can run normally on the same operating system.
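For readers unfamiliar with these concepts, a minimal sketch using the Docker SDK for Python is shown below; the SDK, image name, and command are illustrative assumptions and not part of the patent:

```python
# Minimal sketch of the image/container/repository relationship using the
# Docker SDK for Python (docker-py). Image name and command are assumptions.
import docker

client = docker.from_env()

# Pull an image from a repository (the repository stores images).
client.images.pull("python", tag="3.11-slim")

# Run a container from that image: the container carries the application and
# its dependencies, isolated from other containers on the same operating system.
container = client.containers.run("python:3.11-slim",
                                  ["python", "-c", "print('hello from a container')"],
                                  detach=True)
container.wait()          # wait for the command to finish
print(container.logs())   # b'hello from a container\n'
container.remove()
```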
In the Docker application container engine, the Docker Registry service is responsible for managing Docker images and functions like a warehouse keeper.
There are three roles in the Docker Registry service: Index, Registry, and Registry Client.
The Index is responsible for maintaining information about user accounts, the verification of images, and the public namespace. It maintains this information using the following components: Web UI, metadata storage, authentication service, and tokenization.
The Registry is the repository of images and charts. It has no local database and does not itself authenticate users; its storage is backed by S3, cloud files, or the local file system, and identity authentication is performed through the Token mechanism of the Index Auth service. Registries can be of different types, for example:
(1) Sponsor Registry: a third-party registry image repository for use by customers and the Docker community.
(2) Mirror Registry: a third-party registry image repository used only by customers.
(3) Vendor Registry: a registry image repository provided by the vendor that publishes the Docker image.
(4) Private Registry: a registry image repository provided by a private entity, equipped with a firewall and an additional security layer.
Registry Client: Docker acts as the registry client, responsible for push and pull tasks as well as client authorization.
In the prior art, when a user wants to acquire and download an image, the workflow of the Docker Registry service is as follows:
First, the user sends a request to the Index to download an image;
The Index then sends a response returning three pieces of related information: the Registry where the image resides, the checksums of all layers included in the image, and the Token used for authorization.
Note: the Token is returned only when X-Docker-Token is present in the request header. Private repositories require basic authentication, which is not mandatory for public repositories.
Next, the user communicates with the Registry using the Token returned in the response; the Registry has full charge of the images and is used to store base images and inherited layers;
Then, the Registry verifies with the Index whether the Token is authorized;
Finally, the Index sends a "true" or "false" flag to the registry image repository, which determines whether the user is allowed to download the required image.
A problem arises when this image acquisition process is applied to a large-scale cluster system: each computing node in the cluster is equivalent to a client, and when many computing nodes simultaneously request images from the registry image repository, the bandwidth requirement and data transmission pressure on that repository rise sharply.
The inventive concept of the present application is presented below:
At present, in a large-scale cluster system, images are generally deployed in the Docker manner, and each compute node in the compute node cluster pulls the corresponding image by requesting the storage node where the registry image repository is located. In a large-scale concurrent scenario, the bandwidth and data transmission pressure on that storage node increase greatly.
Although deploying multiple storage nodes, each holding a complete registry image repository, can relieve the bandwidth and data transmission pressure on a single storage node, cost constraints mean the number of storage nodes cannot grow without limit, so how to distribute the data call requests made to the registry image repository becomes an urgent technical problem.
The present inventors have found that, in a large-scale cluster system, multiple computing nodes may apply for the same image, or for images that share constituent units or data units; for example, if several applications are based on the Java platform environment, the data units of the Java platform environment in those images are identical. A compute node that has already acquired an image, or that holds the same data units, can therefore itself be called upon as a data source. As a result, instead of only a limited number of storage nodes holding complete registry image repositories serving as data sources, the resources of all computing nodes in the cluster can be fully utilized as dynamic data sources. Calls for target data are no longer blocked at the storage nodes of the registry image repository, which solves the data distribution problem for data call requests.
The following describes specific steps of the data processing method provided by the present application in detail with reference to several embodiments.
Fig. 1 is a schematic structural diagram of a cluster system provided in the present application. As shown in fig. 1, the cluster system 100 includes: a cluster of compute nodes 110 and a cluster of storage nodes 120. The compute node cluster 110 and the storage node cluster 120 are based on a Docker application container engine.
Among them, the computing node cluster 110 includes: the system comprises a plurality of computing nodes 111 and at least one management node 112, wherein the computing nodes 111 are used for executing preset tasks, including logic calculation tasks of a deep learning model;
the management node 112 is used for processing data interaction between the computing node cluster 110 and the storage node cluster 120;
the storage node cluster 120 includes: a plurality of storage nodes 121 and/or at least one management node (not shown), each storage node 121 comprising: a Docker Registry component 1211 and an interface component 1212. Various types of Docker images are stored in the Registry image library in Docker Registry component 1211. The interface assembly 1212 includes: a Nginx service platform based URL Uniform resource locator interface, the interface component 1212 configured to: caching the Docker mirror image and authenticating the identity information of the user.
It should be noted that, with regard to caching Docker images, the native Docker Registry image library is based on a file system and has poor concurrency; to reduce the load on back-end storage, a caching policy needs to be added. The present application uses Nginx to cache URL (Uniform Resource Locator) interface data; to keep the cache from growing too large, a cache invalidation time can be configured so that only hot data read within a recent preset period is cached. Configuring this caching policy greatly improves performance when the same image is downloaded simultaneously; an expiry-based sketch of such a cache is given below.
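The following Python sketch illustrates the caching behavior described above as an in-process stand-in for the Nginx URL cache; the TTL value, names, and back-end fetch function are assumptions for illustration:

```python
# Sketch of an expiry-based URL cache: only recently read "hot" entries stay
# cached, so the back-end Registry storage is hit less often. All names and
# the 300-second invalidation time are illustrative assumptions.
import time

CACHE_TTL_SECONDS = 300
_cache: dict[str, tuple[float, bytes]] = {}

def fetch_with_cache(url: str, fetch_from_backend) -> bytes:
    now = time.time()
    hit = _cache.get(url)
    if hit and now - hit[0] < CACHE_TTL_SECONDS:
        return hit[1]                        # served from cache (hot data)
    data = fetch_from_backend(url)           # fall through to back-end storage
    _cache[url] = (now, data)
    # evict expired entries so the cache does not grow without bound
    for key in [k for k, (ts, _) in _cache.items() if now - ts >= CACHE_TTL_SECONDS]:
        _cache.pop(key, None)
    return data
```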
It should be further noted that the data processing method provided in the present application may be applied to the management node of the compute node cluster 110 or the storage node cluster 120 of the cluster system shown in fig. 1, and may also be applied to the compute node 111 or the storage node 121.
The data processing method provided by the present application is generally described below by using the embodiment shown in fig. 2, and then a specific implementation manner when the data processing method is applied to different nodes is described by using the embodiments shown in fig. 3, fig. 4, and fig. 5.
Fig. 2 is a schematic flow chart of a first data processing method according to an embodiment of the present application. As shown in fig. 2, the specific steps of the data processing method include:
s201, acquiring a data request.
In this step, the data request is used to apply for invoking target data for at least one first compute node in the compute node cluster, where the target data is composed of a plurality of data units.
It should be noted that the data units may be divided according to a preset division form, or according to the original file organization of the target data. For example, when a large document is transmitted, it is divided into multiple data packets, and the data size of each packet has a certain requirement, for example 4 MB per packet; these packets can then be managed as data units. A sketch of such fixed-size division is given after this paragraph.
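The sketch below divides a file into 4 MB data units in Python; the unit identifier format is an assumption for illustration:

```python
# Sketch of dividing target data into fixed-size data units (4 MB, the example
# size mentioned above). The unit_id naming scheme is an illustrative assumption.
from pathlib import Path

UNIT_SIZE = 4 * 1024 * 1024   # 4 MB per data unit

def split_into_units(path: str) -> list[tuple[str, bytes]]:
    """Return (unit_id, payload) pairs covering the whole file at `path`."""
    data = Path(path).read_bytes()
    units = []
    for offset in range(0, len(data), UNIT_SIZE):
        unit_id = f"{Path(path).name}:{offset // UNIT_SIZE}"
        units.append((unit_id, data[offset:offset + UNIT_SIZE]))
    return units
```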
In the present embodiment, the target data includes various types of Docker images.
It should be further noted that the data request is a request for invoking the Docker mirror image, which is required by the compute node to complete or execute a predetermined task, such as a logic calculation task of a deep learning algorithm.
Specifically, at least one computing node in the computing node cluster triggers a data request when executing a preset task, and the computing node directly intercepts the data request;
alternatively,
intercepting a data request initiated by a computing node by a management node, such as a super node, in a computing node cluster or a storage node cluster;
or, the storage nodes in the storage node cluster receive the data request initiated by the computing node.
Specific reference may be made to other embodiments below.
S202, determining the calling address of each data unit according to the data request.
In this step, the calling address includes: the first address is an address of at least one second computing node in the cluster of computing nodes, and the second address is a storage address in a database in at least one target storage node in the cluster of storage nodes.
Specifically, the identifier of the target data, such as the number of the Docker image, is extracted from the data request, and then the identifier of each data unit constituting the Docker image is determined according to a preset segmentation mode or an organization form of the Docker image.
Then, each computing node in the computing node cluster is traversed according to the identifier of each data unit to check whether the corresponding data unit is stored on it. If so, that computing node is a second computing node, and the address at which it stores the corresponding data unit, such as its IP address, is the first address.
And if all the data units are found by traversing all the computing nodes, returning all the first addresses to the first computing node initiating the data request.
If all computing nodes in an available state have been traversed and not all data units have been found, the identifiers of the data units that were not found are sent to a database, such as the Registry image library, and the Registry image library sends the storage addresses of those data units, i.e., the second addresses, to the first computing node.
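A minimal Python sketch of this resolution step follows; the index of which node holds which unit and the registry lookup function are assumptions for illustration:

```python
# Sketch of call-address resolution (S202): look each data unit up on the
# computing nodes first (first addresses); units found nowhere fall back to
# the Registry image library (second addresses). Data structures are assumed.
def resolve_call_addresses(unit_ids, compute_node_units, registry_lookup):
    """compute_node_units: {node_address: set of unit_ids stored on that node}.
    registry_lookup(unit_id) -> storage address inside the database."""
    first_addresses = {}   # unit_id -> address of a second computing node
    second_addresses = {}  # unit_id -> storage address in the database
    for unit_id in unit_ids:
        holder = next((addr for addr, held in compute_node_units.items()
                       if unit_id in held), None)
        if holder is not None:
            first_addresses[unit_id] = holder
        else:
            second_addresses[unit_id] = registry_lookup(unit_id)
    return first_addresses, second_addresses
```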
S203, all the data units are called according to the calling address, and all the data units are combined into target data.
In this step, the data units are called from their respective call addresses. Because the data units can be obtained from a plurality of second computing nodes in the computing node cluster, the database, which originally had to transmit the entire target data, now only needs to transmit part of the data units, thereby offloading the data calls made to the database. Even when only some of the data units can be obtained from second computing nodes, the data transmission pressure on the database is still reduced.
After all the data units are obtained, the data units are combined into target data according to a preset segmentation mode or data connection identification among the data units, and then the first computing node can call the target data.
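A sketch of this fetch-and-combine step is given below; the transport helpers for reading a unit from a peer node or from the database are illustrative assumptions:

```python
# Sketch of S203: pull each data unit from its call address (a second computing
# node or the database) and concatenate the units in their original order to
# rebuild the target data. The fetch helpers are assumptions for illustration.
def assemble_target_data(unit_ids, first_addresses, second_addresses,
                         fetch_from_node, fetch_from_database) -> bytes:
    parts = []
    for unit_id in unit_ids:              # keep the original unit order
        if unit_id in first_addresses:
            parts.append(fetch_from_node(first_addresses[unit_id], unit_id))
        else:
            parts.append(fetch_from_database(second_addresses[unit_id], unit_id))
    return b"".join(parts)
```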
In the data processing method provided in this embodiment, a data request in which at least one first computing node in a computing node cluster applies to call target data is obtained, where the target data is composed of a plurality of data units; the call addresses of the corresponding data units are then located in the computing node cluster and the storage node cluster according to the data request, and all the data units are obtained from those call addresses and combined into the target data. This solves the technical problem of how to distribute the data call requests made to a database: because the target data is gathered from multiple locations rather than being transmitted only by the database in the storage node cluster, the data transmission pressure on the database is reduced when a large-scale cluster transmits data concurrently.
When the data processing method provided by the present application is applied to a management node in a compute node cluster or a storage node cluster, the specific implementation steps are shown in fig. 3.
Fig. 3 is a flowchart illustrating a second data processing method according to an embodiment of the present application. As shown in fig. 3, the data processing method includes the specific steps of:
s301, receiving a data request sent by at least one first computing node through a management node in the computing node cluster.
In this step, when at least one first computing node in the computing node cluster executes a preset task, a need to call at least one piece of target data is triggered, so the first computing node sends out a data request to acquire the target data.
The computing node sends the data request to the management node, or the computing node may originally have intended to send the data request to the storage node where the database is located.
S302, sending a data request to at least one target storage node through the management node, so that the target storage node determines a calling address according to target data.
In this step, a plurality of storage nodes containing complete database data exist in the storage node cluster, and the management node is responsible for data interaction between the compute node cluster and the storage node cluster.
Specifically, the management node sends a connection request to at least one storage node that is in an available state and whose load rate is lower than a preset load threshold, i.e., a target storage node.
The management node may then establish a data connection, such as an HTTP connection, with the first target storage node that responds to the connection request, and send the data request to that target storage node over the HTTP protocol.
Optionally, the management node may also divide the target data according to a preset division manner to determine the plurality of data units. And then the management node establishes data connection with the target storage nodes, and then applies for calling the corresponding data units to the target storage nodes through the data connection.
In one possible design, before sending the data request to the at least one target storage node, the method further includes:
acquiring the working state information of each storage node in the storage node cluster;
and screening out at least one target storage node meeting preset requirements from the storage nodes according to the working state information.
The working state information includes: workload and availability status.
Specifically, as shown in fig. 1, when the Docker Daemon service on the first computing node resolves the domain name address corresponding to the Docker Registry image library, the storage nodes in an available state are first screened from the storage nodes containing the Docker Registry image library; the workload of each of these storage nodes is then obtained, for example by computing a load value for each storage node with a preset load calculation model; the load values are sorted from low to high, and the top N storage nodes are selected as target storage nodes, or the storage nodes whose load values are below a preset load threshold are selected as target storage nodes.
Optionally, the storage node with the lowest load value is selected as the target storage node.
Next, the Domain Name address corresponding to the target storage node is resolved by a DNS (Domain Name System).
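A Python sketch of this screening step follows; the load model and node attributes are assumptions for illustration:

```python
# Sketch of target-storage-node screening: keep only available nodes, compute a
# load value for each, then pick either the N lowest-load nodes or those below
# a preset load threshold. The load_model callable is an assumed placeholder.
def screen_target_storage_nodes(nodes, load_model, top_n=None, threshold=None):
    """nodes: iterable of objects with an `available` flag; load_model(node) -> float."""
    candidates = [(load_model(node), node) for node in nodes if node.available]
    candidates.sort(key=lambda pair: pair[0])             # lowest load first
    if threshold is not None:
        return [node for load, node in candidates if load < threshold]
    return [node for _, node in candidates[:top_n or 1]]  # e.g. the single lowest-load node
```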
And S303, receiving the call address fed back by the target storage node through the management node.
In this step, the target storage node traverses each computing node in the computing node cluster to determine whether the computing nodes contain some or all of the data units corresponding to the target data, and feeds back to the management node the first addresses of the second computing nodes that store the corresponding data units.
If the computing nodes of the computing node cluster do not contain all the data units, the target storage node feeds back to the management node the storage addresses of the remaining data units in the database, i.e., the second addresses.
Therefore, data distribution to the database is realized, and the bandwidth and the data transmission pressure of the database are reduced.
In this embodiment, the target data includes a Docker image; the Docker image is divided, in a P2P mode, into image partitions (i.e., data units) of a preset size (e.g., 4 MB), and data requests for the Docker image are redirected, so that only a small portion of the data call requests actually reach the image repository (i.e., the database on the target storage node) to pull image partitions, while most of the requests obtain image partitions from other computing nodes in the computing node cluster.
And S304, combining all the data units into target data through the management node according to the calling address.
S305, sending the target data to the first computing node.
In the data processing method provided in this embodiment, a data request in which at least one first computing node in a computing node cluster applies to call target data is obtained, where the target data is composed of a plurality of data units; the call addresses of the corresponding data units are then located in the computing node cluster and the storage node cluster according to the data request, and all the data units are obtained from those call addresses and combined into the target data. This solves the technical problem of how to distribute the data call requests made to a database: because the target data is gathered from multiple locations rather than being transmitted only by the database in the storage node cluster, the data transmission pressure on the database is reduced when a large-scale cluster transmits data concurrently.
When the data processing method provided by the present application is applied to a computing node in a computing node cluster, the specific implementation steps are shown in fig. 4.
Fig. 4 is a flowchart illustrating a third data processing method according to an embodiment of the present application. As shown in fig. 4, the data processing method includes the specific steps of:
s401, responding to a trigger instruction of a preset task in the first computing node, and determining a data request.
In this step, the data request is used to cause the first compute node to invoke the target data to perform a predetermined task.
In this embodiment, the target data includes a Docker image. When the first computing node executes a preset task, such as a logic calculation task of a deep learning model, or a creation task of a virtual environment, a corresponding Docker mirror image needs to be used due to task requirements.
In the prior art, the first compute node pulls the corresponding Docker image directly from the storage node where the Docker Registry image library is located. When a large number of Docker images are pulled concurrently, the bandwidth of the Docker Registry image library therefore becomes congested, which prevents the computing nodes from completing their preset tasks in time.
In this embodiment, after the computing node triggers the Docker mirror call request, a data request is determined, but a pull request is not directly sent to the storage node.
S402, sending a second data request to at least one other computing node in the computing node cluster through the first computing node according to the target data and a preset segmentation mode.
In this step, the second data request is used to obtain data units from each data node, and the target data in the data request in S401 may be divided into a plurality of data units according to a preset division manner.
The preset segmentation mode comprises the following steps: a packet segmentation mode corresponding to a data transmission protocol.
Specifically, the first computing node traverses the other computing nodes in the computing node cluster to determine whether they contain some or all of the data units corresponding to the target data, and if so, acquires the first addresses of the second computing nodes that store the corresponding data units.
And S403, receiving response results returned by other computing nodes, and judging whether all data units are received according to the response results.
In this step, if all data units corresponding to the target data can be acquired from the other computing nodes to the first computing node, step S404 is executed, otherwise, step S405 is executed.
S404, all the data units are combined into target data.
S405, sending a third data request to at least one target storage unit.
In this step, the third data request is used to retrieve the remaining data units from the target storage unit.
In one possible design, before sending the third data request to the at least one target storage node, the method further includes:
acquiring working state information of each storage node in a storage node cluster;
and screening out at least one target storage node meeting the preset requirement from each storage node according to the working state information.
The working state information includes: workload and availability status.
Specifically, as shown in fig. 1, when the Docker Daemon service on the first computing node resolves the domain name address corresponding to the Docker Registry image library, the storage nodes in an available state are first screened from the storage nodes containing the Docker Registry image library; the workload of each of these storage nodes is then obtained, for example by computing a load value for each storage node with a preset load calculation model; the load values are sorted from low to high, and the top N storage nodes are selected as target storage nodes, or the storage nodes whose load values are below a preset load threshold are selected as target storage nodes.
Optionally, the storage node with the lowest load value is selected as the target storage node.
Next, the Domain Name address corresponding to the target storage node is resolved by a DNS (Domain Name System).
S406, all the data units are combined into target data.
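The sketch below outlines steps S402 to S406 from the first computing node's perspective; the peer and storage request helpers are assumptions for illustration:

```python
# Sketch of the compute-node flow: second data requests to peer computing nodes
# first, then a third data request to a target storage node for whatever units
# are still missing. request_peer/request_storage are assumed helpers.
def pull_target_data(unit_ids, peer_nodes, request_peer, request_storage) -> bytes:
    received = {}
    for peer in peer_nodes:                              # second data requests
        for unit_id in unit_ids:
            if unit_id not in received:
                payload = request_peer(peer, unit_id)    # None if the peer lacks it
                if payload is not None:
                    received[unit_id] = payload
    missing = [u for u in unit_ids if u not in received]
    if missing:                                          # third data request
        received.update(request_storage(missing))        # {unit_id: payload}
    return b"".join(received[u] for u in unit_ids)       # combine into target data
```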
In the data processing method provided in this embodiment, a data request in which at least one first computing node in a computing node cluster applies to call target data is obtained, where the target data is composed of a plurality of data units; the call addresses of the corresponding data units are then located in the computing node cluster and the storage node cluster according to the data request, and all the data units are obtained from those call addresses and combined into the target data. This solves the technical problem of how to distribute the data call requests made to a database: because the target data is gathered from multiple locations rather than being transmitted only by the database in the storage node cluster, the data transmission pressure on the database is reduced when a large-scale cluster transmits data concurrently.
When the data processing method provided by the present application is applied to a storage node in a storage node cluster, the specific implementation steps are shown in fig. 5.
Fig. 5 is a schematic flowchart of a fourth data processing method according to an embodiment of the present application. As shown in fig. 5, the data processing method includes the specific steps of:
s501, receiving a data request through a target storage node.
In this step, the data request is used to cause the first compute node to invoke the target data to perform a predetermined task.
In this embodiment, the target data includes a Docker image. When the first computing node executes a preset task, such as a logic calculation task of a deep learning model, or a creation task of a virtual environment, a corresponding Docker mirror image needs to be used due to task requirements.
The target storage node receives the data request sent by the first compute node.
S502, determining each data unit according to the target data in the data request through the target storage node.
In this step, the target storage node determines each data unit capable of forming complete target data by parsing the target data, for example, by a preset segmentation manner or a document composition form of the target data.
S503, determining a first address corresponding to part or all of the data units in each computing node of the computing node cluster through the target storage node.
In this step, the target storage node traverses each computing node in the computing node cluster to determine whether the computing nodes contain some or all of the data units corresponding to the target data, and if so, acquires the first address of the second computing node in which the corresponding data unit is stored. The first address is the storage address of the data unit on that second computing node.
S504, if the second computing node does not contain all the data units, second addresses corresponding to the remaining data units are determined in the database.
And S505, sending the first address and the second address to the first computing node so that the first computing node calls each data unit to combine into target data.
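A sketch of the reply the target storage node could assemble in S503 to S505 is shown below; the reply layout and the lookup indexes are assumptions for illustration:

```python
# Sketch of the storage node's reply: first addresses for units held by second
# computing nodes, second addresses for the remaining units in the database.
# The dataclass layout and index structures are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class CallAddressReply:
    first_addresses: dict[str, str] = field(default_factory=dict)   # unit_id -> compute node address
    second_addresses: dict[str, str] = field(default_factory=dict)  # unit_id -> database storage address

def build_reply(unit_ids, compute_node_index, database_index) -> CallAddressReply:
    reply = CallAddressReply()
    for unit_id in unit_ids:
        node = compute_node_index.get(unit_id)       # second computing node, if any
        if node is not None:
            reply.first_addresses[unit_id] = node
        else:
            reply.second_addresses[unit_id] = database_index[unit_id]
    return reply
```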
In the data processing method provided in this embodiment, a data request in which at least one first computing node in a computing node cluster applies to call target data is obtained, where the target data is composed of a plurality of data units; the call addresses of the corresponding data units are then located in the computing node cluster and the storage node cluster according to the data request, and all the data units are obtained from those call addresses and combined into the target data. This solves the technical problem of how to distribute the data call requests made to a database: because the target data is gathered from multiple locations rather than being transmitted only by the database in the storage node cluster, the data transmission pressure on the database is reduced when a large-scale cluster transmits data concurrently.
For any of the embodiments of fig. 2 to 5, in a possible design, the target data includes a Docker image, and the Docker image is used to complete the building of a target virtual environment on a host, where the target virtual environment corresponds to a preset user.
In one possible design, the storage node cluster includes a plurality of storage nodes, and each of the storage nodes includes: the system comprises a Docker Registry component and an interface component, wherein various types of Docker images are stored in a Registry image library in the Docker Registry component.
Optionally, the interface component includes a URL uniform resource locator interface based on an Nginx service platform.
In one possible design, the functions of the interface component include: caching the Docker mirror image and authenticating the identity information of the user.
In any of the above embodiments, the authentication method for the identity information of each computing node in the present application includes:
user authentication is carried out by adopting a base 64-bit coding mode of base auth provided by Docker Registry official, and then authentication is carried out on different URL (Uniform Resource Locator) routes by adopting a mode of OpenRestry extending nginx. The method comprises the steps that a container is subjected to corresponding user creation operation in the container through a container arranging tool according to uid user identity identification and gid group identity identification of a real user in Linux, the container arranging tool needs to perform docker starting operation according to root authority in Linux, and a home directory of the user and a private key of a root user are mounted during starting, wherein the home directory of the user provides files required by the user for the user, and the private key of the root user is mounted for scheduling among containers and ssh password-free access.
This user authentication mode can also support user authentication after the container has been solidified: the uid and gid of the current user are recorded when the container orchestration tool starts, and on the next access the previous user record is cleared before a new uid and gid are created, so that each container holds exactly one user and no conflict of user uid or gid can occur. Because that single user has sudo authority, which is equivalent to owning root within the container, isolation of user authority is achieved.
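A rough Python sketch of this "one user per container" provisioning is given below; the shell commands (shadow-utils userdel/groupadd/useradd), the placeholder previous-user name and the sudo group are all assumptions, and the patent does not prescribe these particular commands.

```python
# Hypothetical provisioning step: clear the user from the previous access, then
# recreate the current user with matching uid/gid and sudo rights inside the container.
import subprocess

def provision_container_user(container: str, username: str, uid: int, gid: int) -> None:
    def sh(cmd: str) -> None:
        # docker exec is assumed to run as root inside the target container.
        subprocess.run(["docker", "exec", container, "sh", "-c", cmd], check=True)

    sh("userdel -r prev_user 2>/dev/null || true")           # "prev_user" is a placeholder
    sh(f"groupadd -g {gid} {username} 2>/dev/null || true")  # group for the real gid
    sh(f"useradd -m -u {uid} -g {gid} -G sudo {username}")   # single sudo-capable user
```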
Fig. 6 is a schematic structural diagram of a data processing apparatus provided in the present application. The data processing means may be implemented by software, hardware or a combination of both.
As shown in fig. 6, the data processing apparatus 600 provided in the present embodiment includes:
an obtaining module 601, configured to obtain a data request, where the data request is used to apply for calling target data for at least one first computing node in a computing node cluster, and the target data is composed of multiple data units;
a processing module 602, configured to determine, according to the data request, a call address of each data unit, where the call address includes: a first address and/or a second address, wherein the first address is an address of at least one second computing node in the computing node cluster, and the second address is a storage address in a database in at least one target storage node in the storage node cluster;
the processing module 602 is further configured to call all the data units according to the call address, and combine all the data units into target data.
In a possible design, when the apparatus is disposed on a management node in a compute node cluster or a storage node cluster, the obtaining module 601 is configured to receive, by a management node in the compute node cluster, a data request sent by at least one first compute node;
a processing module 602 configured to:
sending a data request to at least one target storage node through a management node so that the target storage node determines a calling address according to target data; receiving a call address fed back by a target storage node through a management node;
the processing module 602 is further configured to combine all the data units into target data according to the call address through the management node, and send the target data to the first computing node.
In a possible design, when the apparatus is configured on a computing node in a computing node cluster, the obtaining module 601 is configured to determine, in a first computing node, a data request in response to a trigger instruction of a preset task, where the data request is used to enable the first computing node to call target data to execute the preset task;
a processing module 602, configured to send, by a first computing node, a second data request to at least one other computing node in a computing node cluster according to target data and a preset partitioning manner, where the second data request is used to obtain a data unit from each data node;
an obtaining module 601, configured to receive response results returned by each other computing node;
a processing module 602, configured to determine whether all data units are received according to the response result; if so, combining all the data units into target data; and if not, sending a third data request to the at least one target storage unit, wherein the third data request is used for acquiring the rest data units from the target storage unit.
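The peer-first fetch with storage fallback described above can be sketched as follows; the unit identifiers, the peer interface and the byte-concatenation combine step are assumptions for illustration only.

```python
# Hypothetical compute-node variant: ask the other compute nodes for each unit first
# (second data request), then fetch any missing units from the target storage unit
# (third data request), and finally combine the units into the target data.
def fetch_target_data(unit_ids, peers, storage):
    received = {}
    for unit_id in unit_ids:
        for peer in peers:
            unit = peer.request_unit(unit_id)   # assumed to return None on a miss
            if unit is not None:
                received[unit_id] = unit
                break
    missing = [u for u in unit_ids if u not in received]
    for unit_id in missing:
        received[unit_id] = storage.request_unit(unit_id)
    return b"".join(received[u] for u in unit_ids)   # preserve the original order
```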
In a possible design, the obtaining module 601 is further configured to obtain working state information of each storage node in the storage node cluster;
the processing module 602 is further configured to screen out at least one target storage node that meets a preset requirement from each storage node according to the working state information.
In one possible design, the operating state information includes: workload and availability status.
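Screening by working state could be as simple as the following sketch; the field names and the workload threshold stand in for whatever the preset requirement actually is and are not taken from the disclosure.

```python
# Illustrative filter: keep only storage nodes that are available and not overloaded.
def select_target_storage_nodes(storage_nodes, max_workload=0.8):
    return [
        node for node in storage_nodes
        if node.get("available", False) and node.get("workload", 1.0) <= max_workload
    ]
```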
In one possible design, when the apparatus is disposed on a storage node in a storage node cluster, the obtaining module 601 is configured to receive a data request through a target storage node;
a processing module 602, configured to determine, by a target storage node, each data unit according to target data in the data request; determining a first address corresponding to a part or all of data units in each computing node of a computing node cluster through a target storage node; if the second computing node does not contain all the data units, second addresses corresponding to the remaining data units are determined in the database.
In one possible design, the target data includes a Docker image, and the Docker image is used for completing construction of a target virtual environment on a host, where the target virtual environment corresponds to a preset user.
In one possible design, the storage node cluster includes a plurality of storage nodes, and each storage node includes a Docker Registry component and an interface component, where various types of Docker images are stored in the Registry image library of the Docker Registry component.
Optionally, the interface component includes a URL (Uniform Resource Locator) interface based on an Nginx service platform.
In one possible design, the functions of the interface component include: caching the Docker image and authenticating the identity information of the user.
It should be noted that the data processing apparatus provided in the embodiment shown in fig. 6 can execute the method provided in any of the above method embodiments, and the specific implementation principle, technical features, term interpretation and technical effects thereof are similar and will not be described herein again.
Fig. 7 is a schematic structural diagram of an electronic device provided in the present application. As shown in fig. 7, the electronic device 700 may include: at least one processor 701 and a memory 702. In fig. 7, one processor is taken as an example.
The memory 702 is used for storing a program. Specifically, the program may include program code, and the program code includes computer operating instructions.
The memory 702 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The processor 701 is configured to execute computer-executable instructions stored by the memory 702 to implement the methods described in the method embodiments above.
The processor 701 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
Alternatively, the memory 702 may be separate or integrated with the processor 701. When the memory 702 is a device independent from the processor 701, the electronic device 700 may further include:
a bus 703 for connecting the processor 701 and the memory 702. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on, but this does not mean that there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 702 and the processor 701 are implemented in a single chip, the memory 702 and the processor 701 may communicate via an internal interface.
The present application also provides a computer-readable storage medium, which may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk. In particular, the computer-readable storage medium stores program instructions for the methods in the above embodiments.
The present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the method in the embodiments described above.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A data processing method, comprising:
the method comprises the steps of obtaining a data request, wherein the data request is used for applying for calling target data for at least one first computing node in a computing node cluster, and the target data is composed of a plurality of data units;
determining a calling address of each data unit according to the data request, wherein the calling address comprises: a first address, which is an address of at least one second computing node in the cluster of computing nodes, and/or a second address, which is a storage address in a database in at least one target storage node in the cluster of storage nodes;
and calling all the data units according to the calling address, and combining all the data units into the target data.
2. The data processing method of claim 1, wherein the obtaining of the data request comprises:
receiving, by a management node in the cluster of computing nodes, the data request sent by at least one of the first computing nodes;
correspondingly, the determining the call address of each data unit according to the data request includes:
sending the data request to at least one target storage node through the management node, so that the target storage node determines the calling address according to the target data;
receiving the call address fed back by the target storage node through the management node;
correspondingly, the calling all the data units according to the calling address and combining all the data units into the target data includes:
and combining all the data units into the target data according to the calling address through the management node, and sending the target data to the first computing node.
3. The data processing method of claim 2, wherein the obtaining of the data request comprises:
responding to a trigger instruction of a preset task in the first computing node, and determining the data request, wherein the data request is used for enabling the first computing node to call the target data to execute the preset task;
correspondingly, the determining the call address of each data unit according to the data request includes:
sending a second data request to at least one other computing node in the computing node cluster through the first computing node according to the target data and a preset segmentation mode, wherein the second data request is used for acquiring the data unit from each data node;
receiving response results returned by the other computing nodes, and judging whether all the data units are received according to the response results;
if so, combining all the data units into the target data;
and if not, sending a third data request to at least one target storage unit, wherein the third data request is used for acquiring the rest data units from the target storage unit.
4. The data processing method according to claim 2 or 3, further comprising, before sending a data request to at least one of the target storage nodes:
acquiring working state information of each storage node in the storage node cluster;
and screening out at least one target storage node meeting preset requirements from the storage nodes according to the working state information.
5. The data processing method of claim 1, wherein the obtaining of the data request comprises:
receiving, by the target storage node, the data request;
correspondingly, determining the calling address of each data unit according to the data request includes:
determining, by the target storage node, each of the data units according to the target data in the data request;
determining, by the target storage node, the first address corresponding to part or all of the data units in each compute node of the compute node cluster;
and if the second computing node does not contain all the data units, determining the second addresses corresponding to the rest of the data units in a database.
6. A cluster system, comprising: a computing node cluster and a storage node cluster based on a preset application container engine; wherein,
the computing node cluster comprises a plurality of computing nodes and at least one management node, the computing nodes are used for executing preset tasks, and the management node is used for processing data interaction between the computing node cluster and the storage node cluster;
the storage node cluster comprises a plurality of storage nodes, and each storage node comprises: an image library component and an interface component, wherein various types of image files are stored in an image library in the image library component, and the interface component comprises: a uniform resource locator interface based on a predetermined service platform, the interface component being configured to: cache the image file and authenticate the identity information of the user;
the cluster system is used for realizing the data processing method of any one of claims 1 to 5.
7. A data processing apparatus, comprising:
an obtaining module, configured to obtain a data request, wherein the data request is used to apply for calling target data for at least one first computing node in a computing node cluster, and the target data is composed of a plurality of data units;
a processing module, configured to determine, according to the data request, a call address of each data unit, where the call address includes: a first address, which is an address of at least one second computing node in the cluster of computing nodes, and/or a second address, which is a storage address in a database in at least one target storage node in the cluster of storage nodes;
and the processing module is also used for calling all the data units according to the calling address and combining all the data units into the target data.
8. An electronic device, comprising:
a processor; and,
a memory for storing an executable computer program of the processor;
wherein the processor is configured to perform the data processing method of any of claims 1 to 5 via execution of the executable computer program.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the data processing method of any one of claims 1 to 5.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the data processing method of any one of claims 1 to 5 when executed by a processor.
CN202110796724.1A 2021-07-14 2021-07-14 Data processing method, apparatus, device, medium, and program product Pending CN113535429A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110796724.1A CN113535429A (en) 2021-07-14 2021-07-14 Data processing method, apparatus, device, medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110796724.1A CN113535429A (en) 2021-07-14 2021-07-14 Data processing method, apparatus, device, medium, and program product

Publications (1)

Publication Number Publication Date
CN113535429A true CN113535429A (en) 2021-10-22

Family

ID=78128046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110796724.1A Pending CN113535429A (en) 2021-07-14 2021-07-14 Data processing method, apparatus, device, medium, and program product

Country Status (1)

Country Link
CN (1) CN113535429A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination