CN114760307B - Data processing method, device, storage medium and processor - Google Patents

Data processing method, device, storage medium and processor

Info

Publication number
CN114760307B
CN114760307B
Authority
CN
China
Prior art keywords
server
request
cluster
cluster data
deployment type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210336070.9A
Other languages
Chinese (zh)
Other versions
CN114760307A (en)
Inventor
赵鹏
胡东旭
毕衬衬
王经纬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Du Xiaoman Technology Beijing Co Ltd
Original Assignee
Du Xiaoman Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Du Xiaoman Technology Beijing Co Ltd filed Critical Du Xiaoman Technology Beijing Co Ltd
Priority to CN202210336070.9A priority Critical patent/CN114760307B/en
Publication of CN114760307A publication Critical patent/CN114760307A/en
Application granted granted Critical
Publication of CN114760307B publication Critical patent/CN114760307B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a data processing method, a data processing device, a storage medium and a processor. Wherein the method comprises the following steps: acquiring a first request from a client, wherein the first request is used for requesting a server set matched with cluster data to be distributed, and the cluster data to be distributed is cluster data of a remote dictionary server; in response to the first request, determining a server set of the target deployment type among the server sets of the plurality of deployment types; and distributing the cluster data to be distributed to the server set of the target deployment type. The invention solves the technical problem of low utilization rate of server resources.

Description

Data processing method, device, storage medium and processor
Technical Field
The present invention relates to the field of data processing, and in particular, to a data processing method, apparatus, storage medium, and processor.
Background
At present, in the internet field, based on data security requirements, the service programs and data of different businesses need to be isolated: programs of different businesses cannot be deployed together and data cannot be directly shared, which leads to the technical problem of low server resource utilization.
No effective solution has yet been proposed for the above technical problem of low server resource utilization.
Disclosure of Invention
The embodiment of the invention provides a data processing method, a device, a storage medium and a processor, which are used for at least solving the technical problem of low utilization rate of server resources.
According to an aspect of an embodiment of the present invention, there is provided a data processing method. The method comprises the following steps: acquiring a first request from a client, wherein the first request is used for requesting a server set matched with cluster data to be distributed, and the cluster data to be distributed is cluster data of a remote dictionary server; in response to the first request, determining a server set of the target deployment type among the server sets of the plurality of deployment types; and distributing the cluster data to be distributed to the server set of the target deployment type.
Optionally, determining the server set of the target deployment type from the server sets of the multiple deployment types includes: and determining a server set of the target deployment type from the server sets of the plurality of deployment types based on the level of the cluster data to be allocated, wherein the level of the cluster data to be allocated is used for representing the service type selected by the client.
Optionally, determining the server set of the target deployment type from the server sets of the plurality of deployment types based on the level of the cluster data to be allocated includes: and determining one of the plurality of deployment type server sets as the target deployment type server set in response to the level of cluster data to be allocated being the first level.
Optionally, determining the server set of the target deployment type from the server sets of the plurality of deployment types based on the level of the cluster data to be allocated includes: and determining at least two of the server sets of the deployment types as the server set of the target deployment type in response to the level of the cluster data to be allocated being the second level.
Optionally, the method further comprises: converting the first request into a second request, wherein the second request is used for requesting an instance of the cluster data; acquiring the instance in response to the second request; and starting the instance on the server set of the target deployment type.
Optionally, before starting the instance on the set of servers of the target deployment type, the method further comprises: an interaction component is deployed on each server in the set of servers, wherein the interaction component is configured to interact between each server in the set of servers and the set of servers of the target deployment type.
Optionally, after starting the instance on the set of servers of the target deployment type, the method further comprises: constructing, based on the instance roles of the instances, a cluster topology among a plurality of instances of the cluster data to be allocated, wherein the cluster topology is the cluster built from the cluster data to be allocated that has been allocated to the server set of the target deployment type.
According to another aspect of the embodiment of the invention, a data processing apparatus is also provided. The device comprises: the server comprises an acquisition unit, a server and a server management unit, wherein the acquisition unit is used for acquiring a first request from a client, wherein the first request is used for requesting a server set matched with cluster data to be distributed, and the cluster data to be distributed is cluster data of a remote dictionary server; a determining unit configured to determine a server set of a target deployment type among server sets of a plurality of deployment types in response to a first request; and the distribution unit is used for distributing the cluster data to be distributed to the server set of the target deployment type.
According to another aspect of an embodiment of the present invention, there is also provided a computer-readable storage medium. The computer readable storage medium comprises a stored program, wherein the program is used for controlling equipment where the computer readable storage medium is located to execute the data processing method of the embodiment of the invention when running.
According to another aspect of an embodiment of the present invention, there is also provided a processor. The processor is used for running a program, wherein the data processing method of the embodiment of the invention is executed when the program runs.
In the embodiment of the invention, a first request from a client is acquired, wherein the first request is used for requesting a server set matched with the cluster data to be allocated, and the cluster data to be allocated is cluster data of a remote dictionary server; in response to the first request, a server set of the target deployment type is determined among the server sets of the plurality of deployment types; and the cluster data to be allocated is allocated to the server set of the target deployment type. That is, the invention determines the server set of the target deployment type based on the first request initiated by the client, and allocates the cluster data to be allocated to the server set of the target deployment type, thereby achieving the purpose of scattering the cluster data to be allocated into different underlying environments, solving the technical problem of low server resource utilization and realizing the technical effect of improving server resource utilization.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of a method of data processing according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a hybrid operation and maintenance management platform according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a hybrid operation and maintenance management platform accessing a Kubernetes (k8s) cluster according to an embodiment of the invention;
FIG. 4 is a schematic diagram of a hybrid operation and maintenance management platform for accessing a bare container according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a hybrid operation and maintenance management platform accessing an exclusive server according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of another hybrid operation and maintenance management platform according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a data processing apparatus according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms or terminology appearing in the description of the embodiments of the application are explained as follows:
Program: software that can run on a computer and has a specific function; for example, communication software installed and run on a personal computer is a program.
Service: one or more programs accessible to internal or external users, providing the users with one or more functions.
Server: the computer equipment on which programs run; it provides hardware resources such as the central processing unit (Central Processing Unit, CPU), memory, disk, network and graphics processor (Graphics Processing Unit, GPU) for programs to use.
Online service: provides real-time interactive service to users and needs to respond quickly to a user's operation request and return a result; for example, when a user pays by scanning a QR code, whether the payment succeeded and the deducted amount must be returned immediately.
Offline service: provides non-real-time interactive service and may return results within a longer time range; for example, the daily income accounting and automatic fee deduction of a wealth-management service only need to be completed within the same day.
Remote dictionary server (Remote Dictionary Server, Redis) service: an open-source in-memory cache service that provides high-speed storage and query of user data, used for example in flash sales of goods, red-envelope grabbing and coupon grabbing activities.
Load: the hardware resources, such as central processing unit, memory and disk, used while a service is running; it changes as traffic changes.
Container: based on a kernel capability of the open-source operating system Linux, a small-scale running environment is virtualized; a service is created by packaging an image and starting the container, and the resources used by the service inside the container, including central processing unit, memory, Input/Output (I/O) and network, can be limited and isolated. Currently the mainstream container project in the industry is the open-source application container engine Docker.
Kubernetes (k8s): a portable container orchestration and management tool for container services, providing container management solutions such as service deployment, service monitoring, application scaling and fault handling.
Example 1
In accordance with an embodiment of the present invention, an embodiment of a data processing method is provided. It should be noted that the steps shown in the flowchart of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order other than that shown or described herein.
Fig. 1 is a flow chart of a method of data processing according to an embodiment of the invention. As shown in fig. 1, the method may include the steps of:
Step S102, a first request from a client is obtained, wherein the first request is used for requesting a server set matched with cluster data to be distributed, and the cluster data to be distributed is cluster data of a remote dictionary server.
In the technical solution provided in step S102 above, a user initiates a first request to the server sets of a plurality of deployment types through a client, and the server sets of the plurality of deployment types obtain the first request from the client. The first request is used to request a server set that matches the cluster data to be allocated; it may be the cluster resource requirement of the client, and the cluster data to be allocated may be cluster data of a remote dictionary server (Redis), that is, Redis cluster data.
Optionally, the first request may further include a service type selected by the client for the cluster data to be allocated, and different service types may correspond to different sets of servers of the target deployment type.
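For illustration only, a minimal sketch of what such a first request could carry is given below in Python; the patent does not define a concrete format, so every field name here is an assumption.

from dataclasses import dataclass

# Hypothetical shape of the "first request" sent by the client to the platform.
# The field names are illustrative only; they capture the cluster resource
# requirement plus the service type selected by the client, from which the
# level of the cluster data is later derived.
@dataclass
class ClusterRequest:
    cluster_name: str        # name of the Redis cluster to be allocated
    total_memory_gb: int     # total memory required by the cluster data
    shard_count: int         # number of master shards
    replicas_per_shard: int  # slave libraries per master
    service_type: str        # service type selected by the client

request = ClusterRequest(
    cluster_name="order-cache",
    total_memory_gb=64,
    shard_count=4,
    replicas_per_shard=1,
    service_type="core-payment",
)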
Step S104, responding to the first request, and determining a server set of a target deployment type from a plurality of server sets of deployment types.
In the technical solution provided in step S104 above, a server set of a target deployment type is determined among the server sets of a plurality of deployment types based on the first request from the client. The server sets of the plurality of deployment types may belong to a Redis co-location operation and maintenance management platform that integrates three different deployment modes: k8s, bare container and exclusive server. The server set of the target deployment type may be the server set, selected among the server sets of the plurality of deployment types according to the first request, to which the cluster data to be allocated is deployed, and the target deployment type may include the k8s deployment mode, the bare-container deployment mode and the exclusive-server deployment mode.
Alternatively, the Redis co-location operation and maintenance management platform may integrate a plurality of different deployment manners, which are not specifically limited herein.
Optionally, before the server set of the target deployment type is determined among the server sets of the multiple deployment types, the level of the cluster data to be allocated is determined according to the service type selected by the client, and the server set of the target deployment type is determined based on that level, where different levels correspond to different server sets of the target deployment type. For example, when the client selects a service type of a lower level, the level of the Redis cluster to be allocated is determined to be low, and a lower-level Redis cluster can be fully co-located into the k8s or bare-container server pool; when the client selects a service type of a higher level, the level of the Redis cluster to be allocated is determined to be high, and for a higher-level Redis cluster, based on the stability of the Redis service, the master library is deployed in the exclusive mode and the slave library is deployed in the k8s or bare-container mode.
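The level-based routing described above can be pictured as a small decision function. This is a hedged sketch, assuming a two-level scheme in which the first (lower) level is fully co-located and the second (higher) level splits masters onto exclusive servers; it is not presented as the patented implementation.

def choose_deployment(level: str) -> dict:
    # Map the level of the cluster data to be allocated to a deployment plan.
    # Assumption: "low" stands for the first level (one deployment type for the
    # whole cluster) and "high" for the second level (at least two deployment
    # types: exclusive masters plus co-located slaves).
    if level == "low":
        # Whole cluster goes into a shared pool (k8s or bare container).
        return {"master": "k8s_or_bare_container", "slave": "k8s_or_bare_container"}
    if level == "high":
        # Masters on exclusive servers for stability, slaves co-located.
        return {"master": "exclusive_server", "slave": "k8s_or_bare_container"}
    raise ValueError(f"unknown level: {level}")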
Step S106, distributing the cluster data to be distributed to the server set of the target deployment type.
In the technical solution provided in the above step S106 of the present invention, based on the first request of the client, after determining the server set of the target deployment type from the server sets of the multiple deployment types, the cluster data to be allocated is allocated to the server set of the target deployment type.
Optionally, the first request is converted into independent Redis instance requests on the server set of the target deployment type, and the instances are started one by one through the target deployment type. In the process of allocating the cluster data to be allocated to the server set of the target deployment type, the instances can be scheduled to suitable servers based on the target deployment type, so that resource isolation is realized and the resource-competition impact of other services on the same server is avoided.
For example, in the deployment process where the platform accesses the k8s cluster, instances can be scheduled to suitable servers through k8s's own resource-scheduling capability, and resource isolation is realized through the k8s container set (pod); in the deployment process where the platform accesses the bare container, the unified resource management platform of the bare-container servers is relied upon, a resource application is submitted to that resource management platform to obtain a target server suitable for deployment, and resource isolation is realized through the open-source application container engine Docker.
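To make the resource isolation mentioned above concrete, the sketch below assembles a k8s pod manifest for a single Redis instance with explicit resource requests and limits; the image tag, label keys and sizes are assumptions, and actually applying the manifest (for example with a Kubernetes client) is omitted.

def redis_pod_manifest(instance_name: str, memory_gb: int, port: int) -> dict:
    # Plain-dict pod manifest for one Redis instance. The requests/limits are
    # what give the pod isolation from other services on the same server.
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": instance_name, "labels": {"app": "redis"}},
        "spec": {
            "containers": [{
                "name": "redis",
                "image": "redis:6.2",
                "ports": [{"containerPort": port}],
                "resources": {
                    "requests": {"memory": f"{memory_gb}Gi", "cpu": "1"},
                    "limits": {"memory": f"{memory_gb}Gi", "cpu": "2"},
                },
            }],
        },
    }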
Through steps S102 to S106 of the application, a first request from a client is obtained, wherein the first request is used for requesting a server set matched with the cluster data to be allocated, and the cluster data to be allocated is cluster data of a remote dictionary server; in response to the first request, a server set of the target deployment type is determined among the server sets of the plurality of deployment types; and the cluster data to be allocated is allocated to the server set of the target deployment type. That is, the application determines the server set of the target deployment type based on the first request initiated by the client and allocates the cluster data to be allocated to that server set, thereby achieving the purpose of scattering the cluster data to be allocated into different underlying environments, solving the technical problem of low server resource utilization and realizing the technical effect of improving server resource utilization.
The above-described method of this embodiment is further described below.
As an optional implementation manner, step S104, determining a server set of a target deployment type from a plurality of server sets of deployment types, includes: and determining a server set of the target deployment type from the server sets of the plurality of deployment types based on the level of the cluster data to be allocated, wherein the level of the cluster data to be allocated is used for representing the service type selected by the client.
In this embodiment, the level of the cluster data to be allocated is determined according to the service type selected by the client, and the server set of the target deployment type is determined among the server sets of the multiple deployment types based on the level of the cluster data to be allocated, where different levels correspond to different server sets of the target deployment type, and the level of the cluster data to be allocated may be selected by the client.
As an optional implementation manner, step S104, based on the level of the cluster data to be allocated, determines a server set of the target deployment type from among the server sets of the multiple deployment types, including: and determining one of the plurality of deployment type server sets as the target deployment type server set in response to the level of cluster data to be allocated being the first level.
In this embodiment, when it is determined that the level of cluster data to be allocated is a first level based on the service type selected by the client, a server set of one deployment type among the plurality of server sets of deployment types may be determined as a server set of a target deployment type, where the first level may be used to characterize a Redis cluster of a lower level.
For example, when the client selects a service type of a lower level, the level of the Redis cluster to be allocated is determined to be low, and the lower-level Redis cluster is fully co-located into the k8s or bare-container server pool.
As an optional implementation manner, step S104, based on the level of the cluster data to be allocated, determines a server set of the target deployment type from among the server sets of the multiple deployment types, including: and determining at least two of the server sets of the deployment types as the server set of the target deployment type in response to the level of the cluster data to be allocated being the second level.
In this embodiment, when the level of cluster data to be allocated is determined to be the second level based on the service type selected by the client, the server set of at least two deployment types among the server sets of the plurality of deployment types may be determined to be the server set of the target deployment type, wherein the second level may be used to characterize the Redis cluster of the higher level.
For example, when the client selects a service type of a higher level, the level of the Redis cluster to be allocated is determined to be high; for the higher-level Redis cluster, based on the stability of the Redis service, the master library can be deployed in the exclusive mode so that it is not disturbed by other co-located services, and the slave library can be deployed in the k8s or bare-container mode, thereby improving utilization and reducing overall cost.
As an alternative embodiment, the method may further comprise: converting the first request into a second request, wherein the second request is for requesting an instance of the clustered data; responding to the second request, and acquiring an instance; the instance is started on a set of servers of the target deployment type.
In this embodiment, the first request is converted into a second request on the server set of the target deployment type, and the instances are started one by one by the target deployment type, where the second request may be the requirement of each independent Redis instance.
For example, when the platform accesses a k8s cluster, the cluster resource requirement of the client is converted on the platform into the requirements of independent Redis instances, and the instances are deployed and started one by one through k8s; when the platform accesses a bare container, the cluster resource requirement of the client is converted on the platform into the requirements of independent Redis instances, and the instances are deployed and started one by one through the bare container; when the platform accesses an exclusive server, the cluster resource requirement of the client is converted on the platform into the requirements of independent Redis instances, and the instances are deployed and started one by one through the exclusive server.
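A rough sketch of this conversion from one cluster-level request into the requirements of independent Redis instances follows; the splitting convention (even memory per shard, one name per instance) and the field names are assumptions, not the platform's actual logic.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class InstanceRequest:
    # Hypothetical "second request": one entry per independent Redis instance.
    instance_name: str
    role: str             # "master" or "slave"
    memory_gb: int
    deployment_type: str  # "k8s", "bare_container" or "exclusive_server"

def split_cluster_request(cluster_name: str, shard_count: int,
                          replicas_per_shard: int, total_memory_gb: int,
                          plan: Dict[str, str]) -> List[InstanceRequest]:
    # `plan` maps a role to a deployment type, e.g. the output of the earlier
    # choose_deployment() sketch; memory is split evenly across shards.
    per_shard = total_memory_gb // shard_count
    instances = []
    for shard in range(shard_count):
        instances.append(InstanceRequest(
            f"{cluster_name}-m{shard}", "master", per_shard, plan["master"]))
        for r in range(replicas_per_shard):
            instances.append(InstanceRequest(
                f"{cluster_name}-s{shard}-{r}", "slave", per_shard, plan["slave"]))
    return instances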
As an optional implementation, before starting the instance on the server set of the target deployment type, in step S104, the method further includes: an interaction component is deployed on each server in the set of servers, wherein the interaction component is configured to interact between each server in the set of servers and the set of servers of the target deployment type.
In this embodiment, before an instance is started on the server set of the target deployment type, an interaction component needs to be deployed on each server in the server set, and interaction between each server in the server set and the server set of the target deployment type can be achieved through the interaction component, so that deployment and operation and maintenance management capabilities of the server set of the target deployment type are further achieved, where the interaction component can be a client program (agent).
Optionally, when the platform accesses a k8s cluster, no agent needs to be deployed on the servers for interaction between the platform and those servers.
As an optional implementation, in step S104, after starting the instance on the server set of the target deployment type, the method further includes: constructing, based on the instance roles of the instances, a cluster topology among a plurality of instances of the cluster data to be allocated, wherein the cluster topology is the cluster built from the cluster data to be allocated that has been allocated to the server set of the target deployment type.
In this embodiment, after an instance is started on a server set of a target deployment type, the server set of the target deployment type builds a cluster topology between multiple instances according to the instance roles of the instances to complete the build and delivery of the cluster.
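As a hedged illustration of topology construction for a master-slave Redis deployment, the sketch below attaches each started slave instance to its master with the standard REPLICAOF command via the redis-py client; the pairing data structures are assumptions, and a sharded Redis Cluster would instead be formed with CLUSTER MEET or redis-cli --cluster create.

import redis  # redis-py

def build_topology(masters, slaves_by_master):
    # masters: {master_name: (host, port)}
    # slaves_by_master: {master_name: [(host, port), ...]}
    for master_name, (m_host, m_port) in masters.items():
        for s_host, s_port in slaves_by_master.get(master_name, []):
            r = redis.Redis(host=s_host, port=s_port)
            # Equivalent to: redis-cli -h <slave_host> -p <slave_port> REPLICAOF <m_host> <m_port>
            r.execute_command("REPLICAOF", m_host, m_port)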
In this embodiment, a first request from a client is acquired, wherein the first request is used for requesting a server set matched with the cluster data to be allocated, and the cluster data to be allocated is cluster data of a remote dictionary server; in response to the first request, a server set of the target deployment type is determined among the server sets of the plurality of deployment types; and the cluster data to be allocated is allocated to the server set of the target deployment type. That is, the invention determines the server set of the target deployment type based on the first request initiated by the client, and allocates the cluster data to be allocated to the server set of the target deployment type, thereby achieving the purpose of scattering the cluster data to be allocated into different underlying environments, solving the technical problem of low server resource utilization and realizing the technical effect of improving server resource utilization.
Example 2
The technical solution of the embodiment of the present invention will be illustrated in the following with reference to a preferred embodiment.
At present, improving the average resource utilization of servers and reducing the overall energy consumption of data centers through service co-location technology is one of the problems the internet field urgently needs to solve. The existing co-location concept mainly targets online services and offline services: online services can be co-located with online services and offline services with offline services, improving resource utilization by increasing the number of services deployed on a single machine; on this basis, online services and offline services can further be co-located, and a more significant utilization improvement is achieved through the complementary load characteristics of the two kinds of services. However, for database-type services such as Redis, co-location brings a greater stability risk, so such services often choose not to be co-located with external services and instead co-locate internally, for example deploying two Redis services on the same server, each using half of the machine's memory.
However, in the internet finance field, database services account for a higher proportion of total resources than in the traditional internet. To meet financial regulatory compliance requirements, the service programs and data of different businesses need to be isolated according to business directions such as payment, insurance, credit, funds and financial technology; programs of different businesses cannot be co-located and data cannot be directly shared. Database services such as Redis therefore need to be deployed in isolation per business, and internal co-location leaves more redundant resources wasted.
In the related art, a method for internally co-locating Redis services has been proposed. The idea is simple: the average utilization of machine resources is improved directly by increasing the number of Redis instances running on a single server. Because this is internal co-location, resource isolation is not a hard requirement, and the Redis service administrator is relied on to ensure resource capacity safety. However, because the method simply co-locates the Redis service with itself, Redis instances of different businesses cannot be directly co-located under financial regulatory compliance requirements, more redundant resources need to be reserved, disk resources cannot be used effectively, and the overall resource utilization is limited.
Another related technology proposes a method for containerized co-location of Redis based on k8s. The method begins to consider co-locating Redis with external services: all services are isolated by containers through the unified k8s technology stack, and k8s performs resource orchestration and service scheduling. Because different services are co-located, this scheme requires Redis to have a strong operation and maintenance platform management capability, realizing cluster topology management, operation and maintenance operations and other capabilities through the platform to avoid excessive manual intervention; Redis must also adapt to the resource management scheme of k8s, and most schemes in the industry implement a Redis resource management plug-in independently in the k8s plug-in (operator) mode. For the financial industry, however, this approach ignores three points. First, the financial industry has strict compliance requirements, some services cannot be managed through k8s, and these services must be considered in co-location. Second, multiple k8s clusters exist for the same business; the existing plug-in mode is not flexible enough in a multi-cluster scenario, and some larger Redis clusters need to be managed across multiple k8s clusters, which increases the difficulty of Redis co-location management. Third, for some core Redis services the user may choose not to co-locate, which splits the Redis operation and maintenance management mode and increases Redis maintenance cost.
Although the above methods can be used for Redis co-location to improve server resource utilization, they still have shortcomings of varying degrees in practical application in the internet finance field. Redis is an in-memory database that hardly uses the disk; if Redis can be co-located with other services in the same business direction, the overall resource utilization can be significantly improved and the resource cost pressure caused by compliance isolation reduced. Considering that the financial industry's requirements for service and data reliability are higher than the internet average, the method can easily be applied to other internet fields.
Therefore, from the perspective of Redis service co-location management, the invention uses a Redis co-location operation and maintenance management platform to achieve compatibility among three different scenarios: Redis services on exclusive servers, co-location under a non-k8s architecture, and co-location under a k8s architecture. All co-location and operation and maintenance problems are solved through one platform, improving resource utilization while taking operation and maintenance cost and stability into account.
In the traditional flow, after a business user applies to create a Redis cluster, the Redis administrator completes resource deployment and service delivery in an internal co-location manner. However, as containerization technology evolves, some Redis clusters may be co-located with other services in order to save costs.
Fig. 2 is a schematic diagram of a hybrid operation and maintenance management platform according to an embodiment of the present invention. As shown in fig. 2, in the present invention, one remote dictionary server (Redis) hybrid operation and maintenance management platform 201 can interface with three different co-location modes, Kubernetes 202 (k8s), bare containers 203 and exclusive servers 204, and provide completely consistent operation and maintenance management capabilities, thereby realizing unified Redis co-location operation and maintenance management. The administrator does not need to pay attention to the technical details of the underlying co-location, only to the resource deployment location, which improves the reliability and efficiency of Redis co-location as a whole and reduces Redis resource cost.
In this application scenario, the characteristic that Redis uses only memory and a small amount of CPU and hardly uses the disk is exploited to co-locate the Redis service with other services on the same server, so that server resource utilization is improved through complementary resource usage among different services. When financial regulatory compliance requirements must be met, strict service isolation forces each business to introduce more redundant resources, which lowers resource utilization and raises cost; co-locating different services of the same business is therefore used to improve resource utilization and reduce cost.
Different services of the same business cannot directly use a unified co-location scheme, because their technical characteristics and development stages differ: services whose operation and maintenance standardization and automation maturity is high can easily be transformed to access k8s and be co-located in the k8s cluster mode; services of medium maturity can hardly access k8s but can be co-located in the bare-container mode; and some services need to be deployed independently because of regulatory requirements, or are not suitable for being co-located with other services because of stability requirements (for example, some of Redis's own core clusters), and need to be managed in a physical-machine-exclusive manner.
The invention builds a unified Redis co-location operation and maintenance management platform framework. Different co-location requirements are connected through the platform, and deployment and operation and maintenance operations converge onto the platform as the single interface for Redis administrators, so that administrators do not waste effort on being compatible with different underlying technology stacks, the efficiency and reliability of Redis co-location are guaranteed, and one platform solves all Redis co-location and operation and maintenance management problems. The Redis co-location operation and maintenance management platform framework built by the invention may comprise:
The first part: the platform accesses the k8s cluster.
The cluster resource requirements of users are converted by the platform into the requirements of independent Redis instances, and the instances are deployed and started one by one through k8s. During deployment, instances can be scheduled to suitable servers through k8s's own resource-scheduling capability, and resource isolation is realized through the k8s container set (pod), avoiding the resource-competition impact of other services on the same server.
Fig. 3 is a schematic diagram of the hybrid operation and maintenance platform accessing Kubernetes clusters according to an embodiment of the invention. As shown in fig. 3, the remote dictionary server hybrid operation and maintenance platform 301 deployed by accessing k8s converts the cluster resource requirements of the user into the requirements of each independent Redis instance, and deploys the instances one by one through the Kubernetes clusters to server A in Kubernetes cluster 302 and server C, server D and server E in Kubernetes cluster 303 of service No. 1.
After the operation and maintenance platform senses that all instances have started, a Redis cluster topology is built among the instances according to their instance roles, and the complete cluster is built and delivered.
Alternatively, the independent management function of each Redis instance in k8s can be realized through the k8s plug-in (operator) mode, but the plug-in mode has no advantage in convenience and effectiveness in actual use, and the present scheme is more flexible in multi-k8s-cluster scenarios.
The second part: the platform accesses the bare container.
The cluster resource requirements of users are converted by the platform into the requirements of independent Redis instances, and the instances are deployed and started one by one in the bare-container mode. During deployment, the unified resource management platform of the bare-container servers is relied upon: a resource application is submitted to that resource management platform to obtain a target server suitable for deployment, and resource isolation is realized through Docker, avoiding the resource-competition impact of other services on the same server. Unlike k8s, one client program (agent) needs to be deployed in advance on each server; the agent interacts with the Redis platform, and deployment and operation and maintenance management capabilities are realized through the agent.
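The agent interaction can be pictured as follows. The HTTP endpoint, payload shape and port are entirely hypothetical; they only illustrate that the platform asks the per-server agent to start an isolated Redis container whose memory and CPU limits correspond to Docker run options such as -m and --cpus.

import requests

def start_instance_via_agent(agent_host: str, instance: dict) -> bool:
    # Hypothetical agent API on a bare-container host.
    payload = {
        "image": "redis:6.2",
        "name": instance["instance_name"],
        "memory": f'{instance["memory_gb"]}g',  # container memory limit
        "cpus": 2,                              # container CPU limit
        "port": instance["port"],
    }
    resp = requests.post(f"http://{agent_host}:8080/containers",
                         json=payload, timeout=10)
    return resp.status_code == 200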
Fig. 4 is a schematic diagram of the hybrid operation and maintenance platform accessing bare containers according to an embodiment of the present invention. As shown in fig. 4, the remote dictionary server hybrid operation and maintenance platform 401 in the bare-container deployment mode converts the cluster resource requirements of the user into the requirements of each independent Redis instance, and deploys the instances one by one, via bare containers, to server A, server C and server D in the bare-container server pool 402 of service No. 1.
After the operation and maintenance platform senses that all instances have started, a Redis cluster topology is built among the instances according to their instance roles, and the complete cluster is built and delivered.
The third part: the platform accesses the exclusive server.
Through the platform, the cluster resource requirements of users are converted into the requirements of independent Redis instances; the instances are placed in the Redis server pool exclusive to each business, and a server with suitable resources is selected for deployment and startup. The business-exclusive Redis server pool is managed, with resource orchestration and scheduling, by the Redis co-location operation and maintenance management platform.
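One plausible selection policy for the exclusive pool, sketched under the assumption that the platform tracks free memory per server (the pool structure and the best-fit rule are illustrative, not taken from the patent):

def pick_exclusive_server(pool, needed_memory_gb: int):
    # pool: list of dicts such as {"host": "10.0.0.1", "free_memory_gb": 120}.
    # Best fit: the smallest server that still satisfies the requirement.
    candidates = [s for s in pool if s["free_memory_gb"] >= needed_memory_gb]
    if not candidates:
        raise RuntimeError("no server in the exclusive pool has enough free memory")
    return min(candidates, key=lambda s: s["free_memory_gb"])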
Fig. 5 is a schematic diagram of the hybrid operation and maintenance platform accessing exclusive servers according to an embodiment of the present invention. As shown in fig. 5, the remote dictionary server hybrid operation and maintenance platform 501 in the exclusive-server deployment mode converts the cluster resource requirements of the user into the requirements of each independent Redis instance, and deploys the instances one by one through the exclusive servers to server A, server C and server D in the exclusive server pool 502 of service No. 1.
After the operation and maintenance platform senses that all instances have started, a Redis cluster topology is built among the instances according to their instance roles, and the complete cluster is built and delivered.
The fourth part: the platform unifies co-location operation and maintenance management.
After the capabilities of the k8s cluster, the bare container and the exclusive server are integrated through the platform, the cluster resource requirements of the user can be co-located to any position as required. For lower-level Redis clusters, the whole cluster can be co-located into the k8s or bare-container server pools; for higher-level Redis clusters, considering the stability of the Redis service, the master library can be deployed in the exclusive mode so that it is not disturbed by other co-located services, while the slave library is deployed in the k8s or bare-container mode, improving utilization and reducing overall cost.
Fig. 6 is a schematic diagram of another hybrid operation and maintenance platform according to an embodiment of the present invention. As shown in fig. 6, after integrating the capabilities of the k8s cluster, the bare container and the exclusive server, the remote dictionary server hybrid operation and maintenance platform 601 converts the cluster resource requirements of the user into the requirements of each independent Redis instance and deploys them on demand through the platform to server C and server A in the Kubernetes cluster of service A, server G and server A in the bare-container server pool 603, and server I and server L in the exclusive server pool 604.
In terms of Redis co-location, the embodiment of the invention uses a unified co-location operation and maintenance management platform to integrate three different Redis deployment modes, k8s, bare container and physical machine, so as to adapt to the different technical evolution stages of business services and improve co-location coverage. Using the structural stability characteristics of Redis, the whole of a lower-level Redis cluster and the slave libraries of a higher-level Redis cluster can be co-located with other services of the same business; on the premise of guaranteed stability, the characteristics of Redis (hardly using the disk, using a small amount of CPU and a large amount of memory) are exploited so that co-location further improves the overall resource utilization of the business and saves server cost.
Example 3
According to the embodiment of the invention, a data processing device is also provided. The data processing apparatus may be used to execute the method of data processing in embodiment 1.
Fig. 7 is a schematic diagram of a data processing apparatus according to an embodiment of the present invention. As shown in fig. 7, the data processing apparatus 700 may include: an acquisition unit 701, a determination unit 702, and an allocation unit 703.
An obtaining unit 701, configured to obtain a first request from a client, where the first request is used to request a server set that matches cluster data to be allocated, where the cluster data to be allocated is cluster data of a remote dictionary server.
A determining unit 702, configured to determine, in response to the first request, a server set of a target deployment type from among server sets of a plurality of deployment types.
An allocation unit 703, configured to allocate the cluster data to be allocated to the server set of the target deployment type.
Alternatively, the determining unit 702 includes: the determining module is used for determining a server set of a target deployment type from a plurality of server sets of deployment types based on the grade of cluster data to be distributed, wherein the grade of the cluster data to be distributed is used for representing the service type selected by the client.
Optionally, the determining module includes: and the first determining submodule is used for determining one deployment type server set in the plurality of deployment type server sets as a target deployment type server set in response to the grade of cluster data to be distributed being a first grade.
Optionally, the determining module includes: and the second determining submodule is used for determining at least two of the server sets of the deployment types as the server set of the target deployment type in response to the level of the cluster data to be distributed being a second level.
Optionally, the apparatus further comprises: a conversion unit, configured to convert the first request into a second request, where the second request is used to request an instance of cluster data; a first acquiring unit configured to acquire an instance in response to the second request; and the starting unit is used for starting the instance on the server set of the target deployment type.
Optionally, before the starting unit starts the instance on the server set of the target deployment type, an interaction component is deployed on each server in the server set, wherein the interaction component is configured to interact between each server in the server set and the server set of the target deployment type.
Optionally, after the starting unit starts the instance on the server set of the target deployment type, a cluster topology is built between a plurality of instances of the cluster data to be allocated based on an instance role of the instance, wherein the cluster topology is a cluster in which the cluster data to be allocated to the server set of the target deployment type is built.
In the data processing apparatus of this embodiment, the obtaining unit is configured to obtain a first request from a client, wherein the first request is used for requesting a server set matched with the cluster data to be allocated, and the cluster data to be allocated is cluster data of a remote dictionary server; the determining unit is configured to determine, in response to the first request, a server set of a target deployment type among server sets of a plurality of deployment types; and the allocation unit is configured to allocate the cluster data to be allocated to the server set of the target deployment type, thereby solving the technical problem of low server resource utilization and realizing the technical effect of improving server resource utilization.
Example 4
According to an embodiment of the present invention, there is also provided a computer-readable storage medium including a stored program, where the program when executed by a processor controls a device in which the computer-readable storage medium is located to perform a method of data processing in embodiment 1 of the present invention.
Example 5
According to an embodiment of the present invention, there is also provided a processor for running a program, wherein the program runs to perform the method of data processing described in embodiment 1.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that several improvements and modifications may be made by those skilled in the art without departing from the principles of the present invention, and such improvements and modifications shall also fall within the scope of the present invention.

Claims (9)

1. A method of data processing, comprising:
Acquiring a first request from a client, wherein the first request is used for requesting a server set matched with cluster data to be distributed, and the cluster data to be distributed is cluster data of a remote dictionary server;
In response to the first request, determining a server set of a target deployment type from among the server sets of multiple deployment types;
Distributing the cluster data to be distributed to the server set of the target deployment type;
Wherein the method further comprises: converting the first request into a second request, wherein the second request is used for requesting an instance of the cluster data; obtaining the instance in response to the second request; the instance is started on the set of servers of the target deployment type.
2. The method of claim 1, wherein determining a set of servers of a target deployment type from among a plurality of sets of servers of a deployment type comprises:
And determining the server set of the target deployment type from the server sets of the plurality of deployment types based on the grade of the cluster data to be distributed, wherein the grade of the cluster data to be distributed is used for representing the service type selected by the client.
3. The method of claim 2, wherein determining the set of servers of the target deployment type among the set of servers of the plurality of deployment types based on the level of cluster data to be allocated comprises:
And determining a server set of one deployment type in the plurality of server sets of deployment types as the server set of the target deployment type in response to the level of the cluster data to be allocated being a first level.
4. The method of claim 2, wherein determining the set of servers of the target deployment type among the set of servers of the plurality of deployment types based on the level of cluster data to be allocated comprises:
And determining at least two of the server sets of the deployment types as the server set of the target deployment type in response to the level of the cluster data to be allocated being a second level.
5. The method of claim 1, wherein prior to launching the instance on the set of servers of the target deployment type, the method further comprises:
An interaction component is deployed on each server in the set of servers, wherein the interaction component is configured to interact between each server in the set of servers and the set of servers of the target deployment type.
6. The method of claim 1, wherein after launching the instance on the set of servers of the target deployment type, the method further comprises:
And constructing a cluster topology among a plurality of the instances of the cluster data to be distributed based on the instance roles of the instances, wherein the cluster topology is a cluster for building the cluster data to be distributed to the server set of the target deployment type.
7. A data processing apparatus, comprising:
The server comprises an acquisition unit, a server and a server management unit, wherein the acquisition unit is used for acquiring a first request from a client, wherein the first request is used for requesting a server set matched with cluster data to be distributed, and the cluster data to be distributed is cluster data of a remote dictionary server;
A determining unit, configured to determine, in response to the first request, a server set of a target deployment type from among server sets of a plurality of deployment types;
The distribution unit is used for distributing the cluster data to be distributed to the server set of the target deployment type;
wherein the apparatus is further configured to perform the steps of: converting the first request into a second request, wherein the second request is used for requesting an instance of the cluster data; obtaining the instance in response to the second request; the instance is started on the set of servers of the target deployment type.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program, when run, controls a device in which the computer-readable storage medium is located to perform the data processing method of any one of claims 1 to 6.
9. A processor for executing a program, wherein the program when executed by the processor performs the data processing method of any one of claims 1 to 6.
CN202210336070.9A 2022-03-31 2022-03-31 Data processing method, device, storage medium and processor Active CN114760307B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210336070.9A CN114760307B (en) 2022-03-31 2022-03-31 Data processing method, device, storage medium and processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210336070.9A CN114760307B (en) 2022-03-31 2022-03-31 Data processing method, device, storage medium and processor

Publications (2)

Publication Number Publication Date
CN114760307A CN114760307A (en) 2022-07-15
CN114760307B true CN114760307B (en) 2024-06-21

Family

ID=82329089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210336070.9A Active CN114760307B (en) 2022-03-31 2022-03-31 Data processing method, device, storage medium and processor

Country Status (1)

Country Link
CN (1) CN114760307B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114090265A (en) * 2021-11-30 2022-02-25 度小满科技(北京)有限公司 Data processing method, data processing device, storage medium and computer terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10044832B2 (en) * 2016-08-30 2018-08-07 Home Box Office, Inc. Data request multiplexing
US10191824B2 (en) * 2016-10-27 2019-01-29 Mz Ip Holdings, Llc Systems and methods for managing a cluster of cache servers
CN111813513B (en) * 2020-06-24 2024-05-14 中国平安人寿保险股份有限公司 Method, device, equipment and medium for scheduling real-time tasks based on distribution
CN113076112A (en) * 2021-04-07 2021-07-06 网易(杭州)网络有限公司 Database deployment method and device and electronic equipment
CN113722077B (en) * 2021-11-02 2022-03-15 腾讯科技(深圳)有限公司 Data processing method, system, related device, storage medium and product

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114090265A (en) * 2021-11-30 2022-02-25 度小满科技(北京)有限公司 Data processing method, data processing device, storage medium and computer terminal

Also Published As

Publication number Publication date
CN114760307A (en) 2022-07-15

Similar Documents

Publication Publication Date Title
US11625738B2 (en) Methods and systems that generated resource-provision bids in an automated resource-exchange system
US9792155B2 (en) Dynamic job processing based on estimated completion time and specified tolerance time
US10819776B2 (en) Automated resource-price calibration and recalibration by an automated resource-exchange system
CN111966500B (en) Resource scheduling method and device, electronic equipment and storage medium
US11502972B2 (en) Capacity optimization in an automated resource-exchange system
CN102456185B (en) Distributed workflow processing method and distributed workflow engine system
CN103270492A (en) Hardware accelerated graphics for network enabled applications
WO2018044800A1 (en) Policy-based resource-exchange life-cycle in an automated resource-exchange system
CN116541134B (en) Method and device for deploying containers in multi-architecture cluster
CN114090265A (en) Data processing method, data processing device, storage medium and computer terminal
CN112000463A (en) GPU resource allocation method, system, terminal and storage medium based on CUDA
CN109032788B (en) Reserved resource pool dynamic dispatching method, device, computer equipment and storage medium
CN114968601B (en) Scheduling method and scheduling system for AI training jobs with resources reserved in proportion
CN114760307B (en) Data processing method, device, storage medium and processor
CN114721824A (en) Resource allocation method, medium and electronic device
CN117435324B (en) Task scheduling method based on containerization
CN114721818A (en) Kubernetes cluster-based GPU time-sharing method and system
CN110225088A (en) A kind of cloud desktop management method and system
CN112003931B (en) Method and system for deploying scheduling controller and related components
US20210004250A1 (en) Harvest virtual machine for utilizing cloud-computing resources
CN116755829A (en) Method for generating host PCIe topological structure and method for distributing container resources
CN115964128A (en) Heterogeneous GPU resource management and scheduling method and system
CN111026369A (en) Security market data high-speed access and forwarding platform
Hui et al. Flexible and extensible load balancing
CN111866043B (en) Task processing method, device, computing equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant