CN113765965A - Service grid system generation method and device and service grid system - Google Patents

Service grid system generation method and device and service grid system

Info

Publication number
CN113765965A
CN113765965A (application number CN202010847980.4A)
Authority
CN
China
Prior art keywords
service
sidecar
program
container
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010847980.4A
Other languages
Chinese (zh)
Inventor
张晋军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN202010847980.4A
Publication of CN113765965A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/2866 Architectures; Arrangements
    • H04L 67/30 Profiles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/34 Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L 69/161 Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure provides a method and apparatus for generating a service grid system, a computer system, and a computer-readable storage medium. The generation method includes: deploying sidecar containers in a plurality of running carriers respectively; acquiring service discovery data related to the business programs in the running carriers; synchronizing the service discovery data related to the business programs into the sidecar containers; and generating a service grid system according to the plurality of sidecar containers into which the service discovery data has been synchronized.

Description

Service grid system generation method and device and service grid system
Technical Field
The present disclosure relates to the field of micro service technology, and more particularly, to a method and an apparatus for generating a service grid system, a computer system, and a computer-readable storage medium.
Background
With the deepening adoption of micro-service architectures and the rise of the cloud-native concept, the disadvantages of the traditional micro-service architecture are increasingly questioned and challenged. Service grid technology emerged to solve some of the problems faced by the traditional micro-service architecture.
In the process of implementing the disclosed concept, the inventor found that current service grid technology has at least the following problem: existing service grid systems have low environment adaptability and usually must be built on a specific running environment or operating system, so that many users can hardly enjoy the benefits brought by service grid technology.
Disclosure of Invention
In view of the above, the present disclosure provides a method and an apparatus for generating a service grid system, a computer system, and a computer-readable storage medium.
One aspect of the present disclosure provides a method for generating a service grid system, including: deploying sidecar containers in a plurality of running carriers respectively; acquiring service discovery data related to the business programs in the running carriers; synchronizing the service discovery data related to the business programs in the running carriers into the sidecar containers; and generating the service grid system according to the plurality of sidecar containers into which the service discovery data has been synchronized.
According to an embodiment of the present disclosure, obtaining the service discovery data related to the business program in the running carrier includes: acquiring registration data from different cluster registration centers; acquiring configuration data from different cluster configuration centers; and using the registration data and the configuration data as the service discovery data.
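For illustration, the registration data and configuration data can be merged into one service discovery record per service. The sketch below uses hypothetical feeds and field names; it is not tied to any particular registry or configuration center:

```python
def build_service_discovery_data(registration_data, configuration_data):
    """Merge registration entries (service instances from the cluster
    registration centers) with configuration entries (from the cluster
    configuration centers) into one record per service name."""
    discovery = {}
    for name, instances in registration_data.items():
        discovery[name] = {"instances": instances, "config": {}}
    for name, config in configuration_data.items():
        # A service may have configuration but no registered instances yet.
        discovery.setdefault(name, {"instances": [], "config": {}})
        discovery[name]["config"] = config
    return discovery

# Hypothetical feeds from the two kinds of centers.
registration = {"order-service": [{"ip": "10.0.0.5", "port": 8080}]}
configuration = {"order-service": {"timeout_ms": 500}}
print(build_service_discovery_data(registration, configuration))
```

The merged mapping is what the later synchronization step pushes into each sidecar container.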
According to an embodiment of the present disclosure, deploying the sidecar containers in the plurality of running carriers respectively includes: compiling a set program to obtain a container image file; acquiring target resource file information according to the container image file, wherein the target resource file information has configuration information related to the container image file; and deploying the sidecar container for the running carrier according to the target resource file information.
According to an embodiment of the present disclosure, acquiring the target resource file information according to the container image file includes: acquiring initial resource file information; capturing the initial resource file information through a callback program; and modifying the related configuration information of the initial resource file information according to the container image file to obtain the target resource file information.
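This capture-and-modify step is analogous to a mutating admission callback: the initial resource file information is intercepted, and its configuration is rewritten to reference the built container image. A minimal sketch, with a simplified, hypothetical resource layout:

```python
import copy

def modify_resource(initial_resource, sidecar_image):
    """Callback body: take the captured initial resource file information
    and rewrite its configuration so a sidecar container running the
    built image is deployed next to the business container."""
    target = copy.deepcopy(initial_resource)  # keep the original untouched
    target["containers"].append({"name": "sidecar", "image": sidecar_image})
    return target

initial = {"carrier": "pod-a",
           "containers": [{"name": "business", "image": "shop:1.0"}]}
target = modify_resource(initial, "sidecar:1.0")
print([c["name"] for c in target["containers"]])  # ['business', 'sidecar']
```

The returned target resource file information is then used to deploy the sidecar container for the running carrier.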
According to an embodiment of the present disclosure, the method for generating the service grid system further includes: acquiring a calling logic business program; redirecting the calling logic business program to a first sidecar container, wherein the calling logic business program and the first sidecar container both belong to a first running carrier; determining a second sidecar container according to the service discovery data in the first sidecar container; determining a service logic business program according to the service discovery data in the second sidecar container, wherein the second sidecar container and the service logic business program belong to a second running carrier; and realizing, according to the first sidecar container and the second sidecar container, data communication between the calling logic business program in the first running carrier and the service logic business program in the second running carrier.
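The call path described above can be sketched as follows; the `Sidecar` class, addresses, and ports are hypothetical stand-ins for the data plane proxies, not a real proxy implementation:

```python
class Sidecar:
    """Stand-in for a data-plane proxy holding synchronized
    service-discovery data."""
    def __init__(self, discovery):
        self.discovery = discovery

    def resolve(self, service):
        # Pick the first entry; a real proxy would load-balance.
        return self.discovery[service][0]

def call(service, caller_sidecar, carriers, handler):
    # 1. The calling program's traffic is redirected to its own sidecar.
    # 2. That sidecar's discovery data names the second sidecar's address.
    second_addr = caller_sidecar.resolve(service)
    second_sidecar = carriers[second_addr]
    # 3. The second sidecar's discovery data names the local business program.
    local_addr = second_sidecar.resolve(service)
    return handler(local_addr)

first = Sidecar({"inventory": ["10.0.1.7:15001"]})   # address of second sidecar
second = Sidecar({"inventory": ["127.0.0.1:8080"]})  # local service program
carriers = {"10.0.1.7:15001": second}
print(call("inventory", first, carriers, lambda a: f"served at {a}"))
# served at 127.0.0.1:8080
```

The business programs never address each other directly; all cross-carrier communication passes through the two sidecars.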
According to an embodiment of the present disclosure, redirecting the calling logic business program to the first sidecar container includes: acquiring address information of the egress traffic of the calling logic business program; and redirecting the address information of the egress traffic of the calling logic business program to the address of the first sidecar container according to a network address translation protocol.
According to an embodiment of the present disclosure, determining the service logic business program according to the service discovery data in the second sidecar container includes: acquiring address information of the ingress traffic of the second running carrier; and redirecting the address information of the ingress traffic of the second running carrier to the address of the service logic business program according to a network address translation protocol.
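Both redirections are commonly realized with NAT rules. The sketch below only builds illustrative iptables command strings (the port numbers are assumptions) rather than executing them; a production setup would also exclude the sidecar's own traffic from redirection:

```python
def egress_redirect(sidecar_port):
    """NAT rule for the calling side: send the business program's
    outbound (egress) TCP traffic to the local sidecar instead."""
    return (f"iptables -t nat -A OUTPUT -p tcp "
            f"-j REDIRECT --to-ports {sidecar_port}")

def ingress_redirect(sidecar_port):
    """NAT rule for the serving side: hand the carrier's inbound
    (ingress) TCP traffic to the sidecar, which then forwards it to
    the service logic business program."""
    return (f"iptables -t nat -A PREROUTING -p tcp "
            f"-j REDIRECT --to-ports {sidecar_port}")

print(egress_redirect(15001))
print(ingress_redirect(15006))
```

The `OUTPUT` chain intercepts locally generated (egress) packets, while `PREROUTING` intercepts packets arriving at the carrier (ingress), matching the two claim limitations above.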
Another aspect of the present disclosure provides a service grid system, comprising: the deployment subsystem is used for respectively deploying the sidecar containers in the plurality of operation carriers; the service grid subsystem is used for acquiring service discovery data related to the service program in the running carrier; and a data plane subsystem for synchronizing service discovery data related to the business programs in the running carrier into the sidecar containers, and generating the service grid system according to the plurality of sidecar containers synchronized with the service discovery data.
Another aspect of the present disclosure provides a generation apparatus of a service grid system, including: the deployment module is used for respectively deploying the sidecar containers in the plurality of operation carriers; an acquisition module, configured to acquire service discovery data related to a service program in the running carrier; a synchronization module for synchronizing service discovery data related to the business program in the running carrier to the sidecar container; and a generation module for generating the service grid system according to the plurality of sidecar containers synchronized with the service discovery data.
Another aspect of the present disclosure provides a computer system comprising: one or more processors; memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
According to the embodiments of the present disclosure, sidecar containers are respectively deployed in a plurality of running carriers, service discovery data related to the business programs in the running carriers is acquired and synchronized into the sidecar containers, and a service grid system is generated from the plurality of sidecar containers into which the service discovery data has been synchronized. Because deploying the sidecar containers and acquiring and synchronizing the service discovery data do not depend on a specific running environment or operating system, the technical problem that current service grid systems have low environment adaptability is at least partially overcome: the resulting service grid system can run under various running environments or operating systems, which avoids situations in which service grid technology cannot be implemented due to environmental limitations, so that users can enjoy the benefits brought by service grid technology.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an exemplary system architecture to which a method of generating a service grid system may be applied, according to an embodiment of the disclosure;
FIG. 2 schematically illustrates a flow chart of a method of generating a service grid system according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow diagram for obtaining service discovery data related to a business program within a running carrier according to an embodiment of the disclosure;
FIG. 4 schematically illustrates a flow chart for deploying sidecar containers in a plurality of running carriers, respectively, according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow diagram for obtaining target resource file information from a container image file according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow diagram for data communication in a service grid system according to an embodiment of the present disclosure;
FIG. 7 schematically illustrates an architecture diagram of a service grid system according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a timing diagram for data communications in a K8S-dependent service grid system according to an embodiment of the disclosure;
FIG. 9 schematically illustrates a timing diagram for data communications in a non-K8S service grid system according to an embodiment of the disclosure;
FIG. 10 schematically illustrates a schematic diagram of call-side traffic hijacking in a data plane subsystem according to an embodiment of the disclosure;
FIG. 11 schematically illustrates a schematic diagram of server-side traffic hijacking in a data plane subsystem according to an embodiment of the disclosure;
FIG. 12 schematically illustrates a schematic diagram of service registration/discovery across a K8S cluster in an Istio subsystem according to an embodiment of the disclosure;
FIG. 13 schematically illustrates a schematic diagram of the service configuration functions across a K8S cluster in an Istio subsystem according to an embodiment of the present disclosure;
FIG. 14 schematically shows a block diagram of a generating device of a service grid system according to an embodiment of the present disclosure; and
FIG. 15 schematically illustrates a block diagram of a computer system suitable for implementing the method of generating a service grid system described above, according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
Service grid technology was created to address the following problems of the traditional micro-service architecture: the RPC SDK (a library program that completes remote procedure calls, which must be compiled together with the business code and run in the business process) is tightly coupled with the business code, which limits their independent development, and any RPC SDK bug fix or new functional characteristic forces the business to upgrade and go online in step; observability is poor, and problem location is difficult and costly, because many interdependent micro-services form complex call chains, so locating a problem involves many leads and requires cross-service and cross-department cooperation for troubleshooting; traffic management, gray-scale management and the like are lacking, with no support for characteristics such as traffic splitting and migration, error and timeout injection, and canary release; and security is poor. The Istio system, the de facto standard of current service grid technology, is now the most popular service grid system in the industry, standing out among the many current service grid technologies.
In the process of implementing the disclosed concept, the inventor found that putting service grid technology into practice with Istio requires the Istio system to be built entirely on a K8S (Kubernetes, a container cluster scheduling and management system and the recognized standard container cluster management platform) system; the two are extremely closely coupled. Specifically: the core object "Service" in Istio must reuse the Service CRD of K8S; the service registration mechanism must use internal mechanisms of K8S; the various rules and policies inside Istio must be described based on the CRD mechanism of K8S, whose storage must use the etcd database of K8S; the data plane of Istio must run, in SideCar form, alongside the business container in a K8S Pod, and the data plane container must be transparently injected into the business Pod through the Hook mechanism of K8S; and the deployment and cluster management of Istio itself also rely fully on K8S mechanisms, such as K8S resource objects like Service and Deployment. In production practice, however, although many users run K8S, their versions are often too low to meet certain requirements of the Istio system, so a service grid cannot be realized with Istio; many other users do not run K8S at all and therefore cannot use the Istio system either. Given the strong coupling between the Istio system and K8S, such users would need to migrate their systems to K8S from scratch, or upgrade K8S, before they could enjoy the benefits of the Istio system. But because K8S is infrastructure of extreme importance and great complexity, either migrating from scratch or upgrading K8S means significant cost and risk for a production system, and a moment's carelessness can cause a serious accident.
CRD is short for Custom Resource Definition, an extension mechanism of K8S for managed resource objects; through it a user can define many resource objects that K8S does not have natively, thereby extending the functions of K8S. The etcd database is a key/value store that K8S uses as the store for CRD objects. SideCar, commonly called a "sidecar", is a container deployment mode in a container cluster management platform (such as K8S): an additional container is launched in the Pod where a business runs to complete non-business functions (such as logging and monitoring); it can be understood as deploying non-business containers in a 1:1 ratio with business containers. A Pod is the minimum unit of K8S container scheduling; there may be multiple containers in a Pod, and these containers share one environment (such as resources like the TCP/IP network stack). A container is a lightweight virtualization technology; one physical machine can be virtualized into a plurality of containers to improve resource utilization. The Hook mechanism is a plug-in injection mode used to extend business functions directly in a system.
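As an illustration of the CRD mechanism described above, here is a minimal CustomResourceDefinition manifest, written as a Python dict for convenience. The `MeshRule` kind and `example.com` group are hypothetical; the field names follow the `apiextensions.k8s.io/v1` schema:

```python
# A minimal CRD manifest: once applied, K8S would let users create
# "MeshRule" objects, storing them (like all CRD data) in etcd.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    # metadata.name must be "<plural>.<group>".
    "metadata": {"name": "meshrules.example.com"},
    "spec": {
        "group": "example.com",
        "names": {"kind": "MeshRule",
                  "plural": "meshrules",
                  "singular": "meshrule"},
        "scope": "Namespaced",
        "versions": [{"name": "v1", "served": True, "storage": True,
                      "schema": {"openAPIV3Schema": {"type": "object"}}}],
    },
}
print(crd["metadata"]["name"])  # meshrules.example.com
```

Istio describes its rules and policies with CRDs of this shape, which is why its storage is bound to the etcd database of the K8S cluster it runs in.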
In implementing the disclosed concept, the inventors also found that because Istio strongly depends on K8S, it is easily affected by limitations of K8S itself. For example, the Service CRD of K8S natively supports only a single cluster, so services across K8S clusters are not supported (the K8S community has a solution, but it is still very immature, at an experimental stage, and cannot be applied to a production environment). The reason for this problem is that each K8S cluster currently has its own dedicated etcd database, so services of the same name defined in different K8S clusters reside in different etcd databases; since there is no connection between the databases, a service container in one K8S cluster can only sense the services in its own cluster and cannot sense the services in other K8S clusters.
Embodiments of the present disclosure provide a method and apparatus for generating a service grid system, a computer system, and a computer-readable storage medium. The generation method includes: deploying sidecar containers in a plurality of running carriers respectively; acquiring service discovery data related to a business program in each running carrier; synchronizing the service discovery data related to the business program into the sidecar container; and generating a service grid system according to the plurality of sidecar containers into which the service discovery data has been synchronized.
Fig. 1 schematically illustrates an exemplary system architecture 100 to which the method of generating a service grid system may be applied, according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied, to help those skilled in the art understand the technical content of the present disclosure; it does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments, or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired and/or wireless communication links, and so forth.
The terminal devices 101, 102, 103 may interact with the server 105 via the network 104 to receive or transmit control instructions and the like. Various running environments can be provided on the terminal devices 101, 102, 103; for example, Linux, Unix, or similar systems can be installed, or Pods or Docker containers can be provided, where Docker is an open-source application container engine and a Pod is the minimum scheduling unit of a K8S container cluster.
The terminal devices 101, 102, 103 may be various electronic devices having display screens and supporting different system configurations, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server providing configuration information, control instructions or service discovery data for the terminal devices 101, 102, 103. The background management server may analyze the received request instruction, and feed back a processing result (e.g., instruction information generated according to the configuration information) to the terminal device.
It should be noted that the method for generating the service grid system provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the service grid system provided by the disclosed embodiments may be generally disposed in the server 105. The generation method of the service grid system provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the service grid system provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Alternatively, the method for generating the service grid system provided by the embodiment of the present disclosure may also be executed by the terminal device 101, 102, or 103, or may also be executed by another terminal device different from the terminal device 101, 102, or 103. Accordingly, the service grid system provided by the embodiment of the present disclosure may also be disposed in the terminal device 101, 102, or 103, or in another terminal device different from the terminal device 101, 102, or 103.
For example, sidecar containers, business programs, service discovery data, and/or control instructions based thereon may be originally stored in any of terminal devices 101, 102, or 103 (e.g., terminal device 101, but not limited thereto), or stored on an external storage device and may be imported into terminal device 101. Then, the terminal device 101 may locally execute the method for generating the service grid system provided by the embodiment of the present disclosure, or transmit the sidecar container, the service program, the service discovery data and/or the control instruction based thereon to other terminal devices, servers, or server clusters, and execute the method for generating the service grid system provided by the embodiment of the present disclosure by other terminal devices, servers, or server clusters receiving the sidecar container, the service program, the service discovery data and/or the control instruction based thereon.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically shows a flow chart of a method of generating a service grid system according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S201 to S204.
In operation S201, sidecar containers are respectively deployed in a plurality of running carriers.
According to an embodiment of the present disclosure, a running carrier is used to provide a running environment for an application. The running carrier may be, for example, a physical machine, a VM (virtual machine), or a Pod on a K8S system, and the plurality of running carriers may be on the same physical machine, virtual machine, or K8S system; for example, they may be different Pods in the same K8S system, managing different business programs. The container may be, for example, a container built by the Docker software on a physical machine or VM, or a container built within a Pod. The sidecar container is a container for running the data plane program. The data plane program may be, for example, SideCar, Proxy, or Envoy, all of which are programs that implement forwarding-proxy work; the data plane program running in the sidecar container serves as an intermediate proxy to implement data communication between specific business programs.
According to the embodiment of the disclosure, the business program and the data plane program run in different containers, and the programs in different containers are independent of each other and do not interfere with each other. Therefore, to implement normal inter-service communication in the service grid system, the first step is to inject a sidecar container for each business program to implement the proxy service. Each party running a business program may be, for example, a caller, a callee, a client, or a server, and each party's business program may include a plurality of services implementing the same or different tasks.
One or more sidecar containers may be provided; generally, one sidecar container is provided. In some cases, for example where several aspects of a complete service need to be completed together, a plurality of sidecar containers may be generated as needed, each corresponding to a data plane program for a different aspect, and the plurality of sidecar containers cooperate with each other to complete the application or access of the service.
It should be further noted that each service program and the sidecar container injected corresponding to the service program are located on the same operation carrier, the service programs of different parties are located on different operation carriers, and the sidecar containers corresponding to the service programs of different parties are located on the operation carrier corresponding to the service program.
In operation S202, service discovery data related to a business program within a running carrier is acquired.
According to an embodiment of the present disclosure, the service discovery data may include, for example, a service instance together with the target address and target port associated with that instance, which determine the traffic flow direction of the associated service and thereby enable targeted service communication.
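As an illustration only (not part of the disclosed embodiment), the shape of such service discovery data can be sketched in Python; all type, field, and service names below, as well as the addresses, are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceEndpoint:
    # One discoverable instance of a business service; field names are illustrative.
    service_name: str
    target_address: str  # address of the runtime carrier hosting the instance
    target_port: int     # port on which the instance listens

def route_for(endpoints, service_name):
    """Return the candidate (address, port) pairs that traffic for a service may flow to."""
    return [(e.target_address, e.target_port)
            for e in endpoints
            if e.service_name == service_name]

# A toy snapshot of discovery data: two instances of one service, one of another.
snapshot = [
    ServiceEndpoint("order-service", "10.0.0.11", 8080),
    ServiceEndpoint("order-service", "10.0.0.12", 8080),
    ServiceEndpoint("user-service", "10.0.0.21", 9090),
]
```

A lookup such as `route_for(snapshot, "order-service")` then yields exactly the address/port pairs that determine the traffic flow direction for that service.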
According to the embodiment of the disclosure, service discovery data may change over time: related service instances may be added or removed, or a service instance may become unavailable because the underlying machine providing the service suffers an abnormality such as power failure, network disconnection, memory overflow, or insufficient disk capacity. Therefore, the latest service discovery data related to the service program needs to be acquired in real time to ensure that services are completed reliably and completely.
It should be noted that the service discovery data may be obtained directly from the existing system data by establishing a connection relationship with the existing network system, or may be obtained in a customized manner.
In operation S203, service discovery data related to the business program within the run carrier is synchronized into the sidecar container.
According to the embodiment of the disclosure, the service discovery data with dynamic changes obtained through the operation is synchronized into the sidecar container in real time, and the data plane program is updated in real time according to the changes, so that the reliability of service communication among different running carriers is ensured.
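The real-time synchronization described above can be sketched as a toy versioned-snapshot model (an assumption for illustration; the class and method names are hypothetical and the actual embodiment pushes data through the control plane):

```python
class SidecarDiscoveryState:
    """Toy model of the discovery data held inside one sidecar container.

    The control side pushes versioned snapshots; the sidecar applies a push
    only if it is newer than what it already holds, so a stale update cannot
    overwrite fresh routing data.
    """

    def __init__(self):
        self.version = 0
        self.endpoints = {}

    def sync(self, version, endpoints):
        if version <= self.version:
            return False  # stale or duplicate push: ignore it
        self.version = version
        self.endpoints = dict(endpoints)
        return True
```

The data plane program would consult `endpoints` on every call, so each applied sync immediately changes where traffic is routed.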
In operation S204, a service grid system is generated from the plurality of sidecar containers synchronized with the service discovery data.
According to the embodiment of the disclosure, independent operation containers are respectively set for the service program and the data plane program in different operation carriers, and the configuration of a complete sidecar container is realized based on the obtained service discovery data and the data plane program, so that service communication (for example, service interaction such as service discovery and load balancing) between different operation carriers can be realized through the sidecar container, and a complete service grid system is formed by all the programs.
According to the embodiment of the disclosure, sidecar containers are deployed in a plurality of operation carriers, service discovery data related to the business programs in the operation carriers is acquired and synchronized into the sidecar containers, and a service grid system is generated from the plurality of sidecar containers into which the service discovery data has been synchronized. Since the deployment of the sidecar containers and the acquisition and synchronization of the service discovery data do not need to rely on a specific operating environment or operating system, the technical problem of the low environmental adaptability of current service grid systems is at least partially overcome. A service grid system is thus obtained that can run under various operating environments or operating systems, avoiding situations in which service grid technology cannot be implemented due to environmental limitations, so that the dividends brought by service grid technology can be enjoyed.
According to an embodiment of the present disclosure, the service grid system may be, for example, an Istio system built in part on a K8S system, or an Istio system built on a non-K8S system. The sidecar container may be deployed, for example, by directly configuring the Istio data plane program in the system environment provided by K8S, or by designing a specific running program to implement the deployment. The service discovery data may be obtained directly from an originally existing K8S cluster, or by establishing a connection relationship with other system clusters. The process of synchronizing the service discovery data to the sidecar container may be implemented directly based on the environment provided by the K8S system, or through other synchronization means.
Through the embodiment of the disclosure, the service grid system can be partially or completely decoupled from the system environment of the K8S system while still generating a complete service grid system. The strong dependency between the Istio system and K8S is changed into a weak dependency, or even complete independence, which greatly advances the practical adoption of Istio: adoption becomes easier and the risk is very low, so that the dividends brought by Istio can be further enjoyed.
The method shown in fig. 2 is further described with reference to fig. 3-6 in conjunction with specific embodiments.
Fig. 3 schematically shows a flow chart for obtaining service discovery data related to a business program within a running carrier according to an embodiment of the present disclosure.
As shown in fig. 3, the method includes operations S301 to S303.
In operation S301, registration data from different cluster registries is obtained.
According to the embodiment of the disclosure, the registration data characterizes the operation state of the underlying devices related to the application service, such as whether an underlying device can work normally; if a device cannot work normally, it is deleted from the registration data so as to maintain the normal and stable operation of the whole system. The registry performs this maintenance work; there may be one registry or multiple registries across clusters. The different clusters may be of any nature, for example existing server clusters, K8S system clusters, or other clusters.
In operation S302, configuration data from different cluster configuration centers is obtained.
According to an embodiment of the present disclosure, the configuration data characterizes configuration information of the service related to the system. The configuration center completes maintenance work aiming at configuration data, and the configuration center can be one or a plurality of configuration centers crossing the cluster.
In operation S303, the above-mentioned registration data and configuration data are used as service discovery data.
According to the embodiment of the disclosure, the registration data and configuration data distributed in different clusters synchronize their own data information to a unified control plane platform in a plug-in manner. The unified control plane platform may be, for example, the Pilot module of the Istio system (the module in the control plane of the Istio system responsible for service discovery, traffic management, and intelligent routing). When the Pilot program starts, it runs a configuration file that includes the addresses of the different clusters that complete service discovery; communication between Pilot and each cluster is realized by running this configuration file, thereby completing the synchronization process.
It should be noted that Pilot itself does not perform service discovery; it synchronizes the service discovery data in each cluster into its own memory, while each cluster implements the function of the registry or configuration center.
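The "synchronize into its own memory" role can be sketched as a toy in-memory aggregation (an illustrative assumption; `PilotCache` and the record format are hypothetical, not the real Pilot implementation):

```python
class PilotCache:
    """Toy sketch of Pilot aggregating per-cluster discovery data in memory.

    Each cluster (acting as its own registry/configuration center) pushes its
    records; Pilot only merges and serves them, it does not perform discovery.
    """

    def __init__(self):
        self._by_cluster = {}

    def sync_cluster(self, cluster_name, records):
        # Replace this cluster's slice of the cache with its latest records.
        self._by_cluster[cluster_name] = list(records)

    def all_records(self):
        # The flattened view served to the data-plane proxies.
        return [r for recs in self._by_cluster.values() for r in recs]
```

Because each cluster owns one slice of the cache, a re-sync from one cluster never disturbs the data contributed by another.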
Through the embodiment of the disclosure, only the Pilot module of the Istio system is used in the process of realizing the service grid system, removing many of the native Istio dependencies on K8S. Meanwhile, the Pilot module obtained by the embodiment of the disclosure can be regarded as an ordinary application program that directly realizes data communication with different clusters, effectively solving the problem that native Istio does not support service registration/discovery and service configuration across K8S clusters.
Fig. 4 schematically illustrates a flow chart for deploying sidecar containers in a plurality of operational carriers, respectively, according to an embodiment of the present disclosure.
As shown in fig. 4, the method includes operations S401 to S403.
In operation S401, a container image file is compiled according to a set program.
According to the embodiment of the disclosure, the setting program is the data plane program, and the binary image, namely the container image file, is obtained through compiling.
In operation S402, target resource file information is acquired according to the container image file, wherein the target resource file information has configuration information related to the container image file.
According to the embodiment of the present disclosure, the target resource file information may further include, for example, a target execution carrier for the business program, specifications (such as CPU, memory, and the like) of the target execution carrier, an image file, a relevant path, and other configuration information. The configuration information associated with the container image file may include, for example, information associated with the SideCar, a path code, and some additional other attributes.
In operation S403, a sidecar container is deployed for the operation carrier according to the target resource file information.
According to the embodiment of the disclosure, a target operation carrier is determined according to the target resource file information, and the container image file is then loaded by the target operation carrier to obtain the sidecar container.
FIG. 5 schematically shows a flowchart for obtaining target resource file information from a container image file according to an embodiment of the present disclosure.
As shown in fig. 5, the method includes operations S501 to S503.
In operation S501, initial resource file information is acquired.
According to an embodiment of the present disclosure, the initial resource file information does not include the above-described configuration information related to the container image file.
In operation S502, the initial resource file information is captured by the callback program.
According to an embodiment of the present disclosure, the callback program may be, for example, a Hook program developed on K8S.
In operation S503, the related configuration information of the initial resource file information is modified according to the container image file, so as to obtain the target resource file information.
According to the embodiment of the disclosure, when a service program needs to be published, the Hook program from the previous operation is actively called; the Hook program modifies the relevant configuration information so that it includes the configuration information related to the container image file (namely, the information related to the sidecar container), finally yielding the target resource file information.
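A minimal sketch of what such a Hook modification might do, assuming the resource file is modeled as a plain dict in the shape of a Pod spec (the function name, container name, and image tag are hypothetical; a real Hook would operate on the actual K8S resource file submitted at publish time):

```python
def inject_sidecar(pod_spec, sidecar_image):
    """Return a copy of a Pod-like spec with a sidecar container appended,
    unless one is already present (so re-running the hook is harmless)."""
    containers = list(pod_spec.get("containers", []))
    if not any(c.get("name") == "sidecar-proxy" for c in containers):
        containers.append({"name": "sidecar-proxy", "image": sidecar_image})
    return {**pod_spec, "containers": containers}
```

The key design point mirrored here is idempotence: publishing the same business program twice must not stack up duplicate sidecar containers.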
Through the above embodiment of the present disclosure, K8S automatically starts the sidecar container when it actually begins to publish the service. In this process, the user can still use the existing service publishing process without doing anything extra to start the sidecar container, so that injection of the data plane program is automatic and transparent.
Figure 6 schematically illustrates a flow diagram for data communication in a serving grid system according to an embodiment of the disclosure.
As shown in fig. 6, the method includes operations S601 to S605.
In operation S601, a calling logical service program is acquired.
According to embodiments of the present disclosure, the calling logic service program may be, for example, a specific application that, during normal operation, needs to access data on a server.
In operation S602, the calling logic service program is redirected to a first sidecar container, where the calling logic service program and the first sidecar container both belong to a first operation carrier.
According to the embodiment of the disclosure, the first sidecar container runs the proxy service program for the application software, which may be, for example, SideCar or Proxy. During actual communication, the calling traffic or communication data of the calling logic service program is intercepted by the SideCar or Proxy, which completes the subsequent communication on its behalf.
In operation S603, a second sidecar container is determined according to the service discovery data within the first sidecar container.
According to the embodiment of the disclosure, the second SideCar container is the SideCar or Proxy of the service party needing to be accessed.
In operation S604, a service logic business program is determined according to the service discovery data in the second sidecar container, wherein the second sidecar container and the service logic business program belong to the same second operation carrier.
According to the embodiment of the disclosure, the service logic business program is the serving party that the application software ultimately needs to access, and this serving party is likewise finally determined through its SideCar or Proxy.
In operation S605, data communication between the call logic service program in the first runtime carrier and the service logic service program in the second runtime carrier is implemented according to the first sidecar container and the second sidecar container.
According to an embodiment of the present disclosure, operation S602 further includes: acquiring the address information of the egress traffic of the calling logic service program; and redirecting the address information of the egress traffic of the calling logic service program to the address of the first sidecar container according to a network address translation protocol.
According to an embodiment of the present disclosure, operation S604 further includes: acquiring the address information of the ingress traffic of the second operation carrier; and redirecting the address information of the ingress traffic of the second operation carrier to the address of the service logic business program according to a network address translation protocol.
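The hop sequence of operations S601 to S605 can be sketched as a toy walk-through (all names are hypothetical, and real redirection happens at the network layer via NAT rather than in application code):

```python
def call_through_mesh(service_name, caller_sidecar, discovery):
    """Trace the path of one call: caller -> its own sidecar -> the callee's
    sidecar -> the serving business program (operations S602-S605)."""
    path = []
    # S602: the caller's egress traffic is redirected to its own sidecar.
    path.append(caller_sidecar)
    # S603: the first sidecar looks up the target sidecar in discovery data.
    entry = discovery[service_name]
    path.append(entry["sidecar"])
    # S604: the second sidecar resolves the co-located service program.
    path.append(entry["server"])
    return path

# Toy discovery data for one target service on the second operation carrier.
discovery = {"order-service": {"sidecar": "sidecar-B", "server": "order-server"}}
```

Note that neither the caller nor the server appears to do anything special: both proxy hops are inserted between them by the mesh.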
Through the embodiment of the disclosure, the sidecar container can be obtained through the transparently injected data plane program, thereby realizing communication between business services; combined with the specific environment-construction measures described above, such as removing the K8S dependency and spanning service clusters, a service grid system with high environmental compatibility is obtained.
Figure 7 schematically illustrates an architecture diagram of a services grid system according to an embodiment of the present disclosure.
It should be noted that the control plane of the native Istio system includes three main modules, namely Pilot, Mixer, and Istio-Auth. Pilot implements functions such as service discovery, traffic management, and intelligent routing; Mixer implements telemetry-related functions; and Istio-Auth mainly completes mTLS secure communication. mTLS is a bidirectional TLS (Transport Layer Security) secure communication mechanism. These three modules of the native Istio system have a strong coupling relationship with K8S. The purpose of the present disclosure is to reduce the strong dependence of Istio on K8S, thereby reducing the risk of migrating and upgrading K8S, and even making Istio independent of K8S, so as to construct a service grid system that can be used in a more open environment.
In the present disclosure, the Mixer is eliminated because of performance problems, and the mTLS secure communication function of Istio-Auth is not particularly needed in an intra-enterprise network environment, so these two modules are removed; this alone eliminates many of the native Istio dependencies on K8S. The Pilot module is mainly retained and completes the service discovery and service configuration functions. Pilot communicates with the data plane through the xDS protocol, and after receiving the relevant information, the data plane can complete functional characteristics such as back-end service addressing, traffic management, and gray-scale (canary) management. The three native Istio modules mainly rely on the apiserver of K8S (the application program interface service exposed by K8S, through which other programs interact with K8S) to access the corresponding CRD objects. By modifying the native Istio and adding related switch parameters to control whether Pilot, Mixer, and Istio-Auth access the K8S apiserver, direct interaction with K8S can be avoided (other dependencies on K8S can be resolved in a similar way), so as to achieve the goal of completely removing the K8S dependency.
As shown in FIG. 7, the service grid system of the present disclosure includes a deployment subsystem, a data plane subsystem, and a service grid subsystem.
And the deployment subsystem is used for respectively deploying the sidecar containers in the plurality of operation carriers.
According to an embodiment of the present disclosure, the "deployment subsystem" mainly addresses the problem of automatic injection of the data plane program (Proxy is taken as an example here), i.e., the deployment of sidecar containers. In service grid technology, the Proxy and the business program are located in the same place (such as the same physical machine, VM, or Pod), but they are two independent processes that share only the same TCP/IP network stack. From the deployment perspective, the business program has no relation to the Proxy, and the user should not be aware of the Proxy's existence, so the problem of automatic Proxy injection must be solved.
And the service grid subsystem is used for acquiring service discovery data related to the service program in the running carrier.
According to an embodiment of the present disclosure, the "Istio subsystem" (i.e., the service grid subsystem) plays the role of the "brain" in service grid technology and is the control plane of the service grid. The Istio subsystem communicates with the Proxy-injected sidecar containers in the data plane through the xDS protocol (the API for interaction between the data plane and the control plane in a service grid, through which configuration and control of the data plane can be completed dynamically), dynamically completing configuration and control of the sidecar containers, such as informing them of the IP/Port (address/port) of the relevant back-end service instances, routing rules, and so on.
And the data plane subsystem is used for synchronizing the service discovery data related to the business programs in the running carriers into the sidecar containers and generating the service grid system according to the plurality of sidecar containers synchronized with the service discovery data.
According to the embodiment of the disclosure, the "data plane subsystem" is where the microservices in service grid technology actually communicate with each other. The access traffic of a business program (including the calling end and the serving end) is redirected to the sidecar container through a traffic hijacking mechanism, and the sidecar container completes tasks including protocol encoding/decoding, service registration/discovery, routing, health checking, load balancing, and the like. The business logic and the Proxy in the sidecar container are two independent processes that are thoroughly decoupled: the business logic only needs to concentrate on completing the business, all remaining non-business logic is handled by the Proxy, and upgrading the Proxy through hot-upgrade techniques does not interrupt the business logic. Currently available Proxies include open-source implementations such as Envoy (the standard Istio data plane) and Mosn (a data plane implementation provided by Ant Group), all of which are compatible with the xDS protocol.
Through the embodiment of the disclosure, the Istio subsystem retains only the Pilot module of the native Istio system for synchronizing service discovery data, and eliminates the Mixer and Istio-Auth modules. With the K8S dependency removed, the Istio subsystem can be understood and treated as just an ordinary application program: it can be deployed in an environment without K8S, and can also be deployed in a low-version K8S environment that did not originally support Istio.
According to the embodiments of the present disclosure, there are two solutions at the deployment subsystem level to the above-mentioned problem of automatic injection of the data plane program (taking Proxy as an example).
The first scheme is as follows: compiling according to a set program to obtain a container image file; acquiring initial resource file information; capturing the initial resource file information through a callback program; modifying the related configuration information of the initial resource file information according to the container image file to obtain target resource file information; and deploying the sidecar container for the operation carrier according to the target resource file information. This completes the automatic injection of Proxy.
Fig. 8 schematically shows a timing diagram of data communication in a K8S dependent serving grid system according to an embodiment of the disclosure.
As shown in fig. 8, operations S801 to S807 are included, wherein operations S801 to S803 complete the automatic injection of Proxy depending on K8S.
In operation S801, when the business program is issued, the K8S resource file and configuration are transferred.
According to the embodiment of the disclosure, when the deployment platform needs to publish an application, some resource files and configurations are first filled in and then passed on; these are the initial resource file information of the first scheme.
In operation S802, the relevant information is searched, and the K8S resource file is modified.
According to an embodiment of the present disclosure, the deployment platform may transmit the original edited resource file (i.e., the initial resource file information) to a callback program (e.g., a Hook program), and then the callback program modifies the resource file.
In operation S803, the modified K8S resource file (containing Proxy-related information) is returned.
According to the embodiment of the present disclosure, the modified K8S resource file is the target resource file information in the first scheme, and the target resource file information is returned to the deployment platform.
In operation S804, the Proxy-injected sidecar container is started.
In operation S805, work is initiated while establishing communication with a service discovery data provider (Istio Pilot) of the service grid subsystem to receive configuration information.
In operation S806, the service logic container is started.
In operation S807, work is started, and data communication is performed (by Proxy) when necessary.
According to the embodiment of the present disclosure, in the K8S-dependent solution, a K8S Hook program is developed and registered in K8S; when K8S is about to publish a service program, the Hook program is actively called, and by modifying the relevant configuration information the Hook program makes it include the Proxy-related information, so that K8S automatically starts the Proxy when it actually begins to publish the service (i.e., operation S804). Throughout this process, the user still follows the existing service publishing process without doing anything additional, achieving an automatic, transparent effect.
Scheme II: compiling according to a set program to obtain a container image file; acquiring target resource file information according to the container image file, wherein the target resource file information has configuration information related to the container image file; and deploying the sidecar container for the operation carrier according to the target resource file information. This completes the automatic injection of Proxy outside K8S.
Fig. 9 schematically illustrates a timing diagram for data communication in a non-K8S serving grid system in accordance with an embodiment of the disclosure.
As shown in FIG. 9, operations S901-S907 are included, wherein operations S901-S903 complete the automatic injection of Proxy outside K8S.
In operation S901, the release of the service program is started.
In operation S902, Proxy-related information (package location, etc.) is searched for, and corresponding configuration information is generated.
According to an embodiment of the present disclosure, the corresponding configuration information is the configuration information related to the container image file in the second scheme.
In operation S903, the program package and the configuration file of the Proxy are downloaded to the corresponding runtime carrier.
According to the embodiment of the disclosure, what is obtained here is the sidecar container deployed according to the target resource file information of the second scheme.
In operation S904, the Proxy program is started on the corresponding runtime carrier.
In operation S905, work is started while establishing communication with the service discovery data provider (Istio Pilot) of the service grid subsystem to receive configuration information.
In operation S906, the flow returns.
In operation S907, the remaining steps of the original flow are continued.
According to the embodiment of the disclosure, in the non-K8S solution, the service publishing process of the existing deployment platform is modified: before the service program is started, a new link is inserted in which the Proxy binary and configuration information are downloaded to the corresponding operation carrier (such as a physical machine, virtual machine VM, or Pod) and the Proxy is started; the process then continues as before. Through this improvement to the publishing process of the existing deployment platform, Proxy injection is transparent to users.
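The modified publish flow of operations S901-S907 can be sketched as a small orchestration function; the three callables stand in for deployment-platform steps and are hypothetical names, not part of the disclosure:

```python
def publish_without_k8s(carrier, download_proxy, start_proxy, start_business):
    """Sketch of the modified publish flow: a new link (download + start the
    Proxy) is inserted before the business program starts; everything
    afterwards proceeds as in the original flow."""
    log = []
    log.append(download_proxy(carrier))  # new link: fetch Proxy package + config
    log.append(start_proxy(carrier))     # new link: start Proxy on the carrier
    log.append(start_business(carrier))  # original flow continues unchanged
    return log
```

The point the sketch makes is ordering: the Proxy must be running on the carrier before the business program starts, so that traffic can be hijacked from the first request.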
As can be seen from the above two solutions, automatic, transparent injection of Proxy can be achieved on both K8S and non-K8S platforms; therefore, through the above embodiments of the present disclosure, the sidecar container can be deployed on any operation carrier (physical machine, virtual machine VM, or Pod) in any environment.
The following embodiments of the present disclosure address the data plane subsystem level.
FIG. 10 schematically illustrates a schematic diagram of call side traffic hijacking in a data plane subsystem, according to an embodiment of the disclosure.
According to the embodiment of the present disclosure, in the data plane subsystem, sending a request to the server without changing the calling-end code involves the problem of traffic hijacking. A network address translation mechanism is generally adopted (the Iptables technology, a tool ubiquitous in the Linux operating system for configuring and controlling the TCP/IP protocol stack behavior of the operating system kernel). Traffic hijacking at the calling end involves hijacking the egress traffic of the calling logic; the principle is shown in fig. 10.
Referring to fig. 10, since the calling logic service program (taking the calling-end process as an example) and the first sidecar container (taking the Proxy process as an example) belong to the same TCP/IP network stack, the egress traffic of the calling end accessing the server can be redirected to the Proxy. This uses the NAT (Network Address Translation) mechanism of Iptables: by writing relevant rules on the OUTPUT base chain of Iptables, the destination IP address of the communication is changed, thereby redirecting the egress traffic.
Fig. 11 schematically illustrates a schematic diagram of service side traffic hijacking in a data plane subsystem according to an embodiment of the present disclosure.
According to the embodiment of the present disclosure, in combination with the above traffic hijacking principle for the calling end, receiving requests at the serving end without changing the server code involves hijacking the ingress traffic; the principle is shown in fig. 11.
Referring to fig. 11, since the service logic business program (taking the server process as an example) and the second sidecar container (taking the Proxy process as an example) belong to the same TCP/IP network stack, the ingress traffic of the operation carrier where the server is located can be redirected to the Proxy. The NAT mechanism of Iptables is likewise used: by writing relevant rules on the PREROUTING base chain of Iptables, the destination IP address of the communication is changed, thereby redirecting the ingress traffic.
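For illustration only, the two redirection rules described above can be rendered as Iptables command strings; the port numbers are hypothetical, and a real deployment would also add exclusions so the Proxy's own traffic is not re-hijacked:

```python
def egress_redirect_rule(proxy_port):
    # OUTPUT chain: the caller's outbound TCP traffic is diverted to the
    # local Proxy listening on proxy_port (fig. 10).
    return f"iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-ports {proxy_port}"

def ingress_redirect_rule(proxy_port):
    # PREROUTING chain: inbound TCP traffic to this carrier is diverted to
    # the local Proxy before it reaches the server process (fig. 11).
    return f"iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-ports {proxy_port}"
```

Both rules rely on the `nat` table's `REDIRECT` target, which rewrites the destination to a local port, which is exactly the destination-address change the two figures describe.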
Based on the traffic hijacking method of fig. 10 and fig. 11, the data communication process for the data plane subsystem includes: acquiring a calling logic service program of a calling terminal instance; redirecting the calling logic service program to a first sidecar container of the calling end instance; determining a second sidecar container of the server instance according to the service discovery data in the first sidecar container of the calling instance; determining a service logic business program of the server instance according to the service discovery data in the second sidecar container of the server instance; and realizing data communication between the calling end instance and the service end instance according to the first sidecar container of the calling end instance and the second sidecar container of the service end instance.
According to the egress and ingress traffic hijacking principles of the embodiments of the present disclosure, the calling end and the serving end make full use of the Proxy-injected sidecar container to complete the various microservice functions while handling only business-related matters themselves. Due to the universality of the Iptables mechanism, traffic hijacking works regardless of the running carrier (physical machine, virtual machine VM, or Pod) and regardless of whether the environment is K8S.
The following embodiments of the present disclosure address the Istio subsystem level.
Fig. 12 schematically illustrates a schematic diagram of service registration/discovery across K8S clusters in the Istio subsystem, according to an embodiment of the disclosure.
According to the embodiment of the present disclosure, as shown in fig. 12, there is only one centralized "custom registry" in the whole system, and all server instances (whether across K8S clusters or in other deployment forms) register their own IP/PORT with the custom registry for unified registration and management. There is only one module providing service discovery data related to service registration/discovery in the whole system; for example, it may be the Istio Pilot module, and all calling ends obtain the service IP/PORT information by communicating with Pilot through the xDS protocol. By implementing a plug-in within Pilot, data synchronization of the service IP/PORT with the custom registry is realized in list/watch mode. In this way, the native Istio's lack of support for cross-K8S-cluster service registration/discovery is solved. In addition to loading the registry data into memory through a full list, the watch mechanism notifies Pilot of data changes, and the changed data is then merged with the original list data to form the new data, ensuring the accuracy of the data in Pilot.
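The list/watch merge described above can be sketched as a toy fold over events (an illustrative assumption; the event tuple shape is hypothetical, not the actual protocol):

```python
def list_watch_sync(full_list, watch_events):
    """Toy list/watch merge: start from a full "list" snapshot, then fold in
    incremental "watch" events. Each event is (op, name, value) with op in
    {"put", "delete"}; the result is the up-to-date registry view in Pilot."""
    state = dict(full_list)
    for op, name, value in watch_events:
        if op == "put":
            state[name] = value       # new or changed instance
        elif op == "delete":
            state.pop(name, None)     # instance went away
    return state
```

The design choice mirrored here is that the full list is loaded once, and thereafter only deltas are applied, so Pilot stays accurate without repeatedly re-reading the entire registry.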
Since native Istio does not support service registration/discovery across K8S clusters, a similar issue exists for service configuration information. The Istio Pilot provides rule and policy objects such as VirtualService and DestinationRule, through which management and control of the data plane are accomplished.
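For reference, VirtualService and DestinationRule are standard Istio networking objects; a minimal pair (with illustrative host names, subsets, and weights that are assumptions, not taken from this disclosure) looks like:

```yaml
# Illustrative Istio rule objects of the kind the Pilot distributes to the
# data plane; "order-svc" and the 90/10 split are assumed example values.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: order-svc
spec:
  hosts:
    - order-svc
  http:
    - route:
        - destination:
            host: order-svc
            subset: v1
          weight: 90
        - destination:
            host: order-svc
            subset: v2
          weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: order-svc
spec:
  host: order-svc
  subsets:
    - name: v1
      labels: {version: v1}
    - name: v2
      labels: {version: v2}
```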
FIG. 13 schematically illustrates a schematic diagram of service configuration functionality across a K8S cluster in an Istio subsystem, according to an embodiment of the disclosure.
According to the embodiment of the present disclosure, as shown in fig. 13, there is only one centralized "custom configuration center" in the whole system, and all configuration information is uniformly managed by it. There is only one module in the whole system that provides the service discovery data related to service configuration; for example, the module may be the Istio Pilot module, and all sidecar containers (whether on the calling end or the serving end, and regardless of whether they are deployed across K8S clusters or in other deployment forms) communicate with the Pilot through the xDS protocol to obtain the relevant configuration information. By implementing a plug-in inside the Pilot, data synchronization of the service configuration with the custom configuration center is achieved in a list/watch manner. In this way, the problem that native Istio does not support service configuration across K8S clusters is solved.
Through the embodiments of the present disclosure, the cross-K8S-cluster service registration/discovery function is completed by the "custom registry" and the cross-K8S-cluster service configuration function is completed by the "custom configuration center", both via the extension mechanism of Istio, which effectively solves the problem that native Istio does not support cross-K8S-cluster service registration/discovery and service configuration.
Fig. 14 schematically shows a block diagram of a generating device of a services grid system according to an embodiment of the present disclosure.
As shown in fig. 14, the generating apparatus 1400 of the service grid system includes a deploying module 1410, an obtaining module 1420, a synchronizing module 1430 and a generating module 1440.
A deployment module 1410 configured to deploy the sidecar containers in a plurality of operation carriers, respectively.
An obtaining module 1420, configured to obtain service discovery data related to the service program in the running carrier.
A synchronization module 1430 for synchronizing service discovery data related to the business program within the running carrier into the sidecar container.
A generating module 1440 is configured to generate the service grid system according to the plurality of sidecar containers synchronized with the service discovery data.
According to the embodiments of the present disclosure, sidecar containers are respectively deployed in a plurality of running carriers, service discovery data related to the business programs in the running carriers is acquired and synchronized into the sidecar containers, and the service grid system is generated from the plurality of sidecar containers into which the service discovery data has been synchronized. Since deployment of the sidecar containers and the acquisition and synchronization of the service discovery data do not need to rely on a specific operating environment or operating system, the technical problem of the low environmental adaptability of current service grid systems is at least partially overcome, and a service grid system that can run under various operating environments or operating systems is obtained, thereby avoiding the situation in which the service grid technology cannot be implemented due to environmental limitations and allowing the benefits brought by the service grid technology to be enjoyed.
According to an embodiment of the present disclosure, the generating device of the service grid system further includes a first obtaining submodule, a second obtaining submodule, and a defining submodule.
A first acquisition submodule configured to acquire registration data from different cluster registration centers.
A second acquisition submodule configured to acquire configuration data from different cluster configuration centers.
A definition submodule configured to take the registration data and the configuration data as the service discovery data.
According to an embodiment of the present disclosure, the generating device of the service grid system further includes a compiling submodule, a third obtaining submodule, and a deploying submodule.
A compiling submodule configured to compile a set program to obtain a container image file.
A third acquisition submodule configured to acquire target resource file information according to the container image file, wherein the target resource file information has configuration information related to the container image file.
A deployment submodule configured to deploy the sidecar container for the running carrier according to the target resource file information.
According to an embodiment of the present disclosure, the generation apparatus of the service grid system further includes an obtaining unit, a capturing unit, and a modifying unit.
An acquisition unit configured to acquire initial resource file information.
A capture unit configured to capture the initial resource file information through a callback program.
A modification unit configured to modify the relevant configuration information of the initial resource file information according to the container image file to obtain the target resource file information.
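The capture-and-modify step above can be sketched as a callback that receives the initial resource information (here, a simplified Pod spec) and appends a sidecar container built from the compiled container image file. The image name, port, and function names are illustrative assumptions:

```python
# Hypothetical sketch of the callback that turns initial resource file
# information into target resource file information by injecting a sidecar.
import copy

SIDECAR_IMAGE = "registry.example.com/mesh/sidecar:latest"  # assumed name

def inject_sidecar(initial_resource):
    """Return the target resource info: the initial spec plus the sidecar.

    The initial spec is left untouched (deep-copied) so the callback is
    side-effect free, mirroring how an admission-style hook returns a
    modified object rather than mutating the request.
    """
    target = copy.deepcopy(initial_resource)
    target["spec"]["containers"].append({
        "name": "sidecar",
        "image": SIDECAR_IMAGE,
        "ports": [{"containerPort": 15001}],  # assumed sidecar listener port
    })
    return target

initial = {
    "kind": "Pod",
    "metadata": {"name": "order-svc"},
    "spec": {"containers": [{"name": "app", "image": "order-svc:1.0"}]},
}
target = inject_sidecar(initial)
print([c["name"] for c in target["spec"]["containers"]])
# the target spec now carries both the business container and the sidecar
```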
According to an embodiment of the present disclosure, the generation apparatus of the service grid system further includes an obtaining unit, a redirecting unit, a first determining unit, a second determining unit, and a communication unit.
An acquisition unit configured to acquire a calling logic business program.
A redirection unit configured to redirect the calling logic business program to a first sidecar container, wherein the calling logic business program and the first sidecar container both belong to a first running carrier.
A first determination unit configured to determine a second sidecar container according to the service discovery data in the first sidecar container.
A second determination unit configured to determine a service logic business program according to the service discovery data in the second sidecar container, wherein the second sidecar container and the service logic business program belong to a second running carrier.
A communication unit configured to realize, according to the first sidecar container and the second sidecar container, data communication between the calling logic business program in the first running carrier and the service logic business program in the second running carrier.
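The call path that these units implement can be sketched end to end; every class, address, and function name below is invented for illustration:

```python
# Minimal simulation of the sidecar-to-sidecar call path: caller -> first
# sidecar -> second sidecar -> service logic business program.

class Sidecar:
    def __init__(self, discovery):
        # Service discovery data synchronized into the sidecar.
        self.discovery = discovery

    def route(self, service_name):
        # First sidecar: resolve the peer sidecar address for the target
        # service from its local copy of the discovery data.
        return self.discovery[service_name]

def call(service_name, request, caller_sidecar, mesh):
    # 1. Egress of the calling logic program is redirected to its own sidecar.
    peer_addr = caller_sidecar.route(service_name)
    # 2. The request travels sidecar-to-sidecar across running carriers ...
    peer_sidecar, business_program = mesh[peer_addr]
    # 3. ... and the peer sidecar redirects ingress to the local business
    #    program, whose response flows back along the same path.
    return business_program(request)

discovery = {"order-svc": "10.0.0.2:15006"}   # assumed address
caller = Sidecar(discovery)
server = Sidecar(discovery)
mesh = {"10.0.0.2:15006": (server, lambda req: f"handled:{req}")}
print(call("order-svc", "create-order", caller, mesh))
# -> handled:create-order
```

Note that neither business program knows the other's address; both ends see only their local sidecar, which is what makes the governance logic transparent to the business code.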
According to an embodiment of the present disclosure, the generating device of the service grid system further includes a first obtaining subunit and a first redirecting subunit.
A first acquisition subunit configured to acquire address information of the egress traffic of the calling logic business program.
A first redirection subunit configured to redirect the address information of the egress traffic of the calling logic business program to the address of the first sidecar container according to a network address translation protocol.
According to an embodiment of the present disclosure, the generation apparatus of the service grid system described above further includes a second acquisition subunit and a second redirection subunit.
A second acquisition subunit configured to acquire address information of the ingress traffic of the second running carrier.
A second redirection subunit configured to redirect the address information of the ingress traffic of the second running carrier to the address of the service logic business program according to a network address translation protocol.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any number of the deployment module 1410, the acquisition module 1420, the synchronization module 1430, and the generation module 1440 may be combined and implemented in one module/sub-module/unit/sub-unit, or any one of the modules/sub-modules/units/sub-units may be split into multiple modules/sub-modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/sub-modules/units/sub-units may be combined with at least part of the functionality of other modules/sub-modules/units/sub-units and implemented in one module/sub-module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the deploying module 1410, the acquiring module 1420, the synchronizing module 1430 and the generating module 1440 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or in any other reasonable manner of integrating or packaging a circuit, as hardware or firmware, or in any one of three implementations of software, hardware and firmware, or in any suitable combination of any of them. Alternatively, at least one of the deployment module 1410, the acquisition module 1420, the synchronization module 1430, and the generation module 1440 may be implemented at least in part as a computer program module that, when executed, may perform corresponding functions.
It should be noted that, in the embodiments of the present disclosure, the service grid system portion and the generation device portion of the service grid system correspond to the generation method portion of the service grid system; for their specific description, reference may be made to the generation method portion, which is not repeated here.
FIG. 15 schematically illustrates a block diagram of a computer system suitable for implementing the generation method of the service grid system described above, according to an embodiment of the present disclosure. The computer system illustrated in FIG. 15 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 15, a computer system 1500 according to an embodiment of the present disclosure includes a processor 1501 which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)1502 or a program loaded from a storage section 1508 into a Random Access Memory (RAM) 1503. Processor 1501 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset(s) and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and so forth. The processor 1501 may also include on-board memory for caching purposes. Processor 1501 may include a single processing unit or multiple processing units for performing different acts of a method flow in accordance with embodiments of the present disclosure.
In the RAM 1503, various programs and data necessary for the operation of the system 1500 are stored. The processor 1501, the ROM 1502, and the RAM 1503 are connected to each other by a bus 1504. The processor 1501 executes various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 1502 and/or RAM 1503. Note that the programs may also be stored in one or more memories other than the ROM 1502 and RAM 1503. The processor 1501 may also execute various operations of the method flows according to the embodiments of the present disclosure by executing programs stored in the one or more memories.
In accordance with an embodiment of the present disclosure, system 1500 may also include an input/output (I/O) interface 1505, which is also connected to bus 1504. The system 1500 may also include one or more of the following components connected to the I/O interface 1505: an input portion 1506 including a keyboard, a mouse, and the like; an output portion 1507 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), a speaker, and the like; a storage portion 1508 including a hard disk and the like; and a communication portion 1509 including a network interface card such as a LAN card or a modem. The communication portion 1509 performs communication processing via a network such as the Internet. A drive 1510 is also connected to the I/O interface 1505 as needed. A removable medium 1511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1510 as needed, so that a computer program read out therefrom is installed into the storage portion 1508 as needed.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1509, and/or installed from the removable medium 1511. The computer program, when executed by the processor 1501, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 1502 and/or RAM 1503 described above and/or one or more memories other than the ROM 1502 and RAM 1503.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. A method of generating a services grid system, comprising:
respectively deploying sidecar containers in a plurality of running carriers;
acquiring service discovery data related to a business program in the running carrier;
synchronizing service discovery data related to business programs within the running carrier into the sidecar container; and
generating the service grid system according to the plurality of sidecar containers into which the service discovery data has been synchronized.
2. The method of claim 1, wherein obtaining service discovery data related to a business program within the runtime carrier comprises:
acquiring registration data from different cluster registration centers;
acquiring configuration data from different cluster configuration centers; and
taking the registration data and the configuration data as the service discovery data.
3. The method of claim 1, wherein the deploying sidecar containers in a plurality of running carriers respectively comprises:
compiling a set program to obtain a container image file;
acquiring target resource file information according to the container image file, wherein the target resource file information has configuration information related to the container image file; and
deploying the sidecar container for the running carrier according to the target resource file information.
4. The method of claim 3, wherein obtaining target resource file information from the container image file comprises:
acquiring initial resource file information;
capturing the initial resource file information through a callback program; and
modifying the relevant configuration information of the initial resource file information according to the container image file to obtain the target resource file information.
5. The method of claim 1, further comprising:
acquiring a calling logic service program;
redirecting the calling logic service program to a first sidecar container, wherein the calling logic service program and the first sidecar container both belong to a first running carrier;
determining a second sidecar container according to the service discovery data in the first sidecar container;
determining a service logic business program according to the service discovery data in the second sidecar container, wherein the second sidecar container and the service logic business program belong to a second running carrier; and
realizing, according to the first sidecar container and the second sidecar container, data communication between the calling logic service program in the first running carrier and the service logic business program in the second running carrier.
6. The method of claim 5, wherein redirecting the calling logic service program to a first sidecar container comprises:
acquiring address information of the egress traffic of the calling logic service program; and
redirecting the address information of the egress traffic of the calling logic service program to the address of the first sidecar container according to a network address translation protocol.
7. The method of claim 5, wherein determining a service logic business program from the service discovery data within the second sidecar container comprises:
acquiring address information of the ingress traffic of the second running carrier; and
redirecting the address information of the ingress traffic of the second running carrier to the address of the service logic business program according to a network address translation protocol.
8. A services grid system, comprising:
the deployment subsystem is used for respectively deploying the sidecar containers in the plurality of operation carriers;
the service grid subsystem is used for acquiring service discovery data related to the service program in the running carrier; and
the data plane subsystem is used for synchronizing the service discovery data related to the business programs in the running carrier into the sidecar containers and generating the service grid system according to the plurality of sidecar containers synchronized with the service discovery data.
9. A computer system, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 7.
CN202010847980.4A 2020-08-21 2020-08-21 Service grid system generation method and device and service grid system Pending CN113765965A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010847980.4A CN113765965A (en) 2020-08-21 2020-08-21 Service grid system generation method and device and service grid system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010847980.4A CN113765965A (en) 2020-08-21 2020-08-21 Service grid system generation method and device and service grid system

Publications (1)

Publication Number Publication Date
CN113765965A true CN113765965A (en) 2021-12-07

Family

ID=78785631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010847980.4A Pending CN113765965A (en) 2020-08-21 2020-08-21 Service grid system generation method and device and service grid system

Country Status (1)

Country Link
CN (1) CN113765965A (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114024826A (en) * 2022-01-05 2022-02-08 苏州博纳讯动软件有限公司 Application multi-active system based on service grid technology and used in distributed ESB scene
CN114553959A (en) * 2022-02-21 2022-05-27 南京航空航天大学 Situation awareness-based cloud native service grid configuration on-demand issuing method and application
CN114745431A (en) * 2022-03-18 2022-07-12 上海道客网络科技有限公司 Side car technology-based non-invasive authority authentication method, system, medium and equipment
CN114745431B (en) * 2022-03-18 2023-09-29 上海道客网络科技有限公司 Non-invasive authority authentication method, system, medium and equipment based on side car technology
CN114884959A (en) * 2022-03-23 2022-08-09 中国人寿保险股份有限公司 Deployment method of multi-cloud and multi-activity architecture and related equipment
CN114726863A (en) * 2022-04-27 2022-07-08 阿里云计算有限公司 Method, device, system and storage medium for load balancing
CN114844941A (en) * 2022-04-27 2022-08-02 南京亚信软件有限公司 Interface level service management method based on Istio and related device
CN114726863B (en) * 2022-04-27 2024-01-09 阿里云计算有限公司 Method, device, system and storage medium for load balancing
CN114710445A (en) * 2022-05-24 2022-07-05 阿里巴巴(中国)有限公司 Voice soft switching service method, device, system, electronic equipment and storage medium
CN114942797B (en) * 2022-05-28 2023-07-14 平安银行股份有限公司 System configuration method, device, equipment and storage medium based on side car mode
CN114942797A (en) * 2022-05-28 2022-08-26 平安银行股份有限公司 System configuration method, device, equipment and storage medium based on sidecar mode
CN115065720A (en) * 2022-06-15 2022-09-16 中电云数智科技有限公司 Method and device for automatically adapting a plurality of external registries to service grid Istio
CN115065720B (en) * 2022-06-15 2024-02-13 中电云计算技术有限公司 Method and device for automatically adapting multiple external registries to service grid Istio
CN115174659A (en) * 2022-06-30 2022-10-11 重庆长安汽车股份有限公司 Vehicle-mounted service container, service calling method, device and medium
CN115174659B (en) * 2022-06-30 2023-08-29 重庆长安汽车股份有限公司 Vehicle-mounted service container, service calling method, device and medium
CN116319951A (en) * 2022-08-31 2023-06-23 京东科技信息技术有限公司 Data processing method and device
CN116048538A (en) * 2023-01-13 2023-05-02 中科驭数(北京)科技有限公司 Service grid deployment method and device for DPU
CN116048538B (en) * 2023-01-13 2023-11-28 中科驭数(北京)科技有限公司 Service grid deployment method and device for DPU

Similar Documents

Publication Publication Date Title
CN113765965A (en) Service grid system generation method and device and service grid system
KR102430869B1 (en) Live migration of clusters in containerized environments
US11507364B2 (en) Cloud services release orchestration with a reusable deployment pipeline
US11128707B2 (en) Omnichannel approach to application sharing across different devices
KR102391806B1 (en) Integrated apis and uis for consuming services across different distributed networks
US11625281B2 (en) Serverless platform request routing
EP3170071B1 (en) Self-extending cloud
US9244817B2 (en) Remote debugging in a cloud computing environment
US20170329588A1 (en) Method and Deployment Module for Managing a Container to be Deployed on a Software Platform
US10999405B2 (en) Method for processing access requests and web browser
WO2024077885A1 (en) Management method, apparatus and device for container cluster, and non-volatile readable storage medium
US10721335B2 (en) Remote procedure call using quorum state store
US10824511B2 (en) Data migration for a shared database
CN113342457A (en) Kubernetes scheduling method based on registration and discovery of Eureka service
CN114840310A (en) Container creation method, device, electronic equipment and computer-readable storage medium
AU2018250278B2 (en) System and method for self-deploying and self-adapting contact center components
CN115378993B (en) Method and system for supporting namespace-aware service registration and discovery
CN114296953B (en) Multi-cloud heterogeneous system and task processing method
US11811878B1 (en) Session manager providing bidirectional data transport
US11853783B1 (en) Identifying hosts for dynamically enabling specified features when resuming operation of a virtual compute instance
US20240036838A1 (en) Management of resource sharing among devices operating on different platforms
CN117591140A (en) Service grid, service upgrading method, device, equipment and medium
CN116866365A (en) Multi-cluster service gateway fusion method and device and electronic equipment
KR20240067674A (en) Container-based dynamic workload processing system, apparatus and method considering data local information
CN114968476A (en) Method and terminal for realizing remote mirror image editing function of cloud computer scheme

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination