WO2024078025A1 - Traffic isolation method, apparatus, and system, and computer-readable storage medium - Google Patents

Traffic isolation method, apparatus, and system, and computer-readable storage medium

Info

Publication number
WO2024078025A1
Authority
WO
WIPO (PCT)
Prior art keywords
microservice
lane
routing
grayscale
deployment unit
Prior art date
Application number
PCT/CN2023/103668
Other languages
French (fr)
Chinese (zh)
Inventor
林生琴
陶建波
王先荣
薛军
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Publication of WO2024078025A1 publication Critical patent/WO2024078025A1/en


Definitions

  • the present application relates to the field of Internet technology, and in particular to a traffic isolation method, device, system and computer-readable storage medium.
  • Grayscale release deploys the old and new versions of an application in the same environment at the same time.
  • Business requests may be routed to the old version (official version) or the new version (grayscale version).
  • The grayscale release process is combined with a grayscale release strategy to determine which traffic is routed to the new version, thereby controlling which users use the functions of the new version.
  • During the grayscale release process, since multiple software versions coexist in the environment, traffic must be isolated by version to avoid business failures caused by version compatibility issues: traffic that enters the old version keeps calling the old version in calls between microservices, and traffic that enters the new version keeps calling the new version.
  • The current traffic isolation solution mainly isolates grayscale-version traffic from official-version traffic by carrying a grayscale mark in the traffic.
  • Specifically, the gateway compares the content of the traffic (which consists of request messages) with the grayscale release policy, adds a grayscale mark to the header of each request message that matches the policy, and routes marked request messages to the first service of the grayscale version; otherwise the traffic is routed to the first service of the official version. If the request triggers a call to the next service, the calling service places the grayscale mark in the header of the outgoing call request message. Traffic is routed according to its grayscale mark, thereby achieving traffic isolation.
  • However, in some scenarios, for example when the received request and the call request to be sent are not in the same thread, the grayscale mark is lost because it does not support cross-thread transmission, causing the isolation between different versions of traffic to fail.
  • the present application provides a traffic isolation method, device, system and computer-readable storage medium to achieve traffic isolation between microservices of different versions.
  • the first aspect provides a traffic isolation method.
  • the method is applied to a microservice cluster, the microservice cluster includes multiple lanes, and each lane includes deployment units corresponding to multiple microservices.
  • each deployment unit is a microservice instance of a microservice.
  • the deployment unit is configured with a routing address set, and the addresses in the routing address set are all addresses of other deployment units in the lane to which the deployment unit belongs.
  • the routing address set indicates the deployment unit that can be selected when calling other microservices.
  • the method includes that the deployment unit sends a call request to the target microservice to call the target microservice, the routing module of the deployment unit intercepts the call request, and routes the call request to the deployment unit corresponding to the target microservice according to the routing address set.
  • By configuring each deployment unit in the grayscale lane with a routing address set that contains only the addresses of deployment units in the grayscale lane, it is ensured that when the deployment unit of one service calls the deployment unit of another service, the call request is routed to a deployment unit in the same grayscale lane, without passing a grayscale label through the traffic. This guarantees traffic isolation within the grayscale lane and makes the upgrade smoother.
  • each lane includes at least one deployment unit corresponding to all microservices on the call link. Therefore, the call of microservices of the entire call link can be implemented in the same lane without calling deployment units in other lanes, and traffic isolation between different lanes can be achieved.
  • the multiple lanes include a grayscale lane, which includes at least one grayscale version of a microservice, and the grayscale lane is used to implement the grayscale release of at least one grayscale version of a microservice.
  • the traffic in the grayscale lane is completely isolated from the traffic in other lanes, which can avoid business failures caused by version compatibility issues and smoothly implement microservice upgrades.
  • the method further includes: a routing module of the deployment unit obtains a routing strategy from the control module, the routing strategy includes a routing address set and a routing rule.
  • the routing module of the deployment unit routes the call request to the deployment unit corresponding to the target microservice according to the routing address set, including: the routing module of the deployment unit determines the target address of the deployment unit corresponding to the target microservice in the routing address set according to the routing rule; the routing module of the deployment unit routes the call request to the deployment unit corresponding to the target microservice according to the target address.
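  • As an illustrative sketch only (the type names, addresses, and the round-robin rule below are assumptions, not the application's literal implementation), the per-deployment-unit routing described above can be pictured as follows: the routing module holds a routing address set restricted to its own lane and selects a target address from it according to the routing rule.

```go
package main

import (
	"fmt"
	"sync"
)

// RoutingPolicy sketches the policy a deployment unit's routing module holds:
// a routing address set limited to the unit's own lane, plus a routing rule
// (here: plain round-robin). Field names are illustrative assumptions.
type RoutingPolicy struct {
	mu         sync.Mutex
	addressSet map[string][]string // in-lane deployment-unit addresses per target microservice
	next       map[string]int      // round-robin cursor per target microservice
}

// Route intercepts a call request to targetService and returns the address of
// a deployment unit in the same lane, chosen according to the routing rule.
func (p *RoutingPolicy) Route(targetService string) (string, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	addrs := p.addressSet[targetService]
	if len(addrs) == 0 {
		return "", fmt.Errorf("no in-lane deployment unit for %q", targetService)
	}
	i := p.next[targetService] % len(addrs)
	p.next[targetService] = i + 1
	return addrs[i], nil
}

func main() {
	// A gray-lane unit's address set only contains gray-lane addresses,
	// so its outgoing calls can never leave the lane.
	policy := &RoutingPolicy{
		addressSet: map[string][]string{"cart": {"10.0.2.11:8080", "10.0.2.12:8080"}},
		next:       map[string]int{},
	}
	addr, _ := policy.Route("cart")
	fmt.Println("route call request to", addr)
}
```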
  • the second aspect provides a traffic isolation method.
  • the method is implemented by a lane state management module and a control module of the management plane.
  • the method includes: the lane state management module manages multiple lanes in the microservice cluster according to the target lane state instruction, each lane includes multiple deployment units, and each deployment unit is a microservice instance of a microservice; the control module sends a routing policy to the deployment unit in each lane, and the routing policy includes a routing address set.
  • the addresses in the routing address set are all addresses of the deployment units in the lane to which the deployment unit belongs, so that the deployment unit calls other deployment units in the lane to which the deployment unit belongs according to the routing address set.
  • each lane includes at least one deployment unit corresponding to all microservices on the call link.
  • the multiple lanes include a grayscale lane, the grayscale lane includes at least one grayscale version of a microservice, and the grayscale lane is used to implement the grayscale release of at least one grayscale version of a microservice.
  • the third aspect provides a traffic isolation device.
  • the traffic isolation device is a device in a microservice cluster, the microservice cluster includes multiple lanes, each lane includes multiple traffic isolation devices, each traffic isolation device is a microservice instance of a microservice, the traffic isolation device is configured with a routing address set, the addresses in the routing address set are all addresses of other traffic isolation devices in the lane to which the traffic isolation device belongs, and the device includes a calling module and a routing module.
  • the calling module is used to send a call request to the target microservice.
  • the routing module is used to route the call request to the deployment unit corresponding to the target microservice according to the routing address set.
  • each lane includes at least one traffic isolation device corresponding to all microservices on the call link.
  • the multiple lanes include a grayscale lane, the grayscale lane includes at least one grayscale version of a microservice, and the grayscale lane is used to implement the grayscale release of at least one grayscale version of a microservice.
  • the routing module is used to obtain a routing strategy from the control module, the routing strategy including a routing address set and a routing rule.
  • the routing module is specifically used to determine a target address of a deployment unit corresponding to a target microservice in the routing address set according to the routing rule.
  • the routing module is specifically used to route a call request to a deployment unit corresponding to a target microservice according to the target address.
  • the fourth aspect provides a traffic isolation system.
  • the system includes a lane state management module, a control module and multiple deployment units.
  • the lane state management module is used to manage multiple lanes in the microservice cluster according to the target lane state instruction
  • each lane includes multiple deployment units
  • each deployment unit is a microservice instance of a microservice.
  • the control module is used to issue a routing policy to the deployment unit in each lane, and the routing policy includes a routing address set.
  • the addresses in the routing address set are all addresses of the deployment units in the lane to which the deployment unit belongs.
  • the deployment unit is used to call other deployment units in the lane to which the deployment unit belongs according to the routing address set.
  • each lane includes at least one deployment unit corresponding to all microservices on the call link.
  • the multiple lanes include a grayscale lane, the grayscale lane includes at least one grayscale version of a microservice, and the grayscale lane is used to implement the grayscale release of at least one grayscale version of a microservice.
  • the fifth aspect provides a traffic isolation device, which includes a processor and a memory, the processor is coupled to the memory, and the processor is configured to execute the traffic isolation method in the first aspect or any possible implementation of the first aspect, or the traffic isolation method in the second aspect or any possible implementation of the second aspect based on instructions stored in the memory.
  • the sixth aspect provides a computer-readable storage medium, comprising instructions, which, when the computer-readable storage medium is run on a computer, enables the computer to perform the operations of the traffic isolation method in the first aspect or any possible implementation of the first aspect, or the traffic isolation method in the second aspect or any possible implementation of the second aspect.
  • FIG. 1a is a schematic diagram of traffic distribution before grayscale release provided by the present application.
  • FIG. 1b is a schematic diagram of traffic distribution during grayscale release provided by the present application.
  • FIG. 1c is a schematic diagram of traffic distribution after the grayscale release is completed provided by the present application.
  • FIG. 2 is a schematic diagram of a method of transparently transmitting grayscale marks based on an SDK provided by the present application.
  • FIG. 3 is a schematic diagram of a method of transparently transmitting grayscale marks based on a load balancer provided by the present application.
  • FIG. 4 is a schematic diagram of the structure of a traffic isolation system provided by the present application.
  • FIG. 5 is a flow chart of a traffic isolation method provided by the present application.
  • FIG. 6 is a schematic diagram of the structure of another traffic isolation system provided by the present application.
  • FIG. 7 is a flow chart of another traffic isolation method provided by the present application.
  • FIG. 8 is a schematic diagram of a traffic isolation scenario provided by the present application.
  • FIG. 9 is a schematic structural diagram of a traffic isolation device provided by the present application.
  • FIG. 10 is a schematic diagram of the structure of another traffic isolation device provided by the present application.
  • the present application provides a traffic isolation method, device, system and computer-readable storage medium to achieve traffic isolation between microservices of different versions without transmitting grayscale labels through traffic.
  • first and second are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features.
  • a feature defined as “first” or “second” may explicitly or implicitly include one or more of the features.
  • plural means two or more.
  • The size of the sequence number of each process does not imply an order of execution.
  • the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • determining B based on A does not mean determining B only based on A.
  • B can also be determined based on A and/or other information.
  • the term “if” may be interpreted to mean “when” or “upon” or “in response to determining” or “in response to detecting.”
  • the phrase “if it is determined that...” or “if [a stated condition or event] is detected” may be interpreted to mean “upon determining that...”, “in response to determining that...”, “upon detecting [a stated condition or event]”, or “in response to detecting [a stated condition or event]”, depending on the context.
  • Microservices are used to provide external services, and each microservice supports mutual connection and access to realize the functional services of the overall application.
  • Each microservice is a small functional block focusing on a single responsibility and function, and several microservices are combined to form a complex large-scale application.
  • Each microservice has its own process, which can communicate with other microservices through lightweight communication mechanisms, such as application programming interfaces (APIs) based on hypertext transfer protocol (HTTP).
  • Microservice architecture is an architecture that splits applications (especially large applications, such as e-commerce systems, social platforms, or financial systems) into several microservices.
  • microservice architecture can specifically include classic architectures such as Spring Cloud and Dubbo, which are less invasive.
  • the microservice architecture achieves business decoupling by decomposing functions into discrete services.
  • Each business team maintains the microservices corresponding to their respective businesses, which reduces maintenance complexity and improves maintenance efficiency.
  • other microservices can continue to work, which improves system stability.
  • microservice A calls microservice B
  • microservice B calls microservice C and microservice D.
  • a calling chain is from microservice A to microservice B, and from microservice B to microservice C and microservice D.
  • the microservice of the product system may also call the microservice of the evaluation system to display the user's historical evaluations of the product when displaying the product details.
  • Service governance is the process of handling the relationship between service calls in a distributed service framework. Service governance can be divided into the following levels: service registration and discovery, load balancing, fault handling and recovery (current limiting, circuit breaking, timeout, retry), grayscale release, and service tracking.
  • Service mesh is an infrastructure layer used to handle communication between services (including microservices).
  • One way to implement service mesh is to assign a separate agent to each microservice, which is also called a sidecar.
  • the agent is responsible for handling communication, monitoring, and some security-related work between services, thereby providing functions such as service discovery, load balancing, encryption, identity authentication, authorization, and circuit breaking.
  • the microservice itself is no longer responsible for the communication-specific handling of business requests, such as load balancing of business requests, and only needs to complete the business processing.
  • the sidecar can intercept all network traffic of the corresponding microservice, allowing traffic control functions to be provided according to the configuration set by the user.
  • the sidecar is deployed with each microservice started in the cluster, or runs with microservices running on virtual machines or containers.
  • canary release generally refers to a method of smoothly transitioning and upgrading between at least two versions of a microservice.
  • the grayscale release can deploy the old version v1 and the new version v2 of the microservice in the environment at the same time, and the access request may be routed to the old version v1 or the new version v2.
  • the new version can also be called a grayscale version or a test version.
  • the traffic proportion of the v1 version and the v2 version can be adjusted by defining a grayscale release strategy.
  • the grayscale release can customize the user and traffic proportion of the new version, gradually complete the full launch of the new version of the application, and maximize the control of the business risks brought by the release of the new version and reduce the impact of the failure.
  • the grayscale release process needs to be combined with the grayscale release strategy to determine which traffic is routed to the v2 version, so as to control which users use the new version functions.
  • Traffic ratio: a traffic proportion is configured for each version, for example, 20% of the traffic uses the new version and 80% of the traffic uses the old version.
  • Traffic content: the user identity information (cookie), message header (header), parameters (param), and other content in the traffic are parsed, and only access traffic that meets the rule constraints, such as access traffic from specific users or clients, can access the new version.
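  • A minimal sketch of such a grayscale release strategy is shown below; the cookie name, header name, and the 20% ratio are assumptions chosen for illustration, not values taken from the application.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"net/http"
)

// matchGray combines the two strategy types described above: a content rule
// (specific users identified by a cookie) and a traffic-ratio rule (a stable
// hash of a user identifier keeps roughly grayPercent% of users on the new version).
func matchGray(r *http.Request, grayPercent uint32) bool {
	// Content rule: requests from "beta" users always go to the grayscale version.
	if c, err := r.Cookie("beta"); err == nil && c.Value == "true" {
		return true
	}
	// Traffic-ratio rule: hash a stable key so the same user is routed consistently.
	h := fnv.New32a()
	h.Write([]byte(r.Header.Get("X-User-Id")))
	return h.Sum32()%100 < grayPercent
}

func main() {
	req, _ := http.NewRequest("GET", "http://example.com/detail", nil)
	req.Header.Set("X-User-Id", "user-42")
	fmt.Println("route to grayscale version:", matchGray(req, 20))
}
```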
  • FIG 2 is a schematic diagram of the method of SDK transparent transmission of grayscale tags provided by this application.
  • the gateway adds a grayscale tag to the header of each request in traffic that matches the grayscale policy (the traffic is composed of multiple request messages; a request message is referred to as a request below).
  • the SDK extracts the grayscale tag in the header of the request and caches it.
  • When microservice A wants to send a request to microservice B, the SDK extracts the cached grayscale tag and determines, based on its value, whether to send the request to the address of the grayscale version (v2) or the address of the official version (v1).
  • When the SDK sends the request, it also carries this grayscale tag in the header of the request so that subsequent microservices can determine the destination of the request.
  • In the load-balancer-based approach (FIG. 3), when microservice A wants to send a request to microservice B, microservice A carries the grayscale mark of the incoming request in the request and sends it to the load balancer.
  • the load balancer confirms the address of the next-hop microservice based on the grayscale mark in the request.
  • microservices need to continue to transparently transmit grayscale marks to help subsequent microservices determine the address of the next-hop calling object.
  • However, in some scenarios the grayscale mark is lost. For example, when the received request and the request to be sent are in different threads, the grayscale mark does not support cross-thread transmission, resulting in the loss of the grayscale mark.
  • Once the mark is lost, subsequent traffic isolation fails, affecting the stability of the system and making different versions of microservices incompatible during the grayscale release process.
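  • The failure mode can be illustrated with the following sketch (an assumption-level illustration, not the actual SDK): the grayscale tag only survives if every hop explicitly copies it forward, so a hand-off to another thread or worker that does not receive the tag silently drops it.

```go
package main

import "fmt"

// request models a message whose grayscale tag lives only in its headers,
// as in the prior-art marking approach described above. Names are illustrative.
type request struct {
	headers map[string]string
}

// handleSync builds the outbound call on the same code path, so the tag can be
// copied forward explicitly and isolation is preserved.
func handleSync(in request) request {
	out := request{headers: map[string]string{}}
	out.headers["x-gray-tag"] = in.headers["x-gray-tag"]
	return out
}

// handleAsync hands the work to a background worker through a queue; the queued
// job does not carry the incoming headers, so the worker builds the outbound
// call without the tag -- the cross-thread loss described above.
func handleAsync(queue chan<- string, _ request) {
	queue <- "do-work"
}

func main() {
	in := request{headers: map[string]string{"x-gray-tag": "gray"}}
	fmt.Printf("same-thread call tag: %q\n", handleSync(in).headers["x-gray-tag"])

	queue := make(chan string, 1)
	handleAsync(queue, in)
	<-queue
	out := request{headers: map[string]string{}} // built by the worker, tag already lost
	fmt.Printf("cross-thread call tag: %q\n", out.headers["x-gray-tag"])
}
```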
  • the present application provides the following embodiments to solve the problem that different versions of microservices cannot be smoothly compatible due to the loss of grayscale marks.
  • the present application provides a software routing isolation solution to achieve lane-level traffic isolation. Specifically, when there are multiple versions of microservices in a microservice cluster (each version of the microservice corresponds to at least one deployment unit), and the traffic of multiple versions of microservices needs to be isolated, the present application uses the deployment unit as the granularity to divide the deployment units of microservices belonging to different versions into different lanes in the microservice cluster.
  • the lane that includes the grayscale version of the microservice is called the grayscale lane
  • the lane that does not include the grayscale version of the microservice is called the formal lane.
  • the grayscale lane also includes the deployment units of the microservices that have a call relationship with the grayscale version of the microservice, that is, the grayscale lane includes the deployment units corresponding to all microservices in the call link where the grayscale version is located.
  • the routing modules in the deployment units corresponding to these microservices that need to call other microservices are all configured with a routing address set.
  • the addresses in the routing address set are all the addresses of the deployment units in the lane where the deployment unit is located.
  • the routing module of the deployment unit that calls the target microservice determines the address of the deployment unit of the target microservice in the routing address set, and routes the call request to the corresponding deployment unit according to the address.
  • the call request will only be routed to the deployment unit in the same lane, and will not be routed to the deployment units in other lanes, that is, the deployment unit in one lane will not call the deployment unit in other lanes.
  • The traffic in different lanes is therefore isolated from each other. Since call requests are routed through the routing address set, traffic stays in the same lane and isolation is achieved without passing a grayscale mark through the traffic; traffic isolation therefore cannot fail because of a lost grayscale mark, and the grayscale release process transitions smoothly.
  • Figure 4 is a schematic diagram of the structure of a traffic isolation system provided by the present application.
  • the traffic isolation system includes a lane state management module, a control module and multiple deployment units.
  • the lane state management module is used to manage multiple lanes in the microservice cluster.
  • Managing lanes can refer to creating, modifying and deleting lanes.
  • Creating a lane specifically includes, for example, establishing a mapping relationship between a lane and a deployment unit.
  • Modifying a lane includes, for example, adding or subtracting deployment units in a lane.
  • Deleting a lane includes, for example, deleting the mapping relationship between a lane and a deployment unit to release the deployment unit.
  • a lane is a logically isolated environment.
  • the lane state management module specifically manages multiple lanes according to the target lane state instruction.
  • The target lane state instruction includes, for example, one or more of: the lane that needs to be modified or created, the microservices included in the lane, the version corresponding to each microservice in the lane, and the number of deployment units corresponding to each microservice in the lane.
  • the lane state management module creates or modifies the lane according to the target lane state instruction.
  • the lane created by the lane state management module includes deployment units of multiple microservices, and the microservices in the lane, the version of each microservice, and the number of deployment units of each microservice are the same as those indicated by the target lane state instruction.
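  • The fields of a target lane state instruction could be sketched as the following illustrative data structure; the type names and JSON layout are assumptions, not the actual schema used by the application.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// MicroserviceSpec describes one microservice requested in a lane.
type MicroserviceSpec struct {
	Name     string `json:"name"`
	Version  string `json:"version"`  // e.g. "v1" (official) or "v2" (grayscale)
	Replicas int    `json:"replicas"` // number of deployment units for this microservice
}

// LaneStateInstruction mirrors the fields listed above: which lane to create or
// modify, which microservices it contains, their versions, and replica counts.
type LaneStateInstruction struct {
	Lane          string             `json:"lane"`
	Microservices []MicroserviceSpec `json:"microservices"`
}

func main() {
	instr := LaneStateInstruction{
		Lane: "gray",
		Microservices: []MicroserviceSpec{
			{Name: "user", Version: "v2", Replicas: 1},
			{Name: "cart", Version: "v1", Replicas: 1},
			{Name: "order", Version: "v2", Replicas: 1},
		},
	}
	b, _ := json.MarshalIndent(instr, "", "  ")
	fmt.Println(string(b))
}
```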
  • Each deployment unit is a microservice instance, which is the smallest deployable computing unit created and managed in a microservice cluster.
  • Each deployment unit is the smallest scheduling unit, or it can be called an atomic scheduling unit.
  • a deployment unit belongs to only one microservice and only to the same lane.
  • the multiple lanes of a microservice cluster can include one formal lane and at least one grayscale lane.
  • the versions of microservices in the formal lane are all formal versions.
  • the versions of microservices in the grayscale lane can all be grayscale versions, or some microservices can be official versions and some can be grayscale versions. For example, if microservice C has only one version but is called by a grayscale version of another microservice, the grayscale lane will also include a deployment unit of microservice C.
  • If microservice D has two versions, v1 and v2, any given lane includes only one version of microservice D; v1 and v2 of microservice D never exist in the same lane.
  • each lane includes at least one deployment unit corresponding to all microservices in the call link. That is, the call of all microservices on the entire call link can be completed in the same lane without calling the deployment unit of the microservices in other lanes, ensuring the traffic isolation between different lanes.
  • a deployment unit can be a pod.
  • the lane state management module establishes a mapping relationship between lanes and deployment units, which can be achieved by labeling deployment units.
  • the labels of deployment units in the same lane are the same, and the labels of deployment units in different lanes are different.
  • Labels are key-value pairs attached to Kubernetes objects (such as pods). Labels can be used to organize and select subsets of objects. Labels can be attached to objects when they are created, and can be added and modified at any time. Each object can define a set of key/value labels. Each key must be unique for a given object.
  • the control module reorganizes the routing strategy for each deployment unit in each lane according to the mapping relationship between lanes and deployment units.
  • the strategy includes routing rules and routing address sets. The routing rules, such as round-robin or weighted round-robin, instruct the deployment unit how to select, from the routing address set, the deployment unit to call.
  • the routing address set is a collection of the addresses of the deployment units in the same lane.
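  • A sketch of how the control module could derive the per-lane routing address sets from the lane-to-deployment-unit mapping is shown below; the record fields and addresses are illustrative assumptions.

```go
package main

import "fmt"

// Unit is an illustrative record for one deployment unit.
type Unit struct {
	Lane    string
	Service string
	Address string
}

// buildAddressSets groups deployment-unit addresses by lane and microservice;
// the per-lane map is the routing address set handed to each unit in that lane.
func buildAddressSets(units []Unit) map[string]map[string][]string {
	sets := map[string]map[string][]string{}
	for _, u := range units {
		if sets[u.Lane] == nil {
			sets[u.Lane] = map[string][]string{}
		}
		sets[u.Lane][u.Service] = append(sets[u.Lane][u.Service], u.Address)
	}
	return sets
}

func main() {
	units := []Unit{
		{Lane: "formal", Service: "cart", Address: "10.0.1.11:8080"},
		{Lane: "gray", Service: "cart", Address: "10.0.2.11:8080"},
		{Lane: "gray", Service: "order", Address: "10.0.2.21:8080"},
	}
	// A gray-lane deployment unit only ever receives gray-lane addresses.
	fmt.Println(buildAddressSets(units)["gray"])
}
```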
  • the deployment unit routes traffic according to the routing strategy, that is, it performs the scheduling of microservices.
  • the deployment unit includes an application function module and a routing module.
  • the application function module includes an application program of the microservice, which is used to perform the corresponding function.
  • the routing module intercepts the requests issued by the application function module and routes and forwards them according to the routing strategy. Specifically, when traffic passes through the gateway, the gateway routes the traffic to the corresponding lane according to the grayscale release strategy.
  • the grayscale release strategy can be based on the traffic ratio or the traffic content, which is not limited here. If the request in the traffic causes the deployment unit to call other microservices, the application function module of the deployment unit sends a call request to the target microservice that needs to be called.
  • the routing module of the deployment unit determines the address of a deployment unit of a target microservice in the routing address set according to the routing rules, and routes the call request to the corresponding deployment unit according to the address.
  • For example, if the routing address set includes the addresses of 10 deployment units of the target microservice, the routing module of the deployment unit selects one of the 10 addresses by round-robin or weighted round-robin and sends the call request to the corresponding deployment unit, thereby achieving both traffic isolation and load balancing.
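  • The weighted-polling rule mentioned above can be sketched, for example, with the smooth weighted round-robin idea below; the addresses and weights are illustrative assumptions.

```go
package main

import "fmt"

// weighted pairs an in-lane address with its routing weight.
type weighted struct {
	addr   string
	weight int
}

// pick implements smooth weighted round-robin: add each weight to its running
// counter, choose the largest counter, then subtract the total weight from it.
// Over a full cycle each address is chosen in proportion to its weight.
func pick(list []weighted, current []int) string {
	total, best := 0, 0
	for i, w := range list {
		current[i] += w.weight
		total += w.weight
		if current[i] > current[best] {
			best = i
		}
	}
	current[best] -= total
	return list[best].addr
}

func main() {
	list := []weighted{{addr: "unit-a:8080", weight: 4}, {addr: "unit-b:8080", weight: 1}}
	current := make([]int, len(list))
	for i := 0; i < 5; i++ {
		fmt.Println(pick(list, current)) // unit-a is chosen 4 times, unit-b once
	}
}
```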
  • Figure 5 is a flow chart of a traffic isolation method provided by the present application.
  • the execution subject of this embodiment is a deployment unit corresponding to any microservice in the microservice cluster that participates in calling other microservices.
  • a first deployment unit sends a call request to a target microservice.
  • the first deployment unit belongs to a deployment unit in a lane in a microservice cluster.
  • the microservice cluster includes multiple lanes.
  • Each lane includes multiple deployment units.
  • Each deployment unit is a microservice instance of a microservice.
  • the routing module of the first deployment unit routes the call request to the second deployment unit corresponding to the target microservice according to the routing address set, where the addresses in the routing address set are all addresses of other deployment units in the lane to which the first deployment unit belongs, and the addresses in the routing address set include the address of the second deployment unit.
  • the routing module of the deployment unit does not need to add a tag for traffic isolation to the request message, because it can isolate the traffic of different lanes through the routing address set. Since traffic does not need to be isolated by passing tags, there is no isolation confusion or failure caused by tag loss, so the system remains compatible throughout the upgrade process, achieving a smooth and stable upgrade.
  • the microservice cluster is a k8s cluster
  • the deployment unit is a pod
  • the control module is the service mesh control plane istiod
  • the routing module of the deployment unit is envoy.
  • Figure 6 is a structural diagram of another traffic isolation system provided by the present application
  • Figure 7 is a flow diagram of another traffic isolation method provided by the present application.
  • the lane state management module and istiod belong to the lane-level grayscale routing technology management plane, which is used to manage the lane and control the traffic.
  • the pod and the envoy in the pod belong to the working plane, which is used to process the traffic.
  • the control plane of k8s includes the application programming interface (API) server and etcd components.
  • the API server is a component of the k8s control plane.
  • the API server verifies and configures the data of API objects, which include pods, microservices, etc. It provides HTTP REST interfaces for adding, deleting, modifying, querying, and watching the various k8s resource objects (pods, services, etc.).
  • It is the data bus and data center of the entire system, and provides a front end for the shared state of the microservice cluster. All other components interact through this front end.
  • etcd is a key-value database that takes into account consistency and high availability. It can be used as a background database for storing microservice cluster data. The mapping relationship between deployment units and lanes in a microservice cluster is saved in etcd.
  • the lane status management module is used to configure lanes for deployment units of microservices according to the lane status expected by users. It calls the interface of the k8s API server to set the lanes of the deployment units, and the API server calls etcd to save the mapping relationship between lanes and deployment units in etcd.
  • Istiod provides a unified and more efficient way to protect, connect and monitor service traffic.
  • Istiod can generate the routing policy of each envoy according to the lane to which the pod corresponding to that envoy belongs, and push the routing policy to each envoy when the lane status changes.
  • Envoy is an edge and service proxy, the sidecar of the Istio service mesh, and is dynamically managed by Istiod. Envoy receives the routing policy pushed by Istiod, performs lane-level traffic routing according to the routing policy, and achieves lane-level traffic isolation.
  • S701 The user sends a target lane status instruction to the k8s API server.
  • the target lane status instruction includes one or more of: the lane to be modified or created, the microservices included in the lane, the version corresponding to each microservice in the lane, and the number of deployment units corresponding to each microservice in the lane.
  • the target lane state instruction can be carried in the form of a custom resource definition (CRD), which is a resource extension method of kubernetes.
  • When a new CRD is created, the k8s API server generates a representational state transfer (RESTful) resource path based on the CRD.
  • An SRE (site reliability engineer) can call this RESTful path of the API server to submit the target lane state instruction to the API server.
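  • As an illustrative sketch only, a target lane state instruction submitted as a custom resource would be POSTed to the RESTful path that the API server generates for the CRD; the group, version, resource name, and body below are assumptions, not the CRD actually registered by the application.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Kubernetes serves custom resources under
	// /apis/<group>/<version>/namespaces/<namespace>/<plural>.
	// Everything concrete below (group, namespace, fields) is hypothetical.
	body := []byte(`{
	  "apiVersion": "lanes.example.com/v1",
	  "kind": "Lane",
	  "metadata": {"name": "gray", "namespace": "shop"},
	  "spec": {"microservices": [
	    {"name": "user", "version": "v2", "replicas": 1},
	    {"name": "order", "version": "v2", "replicas": 1}
	  ]}
	}`)
	url := "https://api-server.example:6443/apis/lanes.example.com/v1/namespaces/shop/lanes"
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	fmt.Println("would submit target lane state instruction to:", req.URL)
	// A real submission also needs cluster TLS settings and a bearer token or
	// client certificate, and would be sent with an *http.Client; omitted here.
}
```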
  • the API server stores the information in the target lane status instruction in etcd of K8S.
  • the API server sends a first notification message carrying information in the target lane status instruction to the lane status management module.
  • the first notification message is used to indicate the start of the grayscale release, and the first notification message also includes information in the target lane state instruction.
  • the API server has a list-watch mechanism. When the CRD content changes, the API server can notify the lane status management module that monitors this CRD.
  • the lane status management module establishes a mapping relationship between the lane and the pod according to the information in the target lane status instruction.
  • the lane status management module adds lane labels to the pods in the microservice cluster based on the information in the target lane status instruction obtained, that is, establishes a mapping relationship between lanes and deployment units. For example, add a label of swimlane-type:gray to the pod.
  • a label is a key-value pair attached to a pod.
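  • The label-based mapping can be sketched as follows; the pod names below and the merge patch shown in the comment are illustrative assumptions (in Kubernetes, a label can be added or changed by patching the pod's metadata).

```go
package main

import "fmt"

// Sketch of the label write performed by the lane status management module.
// A Kubernetes label could be added with a merge patch such as:
//
//   PATCH /api/v1/namespaces/shop/pods/user-v2-abc123
//   Content-Type: application/merge-patch+json
//   {"metadata":{"labels":{"swimlane-type":"gray"}}}
//
// Below, the same mapping is kept as plain data to show how label equality
// identifies all deployment units of one lane.
func main() {
	podLabels := map[string]map[string]string{
		"user-v2-abc123":  {"app": "user", "swimlane-type": "gray"},
		"user-v1-def456":  {"app": "user", "swimlane-type": "formal"},
		"order-v2-ghi789": {"app": "order", "swimlane-type": "gray"},
	}
	for pod, labels := range podLabels {
		if labels["swimlane-type"] == "gray" {
			fmt.Println(pod, "belongs to the gray lane")
		}
	}
}
```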
  • the lane status management module submits the mapping relationship between the lane and the pod to the API server.
  • S706 The API server stores the mapping relationship between lanes and pods in etcd.
  • S707 The API server sends a second notification message to Istiod to notify the lane change.
  • the second notification is used to notify Istiod of the lane status change.
  • the second notification message carries the mapping relationship between the lane and the deployment unit.
  • the API server sends the second notification message to Istiod through the list-watch mechanism.
  • the routing strategy includes routing rules and routing address sets.
  • the addresses in the routing address set are all addresses of the deployment units in the lane to which the deployment unit belongs.
  • the routing strategy can be found in the above description, which will not be repeated here.
  • S710 Envoy routes traffic in the lane according to the routing policy.
  • Envoy routes traffic in the lane according to the routing strategy. For details, refer to the description above of how the routing module of the deployment unit routes call requests, which will not be repeated here.
  • the lane state management module of the management plane updates/creates the mapping relationship between the lane and the deployment unit according to the target lane state instruction issued by the user
  • the control module updates the routing policy of the deployment unit according to the mapping relationship between the lane and the deployment unit
  • the addresses in the routing policy are all addresses of the deployment units in the lane to which the deployment unit belongs. Therefore, when the deployment unit forwards a call request according to the routing policy, it can only forward the call request to other deployment units in the same lane, thereby realizing lane-level traffic isolation without requiring the traffic to carry transparent marks, and avoiding the compatibility problems during upgrades that are caused by losing grayscale marks.
  • the microservice cluster includes a formal lane and a grayscale lane.
  • Both the formal lane and the grayscale lane include deployment units for the user, shopping cart, and order microservices.
  • the call link is user → shopping cart → order.
  • the version of deployment unit 1 corresponding to the user in the formal lane is v1
  • the version of deployment unit 2 corresponding to the shopping cart is v1
  • the version of deployment unit 3 corresponding to the order is v1.
  • the version of deployment unit 4 corresponding to the user in the grayscale lane is v2
  • the version of deployment unit 5 corresponding to the shopping cart is v1
  • the version of deployment unit 6 corresponding to the order is v2.
  • the address in the routing address set configured by deployment unit 1 is the address of deployment unit 2
  • the address in the routing address set configured by deployment unit 2 is the address of deployment unit 3.
  • the address in the routing address set configured by deployment unit 4 is the address of deployment unit 5
  • the address in the routing address set configured by deployment unit 5 is the address of deployment unit 6.
  • When the gateway routes the access request to deployment unit 1 in the formal lane, if deployment unit 1 needs to call the shopping cart microservice, deployment unit 1 sends a call request to the shopping cart microservice and the routing module of deployment unit 1 intercepts the call request. Based on the shopping cart microservice that the call request needs to call, the routing module obtains the address of the shopping cart microservice from the routing address set of the deployment unit, which is the address of deployment unit 2.
  • Deployment unit 2 calls deployment unit 3 in the same way. The calling process in the grayscale lane is similar. Therefore, different lanes are isolated from each other, and cross-lane calls will not be made. The isolation of traffic between different lanes is achieved without the need for traffic to carry identifiers.
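  • The scenario above can be traced end to end with the following sketch (unit names condensed, addresses omitted): each deployment unit's routing address set contains only the next unit in its own lane, so the user → shopping cart → order chain never crosses lanes.

```go
package main

import "fmt"

// next encodes the routing address sets listed above: unit 1 -> unit 2 -> unit 3
// in the formal lane, and unit 4 -> unit 5 -> unit 6 in the grayscale lane.
func main() {
	next := map[string]string{
		"unit1(user v1, formal)": "unit2(cart v1, formal)",
		"unit2(cart v1, formal)": "unit3(order v1, formal)",
		"unit4(user v2, gray)":   "unit5(cart v1, gray)",
		"unit5(cart v1, gray)":   "unit6(order v2, gray)",
	}
	for _, entry := range []string{"unit1(user v1, formal)", "unit4(user v2, gray)"} {
		chain := entry
		for hop := next[entry]; hop != ""; hop = next[hop] {
			chain += " -> " + hop
		}
		fmt.Println(chain)
	}
	// Output:
	// unit1(user v1, formal) -> unit2(cart v1, formal) -> unit3(order v1, formal)
	// unit4(user v2, gray) -> unit5(cart v1, gray) -> unit6(order v2, gray)
}
```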
  • FIG. 9 is a schematic diagram of the structure of a traffic isolation device provided by the present application.
  • the traffic isolation device 900 is a device in a microservice cluster.
  • the microservice cluster includes multiple lanes.
  • Each lane includes multiple traffic isolation devices 900.
  • Each traffic isolation device 900 is a microservice instance of a microservice. The traffic isolation device 900 is configured with a routing address set, and the addresses in the routing address set are all addresses of other traffic isolation devices 900 in the lane to which the traffic isolation device 900 belongs.
  • the traffic isolation device 900 is used to implement the operation of the above deployment unit.
  • the traffic isolation device 900 includes an application function module 901 and a routing module 902.
  • the application function module 901 is used to send a call request to a target microservice.
  • the routing module 902 is used to route the call request to a deployment unit corresponding to the target microservice according to a routing address set.
  • each lane includes at least one traffic isolation device 900 corresponding to all microservices on the call link.
  • the multiple lanes include a grayscale lane, the grayscale lane includes at least one grayscale version of a microservice, and the grayscale lane is used to implement the grayscale release of at least one grayscale version of a microservice.
  • the routing module 902 is used to obtain a routing strategy from the control module, where the routing strategy includes a routing address set and a routing rule.
  • the routing module 902 is specifically used to determine the target address of the deployment unit corresponding to the target microservice in the routing address set according to the routing rule.
  • the routing module 902 is specifically used to route the call request to the deployment unit corresponding to the target microservice according to the target address.
  • FIG. 10 is a schematic diagram of the structure of another traffic isolation device provided by the present application.
  • the traffic isolation device 1000 includes a processor 1001 and a memory 1002, the processor 1001 is coupled to the memory 1002, and the processor 1001 is configured to execute the traffic isolation method in any of the above embodiments based on the instructions stored in the memory 1002.
  • the present application also provides a computer-readable storage medium having a computer program stored thereon, which, when executed by a computer, implements the traffic isolation method process of any of the above method embodiments.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed.
  • Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • Such a software product is stored in a storage medium and includes a number of instructions that enable a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in each embodiment of the present application.
  • The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.

Abstract

Disclosed are a traffic isolation method, apparatus, and system, and a computer readable storage medium, for achieving traffic isolation between different versions of a microservice. The method is applied to a microservice cluster, the microservice cluster comprising a plurality of swimlanes, each swimlane comprising a plurality of deployment units, and each deployment unit being a microservice instance of a microservice; each deployment unit is configured with a routing address set, and addresses in the routing address set are all addresses of other deployment units in a swimlane to which the deployment unit belongs. The method comprises: a deployment unit sending a call request to a target microservice; and a routing module of a deployment unit using the routing address set to route the call request to a deployment unit corresponding to the target microservice.

Description

Traffic isolation method, device, system and computer-readable storage medium

This application claims priority to the Chinese patent application filed with the China Patent Office on October 10, 2022, with application number 202211234556.8 and invention name "Traffic isolation method, device, system and computer-readable storage medium", the entire contents of which are incorporated by reference in this application.

Technical Field

The present application relates to the field of Internet technology, and in particular to a traffic isolation method, device, system and computer-readable storage medium.

Background

Grayscale release is to deploy the old and new versions of the application in the environment at the same time. Business requests may be routed to the old version (official version) or the new version (grayscale version). The grayscale release process needs to be combined with the grayscale release strategy to determine which traffic is routed to the new version, so as to control which users use the new version functions. During the grayscale release process, since there are multiple versions of software in the environment, in order to avoid business failures due to version compatibility issues, it is necessary to isolate the traffic versions, that is, the traffic flowing into the old version will keep calling the old version when calling between microservices; the traffic flowing into the new version will keep calling the new version when calling between microservices.

The current traffic isolation solution mainly implements the isolation between the grayscale version of traffic and the official version of traffic by transmitting grayscale marks through traffic. Specifically, the gateway compares the content of the traffic (composed of request messages) with the grayscale release policy, adds a grayscale mark to the header of the request message that complies with the grayscale release policy, and routes the request message with the grayscale mark to the first service of the grayscale version, otherwise the traffic is routed to the first service of the official version. If this request causes a call to the next service, the service will put the grayscale mark in the header of the call request message when sending a call request to the next service. Traffic routing is implemented according to the grayscale mark of the traffic to achieve traffic isolation.

However, in some scenarios, such as when the received request and the call request to be sent are not in the same thread, the grayscale mark is lost because the grayscale mark does not support cross-thread transmission, causing the isolation of different versions of traffic to fail.

Summary of the Invention

The present application provides a traffic isolation method, device, system and computer-readable storage medium to achieve traffic isolation between microservices of different versions.
The first aspect provides a traffic isolation method. The method is applied to a microservice cluster, the microservice cluster includes multiple lanes, and each lane includes deployment units corresponding to multiple microservices. Among them, each deployment unit is a microservice instance of a microservice. The deployment unit is configured with a routing address set, and the addresses in the routing address set are all addresses of other deployment units in the lane to which the deployment unit belongs. The routing address set indicates the deployment units that can be selected when calling other microservices. The method includes: the deployment unit sends a call request to the target microservice to call the target microservice, the routing module of the deployment unit intercepts the call request, and routes the call request to the deployment unit corresponding to the target microservice according to the routing address set. Thus, by configuring each deployment unit in the grayscale lane with a routing address set that includes only the addresses of deployment units in the grayscale lane, it is ensured that when the deployment unit of a certain service calls the deployment unit of another service, the call request can be routed to a deployment unit in the same grayscale lane without passing the grayscale label through the traffic, so as to ensure the traffic isolation in the grayscale lane and make the upgrade smoother.

In a possible implementation, each lane includes at least one deployment unit corresponding to all microservices on the call link. Therefore, the call of microservices of the entire call link can be implemented in the same lane without calling deployment units in other lanes, and traffic isolation between different lanes can be achieved.

In a possible implementation, the multiple lanes include a grayscale lane, which includes at least one grayscale version of a microservice, and the grayscale lane is used to implement the grayscale release of at least one grayscale version of a microservice. Based on the grayscale lane, the traffic in the grayscale lane is completely isolated from the traffic in other lanes, which can avoid business failures caused by version compatibility issues and smoothly implement microservice upgrades.

In a possible implementation, the method further includes: the routing module of the deployment unit obtains a routing strategy from the control module, the routing strategy includes a routing address set and a routing rule. The routing module of the deployment unit routes the call request to the deployment unit corresponding to the target microservice according to the routing address set, including: the routing module of the deployment unit determines the target address of the deployment unit corresponding to the target microservice in the routing address set according to the routing rule; the routing module of the deployment unit routes the call request to the deployment unit corresponding to the target microservice according to the target address.

The second aspect provides a traffic isolation method. The method is implemented by a lane state management module and a control module of the management plane. The method includes: the lane state management module manages multiple lanes in the microservice cluster according to the target lane state instruction, each lane includes multiple deployment units, and each deployment unit is a microservice instance of a microservice; the control module sends a routing policy to the deployment unit in each lane, and the routing policy includes a routing address set. The addresses in the routing address set are all addresses of the deployment units in the lane to which the deployment unit belongs, so that the deployment unit calls other deployment units in the lane to which the deployment unit belongs according to the routing address set.

In a possible implementation, each lane includes at least one deployment unit corresponding to all microservices on the call link.

In a possible implementation, the multiple lanes include a grayscale lane, the grayscale lane includes at least one grayscale version of a microservice, and the grayscale lane is used to implement the grayscale release of at least one grayscale version of a microservice.

The third aspect provides a traffic isolation device. The traffic isolation device is a device in a microservice cluster, the microservice cluster includes multiple lanes, each lane includes multiple traffic isolation devices, each traffic isolation device is a microservice instance of a microservice, the traffic isolation device is configured with a routing address set, the addresses in the routing address set are all addresses of other traffic isolation devices in the lane to which the traffic isolation device belongs, and the device includes a calling module and a routing module. Among them, the calling module is used to send a call request to the target microservice. The routing module is used to route the call request to the deployment unit corresponding to the target microservice according to the routing address set.

In a possible implementation, each lane includes at least one traffic isolation device corresponding to all microservices on the call link.

In a possible implementation, the multiple lanes include a grayscale lane, the grayscale lane includes at least one grayscale version of a microservice, and the grayscale lane is used to implement the grayscale release of at least one grayscale version of a microservice.

In a possible implementation, the routing module is used to obtain a routing strategy from the control module, the routing strategy including a routing address set and a routing rule. The routing module is specifically used to determine a target address of a deployment unit corresponding to a target microservice in the routing address set according to the routing rule. The routing module is specifically used to route a call request to a deployment unit corresponding to a target microservice according to the target address.

The fourth aspect provides a traffic isolation system. The system includes a lane state management module, a control module and multiple deployment units. Among them, the lane state management module is used to manage multiple lanes in the microservice cluster according to the target lane state instruction, each lane includes multiple deployment units, and each deployment unit is a microservice instance of a microservice. The control module is used to issue a routing policy to the deployment unit in each lane, and the routing policy includes a routing address set. The addresses in the routing address set are all addresses of the deployment units in the lane to which the deployment unit belongs. The deployment unit is used to call other deployment units in the lane to which the deployment unit belongs according to the routing address set.

In a possible implementation, each lane includes at least one deployment unit corresponding to all microservices on the call link.

In a possible implementation, the multiple lanes include a grayscale lane, the grayscale lane includes at least one grayscale version of a microservice, and the grayscale lane is used to implement the grayscale release of at least one grayscale version of a microservice.

The fifth aspect provides a traffic isolation device, which includes a processor and a memory, the processor is coupled to the memory, and the processor is configured to execute, based on instructions stored in the memory, the traffic isolation method in the first aspect or any possible implementation of the first aspect, or the traffic isolation method in the second aspect or any possible implementation of the second aspect.

The sixth aspect provides a computer-readable storage medium, comprising instructions, which, when the computer-readable storage medium is run on a computer, enable the computer to perform the operations of the traffic isolation method in the first aspect or any possible implementation of the first aspect, or the traffic isolation method in the second aspect or any possible implementation of the second aspect.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
图1a为本申请提供的在灰度发布前的流量分布示意图;FIG1a is a schematic diagram of traffic distribution before grayscale release provided by the present application;
图1b为本申请提供的在灰度发布过程中的流量分布示意图;FIG1b is a schematic diagram of traffic distribution during grayscale release provided by the present application;
图1c为本申请提供的在灰度发布完成后的流量分布示意图;FIG1c is a schematic diagram of traffic distribution after the grayscale release is completed provided by the present application;
图2为本申请提供的基于SDK透传灰度标记的方式的示意图;FIG2 is a schematic diagram of a method of grayscale marking based on SDK transparent transmission provided by the present application;
图3为本申请提供的基于负载均衡器透传灰度标记的方式的示意图;FIG3 is a schematic diagram of a grayscale marking method based on a load balancer transparent transmission provided by the present application;
图4为本申请提供的一种流量隔离系统的结构示意图;FIG4 is a schematic diagram of the structure of a flow isolation system provided by the present application;
图5为本申请提供的一种流量隔离方法的流程示意图;FIG5 is a flow chart of a flow isolation method provided by the present application;
图6为本申请提供的另一种流量隔离系统的结构示意图;FIG6 is a schematic diagram of the structure of another flow isolation system provided by the present application;
图7为本申请提供的另一种流量隔离方法的流程示意图;FIG7 is a flow chart of another flow isolation method provided by the present application;
图8为本申请提供的一种流量隔离场景的示意图;FIG8 is a schematic diagram of a traffic isolation scenario provided by the present application;
图9为本申请提供的一种流量隔离装置的结构示意图;FIG9 is a schematic structural diagram of a flow isolation device provided by the present application;
图10为本申请提供的另一种流量隔离装置的结构示意图。 FIG. 10 is a schematic diagram of the structure of another flow isolation device provided in the present application.
具体实施方式DETAILED DESCRIPTION OF EMBODIMENTS
本申请提供了一种流量隔离方法、装置、系统及计算机可读存储介质,以在不通过流量传递灰度标签的情况下,实现不同版本的微服务间的流量隔离。The present application provides a traffic isolation method, device, system and computer-readable storage medium to achieve traffic isolation between microservices of different versions without transmitting grayscale labels through traffic.
在本申请中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。In this application, the words "exemplary" or "for example" are used to indicate examples, illustrations or descriptions. Any embodiment or design described as "exemplary" or "for example" in the embodiments of this application should not be interpreted as being more preferred or more advantageous than other embodiments or designs. Specifically, the use of words such as "exemplary" or "for example" is intended to present related concepts in a specific way.
在本申请的实施例中,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本申请的描述中,除非另有说明,“多个”的含义是两个或两个以上。In the embodiments of the present application, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the features. In the description of the present application, unless otherwise specified, "plurality" means two or more.
本申请中术语“至少一个”的含义是指一个或多个,本申请中术语“多个”的含义是指两个或两个以上,例如,多个第二报文是指两个或两个以上的第二报文。本文中术语“系统”和“网络”经常可互换使用。The term "at least one" in this application means one or more, and the term "multiple" in this application means two or more, for example, multiple second messages means two or more second messages. The terms "system" and "network" are often used interchangeably herein.
应理解,在本文中对各种所述示例的描述中所使用的术语只是为了描述特定示例,而并非旨在进行限制。如在对各种所述示例的描述和所附权利要求书中所使用的那样,单数形式“一个(“a”,“an”)”和“该”旨在也包括复数形式,除非上下文另外明确地指示。It should be understood that the terms used in the description of the various examples herein are only for describing specific examples and are not intended to be limiting. As used in the description of the various examples and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
还应理解,本文中所使用的术语“和/或”是指并且涵盖相关联的所列出的项目中的一个或多个项目的任何和全部可能的组合。术语“和/或”,是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本申请中的字符“/”,一般表示前后关联对象是一种“或”的关系。It should also be understood that the term "and/or" used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. The term "and/or" is a description of the association relationship of associated objects, indicating that three relationships may exist. For example, A and/or B can represent: A exists alone, A and B exist at the same time, and B exists alone. In addition, the character "/" in this application generally indicates that the associated objects before and after are in an "or" relationship.
还应理解,在本申请的各个实施例中,各个过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。It should also be understood that in the various embodiments of the present application, the size of the serial number of each process does not mean the order of execution. The execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
应理解,根据A确定B并不意味着仅仅根据A确定B,还可以根据A和/或其它信息确定B。It should be understood that determining B based on A does not mean determining B only based on A. B can also be determined based on A and/or other information.
还应理解,术语“包括”(也称“includes”、“including”、“comprises”和/或“comprising”)当在本说明书中使用时指定存在所陈述的特征、整数、步骤、操作、元素、和/或部件,但是并不排除存在或添加一个或多个其他特征、整数、步骤、操作、元素、部件、和/或其分组。It should also be understood that the term “comprise” (also known as “includes,” “including,” “comprises” and/or “comprising”) when used in this specification specifies the presence of stated features, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
还应理解,术语“如果”可被解释为意指“当...时”(“when”或“upon”)或“响应于确定”或“响应于检测到”。类似地,根据上下文,短语“如果确定...”或“如果检测到[所陈述的条件或事件]”可被解释为意指“在确定...时”或“响应于确定...”或“在检测到[所陈述的条件或事件]时”或“响应于检测到[所陈述的条件或事件]”。It should also be understood that the term "if" may be interpreted to mean "when" or "upon" or "in response to determining" or "in response to detecting." Similarly, the phrase "if it is determined that ..." or "if [a stated condition or event] is detected" may be interpreted to mean "upon determining that ..." or "in response to determining that ..." or "upon detecting [a stated condition or event]" or "in response to detecting [a stated condition or event]," depending on the context.
应理解,说明书通篇中提到的“一个实施例”、“一实施例”、“一种可能的实现方式”意味着与实施例或实现方式有关的特定特征、结构或特性包括在本申请的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”、“一种可能的实现方式”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。It should be understood that the references to "one embodiment", "an embodiment", or "a possible implementation" throughout the specification mean that specific features, structures, or characteristics related to the embodiment or implementation are included in at least one embodiment of the present application. Therefore, the references to "in one embodiment" or "in an embodiment", or "a possible implementation" appearing throughout the specification do not necessarily refer to the same embodiment. In addition, these specific features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
随着计算机应用技术的发展,为更好发挥资源的利用率及实现应用的快速部署,可将应用拆分为多个微服务(microservice)。利用微服务提供对外的服务,各个微服务之间支持相互关联和访问,以实现整体应用的功能服务。每个微服务是一个专注于单一责任与功能的小型功能区块,若干个微服务组合出复杂的大型应用程序。每个微服务拥有自己的进程,其可以通过轻量级通信机制,例如基于超文本传输协议(hypertext transfer protocol,HTTP)的应用程序编程接口(application programming interface,API),实现与其他微服务的通信。With the development of computer application technology, in order to better utilize resources and realize rapid deployment of applications, applications can be split into multiple microservices. Microservices are used to provide external services, and each microservice supports mutual connection and access to realize the functional services of the overall application. Each microservice is a small functional block focusing on a single responsibility and function, and several microservices are combined to form a complex large-scale application. Each microservice has its own process, which can communicate with other microservices through lightweight communication mechanisms, such as application programming interfaces (APIs) based on hypertext transfer protocol (HTTP).
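For illustration, a minimal sketch of such lightweight communication is given below; the host name order-service, the port and the path are assumed values, not part of this description.

```python
# Illustrative sketch only: one microservice calling another over an HTTP API.
# The host name "order-service", the port and the path are assumed values.
import json
import urllib.request

def fetch_orders(user_id: str) -> dict:
    req = urllib.request.Request(
        f"http://order-service:8080/api/orders?user={user_id}",
        headers={"Accept": "application/json"},
    )
    # A short timeout keeps one slow dependency from blocking the caller.
    with urllib.request.urlopen(req, timeout=3) as resp:
        return json.load(resp)
```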
微服务架构(microservice architecture)是一种将应用(尤其是大型应用,如电商系统、社交平台或金融系统等)拆分成若干个微服务的架构。在一些示例中,微服务架构具体可以包括具有较低侵入性的Spring Cloud、Dubbo等经典架构。与传统单体架构将所有功能服务集中在一个容器相比,微服务架构通过将功能分解到各个离散的服务中,实现了业务解耦。每个业务团队分别维护各自业务对应的微服务,降低了维护复杂度,提高了维护效率。而且单个微服务出现故障时,其他微服务仍可继续工作,提高了系统稳定性。Microservice architecture is an architecture that splits applications (especially large applications, such as e-commerce systems, social platforms, or financial systems) into several microservices. In some examples, microservice architecture can specifically include classic architectures such as Spring Cloud and Dubbo, which are less invasive. Compared with the traditional monolithic architecture that concentrates all functional services in one container, the microservice architecture achieves business decoupling by decomposing functions into discrete services. Each business team maintains the microservices corresponding to their respective businesses, which reduces maintenance complexity and improves maintenance efficiency. Moreover, when a single microservice fails, other microservices can continue to work, which improves system stability.
在一些情况下,应用的多个微服务之间还可以存在调用关系,即多个微服务之间可以组成至少一个调用链路。例如,在执行某次请求时,微服务A调用微服务B,微服务B调用微服务C和微服务D,从微服务A至微服务B,以及从微服务B至微服务C和微服务D为一条调用链。例如,在电商系统的应用场景中,商品系统的微服务还可以调用评价系统的微服务,以在显示商品详情时显示用户对该商品的历史评价。In some cases, there may be a calling relationship between multiple microservices of an application, that is, multiple microservices may form at least one call link. For example, when executing a request, microservice A calls microservice B, and microservice B calls microservice C and microservice D; the path from microservice A to microservice B, and from microservice B to microservice C and microservice D, is one call chain. For example, in the application scenario of an e-commerce system, the microservice of the product system may also call the microservice of the review system to display the user's historical reviews of a product when displaying the product details.
所谓服务治理即是在分布式服务框架下处理服务调用之间的关系。服务治理具体可以分为以下层面:服务注册与发现、负载均衡、故障处理与恢复(限流、熔断、超时、重试)、灰度发布以及服务追踪等。Service governance is the process of handling the relationships between service calls in a distributed service framework. Service governance can be divided into the following aspects: service registration and discovery, load balancing, fault handling and recovery (rate limiting, circuit breaking, timeout, retry), grayscale release, and service tracing.
当前,基于微服务架构的应用还可以通过服务网格(service mesh)实现服务治理。服务网格是一个用于处理服务(包括微服务)间通信的基础设施层。服务网格的一种实现方式是为每个微服务分配一个单独的代理,该代理也被称之为边车(sidecar)。其中,代理用于负责处理服务间的通信、监控以及一些与安全相关的工作,从而实现提供诸如服务发现、负载均衡、加密、身份鉴定、授权以及熔断等功能。微服务自身不再负责处理业务请求的具体逻辑,例如基于业务请求进行负载均衡等等,仅需完成业务处理。Currently, applications based on a microservice architecture can also implement service governance through a service mesh. A service mesh is an infrastructure layer used to handle communication between services (including microservices). One way to implement a service mesh is to assign a separate proxy to each microservice; this proxy is also called a sidecar. The proxy is responsible for handling inter-service communication, monitoring and some security-related work, thereby providing functions such as service discovery, load balancing, encryption, identity authentication, authorization and circuit breaking. The microservice itself is no longer responsible for the specific logic of handling business requests, such as load balancing based on business requests, and only needs to complete business processing.
边车可以拦截对应的微服务的所有的网络流量,允许根据用户设置的配置提供流量控制功能。边车与集群中启动的每个微服务一起部署,或者与运行在虚拟机或容器上的微服务一起运行。The sidecar can intercept all network traffic of the corresponding microservice, allowing traffic control functions to be provided according to the configuration set by the user. The sidecar is deployed with each microservice started in the cluster, or runs with microservices running on virtual machines or containers.
灰度发布(canary release),在微服务架构中,一般是指在微服务的至少两个版本之间能够平滑过渡升级的一种方式。In a microservices architecture, canary release generally refers to a method of smoothly transitioning and upgrading between at least two versions of a microservice.
示例性地,如图1a至图1c所示,灰度发布可以将微服务的旧版本v1与新版本v2同时部署在环境中,访问请求可能会被路由到旧版本v1上,也可能会被路由到新版本v2上。在灰度发布过程中,新版本也可以称为灰度版本或测试版本。可以通过定义灰度发布策略,调整v1版本和v2版本的流量占比。灰度发布可以在应用发布新版本时,自定义控制新版本的用户与流量比重,渐进式完成应用新版本的全量上线,最大限度地控制新版本发布带来的业务风险,降低故障带来的影响面。如图1a所示,灰度发布前,100%的流量均在v1版本的服务上。如图1b所示,灰度发布时,部署上v2版本的服务,通过发布策略控制将少量用户的流量引入到新版本的服务上。如图1c所示,灰度发布验证通过后,逐渐将所有用户的流量引入到新版本的服务上来,达到版本升级的目的。Exemplarily, as shown in Figures 1a to 1c, a grayscale release can deploy the old version v1 and the new version v2 of a microservice in the environment at the same time, and an access request may be routed either to the old version v1 or to the new version v2. In the grayscale release process, the new version can also be called a grayscale version or a test version. The traffic proportion of the v1 version and the v2 version can be adjusted by defining a grayscale release strategy. When an application releases a new version, the grayscale release can customize the user and traffic proportion of the new version, gradually complete the full launch of the new version of the application, maximize control of the business risks brought by the release of the new version, and reduce the impact of failures. As shown in Figure 1a, before the grayscale release, 100% of the traffic is on the v1 version of the service. As shown in Figure 1b, during the grayscale release, the v2 version of the service is deployed, and the traffic of a small number of users is introduced to the new version of the service through the release strategy control. As shown in Figure 1c, after the grayscale release verification is passed, the traffic of all users is gradually introduced to the new version of the service to achieve the purpose of version upgrade.
灰度发布流程,需要结合灰度发布策略决定哪些流量路由到v2版本,以此来控制哪些用户使用新版本功能。目前主要有两种策略方式:基于流量比例和基于流量内容。其中,基于流量比例,为不同版本配置流量占比,例如20%的流量使用新版本,80%的流量使用旧版本。基于流量内容,具体为解析流量中的用户身份信息(cookie)、报文头部(header)、参数(param)等内容,只有满足规则约束的访问流量,例如来自特定的用户或客户端等的访问流量才可访问到新版本。The grayscale release process needs to be combined with the grayscale release strategy to determine which traffic is routed to the v2 version, so as to control which users use the new version functions. There are currently two main strategies: based on traffic ratio and based on traffic content. Among them, based on traffic ratio, the traffic ratio is configured for different versions, for example, 20% of the traffic uses the new version and 80% of the traffic uses the old version. Based on traffic content, specifically, the user identity information (cookie), message header (header), parameters (param) and other content in the traffic are parsed. Only access traffic that meets the rule constraints, such as access traffic from specific users or clients, can access the new version.
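For illustration, a minimal sketch of how a gateway could combine the two strategy styles above is given below; the header name x-user-id, the user list and the 20% ratio are assumptions.

```python
# Sketch of a gateway-side grayscale decision combining the two policy styles
# described above. The header names, user list and 20% ratio are assumptions.
import random

GRAY_USERS = {"alice", "bob"}          # content-based rule: specific users
GRAY_TRAFFIC_RATIO = 0.20              # ratio-based rule: 20% of traffic

def choose_version(headers: dict) -> str:
    user = headers.get("x-user-id", "")
    if user in GRAY_USERS:                       # content-based: matched users go to v2
        return "v2"
    if random.random() < GRAY_TRAFFIC_RATIO:     # ratio-based split for everyone else
        return "v2"
    return "v1"                                  # remaining traffic stays on the old version
```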
灰度发布过程中,由于环境上有多个版本的微服务,为避免因版本兼容问题引入业务故障,需对流量做版本隔离,即流入旧版本v1的流量,微服务间调用时保持调用到旧版本v1上;流入新版本v2的流量,微服务间调用时保持调用到新版本v2上。During the grayscale release process, since there are multiple versions of microservices in the environment, in order to avoid business failures due to version compatibility issues, traffic needs to be version isolated. That is, traffic flowing into the old version v1 will keep calling the old version v1 when calling between microservices; traffic flowing into the new version v2 will keep calling the new version v2 when calling between microservices.
目前,可以通过软件路由隔离实现不同版本的流量的隔离。软件路由隔离是通过在流量中透传灰度标记,使得微服务根据请求中的灰度标记确定下一跳路由到哪个版本的微服务。具体的实现方式有两种,一种是通过软件开发套件(software development kit,SDK)透传请求中的灰度标记,另一种是通过负载均衡器(load balancer,LB)透传请求中的灰度标记。Currently, different versions of traffic can be isolated through software routing isolation. Software routing isolation is achieved by transparently transmitting grayscale marks in traffic, so that microservices can determine which version of microservice to route to next hop based on the grayscale marks in the request. There are two specific implementation methods, one is to transparently transmit the grayscale marks in the request through the software development kit (SDK), and the other is to transparently transmit the grayscale marks in the request through the load balancer (LB).
如图2所示,图2为本申请提供的基于SDK透传灰度标记的方式的示意图。具体地,流量经过网关后,网关为满足灰度策略的流量(由多个请求报文组成,以下将请求报文简称为请求),在请求的头部(header)上增加灰度标记。例如图2中,在请求的header增加字段color=red。流量进入首服务微服务A后,SDK提取请求的header中的灰度标记并缓存。微服务A要发请求给微服务B时,SDK提取出缓存的灰度标记,根据灰度标记的值,确定将请求发送给灰度版本(v2)的地址还是正式版本(v1)的地址。SDK在发送请求时,同时在请求的header中携带上此灰度标记,供后续微服务确定请求目的地。As shown in Figure 2, Figure 2 is a schematic diagram of the SDK-based method of transparently transmitting grayscale marks provided by this application. Specifically, after the traffic passes through the gateway, the gateway adds a grayscale mark to the header of each request belonging to traffic that meets the grayscale policy (the traffic is composed of multiple request messages; a request message is referred to simply as a request below). For example, in Figure 2, the field color=red is added to the header of the request. After the traffic enters the first service, microservice A, the SDK extracts the grayscale mark from the request header and caches it. When microservice A wants to send a request to microservice B, the SDK retrieves the cached grayscale mark and, based on its value, determines whether to send the request to the address of the grayscale version (v2) or the address of the official version (v1). When the SDK sends the request, it also carries this grayscale mark in the request header so that subsequent microservices can determine the destination of the request.
如图3所示,图3为本申请提供的基于负载均衡器透传灰度标记的方式的示意图。具体地,流量经过网关后,网关为满足灰度策略的流量,在请求的header上增加灰度标记。例如图2中,在请求的header增加字段color=red。微服务A要发请求给微服务B时,微服务A在请求中携带上入口请求的灰度标记,发送给负载均衡器。负载均衡器根据请求中的灰度标记确认下一跳微服务的地址。同理,后续流量在调用链路上,微服务需要持续地透传灰度标记,帮助后续微服务确定下一跳调用对象的地址。As shown in Figure 3, Figure 3 is a schematic diagram of the load-balancer-based method of transparently transmitting grayscale marks provided by the present application. Specifically, after the traffic passes through the gateway, the gateway adds a grayscale mark to the header of each request in the traffic that meets the grayscale policy; as in Figure 2, the field color=red is added to the header of the request. When microservice A wants to send a request to microservice B, microservice A carries the grayscale mark of the ingress request in its own request and sends it to the load balancer. The load balancer determines the address of the next-hop microservice based on the grayscale mark in the request. Similarly, as subsequent traffic moves along the call link, each microservice needs to keep transparently transmitting the grayscale mark so that downstream microservices can determine the address of the next-hop call target.
然而,不管是上述的哪种流量隔离方式,都会存在灰度标记丢失的情况。例如,接收到的请求与要发送的请求在不同的线程内,灰度标记不支持跨线程传递,导致灰度标记丢失。灰度标记丢失后,则无法实现后续的流量隔离,影响系统的稳定性,使灰度发布过程中不同版本的微服务无法平滑兼容。However, no matter which of the above traffic isolation methods is used, the grayscale mark may be lost. For example, when the received request and the request to be sent are handled in different threads, the grayscale mark cannot be passed across threads and is therefore lost. Once the grayscale mark is lost, subsequent traffic isolation can no longer be achieved, which affects system stability and prevents different versions of microservices from remaining smoothly compatible during the grayscale release process.
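For illustration, a minimal sketch of this failure mode is given below: an SDK that caches the mark in thread-local storage cannot see it from a worker thread, so the outbound request is built without the mark. The header name color and the value red follow the example above; everything else is assumed.

```python
# Sketch of the failure mode described above: a grayscale mark cached in
# thread-local storage is invisible to a worker thread that issues the
# downstream call, so the mark is silently dropped.
import threading

_ctx = threading.local()

def handle_inbound(headers: dict) -> None:
    _ctx.gray_mark = headers.get("color")      # SDK caches the mark per thread

def build_outbound_headers() -> dict:
    mark = getattr(_ctx, "gray_mark", None)    # None when called from another thread
    return {"color": mark} if mark else {}

handle_inbound({"color": "red"})
print(build_outbound_headers())                # {'color': 'red'} on the same thread

t = threading.Thread(target=lambda: print(build_outbound_headers()))
t.start()
t.join()                                       # {} -- the mark did not cross threads
```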
因此,本申请提供如下实施例,以解决灰度标记丢失造成不同版本的微服务无法平滑兼容的问题。Therefore, the present application provides the following embodiments to solve the problem that different versions of microservices cannot be smoothly compatible due to the loss of grayscale marks.
本申请的提供一种软件路由隔离方案,实现泳道级的流量隔离。具体地,在微服务集群中存在微服务有多个版本(每个版本的微服务对应有至少一个部署单元),需要隔离多个版本的微服务的流量时,本申请以部署单元为粒度,在微服务集群中将属于不同版本的微服务的部署单元划分到不同的泳道中。本实施例中,将包括灰度版本的微服务的泳道称为灰度泳道,不包括灰度版本的微服务的泳道称为正式泳道。并且,灰度泳道中还包括与灰度版本的微服务存在调用关系的微服务的部署单元,即灰度泳道中包括灰度版本所在的调用链路中的所有微服务对应的部署单元。The present application provides a software routing isolation solution to achieve lane-level traffic isolation. Specifically, when there are multiple versions of microservices in a microservice cluster (each version of the microservice corresponds to at least one deployment unit), and the traffic of multiple versions of microservices needs to be isolated, the present application uses the deployment unit as the granularity to divide the deployment units of microservices belonging to different versions into different lanes in the microservice cluster. In this embodiment, the lane that includes the grayscale version of the microservice is called the grayscale lane, and the lane that does not include the grayscale version of the microservice is called the formal lane. In addition, the grayscale lane also includes the deployment units of the microservices that have a call relationship with the grayscale version of the microservice, that is, the grayscale lane includes the deployment units corresponding to all microservices in the call link where the grayscale version is located.
进一步地,泳道中有些微服务需要调用其他微服务,这些需要调用其他微服务的微服务对应的部署单元中的路由模块均配置有路由地址集,路由地址集中的地址均为该部署单元所在的泳道中的部署单元的地址。当某一微服务的部署单元需要调用其他微服务时,会向需要调用的目标微服务发送调用请求,调用目标微服务的部署单元的路由模块在路由地址集中确定目标微服务的部署单元的地址,并根据该地址将调用请求路由到对应的部署单元。由于路由地址集中的地址均为同一泳道中的部署单元的地址,因此调用请求仅会路由到同一泳道中的部署单元,而不会路由到其他泳道中的部署单元,即一个泳道内的部署单元不会调用其他的泳道中的部署单元。从而,不同的泳道中的流量是互相隔离的。由于通过路由地址集路由调用请求,能够实现流量保持在同一个泳道内,无需通过流量透传灰度标记即可实现隔离,从而不会出现灰度标记丢失导致流量隔离失效的情况,使灰度发布的过程平滑过渡。Furthermore, some microservices in the lane need to call other microservices. The routing modules in the deployment units corresponding to these microservices that need to call other microservices are all configured with a routing address set. The addresses in the routing address set are all the addresses of the deployment units in the lane where the deployment unit is located. When the deployment unit of a certain microservice needs to call other microservices, it will send a call request to the target microservice that needs to be called. The routing module of the deployment unit that calls the target microservice determines the address of the deployment unit of the target microservice in the routing address set, and routes the call request to the corresponding deployment unit according to the address. Since the addresses in the routing address set are all the addresses of the deployment units in the same lane, the call request will only be routed to the deployment unit in the same lane, and will not be routed to the deployment units in other lanes, that is, the deployment unit in one lane will not call the deployment unit in other lanes. Thus, the traffic in different lanes is isolated from each other. Since the call request is routed through the routing address set, the traffic can be kept in the same lane, and isolation can be achieved without the need to pass the grayscale mark through the traffic, so that the grayscale mark will not be lost, resulting in the failure of traffic isolation, so that the grayscale release process will be smoothly transitioned.
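As a minimal model of this principle, the sketch below shows a routing module that only holds in-lane addresses, so a call can never resolve to another lane; the service names and addresses are invented for the example.

```python
# Minimal model of lane-scoped routing: a deployment unit's routing module only
# knows addresses inside its own lane, so every call stays in that lane.
# All service names and addresses below are illustrative.
class RoutingModule:
    def __init__(self, lane: str, address_set: dict[str, list[str]]):
        self.lane = lane
        self.address_set = address_set          # target microservice -> in-lane addresses

    def route(self, target_service: str) -> str:
        addresses = self.address_set.get(target_service)
        if not addresses:
            raise LookupError(f"{target_service} has no deployment unit in lane {self.lane}")
        return addresses[0]                     # selection policy is refined later

gray_cart = RoutingModule("gray", {"order": ["10.0.2.6:8080"]})
print(gray_cart.route("order"))                 # always an address inside the gray lane
```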
以上为本申请的实现原理,以下对实现上述原理的可能的实现方式进行详细描述。The above is the implementation principle of the present application. The following is a detailed description of possible implementation methods for implementing the above principle.
如图4所示,图4为本申请提供的一种流量隔离系统的结构示意图。本实施例中,流量隔离系统包括泳道状态管理模块、控制模块和多个部署单元。As shown in Figure 4, Figure 4 is a schematic diagram of the structure of a traffic isolation system provided by the present application. In this embodiment, the traffic isolation system includes a lane state management module, a control module and multiple deployment units.
其中,泳道状态管理模块用于管理微服务集群中的多个泳道。管理泳道可以是指创建、修改和删除泳道。创建泳道具体例如为建立泳道与部署单元的映射关系。修改泳道例如为对泳道内的部署单元进行增减处理等。删除泳道例如为删除泳道与部署单元的映射关系,以释放部署单元。本实施例中,一个泳道为一个逻辑隔离环境。Among them, the lane state management module is used to manage multiple lanes in the microservice cluster. Managing lanes can refer to creating, modifying and deleting lanes. Creating a lane specifically includes, for example, establishing a mapping relationship between a lane and a deployment unit. Modifying a lane includes, for example, adding or subtracting deployment units in a lane. Deleting a lane includes, for example, deleting the mapping relationship between a lane and a deployment unit to release the deployment unit. In this embodiment, a lane is a logically isolated environment.
泳道状态管理模块具体是根据目标泳道状态指令管理多个泳道的。目标泳道状态指令中例如包括需要修改/创建的泳道、泳道中包括的微服务、泳道中的每一微服务对应的版本,和泳道内每一微服务对应的部署单元的数量等中的一个或多个。泳道状态管理模块根据目标泳道状态指令创建或修改泳道。泳道状态管理模块创建的泳道中包括多个微服务的部署单元,泳道中的微服务、每个微服务的版本,以及每个微服务的部署单元的数量与目标泳道状态指令所指示的相同。The lane state management module specifically manages multiple lanes according to the target lane state instruction. The target lane state instruction, for example, includes one or more of the lanes that need to be modified/created, the microservices included in the lane, the version corresponding to each microservice in the lane, and the number of deployment units corresponding to each microservice in the lane. The lane state management module creates or modifies the lane according to the target lane state instruction. The lane created by the lane state management module includes deployment units of multiple microservices, and the microservices in the lane, the version of each microservice, and the number of deployment units of each microservice are the same as those indicated by the target lane state instruction.
每个部署单元为一个微服务实例,为在微服务集群中创建和管理的、最小的可部署的计算单元。每个部署单元为最小调度单位,或者可以称为原子调度单位。一个部署单元仅属于一个微服务,以及仅属于同一个泳道。微服务集群的多个泳道中,可以包括一个正式泳道和至少一个灰度泳道。正式泳道中的微服务的版本均为正式版本。灰度泳道的微服务的版本可以均为灰度版本,也可以部分微服务的版本为正式版本,部分微服务的版本为灰度版本。例如,微服务C仅有一个版本,但是会被灰度版本的微服务调用时,灰度泳道中也会包括微服务C的部署单元。Each deployment unit is a microservice instance, which is the smallest deployable computing unit created and managed in a microservice cluster. Each deployment unit is the smallest scheduling unit, or it can be called an atomic scheduling unit. A deployment unit belongs to only one microservice and only to the same lane. The multiple lanes of a microservice cluster can include one formal lane and at least one grayscale lane. The versions of microservices in the formal lane are all formal versions. The versions of microservices in the grayscale lane can all be grayscale versions, or some microservices can be formal versions and some can be grayscale versions. For example, if microservice C has only one version, but when it is called by a grayscale version of a microservice, the grayscale lane will also include the deployment unit of microservice C.
为了保证不同版本流量的隔离,针对同一个微服务,在一个泳道中仅包括一个版本。例如微服务D有两个版本v1和v2,在任一个泳道中只包括其中一个版本的微服务D,而不会出现一个泳道中同时存在v1和v2的微服务D的情况。In order to ensure the isolation of traffic of different versions, only one version of the same microservice is included in a lane. For example, if microservice D has two versions v1 and v2, only one version of microservice D is included in any lane, and there will be no situation where both v1 and v2 of microservice D exist in the same lane.
本实施例中,每一泳道中包括至少一个调用链路的所有微服务对应的部署单元。即在同一个泳道中能够完成整个调用链路上所有微服务的调用,而无需调用其他泳道中的微服务的部署单元,保证不同泳道间的流量隔离。In this embodiment, each lane includes at least one deployment unit corresponding to all microservices in the call link. That is, the call of all microservices on the entire call link can be completed in the same lane without calling the deployment unit of the microservices in other lanes, ensuring the traffic isolation between different lanes.
以基于Kubernetes(k8s)的微服务集群为例,一个部署单元可以为一个pod。泳道状态管理模块建立泳道和部署单元的映射关系,具体可以是通过给部署单元打标实现的,同一个泳道中的部署单元的标签(labels)相同,不同泳道的部署单元的标签互不相同。标签是附加到Kubernetes对象(比如pods)上的键值对。标签可以用于组织和选择对象的子集。标签可以在创建时附加到对象,随后可以随时添加和修改。每个对象都可以定义一组键/值标签。每个键对于给定对象必须是唯一的。Taking a microservice cluster based on Kubernetes (k8s) as an example, a deployment unit can be a pod. The lane state management module establishes a mapping relationship between lanes and deployment units, which can be achieved by labeling deployment units. The labels of deployment units in the same lane are the same, and the labels of deployment units in different lanes are different. Labels are key-value pairs attached to Kubernetes objects (such as pods). Labels can be used to organize and select subsets of objects. Labels can be attached to objects when they are created, and can be added and modified at any time. Each object can define a set of key/value labels. Each key must be unique for a given object.
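For illustration, a minimal sketch of reading such a lane-to-pod mapping back from pod labels is given below; it assumes the kubernetes Python client, a local kubeconfig, the default namespace, and the swimlane-type label key used in the example further below.

```python
# Sketch: reading the lane-to-pod mapping back from pod labels, assuming the
# kubernetes Python client and a swimlane-type label as in the later example.
from kubernetes import client, config

config.load_kube_config()                      # assumes a local kubeconfig
v1 = client.CoreV1Api()

def pods_in_lane(lane: str, namespace: str = "default") -> list[str]:
    pod_list = v1.list_namespaced_pod(namespace, label_selector=f"swimlane-type={lane}")
    return [p.metadata.name for p in pod_list.items]

print(pods_in_lane("gray"))                    # pods that belong to the gray lane
```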
控制模块则根据泳道和部署单元的映射关系,为每一泳道中的每一部署单元重组路由策略。其中,路由策略包括路由规则和路由地址集。路由规则例如为轮询或加权轮询等,用于指示部署单元根据路由规则在路由地址集中选择调用的部署单元。路由地址集则为同一泳道中的部署单元的地址的集合。控制模块生成每一部署单元的路由策略后,向部署单元下发对应的路由策略。The control module reorganizes the routing strategy for each deployment unit in each lane according to the mapping relationship between lanes and deployment units. The routing strategy includes a routing rule and a routing address set. The routing rule, for example round-robin or weighted round-robin, instructs the deployment unit to select, according to the rule, the deployment unit to call from the routing address set. The routing address set is the collection of addresses of the deployment units in the same lane. After the control module generates the routing strategy for each deployment unit, it delivers the corresponding routing strategy to that deployment unit.
部署单元则根据路由策略进行流量的路由,即微服务的调度。部署单元包括应用功能模块和路由模块,应用功能模块包括微服务的应用程序,用于执行相应的功能,路由模块用于拦截应用功能模块发出的请求,并根据路由策略对请求进行路由转发。具体地,流量经过网关时,网关根据灰度发布策略将流量路由到对应的泳道中。灰度发布策略可以是基于流量比例,也可以是基于流量内容,此处不做限制。若流量中的请求引起部署单元对其他微服务的调用时,部署单元的应用功能模块向需要被调用的目标微服务发送调用请求。部署单元的路由模块根据路由规则在路由地址集中确定一个目标微服务的部署单元的地址,并根据该地址将调用请求路由到对应的部署单元。例如,路由地址集中包括目标服务的10个部署单元对应的地址,部署单元的路由模块通过轮询或加权轮询的方式在10个地址中确定一个地址,将调用请求发送到对应的部署单元,从而实现流量的隔离和负载均衡。The deployment unit routes traffic according to the routing strategy, that is, it performs the scheduling of microservice calls. The deployment unit includes an application function module and a routing module. The application function module contains the application program of the microservice and performs the corresponding functions; the routing module intercepts the requests issued by the application function module and forwards them according to the routing strategy. Specifically, when traffic passes through the gateway, the gateway routes the traffic to the corresponding lane according to the grayscale release strategy. The grayscale release strategy can be based on the traffic ratio or on the traffic content, which is not limited here. If a request in the traffic causes the deployment unit to call another microservice, the application function module of the deployment unit sends a call request to the target microservice that needs to be called. The routing module of the deployment unit determines, according to the routing rule, the address of one deployment unit of the target microservice in the routing address set, and routes the call request to the corresponding deployment unit according to that address. For example, if the routing address set includes the addresses of 10 deployment units of the target service, the routing module of the deployment unit selects one of the 10 addresses by round-robin or weighted round-robin and sends the call request to the corresponding deployment unit, thereby achieving traffic isolation and load balancing.
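For illustration, a minimal sketch of how such a routing module could apply the routing rule to its in-lane routing address set is given below; the addresses and weights are assumptions.

```python
# Sketch of applying the routing rule to the in-lane address set.
# The service name, addresses and weights are illustrative assumptions.
import itertools
import random

class RoutingPolicy:
    def __init__(self, rule: str, address_set: dict[str, list[tuple[str, int]]]):
        self.rule = rule                        # "round_robin" or "weighted"
        self.address_set = address_set          # service -> [(address, weight), ...]
        self._rr = {s: itertools.cycle([a for a, _ in lst]) for s, lst in address_set.items()}

    def pick(self, service: str) -> str:
        entries = self.address_set[service]
        if self.rule == "weighted":
            addrs, weights = zip(*entries)
            return random.choices(addrs, weights=weights, k=1)[0]
        return next(self._rr[service])          # plain round-robin

policy = RoutingPolicy("weighted", {"order": [("10.0.2.6:8080", 8), ("10.0.2.7:8080", 2)]})
print(policy.pick("order"))                     # an in-lane address, weighted 8:2
```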
如图5所示,图5为本申请提供的一种流量隔离方法的流程示意图。本实施例的执行主体为微服务集群中的任意一个参与调用其他微服务的微服务对应的部署单元。As shown in Figure 5, Figure 5 is a flow chart of a traffic isolation method provided by the present application. The execution subject of this embodiment is a deployment unit corresponding to any microservice in the microservice cluster that participates in calling other microservices.
S501:第一部署单元向目标微服务发送调用请求,第一部署单元属于微服务集群中的一个泳道中的部署单元,微服务集群包括多个泳道,每一泳道中包括多个部署单元,每一部署单元为一个微服务的微服务实例。S501: A first deployment unit sends a call request to a target microservice. The first deployment unit belongs to a deployment unit in a lane in a microservice cluster. The microservice cluster includes multiple lanes. Each lane includes multiple deployment units. Each deployment unit is a microservice instance of a microservice.
S502:第一部署单元的路由模块根据路由地址集将调用请求路由到目标微服务对应的第二部署单元,路由地址集中的地址均为第一部署单元所属的泳道中的其他部署单元的地址,路由地址集中的地址包括第二部署单元的地址。S502: The routing module of the first deployment unit routes the call request to the second deployment unit corresponding to the target microservice according to the routing address set, where the addresses in the routing address set are all addresses of other deployment units in the lane to which the first deployment unit belongs, and the addresses in the routing address set include the address of the second deployment unit.
具体实现过程可参阅上述的相关描述,故此处不再赘述。The specific implementation process can be found in the above-mentioned related description, so it will not be repeated here.
该过程中,部署单元的路由模块不需要在请求报文中添加用于流量隔离的标记,因为部署单元的路由模块通过路由地址集即可实现不同泳道流量的隔离。流量不需要通过传递标记来实现流量隔离,也就不存在标记丢失造成流量隔离混乱、失效,进而导致升级过程中系统无法平滑兼容的问题,从而实现平滑稳定地升级。In this process, the routing module of the deployment unit does not need to add a traffic isolation mark to the request message, because the routing module can isolate the traffic of different lanes through the routing address set alone. Since traffic isolation does not rely on passing marks, lost marks can no longer cause traffic isolation to become confused or fail and break the smooth compatibility of the system during the upgrade, so the upgrade proceeds smoothly and stably.
示例性地,以微服务集群为k8s集群、部署单元为pod、控制模块为服务网格istiod、部署单元的路由模块为envoy为例进行说明。如图6和图7所示,图6为本申请提供的另一种流量隔离系统的结构示意图;图7为本申请提供的另一种流量隔离方法的流程示意图。其中,泳道状态管理模块和istiod属于泳道级灰度路由技术管理面,用于对泳道进行管理和流量控制。pod和pod中的envoy属于工作面,用于对流量进行处理。As an example, consider the case where the microservice cluster is a k8s cluster, the deployment unit is a pod, the control module is the service mesh component istiod, and the routing module of the deployment unit is envoy. As shown in Figures 6 and 7, Figure 6 is a structural diagram of another traffic isolation system provided by the present application, and Figure 7 is a flow diagram of another traffic isolation method provided by the present application. The lane state management module and istiod belong to the management plane of the lane-level grayscale routing technique and are used to manage lanes and control traffic. The pods and the envoy in each pod belong to the working plane and are used to process traffic.
k8s的控制面包括应用程序接口(application programming interface,API)服务器(server)和etcd组件。API server是k8s控制面的组件,API server验证并配置API对象的数据,这些对象包括pods、微服务等。提供了k8s各类资源对象(pod,service等)的增删改查及监视(watch)等HTTP Rest接口,是整个系统的数据总线和数据中心,并为微服务集群的共享状态提供前端,所有其他组件都通过该前端进行交互。etcd是兼顾一致性与高可用性的键值数据库,可以作为保存微服务集群数据的后台数据库。微服务集群中的部署单元与泳道的映射关系,保存在etcd中。The control plane of k8s includes the application programming interface (API) server and etcd components. The API server is a component of the k8s control plane. The API server verifies and configures the data of API objects, which include pods, microservices, etc. It provides HTTP Rest interfaces such as addition, deletion, modification, query, and monitoring (watch) of various k8s resource objects (pod, service, etc.). It is the data bus and data center of the entire system, and provides a front end for the shared state of the microservice cluster. All other components interact through this front end. etcd is a key-value database that takes into account consistency and high availability. It can be used as a background database for storing microservice cluster data. The mapping relationship between deployment units and lanes in a microservice cluster is saved in etcd.
泳道状态管理模块,用于根据用户期望的泳道状态,为微服务的部署单元配置泳道。其调用k8s API server的接口设置部署单元的泳道,并由API server调用etcd,将泳道与部署单元之间的映射关系保存在etcd内。The lane status management module is used to configure lanes for deployment units of microservices according to the lane status expected by users. It calls the interface of the k8s API server to set the lanes of the deployment units, and the API server calls etcd to save the mapping relationship between lanes and deployment units in etcd.
Istiod,提供了一种统一和更有效的方式来保护、连接和监视服务流量。本实施例中,Istiod可以根据envoy对应的pod所归属的泳道,向envoy推送路由策略,并在泳道状态变化时,再将路由策略推送给各个envoy。Istiod provides a unified and more efficient way to protect, connect and monitor service traffic. In this embodiment, Istiod can push a routing policy to an envoy according to the lane to which the pod corresponding to that envoy belongs, and push updated routing policies to each envoy again when the lane state changes.
Envoy是一个边缘和服务代理,是Istio服务网格的边车(sidecar),由Istiod进行动态管理。Envoy接收到Istiod推送的路由策略,按照路由策略进行泳道级的流量路由,达成流量的泳道级的流量隔离。Envoy is an edge and service proxy and serves as the sidecar of the Istio service mesh, dynamically managed by Istiod. Envoy receives the routing policy pushed by Istiod and performs lane-level traffic routing according to that policy, thereby achieving lane-level traffic isolation.
基于图6的架构,泳道的管理、路由策略的生成和下发流程可参阅图7。Based on the architecture of FIG6 , the process of lane management, routing strategy generation and delivery can be seen in FIG7 .
S701:用户向k8s的API服务器发送目标泳道状态指令。S701: The user sends a target lane status instruction to the k8s API server.
目标泳道状态指令包括:修改/创建的泳道、泳道中包括的微服务、泳道中的每一微服务对应的版本,和泳道内每一微服务对应的部署单元的数量等中的一个或多个。The target lane state instruction includes one or more of: the lane to be modified or created, the microservices included in the lane, the version corresponding to each microservice in the lane, and the number of deployment units corresponding to each microservice in the lane.
目标泳道状态指令,可以以自定义资源(custom resource definition,CRD)的形式承载,CRD是kubernetes的一种资源扩展方式。当创建新的CRD时,k8s API服务器会根据CRD生成一个表现层状态转换(representational state transfer,RESTful)的资源路径。SRE可以调用此API服务器的RESTful路径,将目标泳道状态指令提交给API服务器。The target lane state instruction can be carried in the form of a custom resource definition (CRD); a CRD is a resource extension mechanism of kubernetes. When a new CRD is created, the k8s API server generates a representational state transfer (RESTful) resource path based on the CRD. The SRE can call this RESTful path of the API server to submit the target lane state instruction to the API server.
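For illustration, a minimal sketch of submitting such an instruction through the kubernetes Python client's CustomObjectsApi is given below; the CRD group, version, kind and field names are hypothetical, and only the overall shape (lane, microservices, versions, replica counts) follows the description above.

```python
# Sketch of submitting a target lane state instruction as a custom resource.
# The group, version, kind and field names are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

lane_spec = {
    "apiVersion": "lanes.example.com/v1",      # hypothetical CRD group/version
    "kind": "SwimLane",                        # hypothetical kind
    "metadata": {"name": "gray-lane"},
    "spec": {
        "microservices": [
            {"name": "user", "version": "v2", "replicas": 2},
            {"name": "cart", "version": "v1", "replicas": 2},
            {"name": "order", "version": "v2", "replicas": 2},
        ],
    },
}

# The API server stores the instruction and notifies the watchers of this CRD.
api.create_namespaced_custom_object(
    group="lanes.example.com", version="v1",
    namespace="default", plural="swimlanes", body=lane_spec,
)
```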
S702:API服务器将目标泳道状态指令中的信息存储到K8S的etcd内。S702: The API server stores the information in the target lane status instruction in etcd of K8S.
S703:API服务器向泳道状态管理模块发送携带目标泳道状态指令中的信息的第一通知消息。S703: The API server sends a first notification message carrying information in the target lane status instruction to the lane status management module.
第一通知消息用于指示灰度发布开始,第一通知消息中还包括目标泳道状态指令中的信息。The first notification message is used to indicate the start of the grayscale release, and the first notification message also includes information in the target lane state instruction.
API服务器具有列表监视(list-watch)机制,当CRD内容发生变化时,API服务器可以通知给监听此CRD的泳道状态管理模块。The API server has a list-watch mechanism. When the CRD content changes, the API server can notify the lane status management module that monitors this CRD.
S704:泳道状态管理模块根据目标泳道状态指令中的信息建立泳道与pod的映射关系。S704: The lane status management module establishes a mapping relationship between the lane and the pod according to the information in the target lane status instruction.
泳道状态管理模块根据获取到的目标泳道状态指令中的信息,对微服务集群中的pod打泳道标签,即建立泳道与部署单元的映射关系。例如,为pod新增swimlane-type:gray的标签。标签是附加到pod上的键值对。The lane status management module adds lane labels to the pods in the microservice cluster based on the information in the target lane status instruction obtained, that is, establishes a mapping relationship between lanes and deployment units. For example, add a label of swimlane-type:gray to the pod. A label is a key-value pair attached to a pod.
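For illustration, a minimal sketch of this labeling step is given below; it assumes the kubernetes Python client, a local kubeconfig, the default namespace and an illustrative pod name.

```python
# Sketch of S704: attaching the lane label to a pod via the kubernetes client.
# The pod name and namespace are illustrative.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

def tag_pod_with_lane(pod_name: str, lane: str, namespace: str = "default") -> None:
    # A merge patch that only touches the swimlane-type label.
    v1.patch_namespaced_pod(
        name=pod_name,
        namespace=namespace,
        body={"metadata": {"labels": {"swimlane-type": lane}}},
    )

tag_pod_with_lane("user-v2-6c9f7", "gray")
```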
S705:泳道状态管理模块将泳道与pod的映射关系提交给API服务器。S705: The lane status management module submits the mapping relationship between the lane and the pod to the API server.
S706:API服务器将泳道与pod的映射关系存储到etcd中。S706: The API server stores the mapping relationship between lanes and pods in etcd.
S707:API服务器向Istiod发送通知泳道变化的第二通知消息。S707: The API server sends a second notification message to Istiod to notify the lane change.
第二通知用于向Istiod通知泳道状态变化,第二通知消息携带泳道与部署单元的映射关系。API服务器通过list-watch机制向Istiod发送第二通知消息。The second notification is used to notify Istiod of the lane status change. The second notification message carries the mapping relationship between the lane and the deployment unit. The API server sends the second notification message to Istiod through the list-watch mechanism.
S708:Istiod根据泳道与pod的映射关系获得每一pod的路由策略。S708: Istiod obtains the routing strategy of each pod according to the mapping relationship between the lanes and the pods.
路由策略包括路由规则和路由地址集。路由地址集中的地址均为部署单元所属的泳道中的部署单元的地址。路由策略可参阅上文的相关描述,此处不再赘述。The routing strategy includes a routing rule and a routing address set. The addresses in the routing address set are all addresses of deployment units in the lane to which the deployment unit belongs. Details of the routing strategy can be found in the description above and are not repeated here.
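For illustration, a minimal sketch of S708 is given below: deriving one routing address set per lane from the lane-to-pod mapping; the pod records and addresses are invented.

```python
# Sketch of S708: building one routing address set per lane from the
# lane-to-pod mapping. Pod records and addresses are illustrative.
from collections import defaultdict

pods = [  # (pod name, microservice, lane, address) -- illustrative data
    ("user-v1-a", "user", "formal", "10.0.1.1:8080"),
    ("cart-v1-a", "cart", "formal", "10.0.1.2:8080"),
    ("user-v2-a", "user", "gray",   "10.0.2.1:8080"),
    ("cart-v1-b", "cart", "gray",   "10.0.2.2:8080"),
]

def build_address_sets(pod_records):
    # lane -> microservice -> list of in-lane addresses
    sets = defaultdict(lambda: defaultdict(list))
    for _, service, lane, addr in pod_records:
        sets[lane][service].append(addr)
    return sets

address_sets = build_address_sets(pods)
# The control module would push address_sets["gray"] only to gray-lane
# deployment units, and address_sets["formal"] only to formal-lane units.
print(address_sets["gray"]["cart"])            # ['10.0.2.2:8080']
```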
S709:Istiod向部署单元的Envoy下发对应的路由策略。S709: Istiod sends the corresponding routing policy to the Envoy of the deployment unit.
S710:Envoy根据路由策略路由泳道中的流量。S710: Envoy routes traffic in the lane according to the routing policy.
Envoy根据路由策略路由泳道中的流量,可参阅上文部署单元的路由模块路由调用请求的相关描述,此处不再赘述。Envoy routes traffic in the lane according to the routing strategy. Please refer to the description of the routing module routing call request of the deployment unit above, which will not be repeated here.
本实施例中,管理面的泳道状态管理模块根据用户下发的目标泳道状态指令更新/创建泳道与部署单元之间的映射关系,并由控制模块根据泳道与部署单元之间的映射关系更新部署单元的路由策略,路由策略中的地址均为部署单元所属的泳道中的部署单元的地址。从而,部署单元在根据路由策略转发调用请求时,只能将调用请求转发到同一个泳道中的其他部署单元,从而实现泳道级的流量隔离,无需通过流量透传标记,不会带来因灰度标记丢失而导致系统无法平滑兼容的问题。In this embodiment, the lane state management module on the management plane updates/creates the mapping relationship between lanes and deployment units according to the target lane state instruction issued by the user, and the control module updates the routing policy of each deployment unit according to that mapping relationship; the addresses in the routing policy are all addresses of deployment units in the lane to which the deployment unit belongs. Therefore, when forwarding a call request according to the routing policy, a deployment unit can only forward it to other deployment units in the same lane, which achieves lane-level traffic isolation without transparently transmitting marks in the traffic and avoids the compatibility problems that arise when grayscale marks are lost.
为了使本方案更易于理解,示例性地,结合场景对本方案的技术进行说明。如图8所示,图8中,微服务集群包括正式泳道和灰度泳道,正式泳道和灰度泳道中的微服务包括用户、购物车、订单的部署单元。调用链路为用户→购物车→订单。其中,正式泳道中用户对应的部署单元1的版本为v1,购物车对应的部署单元2的版本为v1,订单对应的部署单元3的版本为v1。灰度泳道中的用户对应的部署单元4的版本为v2,购物车对应的部署单元5的版本为v1,订单对应的部署单元6的版本为v2。In order to make this solution easier to understand, the technology of this solution is explained in combination with the scenario as an example. As shown in Figure 8, in Figure 8, the microservice cluster includes a formal lane and a grayscale lane. The microservices in the formal lane and the grayscale lane include deployment units of users, shopping carts, and orders. The call link is user→shopping cart→order. Among them, the version of deployment unit 1 corresponding to the user in the formal lane is v1, the version of deployment unit 2 corresponding to the shopping cart is v1, and the version of deployment unit 3 corresponding to the order is v1. The version of deployment unit 4 corresponding to the user in the grayscale lane is v2, the version of deployment unit 5 corresponding to the shopping cart is v1, and the version of deployment unit 6 corresponding to the order is v2.
部署单元1配置的路由地址集中的地址为部署单元2的地址,部署单元2配置的路由地址集中的地址为部署单元3的地址。部署单元4配置的路由地址集中的地址为部署单元5的地址,部署单元5配置的路由地址集中的地址为部署单元6的地址。The address in the routing address set configured by deployment unit 1 is the address of deployment unit 2, and the address in the routing address set configured by deployment unit 2 is the address of deployment unit 3. The address in the routing address set configured by deployment unit 4 is the address of deployment unit 5, and the address in the routing address set configured by deployment unit 5 is the address of deployment unit 6.
网关将访问请求路由到正式泳道中的部署单元1时,若部署单元1需要调用微服务购物车,部署单元向微服务购物车发送调用请求,部署单元1的路由模块拦截调用请求,根据调用请求需要调用的微服务购物车,在部署单元的路由地址集中获得微服务购物车的地址,该地址为部署单元2的地址。部署单元2采用相同的方式调用部署单元3。灰度泳道中的调用过程类似。从而,不同泳道间互相隔离,不会跨泳道调用,在无需流量携带标识的情况下,实现不同泳道间的流量的隔离。When the gateway routes the access request to deployment unit 1 in the formal lane, if deployment unit 1 needs to call the microservice shopping cart, the deployment unit sends a call request to the microservice shopping cart, and the routing module of deployment unit 1 intercepts the call request. According to the microservice shopping cart that needs to be called in the call request, the address of the microservice shopping cart is obtained from the routing address set of the deployment unit, which is the address of deployment unit 2. Deployment unit 2 calls deployment unit 3 in the same way. The calling process in the grayscale lane is similar. Therefore, different lanes are isolated from each other, and cross-lane calls will not be made. The isolation of traffic between different lanes is achieved without the need for traffic to carry identifiers.
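For illustration, a minimal end-to-end sketch of the Figure 8 scenario is given below, with each deployment unit holding only in-lane addresses; all addresses are invented for the example.

```python
# Sketch of the Figure 8 scenario: user -> cart -> order, where each deployment
# unit holds only in-lane addresses. All addresses are invented.
FORMAL = {
    "unit1(user v1)": {"cart": "10.0.1.2:8080"},    # routes only to unit 2
    "unit2(cart v1)": {"order": "10.0.1.3:8080"},   # routes only to unit 3
}
GRAY = {
    "unit4(user v2)": {"cart": "10.0.2.5:8080"},    # routes only to unit 5
    "unit5(cart v1)": {"order": "10.0.2.6:8080"},   # routes only to unit 6
}

def next_hop(lane_table: dict, unit: str, target_service: str) -> str:
    # The call can only resolve to an address inside the same lane table.
    return lane_table[unit][target_service]

print(next_hop(FORMAL, "unit1(user v1)", "cart"))   # stays in the formal lane
print(next_hop(GRAY, "unit4(user v2)", "cart"))     # stays in the gray lane
```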
如图9所示,图9为本申请提供的一种流量隔离装置的结构示意图。流量隔离装置900为微服务集群中的装置,微服务集群包括多个泳道,每一泳道中包括多个流量隔离装置900,每一流量隔离装置900为一个微服务的微服务实例,流量隔离装置900配置有路由地址集,路由地址集中的地址均为流量隔离装置900所属的泳道中的其他流量隔离装置900的地址。流量隔离装置900用于实现上述的部署单元的操作。As shown in FIG. 9, FIG. 9 is a schematic structural diagram of a traffic isolation device provided by the present application. The traffic isolation device 900 is a device in a microservice cluster; the microservice cluster includes multiple lanes, each lane includes multiple traffic isolation devices 900, and each traffic isolation device 900 is a microservice instance of one microservice. The traffic isolation device 900 is configured with a routing address set, and the addresses in the routing address set are all addresses of other traffic isolation devices 900 in the lane to which the traffic isolation device 900 belongs. The traffic isolation device 900 is used to implement the operations of the deployment unit described above.
流量隔离装置900包括应用功能模块901和路由模块902。其中,应用功能模块901,用于向目标微服务发送调用请求。路由模块902,用于根据路由地址集将调用请求路由到目标微服务对应的部署单元。The traffic isolation device 900 includes an application function module 901 and a routing module 902. The application function module 901 is used to send a call request to a target microservice. The routing module 902 is used to route the call request to a deployment unit corresponding to the target microservice according to a routing address set.
在一种可能的实现方式中,每一泳道中包括至少一个调用链路上所有的微服务对应的流量隔离装置900。In a possible implementation, each lane includes at least one traffic isolation device 900 corresponding to all microservices on the call link.
在一种可能的实现方式中,多个泳道中包括灰度泳道,灰度泳道中包括至少一个灰度版本的微服务,灰度泳道用于实现至少一个灰度版本的微服务的灰度发布。In a possible implementation, the multiple lanes include a grayscale lane, the grayscale lane includes at least one grayscale version of a microservice, and the grayscale lane is used to implement the grayscale release of at least one grayscale version of a microservice.
在一种可能的实现方式中,路由模块902,用于获取来自控制模块的路由策略,路由策略包括路由地址集和路由规则。路由模块902,具体用于根据路由规则在路由地址集中确定目标微服务对应的部署单元的目标地址。路由模块902,具体用于根据目标地址将调用请求路由到目标微服务对应的部署单元。In a possible implementation, the routing module 902 is used to obtain a routing strategy from the control module, where the routing strategy includes a routing address set and a routing rule. The routing module 902 is specifically used to determine the target address of the deployment unit corresponding to the target microservice in the routing address set according to the routing rule. The routing module 902 is specifically used to route the call request to the deployment unit corresponding to the target microservice according to the target address.
如图10所示,图10为本申请提供的另一种流量隔离装置的结构示意图。流量隔离装置1000包括处理器1001和存储器1002,处理器1001耦接存储器1002,处理器1001被配置为基于存储在存储器1002中的指令,执行上述任意实施例中的流量隔离方法。As shown in Figure 10, Figure 10 is a schematic diagram of the structure of another flow isolation device provided by the present application. The flow isolation device 1000 includes a processor 1001 and a memory 1002, the processor 1001 is coupled to the memory 1002, and the processor 1001 is configured to execute the flow isolation method in any of the above embodiments based on the instructions stored in the memory 1002.
本申请还提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被计算机执行时实现上述任一方法实施例的流量隔离方法流程。The present application also provides a computer-readable storage medium having a computer program stored thereon, which, when executed by a computer, implements the flow isolation method process of any of the above-mentioned method embodiments.
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and brevity of description, the specific working processes of the systems, devices and units described above can refer to the corresponding processes in the aforementioned method embodiments and will not be repeated here.
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices and methods can be implemented in other ways. For example, the device embodiments described above are only schematic. For example, the division of the units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical or other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,read-only memory)、随机存取存储器(RAM,random access memory)、磁碟或者光盘等各种可以存储程序代码的介质。 If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the technical solution of the present application can be embodied in the form of a software product, which is stored in a storage medium and includes a number of instructions for a computer device (which can be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in each embodiment of the present application. The aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (ROM, read-only memory), random access memory (RAM, random access memory), disk or optical disk and other media that can store program code.

Claims (16)

  1. 一种流量隔离方法,其特征在于,应用于微服务集群,所述微服务集群包括多个泳道,每一所述泳道中包括多个部署单元,每一所述部署单元为一个微服务的微服务实例,所述部署单元配置有路由地址集,所述路由地址集中的地址均为所述部署单元所属的泳道中的其他部署单元的地址,所述方法包括:A traffic isolation method, characterized in that it is applied to a microservice cluster, the microservice cluster includes multiple lanes, each of the lanes includes multiple deployment units, each of the deployment units is a microservice instance of a microservice, the deployment unit is configured with a routing address set, and the addresses in the routing address set are all addresses of other deployment units in the lane to which the deployment unit belongs, the method comprising:
    所述部署单元向目标微服务发送调用请求;The deployment unit sends a call request to the target microservice;
    所述部署单元的路由模块根据所述路由地址集将所述调用请求路由到所述目标微服务对应的部署单元。The routing module of the deployment unit routes the call request to the deployment unit corresponding to the target microservice according to the routing address set.
  2. 根据权利要求1所述的方法,其特征在于,每一所述泳道中包括至少一个调用链路上所有的微服务对应的部署单元。The method according to claim 1 is characterized in that each of the lanes includes at least one deployment unit corresponding to all microservices on the call link.
  3. 根据权利要求1或2所述的方法,其特征在于,所述多个泳道中包括灰度泳道,所述灰度泳道中包括至少一个灰度版本的微服务,所述灰度泳道用于实现所述至少一个灰度版本的微服务的灰度发布。The method according to claim 1 or 2 is characterized in that the multiple lanes include a grayscale lane, the grayscale lane includes at least one grayscale version of a microservice, and the grayscale lane is used to implement the grayscale release of the at least one grayscale version of the microservice.
  4. 根据权利要求1至3中任一项所述的方法,其特征在于,所述方法还包括:The method according to any one of claims 1 to 3, characterized in that the method further comprises:
    所述部署单元的路由模块获取来自控制模块的路由策略,所述路由策略包括所述路由地址集和路由规则;The routing module of the deployment unit obtains a routing strategy from the control module, wherein the routing strategy includes the routing address set and the routing rule;
    所述部署单元的路由模块根据所述路由地址集将所述调用请求路由到所述目标微服务对应的部署单元,包括:The routing module of the deployment unit routes the call request to the deployment unit corresponding to the target microservice according to the routing address set, including:
    所述部署单元的路由模块根据所述路由规则在所述路由地址集中确定所述目标微服务对应的部署单元的目标地址;The routing module of the deployment unit determines the target address of the deployment unit corresponding to the target microservice in the routing address set according to the routing rule;
    所述部署单元的路由模块根据所述目标地址将所述调用请求路由到所述目标微服务对应的部署单元。The routing module of the deployment unit routes the call request to the deployment unit corresponding to the target microservice according to the target address.
  5. A traffic isolation method, characterized in that the method comprises:
    a lane state management module managing multiple lanes in a microservice cluster according to a target lane state instruction, wherein each lane comprises multiple deployment units and each deployment unit is a microservice instance of a microservice; and
    a control module issuing a routing policy to the deployment units in each lane, wherein the routing policy comprises a routing address set, and the addresses in the routing address set are all addresses of deployment units in the lane to which the deployment unit belongs, so that the deployment unit calls other deployment units in the lane to which the deployment unit belongs according to the routing address set.
  6. The method according to claim 5, characterized in that each lane comprises deployment units corresponding to all microservices on at least one call link.
  7. The method according to claim 5 or 6, characterized in that the multiple lanes comprise a grayscale lane, the grayscale lane comprises at least one grayscale version of a microservice, and the grayscale lane is used to implement grayscale release of the at least one grayscale version of the microservice.
  8. A traffic isolation device, characterized in that the traffic isolation device is a device in a microservice cluster, the microservice cluster comprises multiple lanes, each lane comprises multiple traffic isolation devices, each traffic isolation device is a microservice instance of a microservice, the traffic isolation device is configured with a routing address set, and the addresses in the routing address set are all addresses of other traffic isolation devices in the lane to which the traffic isolation device belongs, the device comprising:
    a calling module, configured to send a call request to a target microservice; and
    a routing module, configured to route the call request to a deployment unit corresponding to the target microservice according to the routing address set.
  9. The device according to claim 8, characterized in that each lane comprises traffic isolation devices corresponding to all microservices on at least one call link.
  10. The device according to claim 8 or 9, characterized in that the multiple lanes comprise a grayscale lane, the grayscale lane comprises at least one grayscale version of a microservice, and the grayscale lane is used to implement grayscale release of the at least one grayscale version of the microservice.
  11. The device according to claim 9 or 10, characterized in that:
    the routing module is configured to obtain a routing policy from a control module, wherein the routing policy comprises the routing address set and a routing rule;
    the routing module is specifically configured to determine, in the routing address set according to the routing rule, a target address of the deployment unit corresponding to the target microservice; and
    the routing module is specifically configured to route the call request to the deployment unit corresponding to the target microservice according to the target address.
  12. A traffic isolation system, characterized in that the system comprises:
    a lane state management module, configured to manage multiple lanes in a microservice cluster according to a target lane state instruction, wherein each lane comprises multiple deployment units and each deployment unit is a microservice instance of a microservice;
    a control module, configured to issue a routing policy to the deployment units in each lane, wherein the routing policy comprises a routing address set, and the addresses in the routing address set are all addresses of deployment units in the lane to which the deployment unit belongs; and
    the deployment unit, configured to call other deployment units in the lane to which the deployment unit belongs according to the routing address set.
  13. The system according to claim 12, characterized in that each lane comprises deployment units corresponding to all microservices on at least one call link.
  14. The system according to claim 12 or 13, characterized in that the multiple lanes comprise a grayscale lane, the grayscale lane comprises at least one grayscale version of a microservice, and the grayscale lane is used to implement grayscale release of the at least one grayscale version of the microservice.
  15. A traffic isolation device, characterized in that the device comprises a processor and a memory, the processor is coupled to the memory, and the processor is configured to execute, based on instructions stored in the memory, the traffic isolation method according to any one of claims 1 to 4 or 5 to 7.
  16. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises instructions which, when run on a computer, cause the computer to execute the traffic isolation method according to any one of claims 1 to 4 or 5 to 7.
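To make the routing mechanism of claims 1 to 4 (and the corresponding device claims 8 to 11) easier to follow, below is a minimal, hypothetical sketch in Go. Nothing in it comes from the actual implementation: the type and field names (RoutePolicy, RoutingModule, AddressSet), the round-robin routing rule, and the example addresses are assumptions made purely for illustration. It shows only that a deployment unit's routing module selects addresses exclusively from a routing address set listing same-lane deployment units, so onward calls stay inside the lane.

```go
package main

import "fmt"

// RoutePolicy models the routing policy assumed in claim 4: a routing address
// set listing only same-lane deployment units, plus a named routing rule.
type RoutePolicy struct {
	Lane       string
	Rule       string              // assumed rule, e.g. "round-robin"
	AddressSet map[string][]string // target microservice -> addresses inside this lane
}

// RoutingModule is the routing module attached to one deployment unit.
type RoutingModule struct {
	policy  RoutePolicy
	counter map[string]int // per-target round-robin counter
}

func NewRoutingModule(p RoutePolicy) *RoutingModule {
	return &RoutingModule{policy: p, counter: make(map[string]int)}
}

// Route resolves a call request to the address of the target microservice's
// deployment unit. Because the address set only contains same-lane units,
// the request cannot leave the lane.
func (m *RoutingModule) Route(targetService string) (string, error) {
	addrs := m.policy.AddressSet[targetService]
	if len(addrs) == 0 {
		return "", fmt.Errorf("no deployment unit for %s in lane %s", targetService, m.policy.Lane)
	}
	addr := addrs[m.counter[targetService]%len(addrs)] // apply the (assumed) round-robin rule
	m.counter[targetService]++
	return addr, nil
}

func main() {
	// Assumed addresses; a grayscale lane would list its grayscale instances here.
	gray := NewRoutingModule(RoutePolicy{
		Lane: "gray",
		Rule: "round-robin",
		AddressSet: map[string][]string{
			"service-b": {"10.0.1.21:8080"},
			"service-c": {"10.0.1.31:8080", "10.0.1.32:8080"},
		},
	})
	addr, err := gray.Route("service-b")
	if err != nil {
		panic(err)
	}
	fmt.Println("call request routed to", addr) // stays inside the gray lane
}
```

In practice the role of this routing module could equally be played by a service-mesh sidecar proxy attached to the deployment unit; the in-process sketch is only meant to show what data the routing decision would use.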
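Claims 5 to 7 and the system of claims 12 to 14 describe the control-plane side. The second hypothetical Go sketch below, again with invented names (LaneStateManager, ControlModule, the "create"/"delete" instructions, the sample addresses), illustrates the division of work: the lane state management module tracks which deployment units belong to which lane, and the control module derives, for every deployment unit, a routing address set restricted to that unit's own lane, which is what it would issue as the routing policy.

```go
package main

import "fmt"

// Lane groups the deployment units of one lane: microservice name -> addresses
// of that microservice's deployment units inside this lane.
type Lane struct {
	Name  string
	Units map[string][]string
}

// LaneStateManager stands in for the lane state management module: it applies a
// target lane state instruction (here only "create" and "delete") to the lanes.
type LaneStateManager struct {
	lanes map[string]Lane
}

func NewLaneStateManager() *LaneStateManager {
	return &LaneStateManager{lanes: make(map[string]Lane)}
}

func (m *LaneStateManager) Apply(instruction string, lane Lane) {
	switch instruction {
	case "create":
		m.lanes[lane.Name] = lane
	case "delete":
		delete(m.lanes, lane.Name)
	}
}

// ControlModule stands in for the control module: for every deployment unit it
// builds a routing address set containing only other deployment units of the
// same lane, which it would issue to that unit as its routing policy.
type ControlModule struct {
	manager *LaneStateManager
}

func (c *ControlModule) IssuePolicies() map[string]map[string][]string {
	policies := make(map[string]map[string][]string) // unit address -> routing address set
	for _, lane := range c.manager.lanes {
		for _, unitAddrs := range lane.Units {
			for _, unitAddr := range unitAddrs {
				set := make(map[string][]string)
				for svc, peers := range lane.Units {
					for _, peer := range peers {
						if peer != unitAddr { // only other same-lane deployment units
							set[svc] = append(set[svc], peer)
						}
					}
				}
				policies[unitAddr] = set
			}
		}
	}
	return policies
}

func main() {
	mgr := NewLaneStateManager()
	// Assumed lanes: an official lane and a grayscale lane used for grayscale release.
	mgr.Apply("create", Lane{Name: "official", Units: map[string][]string{
		"service-a": {"10.0.0.11:8080"},
		"service-b": {"10.0.0.21:8080"},
	}})
	mgr.Apply("create", Lane{Name: "gray", Units: map[string][]string{
		"service-a": {"10.0.1.11:8080"},
		"service-b": {"10.0.1.21:8080"}, // grayscale version of service B
	}})
	ctrl := &ControlModule{manager: mgr}
	for unit, set := range ctrl.IssuePolicies() {
		fmt.Println(unit, "->", set)
	}
}
```

In this sketch "issuing" a policy just means returning it; a real control module would push each routing address set down to the corresponding deployment unit's routing module over whatever configuration channel the cluster uses.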
PCT/CN2023/103668 2022-10-10 2023-06-29 Traffic isolation method, apparatus, and system, and computer-readable storage medium WO2024078025A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211234556.8A CN117908942A (en) 2022-10-10 2022-10-10 Traffic isolation method, device, system and computer readable storage medium
CN202211234556.8 2022-10-10

Publications (1)

Publication Number Publication Date
WO2024078025A1 (en)

Family

ID=90668690

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/103668 WO2024078025A1 (en) 2022-10-10 2023-06-29 Traffic isolation method, apparatus, and system, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN117908942A (en)
WO (1) WO2024078025A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200112487A1 (en) * 2018-10-05 2020-04-09 Cisco Technology, Inc. Canary release validation mechanisms for a containerized application or service mesh
CN111464380A (en) * 2020-03-19 2020-07-28 时时同云科技(成都)有限责任公司 Method, device and system for parallel testing of multiple service items
CN111586095A (en) * 2020-03-26 2020-08-25 中国平安财产保险股份有限公司 Micro-service-based gray scale publishing method and device, computer equipment and storage medium
CN113918193A (en) * 2021-10-29 2022-01-11 平安普惠企业管理有限公司 Gray level calling method, device, equipment and storage medium suitable for micro-service
CN114265607A (en) * 2022-03-03 2022-04-01 杭州朗澈科技有限公司 Gray scale publishing method and system, electronic device and storage medium

Also Published As

Publication number Publication date
CN117908942A (en) 2024-04-19

Similar Documents

Publication Title
US11677860B2 (en) Decentralization processing method, communication proxy, host, and storage medium
US9307017B2 (en) Member-oriented hybrid cloud operating system architecture and communication method thereof
EP3471342A1 (en) Method and device for service deployment in virtualized network
CN102124449B (en) Method and system for low-overhead data transfer
US10313380B2 (en) System and method for centralized virtual interface card driver logging in a network environment
CN101207550B (en) Load balancing system and method for multi business to implement load balancing
US20090070456A1 (en) Protocol for enabling dynamic and scalable federation of enterprise service buses
JP4588704B2 (en) Self-management mediation information flow
US20050198351A1 (en) Content-based routing
CN111338773B (en) Distributed timing task scheduling method, scheduling system and server cluster
KR102341809B1 (en) System and method for supporting a bypass-domain model and a proxy model and updating service information for across-domain messaging in a transactional middleware machine environment
CN108063813B (en) Method and system for parallelizing password service network in cluster environment
CN103516777A (en) A method of provisioning in a cloud compute environment
US8769100B2 (en) Method and apparatus for cluster data processing
JP5920668B2 (en) Security policy enforcement system and security policy enforcement method
JP2009540717A (en) Self-managed distributed mediation network
WO2019100266A1 (en) Mobile edge host-machine service notification method and apparatus
CN116633775B (en) Container communication method and system of multi-container network interface
US11240308B2 (en) Implicit discovery controller registration of non-volatile memory express (NVMe) elements in an NVME-over-fabrics (NVMe-oF) system
CN113709220B (en) High-availability implementation method and system of virtual load equalizer and electronic equipment
WO2024078025A1 (en) Traffic isolation method, apparatus, and system, and computer-readable storage medium
CN116055426B (en) Method, equipment and medium for traffic offload forwarding in multi-binding mode
US20220131737A1 (en) Network data management framework
WO2022022313A1 (en) Method for transmitting routing information, apparatus, and communication system
EP4160407A1 (en) Protecting instances of resources of a container orchestration platform from unintentional deletion