CN113765816A - Traffic control method, system, device and medium based on a service mesh - Google Patents

Traffic control method, system, device and medium based on a service mesh

Info

Publication number
CN113765816A
CN113765816A (application CN202110881328.9A)
Authority
CN
China
Prior art keywords: service, target, physical network, network card, flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110881328.9A
Other languages
Chinese (zh)
Other versions
CN113765816B (en)
Inventor
叶磊 (Ye Lei)
钟成 (Zhong Cheng)
贺环宇 (He Huanyu)
庄清惠 (Zhuang Qinghui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Innovation Co
Original Assignee
Alibaba Singapore Holdings Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Singapore Holdings Pte Ltd
Priority to CN202110881328.9A
Publication of CN113765816A
Application granted
Publication of CN113765816B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/02: Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L 63/0227: Filtering policies
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/02: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiments of the present application provide a traffic control method, system, device, and medium based on a service mesh. In the embodiments of the present application, the architecture of the service mesh is improved: through close software-hardware cooperation, the data-plane processing work of the service mesh is sunk into the physical network card, so that it no longer occupies computing resources on the host and the host can concentrate on the microservices themselves. In addition, the physical network card offers higher forwarding performance, which effectively improves network throughput and reduces network latency, thereby improving inter-service communication performance under the service mesh.

Description

Traffic control method, system, device and medium based on a service mesh
Technical Field
The present application relates to the field of cloud network technologies, and in particular, to a traffic control method, system, device, and medium based on a service mesh.
Background
Service mesh: a dedicated infrastructure layer for handling inter-service communication. It is responsible for reliably delivering requests through the complex service topology that makes up a modern cloud-native application. In practice, the service mesh is typically implemented as a set of lightweight network proxies that are deployed alongside the application code, transparently to the application itself.
The purpose of the service mesh is to govern service traffic. In the cloud-native field, the sidecar pattern is generally adopted: a proxy sidecar is built into each deployed atomic unit (Pod) to take over all service traffic entering and leaving the Pod, which occupies a large amount of memory resources. Moreover, traffic control can only be completed by crossing between kernel mode and user mode multiple times, so processing performance is poor, resulting in low communication efficiency between services.
Disclosure of Invention
Aspects of the present application provide a traffic control method, system, device, and medium based on a service mesh, so as to improve inter-service communication performance under the service mesh.
An embodiment of the present application provides a traffic control system, including a calling end and a plurality of service ends, where the calling end is equipped with a first physical network card and each service end is equipped with a second physical network card; the first physical network card and the second physical network card subscribe to traffic forwarding rules corresponding to specified services;
the calling end is configured to initiate a service invocation request to the first physical network card;
the first physical network card is configured to determine the target microservice to which the service invocation request points, and to forward the service invocation request, according to the traffic forwarding rule corresponding to the target microservice, to the second physical network card mounted on a target service end capable of providing the target microservice;
and the second physical network card is configured to initiate a call to the target microservice on the target service end according to the resource location identifier in the service invocation request and the traffic forwarding rule corresponding to the target microservice.
An embodiment of the present application further provides a communication end, including a memory, a processor, and a physical network card;
the memory is configured to store one or more computer instructions;
the processor is coupled to the memory and configured to execute the one or more computer instructions, so as to provide the physical network card with a traffic forwarding rule corresponding to at least one microservice running on the communication end;
the physical network card is configured to distribute, based on the traffic forwarding rule, input traffic flowing to the communication end to a destination container group (POD) on the communication end, and to forward, based on the traffic forwarding rule, output traffic sent by the communication end to a destination container group on the communication end or on another communication end;
wherein the microservice required by the input traffic or the output traffic runs in the destination container group.
An embodiment of the present application further provides a service-mesh-based traffic control method, applicable to a communication end in the service mesh, where the communication end is equipped with a physical network card and the physical network card holds a traffic forwarding rule corresponding to at least one microservice running on the communication end. The method includes:
when input traffic is received, distributing the input traffic, by means of the physical network card, to a destination container group (POD) on the communication end;
when output traffic is sent out, forwarding the output traffic, by means of the physical network card, to a destination container group on the communication end or on another communication end;
wherein the microservice required by the input traffic or the output traffic runs in the destination container group.
Embodiments of the present application further provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the aforementioned service-mesh-based traffic control method.
In the embodiments of the present application, the architecture of the service mesh is improved: through close software-hardware cooperation, the data-plane processing work of the service mesh is sunk into the physical network card, so that it no longer occupies computing resources on the host and the host can concentrate on the microservices themselves. In addition, the physical network card offers higher forwarding performance, which effectively improves network throughput and reduces network latency, thereby improving inter-service communication performance under the service mesh.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic structural diagram of a traffic control system according to an exemplary embodiment of the present application;
FIG. 2 is a logic diagram of an exemplary implementation for sinking the data-plane processing functions of a service mesh into a physical network card;
FIG. 3 is a logic diagram of another exemplary implementation for sinking the data-plane processing functions of a service mesh into a physical network card;
FIG. 4 is a logic diagram of an exemplary implementation for sinking the rule subscription function of a service mesh into a physical network card;
FIG. 5 is a logic diagram of another exemplary implementation for sinking the rule subscription function of a service mesh into a physical network card;
FIG. 6 is a schematic structural diagram of a communication end according to another exemplary embodiment of the present application;
FIG. 7 is a flowchart of a service-mesh-based traffic control method according to another exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, under a service mesh, traffic control work must cross between kernel mode and user mode, which consumes considerable resources and yields poor processing performance. To this end, in some embodiments of the present application, the architecture of the service mesh is improved: through close software-hardware cooperation, the data-plane processing work of the service mesh is sunk into the physical network card, so that it no longer occupies computing resources on the host and the host can concentrate on the microservices themselves. In addition, the physical network card offers higher forwarding performance, which effectively improves network throughput and reduces network latency, thereby improving inter-service communication performance under the service mesh.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a traffic control system according to an exemplary embodiment of the present application. As shown in Fig. 1, the system includes a calling end 10 and a plurality of service ends 20. The calling end 10 and the service ends 20 may be nodes, cloud servers, and the like in a cloud network; this embodiment does not limit their physical implementation forms.
The traffic control scheme provided by this embodiment can be applied to scenarios in which a service mesh is used to govern traffic for microservices. Microservices are an architectural approach to building applications. Unlike a more traditional monolithic approach, a microservice architecture divides an application into multiple core functions. Each function is called a service and can be built and deployed independently, which means that services do not affect each other when working (and failing), and services can communicate and cooperate with each other to provide users with the required content. A service mesh is a dedicated infrastructure layer for handling inter-service communication. It is responsible for reliably delivering requests through the complex service topology that makes up a modern cloud-native application. In practice, the service mesh is typically implemented as a set of lightweight network proxies that are deployed alongside the application code, transparently to the application itself.
The traffic control scheme provided by this embodiment makes an architectural improvement on the conventional service mesh: it proposes to sink the data-plane processing of the service mesh into the physical network card on the host. Fig. 1 shows the structure of the traffic control system, taking a single inter-service communication as an example. It should be appreciated that in practice a service mesh contains many more communication ends.
Referring to Fig. 1, the calling end 10 and the service end 20 are each equipped with a physical network card; for ease of distinction, the physical network card on the calling end 10 is denoted as the first physical network card 30 and the physical network card on the service end 20 as the second physical network card 40. In this embodiment, the physical network card may be a smart NIC that has an independent processor and supports programmable customization. A microservice may run in a container group, and the container group may be deployed on a host machine, i.e., a communication end ("communication end" is used in this embodiment as a general term for the calling end and the service end). In terms of deployment, a communication end may include multiple container groups that share one physical network card, and one container group may run multiple microservices.
Therefore, in this embodiment, a data-plane processing program for the service mesh may be written for the physical network card in advance and loaded into it, giving the physical network card the data-plane processing capability of the service mesh. The data-plane processing program can be written with reference to the relevant logic of the sidecar proxy in a conventional service mesh, and details are not repeated here. The data-plane processing functions of the sidecar proxy in a conventional service mesh may include, but are not limited to, traffic interception, layer 4-7 network packet parsing, and route forwarding, all of which can be sunk by writing the related processing programs into the physical network card.
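The sunk data-plane functions named above (traffic interception, layer 4-7 packet parsing, and route forwarding) can be pictured as a small processing pipeline. The following sketch is purely illustrative and not part of the patent; every name in it (`DataPlane`, `intercept`, the packet fields) is a hypothetical stand-in for logic that would in practice run on the smart NIC's own processor.

```python
# Minimal sketch of the data-plane pipeline sunk into the NIC.
# All class/function names and packet fields are hypothetical illustrations.

class DataPlane:
    def __init__(self, forwarding_rules):
        # forwarding_rules: microservice name -> list of candidate POD addresses
        self.forwarding_rules = forwarding_rules

    def intercept(self, packet):
        # Traffic interception: only service-call traffic is mesh-managed.
        return packet.get("type") == "service-call"

    def parse(self, packet):
        # Layer 4-7 parsing: extract the microservice name and port.
        return packet["service"], packet.get("port", 80)

    def forward(self, packet):
        # Route forwarding: map the parsed service to a POD address.
        if not self.intercept(packet):
            return None  # non-service traffic passes through unmanaged
        service, _port = self.parse(packet)
        pods = self.forwarding_rules.get(service, [])
        return pods[0] if pods else None

nic = DataPlane({"orders": ["10.0.0.12:8080"]})
assert nic.forward({"type": "service-call", "service": "orders"}) == "10.0.0.12:8080"
assert nic.forward({"type": "health-check"}) is None
```

A real NIC program would operate on raw frames rather than dictionaries; the dictionary form is used here only to keep the sketch self-contained.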
After the physical network card is given the data-plane processing capability of the service mesh, referring to Fig. 1, the calling end 10 may initiate a service invocation request to the first physical network card 30. All service invocation requests initiated by the calling end 10 flow into the first physical network card 30, and the physical network card may perform diversion filtering on them, that is, determine which service invocation requests need to be traffic-controlled in the sidecar manner. Compared with a conventional service mesh, the calling end 10 no longer needs software diversion mechanisms such as iptables to filter traffic, which effectively reduces the host's resource consumption in this respect.
The first physical network card 30 may obtain in advance the traffic forwarding rules, service registration information, and the like of each microservice in the service mesh; on this basis, it can determine the target microservice to which the service invocation request points. For example, the first physical network card 30 may parse the microservice name and the port to be called from the service invocation request. The first physical network card 30 may then forward the service invocation request, according to the traffic forwarding rule corresponding to the target microservice, to the second physical network card 40 mounted on the target service end 20 capable of providing the target microservice. The traffic forwarding rule of the target microservice may include the address of at least one container group capable of providing the target microservice; the first physical network card 30 may determine a target container group for the current service invocation request through a policy such as load balancing and write the access address of the target container group into the service invocation request, so that the request can be forwarded to the second physical network card 40 mounted on the target service end 20 where the target container group is located.
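The target-container-group selection step described above, picking one address from the candidate list in the traffic forwarding rule via a policy such as load balancing, can be sketched as follows. Round-robin is used here only as one possible policy, and all names are hypothetical illustrations rather than anything specified in the patent.

```python
import itertools

class LoadBalancer:
    """Round-robin selection over the POD addresses in a traffic forwarding rule."""
    def __init__(self, rule_addresses):
        # rule_addresses: container-group addresses able to provide the target microservice
        self._cycle = itertools.cycle(rule_addresses)

    def pick_target(self):
        # Each call yields the next candidate address in turn.
        return next(self._cycle)

rule = ["10.0.1.5:9000", "10.0.2.7:9000"]
lb = LoadBalancer(rule)
assert lb.pick_target() == "10.0.1.5:9000"
assert lb.pick_target() == "10.0.2.7:9000"
assert lb.pick_target() == "10.0.1.5:9000"  # wraps around
```

Other policies (least-connections, weighted, etc.) would slot into the same place; the point is only that the NIC, not the host, picks the target container group.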
In this embodiment, the communication process between the first physical network card 30 and the second physical network card 40 is similar to that between sidecar proxies in a conventional service mesh: the access address in the service invocation request may be routed onward in a manner consistent with sidecar-to-sidecar communication, so that the service invocation request reaches the second physical network card 40 on the target service end 20.
The second physical network card 40 may initiate the call to the target microservice on the target service end 20 according to the resource location identifier in the service invocation request and the traffic forwarding rule corresponding to the target microservice. Traffic under the service mesh is usually layer 4-7 traffic; for example, in this embodiment the service invocation request may use a layer 4-7 network protocol. Moreover, as mentioned above, the target service end 20 hosts multiple microservices, and the calling end 10 identifies the microservice it expects to call by means of the resource location identifier in the service invocation request. The second physical network card 40 can therefore determine the target microservice from the resource location identifier in the service invocation request, and continue routing the request according to the traffic forwarding rule corresponding to the target microservice, so as to initiate the call to the internal target microservice.
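The disambiguation by resource location identifier can be sketched as a prefix lookup: because several microservices share one server and one NIC, the identifier (for example, the URL path of an HTTP request) selects which microservice the request is for. The locator table and all names below are hypothetical illustrations, not part of the patent.

```python
def route_by_locator(path, locator_table):
    """Map a resource location identifier (here a URL path prefix) to a microservice name."""
    for prefix, service in locator_table.items():
        if path.startswith(prefix):
            return service
    return None  # no microservice on this server claims the path

# Hypothetical locator table for microservices co-located on one service end.
table = {"/orders/": "order-service", "/users/": "user-service"}
assert route_by_locator("/orders/42", table) == "order-service"
assert route_by_locator("/users/7/profile", table) == "user-service"
assert route_by_locator("/health", table) is None
```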
The target microservice may respond to the service invocation request and generate service response data, which the second physical network card 40 returns to the initiator microservice on the calling end 10. The traffic forwarding rule corresponding to the target microservice includes forwarding rules for both input traffic and output traffic. During the call, the first physical network card 30 forwards the service invocation request according to the output-traffic forwarding rule, and the second physical network card 40 forwards it according to the input-traffic forwarding rule. During the response backflow, conversely, the second physical network card 40 processes the service response data according to the output-traffic forwarding rule and the first physical network card 30 forwards it according to the input-traffic forwarding rule, which ensures that the service response data reaches the initiator microservice on the calling end 10. The response backflow process is symmetrical to the service invocation process and is not described in detail here.
In summary, in this embodiment the architecture of the service mesh is improved: through close software-hardware cooperation, the data-plane processing work of the service mesh is sunk into the physical network card, so that it no longer occupies computing resources on the host and the host can concentrate on the microservices themselves. In addition, the physical network card offers higher forwarding performance, which effectively improves network throughput and reduces network latency, thereby improving inter-service communication performance under the service mesh.
In the above or following embodiments, various implementations may be employed to sink the relevant functions of the sidecar proxy.
In one implementation, only the data-plane processing functions of the service mesh are sunk into the physical network card.
In this implementation, Fig. 2 is a logic diagram of an exemplary scheme for sinking the data-plane processing functions of the service mesh into the physical network card. Referring to Fig. 2, the scheme may be: the target service end 20 is deployed with multiple container groups, the target microservice runs in one of them, and the second physical network card 40 is configured with one traffic control component for each container group on the target service end 20. The traffic control component performs the data-plane processing work of the service mesh, including but not limited to traffic interception, layer 4-7 network packet parsing, and route forwarding.
In this exemplary scheme, the second physical network card 40 may, upon receiving the service invocation request, forward it to the target traffic control component corresponding to the target container group; the target traffic control component then initiates the call to the target microservice on the target service end 20 according to the resource location identifier in the service invocation request and the traffic forwarding rule corresponding to the target microservice. Accordingly, one traffic control component is configured in the second physical network card 40 for each container group on the target service end 20 in a 1:1 correspondence, which is consistent with the deployment of the sidecar proxy in a conventional service mesh. This reduces the functional changes required when sinking the data-plane processing functions, and thus the workload of the sinking process.
In this implementation, Fig. 3 is a logic diagram of another exemplary scheme for sinking the data-plane processing functions of the service mesh into the physical network card. Referring to Fig. 3, the scheme may be: multiple container groups are deployed on the target service end 20, the target microservice runs in one of them, and a single traffic control component shared by the container groups is configured on the second physical network card 40.
In this exemplary scheme, the second physical network card 40 may forward the service invocation request to the shared traffic control component upon receiving it; the shared traffic control component then initiates the call to the target microservice on the target service end 20 according to the resource location identifier in the service invocation request and the traffic forwarding rule corresponding to the target microservice. Accordingly, only one shared traffic control component needs to be configured in the second physical network card 40, which places lower demands on the card's processing capability and therefore effectively saves its hardware cost.
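The difference between the per-container-group variant of Fig. 2 and the shared variant of Fig. 3 lies only in how requests are dispatched inside the NIC. A minimal sketch of the shared variant follows; all names are hypothetical, and the rule table stands in for the subscribed traffic forwarding rules.

```python
class SharedFlowControl:
    """One traffic control component serving every container group on the host (Fig. 3 variant)."""
    def __init__(self):
        self.rules = {}  # microservice name -> target POD address

    def load_rule(self, service, pod_address):
        # Rules for all local microservices are held by the single shared component.
        self.rules[service] = pod_address

    def handle(self, request):
        # A single component dispatches requests for every local container group.
        target = self.rules.get(request["service"])
        if target is None:
            raise KeyError(f"no forwarding rule for {request['service']}")
        return f"forwarded to {target}"

fc = SharedFlowControl()
fc.load_rule("order-service", "pod-a:8080")
fc.load_rule("user-service", "pod-b:8080")
assert fc.handle({"service": "order-service"}) == "forwarded to pod-a:8080"
assert fc.handle({"service": "user-service"}) == "forwarded to pod-b:8080"
```

The Fig. 2 variant would instead instantiate one such component per container group, each holding only its own group's rules.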
The sinking process is described above taking the second physical network card 40 as an example; it should be understood that the relevant data-plane processing work can likewise be sunk into the first physical network card 30 on the calling end 10 in the same manner.
In summary, in this implementation the data-plane processing of the service mesh is sunk into the physical network card, and one or more traffic control components configured in the card perform the data-plane work such as traffic interception, layer 4-7 network packet parsing, and route forwarding, which effectively reduces the processing pressure on the communication end.
Besides data-plane processing, other work needs to be performed in the service mesh, such as rule subscription.
In another implementation, the rule subscription work can also be sunk into the physical network card. In this implementation, Fig. 4 is a logic diagram of an exemplary scheme for sinking the rule subscription function of the service mesh into the physical network card. Referring to Fig. 4, the scheme may be: one rule subscription component is configured on the second physical network card 40 for each container group on the target service end 20, and the target rule subscription component corresponding to the target container group obtains the traffic forwarding rule subscribed for the target microservice.
On this basis, when a shared traffic control component is configured on the second physical network card 40, the target rule subscription component may provide it with the traffic forwarding rule subscribed for the target microservice. In this case, a rule subscription component is configured in the physical network card for each container group on the communication end; each performs its own role, and the shared traffic control component obtains traffic forwarding rules from the corresponding rule subscription component as needed. This supports traffic control over multiple microservices on the communication end with high interaction efficiency.
When the second physical network card 40 is instead configured with one traffic control component per container group, the target rule subscription component may provide the traffic forwarding rule subscribed for the target microservice to the target traffic control component corresponding to the target container group in which the target microservice runs. This follows the deployment structure of the sidecar proxy in a conventional service mesh: the sidecar proxy is sunk into the physical network card entirely, in a 1:1 manner, with a rule subscription component and a traffic control component configured in the card for each container group on the communication end. This scheme has better ecological generality and requires fewer functional changes during sinking.
Fig. 5 is a logic diagram of another exemplary scheme for sinking the rule subscription function of the service mesh into the physical network card. Referring to Fig. 5, the scheme may be: a rule subscription component shared by the container groups is configured on the second physical network card 40, and it obtains the traffic forwarding rules subscribed for every microservice on the target service end 20.
On this basis, when the shared traffic control component is configured on the second physical network card 40, the rule subscription component may provide it with the traffic forwarding rules subscribed for each microservice on the target service end 20. In this case, all microservices on the communication end share one rule subscription component and one traffic control component, which cooperate to support traffic control over multiple microservices; the hardware requirements on the physical network card are low, saving hardware cost.
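The cooperation between a shared rule subscription component and a shared traffic control component can be sketched as a simple push relationship: the subscription component receives rule updates (simulated here by a direct method call rather than a real control-plane connection) and hands them to every attached traffic control component. All names below are hypothetical.

```python
class RuleSubscription:
    """Shared component subscribing to traffic forwarding rules for all local microservices."""
    def __init__(self):
        self._rules = {}
        self._subscribers = []

    def attach(self, flow_control):
        # Register a traffic control component that should receive rule updates.
        self._subscribers.append(flow_control)

    def on_rule_update(self, service, rule):
        # Simulates a rule push from the mesh control plane.
        self._rules[service] = rule
        for fc in self._subscribers:
            fc.rules[service] = rule

class FlowControl:
    """Traffic control component; consumes rules pushed by the subscription component."""
    def __init__(self):
        self.rules = {}

sub = RuleSubscription()
fc = FlowControl()
sub.attach(fc)
sub.on_rule_update("order-service", ["pod-a:8080"])
assert fc.rules["order-service"] == ["pod-a:8080"]
```

The same push relationship covers the other pairings described here: several per-group subscription components feeding one shared traffic control component, or one shared subscription component feeding several per-group traffic control components.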
When the second physical network card 40 is instead configured with one traffic control component per container group, the rule subscription component may provide the traffic forwarding rule subscribed for the target microservice to the traffic control component corresponding to the target container group in which the target microservice runs. In this case, the shared rule subscription component supplies the forwarding rules needed by the multiple traffic control components, supporting them in performing the data-plane processing work.
The sinking of the rule subscription work is described above taking the second physical network card 40 as an example; the same approach may be used to sink the relevant rule subscription work into the first physical network card 30 on the calling end 10.
In summary, in this implementation the rule subscription work of the service mesh is sunk into the physical network card and executed by one or more rule subscription components configured there, which effectively reduces the processing pressure on the communication end.
In yet another implementation, the rule subscription work may be retained in the communication end. An exemplary implementation is as follows: on the target server 20, each container group is configured with its own rule subscription component. Based on this, the target rule subscription component corresponding to the target container group may be used to acquire the traffic forwarding rule subscribed for the target microservice and provide that rule to the second physical network card 40.
In the case that a shared flow control component is configured on the second physical network card 40, the target rule subscription component on the target server 20 may provide the traffic forwarding rule subscribed for the target micro service to the shared flow control component on the second physical network card 40. In this case, the communication terminal configures a rule subscription component for each container group, each rule subscription component performs its own role, and the shared traffic control component on the second physical network card 40 can acquire the traffic forwarding rule from the corresponding rule subscription component as required, thereby supporting the shared traffic control component to perform traffic control on the plurality of microservices on the communication terminal.
In the case that the second physical network card 40 is configured with a flow control component for each container group, the target rule subscription component on the target server 20 may provide the traffic forwarding rule subscribed for the target microservice to the target flow control component, on the second physical network card 40, corresponding to the target container group where the target microservice is located. In this case, the rule subscription components and the flow control components of the container groups are paired one-to-one. This scheme fits the existing ecosystem well and requires few changes to the rule subscription components in the communication end.
Another exemplary implementation may be: a rule subscription component common to multiple groups of containers is configured on the target server 20. Based on this, the rule subscription component may obtain the traffic forwarding rule subscribed for each micro service on the target server 20, and provide the traffic forwarding rule to the second physical network card 40.
In the case that a shared flow control component is configured on the second physical network card 40, the shared rule subscription component on the target server 20 may provide the traffic forwarding rule subscribed for each micro service on the target server 20 to the shared flow control component on the second physical network card 40. In this case, the rule subscription component shared by the microservices on the communication end and the shared flow control component on the physical network card may cooperate with each other.
Under the condition that the second physical network card 40 is configured with the flow control component corresponding to each container group, the shared rule subscription component on the target server 20 may provide the traffic forwarding rule subscribed for the target micro service to the target flow control component corresponding to the target container group where the target micro service is located on the second physical network card 40. In this case, the common rule subscription component on the communication terminal may provide the required forwarding rule for the multiple flow control components on the physical network card, so as to support the multiple flow control components on the physical network card to perform the processing work of the data plane.
In addition, the above describes the retention scheme of the rule subscription work by taking the second physical network card 40 as an example, and it should be understood that the related rule subscription work can also be retained in the calling terminal 10 by using the same implementation manner.
This implementation involves communication between the rule subscription component on the communication end and the flow control component on the physical network card. To ensure data security, a secure channel may be established between the two, and schemes such as encryption may be adopted to transmit the traffic forwarding rules securely over it.
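One way such a channel could protect the rules in transit is sketched below: the rule payload is authenticated with an HMAC over its JSON encoding before being handed to the network card, which verifies the tag before installing the rule. The pre-shared key and the HMAC-over-JSON scheme are assumptions made only for illustration; a real deployment would more likely rely on an established mechanism such as TLS.

```python
import hmac, hashlib, json

KEY = b"shared-secret-provisioned-out-of-band"   # hypothetical pre-shared key

def pack_rule(rule: dict) -> bytes:
    # host side: serialize the traffic forwarding rule and prepend an HMAC tag
    body = json.dumps(rule, sort_keys=True).encode()
    tag = hmac.new(KEY, body, hashlib.sha256).digest()
    return tag + body

def unpack_rule(msg: bytes) -> dict:
    # NIC side: verify the tag before installing the rule
    tag, body = msg[:32], msg[32:]
    if not hmac.compare_digest(tag, hmac.new(KEY, body, hashlib.sha256).digest()):
        raise ValueError("traffic forwarding rule failed integrity check")
    return json.loads(body)
```

`hmac.compare_digest` is used for the comparison to avoid timing side channels.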
In summary, in this implementation manner, the rule subscription work in the service grid may be retained in the communication end, and may cooperate with the flow control component in the physical network card to implement flow control.
The flow control scheme provided by this embodiment is illustrated below using a shopping application as an example.
For example, the shopping application may be split into a plurality of core functions, and each core function may be deployed as one microservice. Taking two microservices A and B as an example, microservice A may correspond to a search function and microservice B to a product recommendation function. While a user uses the shopping application, microservices A and B may need to communicate with each other.
First, microservice A and microservice B may register with the control plane of the service grid, so that the control plane records service information such as the address and port of the physical network card installed at the communication end where each microservice is located, the service name, and the traffic forwarding rule. Based on this, microservices A and B may subscribe to each other's service information.
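The registration and subscription exchange just described can be sketched minimally as follows. ControlPlane and its method names are illustrative only, not an actual service-mesh API; the record fields mirror the service information listed in the text (network card address and port, service name, forwarding rule).

```python
class ControlPlane:
    """Toy registry standing in for the control plane of the service grid."""
    def __init__(self):
        self.registry = {}

    def register(self, service_name, nic_addr, nic_port, rule):
        # record the service information for one microservice: the address and
        # port of the NIC at its communication end, plus its forwarding rule
        self.registry[service_name] = {
            "nic_addr": nic_addr, "nic_port": nic_port, "rule": rule,
        }

    def subscribe(self, service_name):
        # a peer (e.g. microservice A) pulls microservice B's service info
        return self.registry[service_name]


cp = ControlPlane()
cp.register("B", "10.0.0.2", 15001, {"path_prefix": "/recommend"})
info_for_a = cp.subscribe("B")
```

With the subscription in place, the physical network card on A's side can resolve B's address without any per-request lookup against the control plane.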
Take microservice a and microservice B as http services for example.
Microservice A, as the request initiator, may initiate a call request for microservice B by inputting a URL; the call request flows to the physical network card a' installed on the communication end a where microservice A is located.
After receiving the call request, the physical network card a' may parse information such as the called service name and the called port, and may forward the call request to the physical network card b' installed on the communication end b where microservice B is located, according to the subscribed traffic forwarding rule of microservice B;
after receiving the call request, the physical network card b' may initiate a call to microservice B inside the communication end b, either by transparent transmission or according to the URL and the traffic forwarding rule of microservice B.
Microservice B may respond to the call request and return service response data to microservice A along the original path, thereby enabling communication between microservice A and microservice B.
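The call flow above can be simulated end to end as a sketch: the calling-side network card parses the called service from the URL, looks up the subscribed rule, and forwards to the peer network card, which delivers the request to the local microservice. The service name, URL scheme, and rule format are assumptions for illustration only.

```python
from urllib.parse import urlparse

RULES = {"b.shop.svc": ("nic-b", 15001)}   # subscribed rule for microservice B

def microservice_b(path):
    # stands in for the real product-recommendation microservice B
    return f"recommendations for {path}"

def nic_b(url):
    # service side (NIC b'): deliver the request to microservice B
    return microservice_b(urlparse(url).path)

PEERS = {"nic-b": nic_b}                   # reachable peer network cards

def nic_a(url):
    # calling side (NIC a'): parse the called service name from the URL, look
    # up the subscribed forwarding rule, and forward to the peer NIC it names
    service = urlparse(url).hostname
    peer_name, _port = RULES[service]
    return PEERS[peer_name](url)

response = nic_a("http://b.shop.svc/recommend/shoes")
```

The response simply returns along the same call chain, matching the original-path return described above.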
Fig. 6 is a schematic structural diagram of a communication terminal according to another exemplary embodiment of the present application. As shown in fig. 6, the communication terminal may include: a memory 60 and a processor 61, and a physical network card 62.
The processor 61 is coupled to the memory 60 and configured to execute a computer program in the memory 60, so as to provide the traffic forwarding rule corresponding to at least one microservice running on the communication end to the physical network card;
the physical network card is used for distributing input flow flowing to the communication terminal to a destination container group POD on the communication terminal based on a flow forwarding rule; based on the flow forwarding rule, forwarding the output flow sent by the communication end to a destination container group POD on the communication end or other communication ends;
wherein the micro-services required for the incoming or outgoing traffic are run in the destination container group POD.
The communication end provided in this embodiment can provide the traffic forwarding rule corresponding to the microservice to the physical network card. Based on this, the flow control work can be sunk into the physical network card, which then controls both the input traffic and the output traffic. It should be noted that microservices running on the same communication end may also communicate with each other; in this case, the two communicating microservices share the same physical network card, that is, the physical network card may forward the output traffic initiated on the communication end to the corresponding destination container group POD on that same communication end.
In an alternative embodiment, the physical network card 62 may be used to:
acquiring first output flow, wherein the first output flow comprises a first service calling request;
determining a target microservice pointed by a first service calling request;
and forwarding the first service calling request to a third physical network card assembled on a target communication end capable of providing the target micro service according to a flow forwarding rule corresponding to the target micro service, so that the third physical network card initiates calling to the target micro service in a target container group POD on the target communication end according to the resource positioning identifier in the first service calling request and the flow forwarding rule corresponding to the target micro service.
The above steps are executed when the communication end where the physical network card is located acts as the calling end in the foregoing system embodiment.
In addition, the communication side where the physical network card is located may also serve as a service side in the foregoing system embodiment, in this case, the physical network card is used in a process of distributing input traffic flowing to the communication side to a destination container group POD on the communication side, for:
receiving a first input flow, wherein the first input flow comprises a second service calling request forwarded by a fourth physical network card;
and initiating calling to the target micro-service in the destination container group POD on the communication terminal according to the resource positioning identifier in the second service calling request and the flow forwarding rule corresponding to the requested target micro-service.
In view of this, in an optional embodiment, the physical network card 62 may have deployed therein a flow control component corresponding to each container group on the communication end where it is located, and based on this, the processor 61 may be specifically configured to:
forwarding the second service calling request to a target flow control component corresponding to a target container group where the target micro service is located; and initiating calling to the target micro service by using the target flow control component according to the resource positioning identifier in the second service calling request and the flow forwarding rule corresponding to the target service.
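A minimal sketch of this per-container-group dispatch on the inbound path: the request is routed to the flow control component of the POD that runs the requested microservice, and that component issues the local call using the resource locator plus the subscribed rule. All names and structures here are hypothetical.

```python
PODS = {"svc-b": "pod-b"}   # which container group (POD) runs which microservice

class PodFlowControl:
    """Flow control component dedicated to one container group on the NIC."""
    def __init__(self, pod):
        self.pod, self.rules = pod, {}

    def invoke(self, service, url):
        rule = self.rules[service]  # subscribed traffic forwarding rule
        return f"{self.pod} called {service} at {url}{rule['path_prefix']}"

FLOW_CONTROLS = {"pod-b": PodFlowControl("pod-b")}
FLOW_CONTROLS["pod-b"].rules["svc-b"] = {"path_prefix": "/v1"}

def nic_dispatch(service, url):
    # forward the second service invocation request to the target flow control
    # component, which initiates the call to the target microservice
    return FLOW_CONTROLS[PODS[service]].invoke(service, url)

result = nic_dispatch("svc-b", "http://b.local")
```

In the shared-component variant described next, the `FLOW_CONTROLS` lookup would collapse to a single component handling every POD.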
In an alternative embodiment, the physical network card 62 may have deployed therein a flow control component shared by a plurality of container groups on the communication end where it is located, and based on this, the processor 61 may be specifically configured to:
forwarding the second service invocation request to the shared flow control component; and initiating calling to the target micro service by using the shared flow control component according to the resource positioning identifier in the second service calling request and the flow forwarding rule corresponding to the target service.
In an alternative embodiment, the communication terminal may be configured with a rule subscription component for each container group, and based on this, the processor 61 may be specifically configured to:
and acquiring the flow forwarding rule subscribed for the target micro service from a target rule subscription component corresponding to the target container group where the target micro service is located on the communication end.
In an alternative embodiment, the communication terminal may be configured with a common rule subscription component for a plurality of container groups, based on which the processor 61 is specifically configured to:
and acquiring the traffic forwarding rule subscribed for the target micro service from the shared rule subscription component on the communication terminal.
In an optional embodiment, the physical network card 62 may be deployed with a rule subscription component corresponding to each container group on the communication end where the physical network card is located, and based on this, the processor 61 may be specifically configured to:
acquiring a flow forwarding rule subscribed for the target micro service by using a target rule subscription component corresponding to a target container group where the target micro service is located; under the condition that a shared flow control component is configured on a physical network card, providing a flow forwarding rule subscribed for the target micro service to the shared flow control component; and under the condition that the flow control component corresponding to each container group is configured on the physical network card, providing the flow forwarding rule subscribed for the target micro service to the target flow control component corresponding to the target container group where the target micro service is located.
In an alternative embodiment, a rule subscription component common to a plurality of container groups on the communication end where the physical network card 62 is located may be deployed in the physical network card 62, and based on this, the processor 61 may be specifically configured to:
acquiring a flow forwarding rule subscribed for each micro service on a communication terminal by using the shared rule subscription component;
under the condition that a shared flow control component is configured on the physical network card, providing a flow forwarding rule subscribed for each micro service to the shared flow control component;
and under the condition that the flow control component corresponding to each container group is configured on the physical network card, providing the flow forwarding rule subscribed for the target micro service to the target flow control component corresponding to the target container group where the target micro service is located.
In an alternative embodiment, the service invocation request may employ a seven-layer network protocol.
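Because the invocation request uses a layer-7 protocol such as HTTP, the called service name and the resource locator can be parsed out of the request bytes themselves, which is what lets the network card route at the service level. A minimal HTTP/1.1 parse, for illustration only (real deployments would use a full protocol parser):

```python
def parse_called_service(raw: bytes):
    # split off the header section, then read the request line and Host header
    head = raw.split(b"\r\n\r\n", 1)[0].decode()
    lines = head.split("\r\n")
    method, path, _version = lines[0].split(" ")
    headers = dict(line.split(": ", 1) for line in lines[1:] if line)
    return headers.get("Host"), method, path

called = parse_called_service(
    b"GET /recommend HTTP/1.1\r\nHost: b.shop.svc\r\n\r\n"
)
```

The Host value identifies the called microservice and the path serves as the resource positioning identifier used for the downstream call.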
In an optional embodiment, the processor 61 may be further configured to return the service response data generated by the target microservice to the initiator microservice that initiated the second service invocation request through the fourth physical network card.
Further, as shown in fig. 6, the communication terminal further includes: power supply components 63, and the like. Only some of the components are schematically shown in fig. 6, and it is not meant that the communication terminal includes only the components shown in fig. 6.
It should be noted that, for the above technical details of the embodiments of the communication end, reference may be made to the related descriptions of the calling end and the service end in the foregoing system embodiments, which are not described herein for brevity, but this should not cause a loss of the protection scope of the present application.
Fig. 7 is a flowchart illustrating a method for flow control based on a service grid according to another exemplary embodiment of the present application, where the method may be performed by a flow control device, and the flow control device may be implemented as a combination of software and/or hardware, and the flow control device may be integrated in a communication terminal. Referring to fig. 7, a first physical network card is installed on the communication end, and the first physical network card includes a traffic forwarding rule corresponding to at least one microservice operating on the communication end, and the method may include:
700, under the condition of receiving the input flow, distributing the input flow to a destination container group POD on the communication terminal by utilizing a first physical network card;
step 701, under the condition of sending out output flow, utilizing a first physical network card to forward the output flow to a destination container group POD on the communication end or other communication ends;
wherein the micro-services required for the incoming or outgoing traffic are run in the destination container group POD.
In an alternative embodiment, the step of forwarding the output traffic to the destination container group POD on the other communication end using the first physical network card may include:
acquiring first output flow, wherein the first output flow comprises a first service calling request;
determining a target microservice pointed by a first service calling request;
and forwarding the first service calling request to a second physical network card assembled on a target communication terminal capable of providing the target micro service according to a flow forwarding rule corresponding to the target micro service, so that the second physical network card initiates calling to the target micro service in a target container group (POD) on the target communication terminal according to the resource positioning identifier in the first service calling request and the flow forwarding rule corresponding to the target micro service.
The flowchart shown in fig. 7 illustrates the steps performed when the communication end acts as the calling end in the foregoing system embodiment.
In addition, the communication side may also serve as a service side in the foregoing system embodiment, in this case, the step of distributing the input traffic to the destination container group POD on the communication side using the first physical network card may include:
receiving a first input flow, wherein the first input flow comprises a second service calling request initiated by a third physical network card;
and initiating calling to the target micro-service in the destination container group POD on the communication terminal according to the resource positioning identifier in the second service calling request and the flow forwarding rule corresponding to the requested target micro-service.
In view of this situation, in an optional embodiment, the first physical network card may have deployed therein a flow control component corresponding to each container group on the communication end where the first physical network card is located, and based on this, the method may specifically include:
forwarding the second service calling request to a target flow control component corresponding to a target container group where the target micro service is located; and initiating calling to the target micro service by using the target flow control component according to the resource positioning identifier in the second service calling request and the flow forwarding rule corresponding to the target service.
In an optional embodiment, a flow control component shared by a plurality of container groups on a communication end where the first physical network card is located may be deployed in the first physical network card, and based on this, the method may specifically include:
forwarding the second service invocation request to the shared flow control component; and initiating calling to the target micro service by using the shared flow control component according to the resource positioning identifier in the second service calling request and the flow forwarding rule corresponding to the target service.
In an alternative embodiment, the communication terminal may be configured with a rule subscription component for each container group, and based on this, the method may specifically include:
and acquiring the flow forwarding rule subscribed for the target micro service from a target rule subscription component corresponding to a target container group where the target micro service is located on the communication terminal.
In an alternative embodiment, a common rule subscription component may be configured on the communication terminal for a plurality of container groups, and based on this, the method may specifically include:
and acquiring the flow forwarding rule subscribed for the target micro service from a shared rule subscription component on the communication terminal.
In an optional embodiment, a rule subscription component corresponding to each container group on the communication end where the first physical network card is located may be deployed in the first physical network card, and based on this, the method may specifically include:
acquiring a flow forwarding rule subscribed for the target micro service by using a target rule subscription component corresponding to a target container group where the target micro service is located; under the condition that a shared flow control component is configured on a physical network card, providing a flow forwarding rule subscribed for the target micro service to the shared flow control component; and under the condition that the flow control component corresponding to each container group is configured on the physical network card, providing the flow forwarding rule subscribed for the target micro service to the target flow control component corresponding to the target container group where the target micro service is located.
In an optional embodiment, a rule subscription component shared by a plurality of container groups on the communication end where the first physical network card is located may be deployed in the first physical network card, and based on this, the method may specifically include:
acquiring a flow forwarding rule subscribed for each micro service on a communication terminal by using the shared rule subscription component;
under the condition that a shared flow control component is configured on the physical network card, providing a flow forwarding rule subscribed for each micro service to the shared flow control component;
and under the condition that the flow control component corresponding to each container group is configured on the physical network card, providing the flow forwarding rule subscribed for the target micro service to the target flow control component corresponding to the target container group where the target micro service is located.
In an alternative embodiment, the service invocation request may employ a seven-layer network protocol.
In an optional embodiment, the method may further include returning, by using the first physical network card, the service response data generated by the target microservice to the initiator microservice that initiated the second service invocation request through the third physical network card.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subject of steps 700 and 701 may be device A; alternatively, the execution subject of step 700 may be device A and the execution subject of step 701 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 700, 701, etc., are used only for distinguishing the different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first" and "second" in this document are used to distinguish different requests, physical network cards, and the like, and do not represent a sequence, and do not limit that "first" and "second" are different types.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program can implement the steps that can be executed by the communication terminal in the above method embodiments when executed.
The memory of FIG. 6, described above, is used to store a computer program and may be configured to store other various data to support operations on a computing platform. Examples of such data include instructions for any application or method operating on the computing platform, contact data, phonebook data, messages, pictures, videos, and so forth. The memory may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The communication component in fig. 6 is configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device where the communication component is located can access a wireless network based on a communication standard, such as a WiFi, a 2G, 3G, 4G/LTE, 5G and other mobile communication networks, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The power supply assembly of fig. 6 described above provides power to the various components of the device in which the power supply assembly is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (14)

1. A flow control system, comprising a calling end and a plurality of service ends, wherein the calling end is equipped with a first physical network card, each service end is equipped with a second physical network card, and the first physical network card and the second physical network card subscribe to traffic forwarding rules corresponding to specified services;
the calling end is configured to initiate a service invocation request to the first physical network card;
the first physical network card is configured to determine a target microservice to which the service invocation request points, and to forward, according to the traffic forwarding rule corresponding to the target microservice, the service invocation request to the second physical network card equipped on a target service end capable of providing the target microservice;
and the second physical network card is configured to initiate a call to the target microservice on the target service end according to a resource locator in the service invocation request and the traffic forwarding rule corresponding to the target microservice.
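The forwarding path recited in claim 1 can be sketched roughly as follows. This is a minimal illustrative model, not the claimed implementation; all names (`TrafficRule`, `CallerNic`, `ServerNic`, the field names) are assumptions introduced for the sketch:

```python
# Minimal sketch of the claim-1 flow, assuming a rule table keyed by
# microservice name. All class and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class TrafficRule:
    service: str      # microservice the rule was subscribed for
    target_nic: str   # identifier of the second physical network card
    pod_url: str      # resource-locator prefix of the service instance

class PhysicalNic:
    def __init__(self):
        self.rules = {}  # service name -> TrafficRule

    def subscribe(self, rule):
        # Both the first and second NICs subscribe to the traffic
        # forwarding rules for the specified services.
        self.rules[rule.service] = rule

class CallerNic(PhysicalNic):
    def forward(self, request, fabric):
        # Determine the target microservice the request points to and
        # forward to the server-side NIC named by the matching rule.
        rule = self.rules[request["service"]]
        return fabric[rule.target_nic].invoke(request)

class ServerNic(PhysicalNic):
    def invoke(self, request):
        # Initiate the call using the resource locator in the request
        # plus the subscribed rule for the target microservice.
        rule = self.rules[request["service"]]
        return f"GET {rule.pod_url}{request['path']}"
```

Under this sketch, a request naming the service "orders" is matched against the caller-side rule table, handed to the NIC of the service end hosting "orders", and completed there using the locator carried in the request.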
2. The system according to claim 1, wherein a plurality of container groups are deployed on the target service end, the target microservice runs in a target container group of the plurality of container groups, and the second physical network card is configured with a flow control component for each container group on the target service end;
the second physical network card is specifically configured to forward the service invocation request to a target flow control component corresponding to the target container group;
and the target flow control component is configured to initiate a call to the target microservice on the target service end according to the resource locator in the service invocation request and the traffic forwarding rule corresponding to the target microservice.
3. The system according to claim 1, wherein a plurality of container groups are deployed on the target service end, the target microservice runs in a target container group of the plurality of container groups, and a flow control component shared by the plurality of container groups is configured on the second physical network card;
the second physical network card is configured to forward the service invocation request to the flow control component;
and the flow control component is configured to initiate a call to the target microservice on the target service end according to the resource locator in the service invocation request and the traffic forwarding rule corresponding to the target microservice.
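Claims 2 and 3 differ only in where the flow control component lives: one component per container group versus one component shared by all groups on the NIC. A hypothetical dispatch sketch (the names are illustrative, not from the patent):

```python
# Hypothetical contrast of claims 2 and 3. A flow control component
# carries the call into the container group(s) it serves.
class FlowControlComponent:
    def __init__(self, pods):
        self.pods = set(pods)  # container groups this component serves

    def call(self, pod, service, locator):
        if pod not in self.pods:
            raise KeyError(f"component does not serve {pod}")
        return f"call {service} in {pod} via {locator}"

def per_group_dispatch(pods):
    # Claim 2: the NIC holds one flow control component per container group.
    return {pod: FlowControlComponent([pod]) for pod in pods}

def shared_dispatch(pods):
    # Claim 3: the NIC holds a single component common to all groups.
    return FlowControlComponent(pods)
```

Either layout reaches the same microservice; the per-group variant isolates state per container group, while the shared variant keeps one rule-lookup path on the card.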
4. The system according to claim 2 or 3, wherein a rule subscription component is configured on the target service end for each container group respectively;
the target rule subscription component corresponding to the target container group is configured to acquire the traffic forwarding rule subscribed for the target microservice;
and to provide the traffic forwarding rule to the second physical network card.
5. The system according to claim 2 or 3, wherein a rule subscription component shared by the plurality of container groups is configured on the target service end;
the rule subscription component is configured to acquire the traffic forwarding rule subscribed for each microservice on the target service end;
and to provide the traffic forwarding rules to the second physical network card.
6. The system according to claim 2 or 3, wherein the second physical network card is configured with a rule subscription component for each container group respectively;
the target rule subscription component corresponding to the target container group is configured to acquire the traffic forwarding rule subscribed for the target microservice;
to provide, in a case where a shared flow control component is configured on the second physical network card, the traffic forwarding rule subscribed for the target microservice to the shared flow control component;
and to provide, in a case where a flow control component corresponding to each container group is configured on the second physical network card, the traffic forwarding rule subscribed for the target microservice to the target flow control component corresponding to the target container group in which the target microservice is located.
7. The system according to claim 2 or 3, wherein the second physical network card is configured with a rule subscription component shared by the plurality of container groups;
the rule subscription component is configured to acquire the traffic forwarding rule subscribed for each microservice on the target service end;
to provide, in a case where a shared flow control component is configured on the second physical network card, the traffic forwarding rules subscribed for the microservices on the target service end to the shared flow control component;
and to provide, in a case where a flow control component corresponding to each container group is configured on the second physical network card, the traffic forwarding rule subscribed for the target microservice to the target flow control component corresponding to the target container group in which the target microservice is located.
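Claims 4 through 7 enumerate the combinations of rule subscription component (per container group or shared) and flow control component (per group or shared). The delivery step of claims 6 and 7 can be sketched roughly as follows; every name here is an assumption for illustration:

```python
# Hypothetical sketch of rule delivery per claims 6/7: a subscription
# component hands each acquired rule either to the shared flow control
# component or to the component of the rule's own container group.
def deliver_rules(rules, shared=None, per_group=None):
    """rules: iterable of (service, container_group, rule) tuples.

    Exactly one of `shared` / `per_group` is configured on the NIC:
    `shared` is a single rule table, `per_group` maps each container
    group to its own rule table.
    """
    delivered = []
    for service, group, rule in rules:
        if shared is not None:
            shared[service] = rule            # shared component case
            delivered.append((service, "shared"))
        else:
            per_group[group][service] = rule  # per-group component case
            delivered.append((service, group))
    return delivered
```

The returned list records where each rule landed, which is the only observable difference between the two configurations in this sketch.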
8. The system according to claim 1, wherein the service invocation request employs a Layer-7 (application-layer) network protocol.
9. The system according to claim 1, wherein the second physical network card is further configured to return service response data generated by the target microservice to an initiator microservice on the calling end that initiated the service invocation request.
10. A communication end, characterized by comprising a memory, a processor, and a physical network card assembled thereon;
the memory being configured to store one or more computer instructions;
the processor being coupled to the memory and configured to execute the one or more computer instructions to provide the physical network card with a traffic forwarding rule corresponding to at least one microservice running on the communication end;
the physical network card being configured to distribute, based on the traffic forwarding rule, input traffic flowing to the communication end to a destination container group (POD) on the communication end, and to forward, based on the traffic forwarding rule, output traffic sent by the communication end to a destination container group POD on the communication end or on another communication end;
wherein the microservice required by the input traffic or the output traffic runs in the destination container group POD.
11. The communication end according to claim 10, wherein, in forwarding output traffic sent by the communication end to a destination container group POD on another communication end, the physical network card is configured to:
acquire first output traffic, wherein the first output traffic comprises a first service invocation request;
determine a target microservice to which the first service invocation request points;
and forward, according to the traffic forwarding rule corresponding to the target microservice, the first service invocation request to a third physical network card assembled on a target communication end capable of providing the target microservice, so that the third physical network card initiates a call to the target microservice in a target container group (POD) on the target communication end according to a resource locator in the first service invocation request and the traffic forwarding rule corresponding to the target microservice.
12. The communication end according to claim 10, wherein, in distributing input traffic flowing to the communication end to a destination container group POD on the communication end, the physical network card is configured to:
receive first input traffic, wherein the first input traffic comprises a second service invocation request forwarded by a fourth physical network card;
and initiate a call to the target microservice in the destination container group POD on the communication end according to a resource locator in the second service invocation request and the traffic forwarding rule corresponding to the requested target microservice.
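Claims 10 through 12 describe two data paths through the same NIC: inbound traffic is always delivered to a local destination POD, while outbound traffic is delivered locally or handed to a peer NIC. A rough routing sketch under assumed field names (`host`, `pod`, `target_nic` are all hypothetical):

```python
# Hypothetical sketch of the claim-10 routing decision on one NIC.
def route(nic, packet):
    """nic: {'host': ..., 'rules': {service: rule}} where each rule
    names the destination POD, its hosting communication end, and the
    peer NIC; packet carries its direction and target service."""
    rule = nic["rules"][packet["service"]]
    if packet["direction"] == "input":
        # Input traffic: distribute to the destination POD on this end.
        return ("local", rule["pod"])
    # Output traffic: deliver locally when the destination POD lives on
    # this communication end, otherwise forward to the peer NIC.
    if rule["host"] == nic["host"]:
        return ("local", rule["pod"])
    return ("remote", rule["target_nic"])
```

In this model the method of claim 13 is simply `route` applied to each packet the NIC sees, in either direction.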
13. A flow control method based on a service grid, characterized in that the method is applicable to a communication end in the service grid, a physical network card is assembled on the communication end, and the physical network card holds a traffic forwarding rule corresponding to at least one microservice running on the communication end, the method comprising:
in a case where input traffic is received, distributing the input traffic to a destination container group POD on the communication end by using the physical network card;
in a case where output traffic is sent out, forwarding the output traffic to a destination container group POD on the communication end or on another communication end by using the physical network card;
wherein the microservice required by the input traffic or the output traffic runs in the destination container group POD.
14. A computer-readable storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to perform the service grid-based flow control method of claim 13.
CN202110881328.9A 2021-08-02 2021-08-02 Flow control method, system, equipment and medium based on service grid Active CN113765816B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110881328.9A CN113765816B (en) 2021-08-02 2021-08-02 Flow control method, system, equipment and medium based on service grid

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110881328.9A CN113765816B (en) 2021-08-02 2021-08-02 Flow control method, system, equipment and medium based on service grid

Publications (2)

Publication Number Publication Date
CN113765816A true CN113765816A (en) 2021-12-07
CN113765816B CN113765816B (en) 2023-12-15

Family

ID=78788398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110881328.9A Active CN113765816B (en) 2021-08-02 2021-08-02 Flow control method, system, equipment and medium based on service grid

Country Status (1)

Country Link
CN (1) CN113765816B (en)


Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103581042A (en) * 2013-10-30 2014-02-12 华为技术有限公司 Method and device for sending data package
CN105550130A (en) * 2015-12-14 2016-05-04 中电科华云信息技术有限公司 Container based dynamic arrangement method for application environment and system applying method
US20160124742A1 (en) * 2014-10-30 2016-05-05 Equinix, Inc. Microservice-based application development framework
CN106375131A (en) * 2016-10-20 2017-02-01 浪潮电子信息产业股份有限公司 Uplink load balancing method of virtual network
CN107395781A (en) * 2017-06-29 2017-11-24 北京小度信息科技有限公司 Network communication method and device
US10007509B1 (en) * 2015-12-08 2018-06-26 Amazon Technologies, Inc. Container handover for device updates
CN108494607A (en) * 2018-04-19 2018-09-04 云家园网络技术有限公司 The design method and system of big double layer network framework based on container
US10313495B1 (en) * 2017-07-09 2019-06-04 Barefoot Networks, Inc. Compiler and hardware interactions to remove action dependencies in the data plane of a network forwarding element
CN110149231A (en) * 2019-05-21 2019-08-20 优刻得科技股份有限公司 Update method, apparatus, storage medium and the equipment of virtual switch
CN110858138A (en) * 2018-08-22 2020-03-03 北京航天长峰科技工业集团有限公司 Alarm receiving and processing system based on micro-service technology
CN112398687A (en) * 2020-11-13 2021-02-23 广东省华南技术转移中心有限公司 Configuration method of cloud computing network, cloud computing network system and storage medium
US20210058316A1 (en) * 2019-08-23 2021-02-25 Vmware, Inc. Dynamic multipathing using programmable data plane circuits in hardware forwarding elements
CN112511611A (en) * 2020-11-19 2021-03-16 腾讯科技(深圳)有限公司 Communication method, device and system of node cluster and electronic equipment
CN112910692A (en) * 2021-01-19 2021-06-04 中原银行股份有限公司 Method, system and medium for controlling service grid flow based on micro service gateway
CN113037812A (en) * 2021-02-25 2021-06-25 中国工商银行股份有限公司 Data packet scheduling method and device, electronic equipment, medium and intelligent network card


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG XIN; WU ZHINAN; QIAN SONGRONG: "Docker container network architecture based on Macvlan", Microcomputer Applications *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114579199A (en) * 2022-02-22 2022-06-03 阿里巴巴(中国)有限公司 Method, system and storage medium for extending proxy in service grid
CN114579199B (en) * 2022-02-22 2024-04-26 阿里巴巴(中国)有限公司 Method, system and storage medium for expanding agent in service grid
CN114826906A (en) * 2022-04-13 2022-07-29 北京奇艺世纪科技有限公司 Flow control method and device, electronic equipment and storage medium
CN114826906B (en) * 2022-04-13 2023-09-22 北京奇艺世纪科技有限公司 Flow control method, device, electronic equipment and storage medium
CN117061338A (en) * 2023-08-16 2023-11-14 中科驭数(北京)科技有限公司 Service grid data processing method, device and system based on multiple network cards
CN117061338B (en) * 2023-08-16 2024-06-07 中科驭数(北京)科技有限公司 Service grid data processing method, device and system based on multiple network cards
CN117395141A (en) * 2023-12-07 2024-01-12 江苏征途技术股份有限公司 Method for simplifying station room intelligent auxiliary and artificial intelligent visual gateway configuration
CN117395141B (en) * 2023-12-07 2024-05-24 江苏征途技术股份有限公司 Method for simplifying station room intelligent auxiliary and artificial intelligent visual gateway configuration

Also Published As

Publication number Publication date
CN113765816B (en) 2023-12-15

Similar Documents

Publication Publication Date Title
EP3398305B1 (en) Method and architecture for virtualized network service provision
JP6622394B2 (en) Managing multiple active subscriber identity module profiles
CN113765816B (en) Flow control method, system, equipment and medium based on service grid
CN110049070B (en) Event notification method and related equipment
CN113760452B (en) Container scheduling method, system, equipment and storage medium
US10462260B2 (en) Context-aware and proximity-aware service layer connectivity management
CN111224821B (en) Security service deployment system, method and device
CN113596191B (en) Data processing method, network element equipment and readable storage medium
US20210028992A1 (en) Multi-access edge computing cloud discovery and communications
CN112491944A (en) Edge application discovery method and device, and edge application service support method and device
US20200280892A1 (en) Session context handling method, network element, and terminal device
CN114189885B (en) Network element information processing method, device and storage medium
CN113572864B (en) Data processing method, network element equipment and readable storage medium
US11696167B2 (en) Systems and methods to automate slice admission control
CN112533177A (en) Method, device, apparatus and medium for providing and discovering moving edge calculation
WO2022022440A1 (en) Network reconnection method, and device, system and storage medium
CN116326199A (en) Radio access node device and interface method executed by radio access node device
US11595871B2 (en) Systems and methods for securely sharing context between MEC clusters
CN112653716B (en) Service binding method and device
CN112752352A (en) Method and equipment for determining intermediate session management function I-SMF
CN112995311B (en) Service providing method, device and storage medium
US20230275974A1 (en) Network functionality (nf) aware service provision based on service communication proxy (scp)
US20140341033A1 (en) Transmission management device, system, and method
CN112565086A (en) Distributed network system, message forwarding method, device and storage medium
WO2022022842A1 (en) Service request handling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40069109

Country of ref document: HK

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240311

Address after: #03-06, Lazada One, 51 Bras Basah Road, Singapore

Patentee after: Alibaba Innovation Co.

Country or region after: Singapore

Address before: Room 01, 45th Floor, AXA Tower, 8 Shenton Way, Singapore

Patentee before: Alibaba Singapore Holdings Ltd.

Country or region before: Singapore
