Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and the corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort are intended to fall within the scope of the application.
At present, under a service grid, flow control work must be carried out in both kernel mode and user mode, which occupies resources and results in poor processing performance. To this end, in some embodiments of the application, the architecture of the service grid is improved: the processing work of the data plane in the service grid is sunk into the physical network card through tight software-hardware coordination, so that this processing work no longer occupies computing resources on the host, and the host can concentrate on the micro services themselves. In addition, the physical network card has higher forwarding performance, which can effectively improve network throughput and reduce network latency, thereby improving inter-service communication performance under the grid.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a flow control system according to an exemplary embodiment of the present application. As shown in fig. 1, the system includes: a calling end 10 and several service ends 20. The calling end 10 and the service ends 20 may be nodes in a cloud network, cloud servers, and the like; the physical implementation forms of the calling end 10 and the service ends 20 are not limited in this embodiment.
The flow control scheme provided by this embodiment can be applied to scenarios in which a service grid is used to manage traffic for micro services. The micro service is an architectural approach to constructing applications: unlike the more traditional monolithic approach, a micro-service architecture splits an application into a plurality of core functions. Each function is called a service and can be constructed and deployed individually, which means that the services do not affect each other when working (or failing); the services can communicate and cooperate with each other to provide the user with the desired content. A service grid is a specialized infrastructure layer for handling inter-service communication. It is responsible for reliably delivering requests through the complex service topologies that make up modern cloud-native applications. In practice, the service grid is typically implemented as a set of lightweight network agents that are deployed alongside application code, without the application needing to be aware of them.
The flow control scheme provided by this embodiment improves upon the traditional service grid: it proposes sinking the processing work of the data plane in the service grid into the physical network card on the host. In fig. 1, the schematic configuration of the flow control system is shown by taking a single inter-service communication as an example. It should be appreciated that there are substantially more communication ends under the service grid.
Referring to fig. 1, both the calling end 10 and the service end 20 are equipped with physical network cards; for convenience of distinction, the physical network card equipped on the calling end 10 is described as a first physical network card 30, and the physical network card on the service end 20 as a second physical network card 40. In this embodiment, the physical network card may be an intelligent network card with an independent processor that supports programmable customization. The micro services may run in container groups deployed on a host, i.e., a communication end in this embodiment (communication end is used herein as a generic term for the calling end and the service end). In terms of deployment relationships, a communication end can comprise a plurality of container groups, the container groups share a physical network card, and a plurality of micro services can run in one container group.
Based on this, in this embodiment, a processing program of the data plane in the service grid may be written in advance for the physical network card and loaded into the physical network card, so that the physical network card has the processing capability of the data plane in the service grid. In this embodiment, the processing program of the data plane may be written with reference to the relevant logic of the sidecar agent in the conventional service grid, which is not described in detail herein. The processing functions of the sidecar agent on the data plane in the traditional service grid can include, but are not limited to, traffic interception, 4-7 layer network packet parsing, routing and forwarding, and the like; these processing functions can be sunk by writing the related processing programs into the physical network card.
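As a purely illustrative sketch (all names and data shapes below are our assumptions, not part of the application), the three data-plane functions named above could be organized as follows:

```python
# Hypothetical sketch of the data-plane processing program sunk into the
# physical network card; identifiers and packet representation are
# illustrative only.

MESH_PORTS = {80, 8080, 9080}  # assumed set of ports subject to mesh control

def intercept(packet: dict) -> bool:
    """Traffic interception: decide whether this packet needs sidecar-style
    flow control (replacing host-side iptables redirection)."""
    return packet.get("port") in MESH_PORTS

def parse_l7(packet: dict) -> dict:
    """4-7 layer parsing: extract the called service name and port from the
    request carried in the packet payload (here, an HTTP Host header)."""
    host, _, port = packet["host_header"].partition(":")
    return {"service": host, "port": int(port or 80)}

def route(meta: dict, forwarding_rules: dict) -> str:
    """Routing and forwarding: map the target micro service to the address
    of a container group that can provide it."""
    return forwarding_rules[meta["service"]][0]
```

In a real intelligent network card these stages would run on the card's own processor; the sketch only shows the division of labor between them.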
After the physical network card is given the processing capability of the data plane in the service grid, referring to fig. 1, the calling end 10 may initiate a service call request to the first physical network card 30. Here, all service call requests initiated by the calling end 10 are streamed to the first physical network card 30, and the physical network card may perform traffic filtering on the service call requests, that is, determine which service call requests need to be flow-controlled in the sidecar mode. Compared with the traditional service grid, the calling end 10 does not need to adopt software traffic-steering mechanisms such as iptables to perform traffic filtering, so the host's resource consumption in this respect can be effectively reduced.
The first physical network card 30 may obtain, in advance, the traffic forwarding rules and service registration information of each micro service in the service grid; based on this, the first physical network card 30 may determine the target micro service to which the service call request is directed. For example, the first physical network card 30 may parse the name and port of the micro service to be called from the service call request. The first physical network card 30 may further forward the service call request, according to the traffic forwarding rule corresponding to the target micro service, to the second physical network card 40 assembled on the target service end 20 that can provide the target micro service. The traffic forwarding rule of the target micro service may include the address of at least one container group capable of providing the target micro service; the first physical network card 30 may determine, through policies such as load balancing, a target container group for the current service call request and configure the access address of the target container group into the service call request, so that the first physical network card 30 can forward the service call request to the second physical network card 40 assembled on the target service end 20 where the target container group is located.
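The outbound steps just described — rule lookup, load-balanced selection of a target container group, and rewriting the access address into the request — can be sketched as follows (a minimal illustration; the class and field names are ours, and round-robin stands in for whichever load-balancing policy is actually used):

```python
# Hypothetical sketch of the outbound routing logic on the first physical
# network card; names are illustrative, round-robin is one possible policy.

import itertools

class OutboundRouter:
    def __init__(self, traffic_rules: dict):
        # traffic_rules: micro-service name -> list of container-group
        # addresses able to provide that service (the "traffic forwarding rule")
        self._rr = {svc: itertools.cycle(addrs)
                    for svc, addrs in traffic_rules.items()}

    def pick_target(self, service: str) -> str:
        """Select a target container group by round-robin load balancing."""
        return next(self._rr[service])

    def rewrite(self, request: dict) -> dict:
        """Configure the chosen access address into the service call request
        before forwarding it to the peer physical network card."""
        return {**request, "dst": self.pick_target(request["service"])}
```

Each call cycles through the container groups registered for the service, so consecutive requests for the same micro service are spread across its replicas.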
In this embodiment, the communication process between the first physical network card 30 and the second physical network card 40 is similar to the communication process between sidecar agents in the conventional service grid: the access address in the service call request can be continuously routed in a manner consistent with the communication between sidecar agents, so that the service call request arrives at the second physical network card 40 on the target service end 20.
The second physical network card 40 may initiate a call to the target micro service on the target service end 20 according to the resource location identifier in the service call request and the traffic forwarding rule corresponding to the target micro service. Traffic under the service grid is typically layer 4-7 traffic; for example, in this embodiment, the service call request may use a layer 4-7 network protocol. In addition, as mentioned above, there may be multiple micro services on the target service end 20. The calling end 10 identifies the micro service it expects to call by means of the resource location identifier carried in the service call request, so the second physical network card 40 can determine the target micro service from that identifier and continue to route the service call request according to the traffic forwarding rule corresponding to the target micro service, thereby initiating the call to the internal target micro service.
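To illustrate the inbound side (again a sketch under our own assumptions: the resource location identifier is taken to be a URL whose first path segment names the micro service, which the application does not mandate):

```python
# Hypothetical sketch of inbound resolution on the second physical network
# card: map the resource location identifier (a URL) to the local
# container-group address of the target micro service.

from urllib.parse import urlsplit

def resolve_target(url: str, inbound_rules: dict) -> str:
    """Pick the target micro service among the several running on the target
    service end, using the first path segment of the URL (an assumed
    convention), then apply the inbound traffic forwarding rule."""
    path = urlsplit(url).path.lstrip("/")
    service = path.split("/", 1)[0]
    return inbound_rules[service]
```

In practice the identifier-to-service mapping would follow whatever naming scheme the mesh registers with the control plane; only the lookup structure is shown here.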
The target micro service may respond to the current service call request and generate service response data. The second physical network card 40 may then return the service response data generated by the target micro service to the initiating micro service in the calling end 10. The traffic forwarding rules corresponding to the target micro service comprise forwarding rules for input traffic and output traffic. In the above-mentioned process of calling the target micro service, the first physical network card 30 forwards the service call request according to the forwarding rule for output traffic, and the second physical network card 40 forwards the service call request according to the forwarding rule for input traffic. In the response return process of the target micro service, the second physical network card 40 may process the service response data according to the forwarding rule for output traffic, and the first physical network card 30 may forward the service response data according to the forwarding rule for input traffic, so as to ensure that the service response data reaches the initiating micro service in the calling end 10. The response return process is symmetrical to the service call process and will not be described in detail here.
In summary, in this embodiment, the architecture of the service grid may be improved: the processing work of the data plane in the service grid is sunk into the physical network card through tight software-hardware coordination, so that this processing work no longer occupies computing resources on the host, and the host can concentrate on the micro services themselves. In addition, the physical network card has higher forwarding performance, which can effectively improve network throughput and reduce network latency, thereby improving inter-service communication performance under the grid.
In the above or below embodiments, the sinking of the relevant functions of the sidecar agent may be performed in a variety of implementations.
In one implementation, only the processing functions of the data plane in the service grid may be sunk into the physical network card.
In this implementation, fig. 2 is a logic diagram of an exemplary implementation of sinking the processing functions of the data plane in the service grid into the physical network card. Referring to fig. 2, this implementation may be as follows: a plurality of container groups may be deployed on the target service end 20, the target micro service may run in a target container group among the plurality of container groups, and a flow control component may be configured on the second physical network card 40 for each container group on the target service end 20. The flow control component may be used to perform the various processing tasks of the data plane in the service grid, including but not limited to traffic interception, 4-7 layer network packet parsing, and routing and forwarding.
In this exemplary implementation, the second physical network card 40 may, upon receiving the service call request, forward it to the target flow control component corresponding to the target container group, and the target flow control component may perform the operation of initiating a call to the target micro service on the target service end 20 according to the resource location identifier in the service call request and the traffic forwarding rule corresponding to the target micro service. Accordingly, in this exemplary implementation, a flow control component may be configured in the second physical network card 40 for each container group on the target service end 20 in a 1:1 correspondence, which is consistent with the deployment of sidecar agents in the legacy service grid. This reduces the functional changes required when sinking the processing functions of the data plane in the service grid, and thus the workload of the sinking process.
In this implementation, fig. 3 is a logic diagram of another exemplary implementation of sinking a processing function of a data plane in a service grid into a physical network card, and referring to fig. 3, this implementation may be: a plurality of container groups are deployed on the target service end 20, the target micro service may run on a target container group of the plurality of container groups, and a flow control component common to the plurality of container groups may be configured on the second physical network card 40.
In this exemplary implementation, the second physical network card 40 may forward the service invocation request onto the common flow control component upon receipt of the service invocation request; the shared flow control component may perform an operation of initiating a call to a target micro-service on the target server 20 according to the resource location identifier in the service call request and the flow forwarding rule corresponding to the target service. Accordingly, in this exemplary implementation, only one common flow control component needs to be configured in the second physical network card 40, which has a lower requirement on the processing capability of the physical network card, so that the hardware cost of the physical network card can be effectively saved.
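The difference between the two deployment shapes — one flow control component per container group (fig. 2) versus one component shared by all container groups (fig. 3) — can be made concrete with a small sketch (all class and variable names are hypothetical):

```python
# Illustrative comparison of per-container-group versus shared flow
# control components on the physical network card; names are ours.

class FlowControlComponent:
    def __init__(self, name: str):
        self.name = name

    def handle(self, request: dict) -> str:
        # stand-in for interception, L4-7 parsing, and routed forwarding
        return f"{self.name} routed {request['service']}"

def build_per_pod(pods: list) -> dict:
    """Fig. 2 shape: a 1:1 mapping, mirroring sidecar deployment in a
    legacy service grid."""
    return {pod: FlowControlComponent(f"fc-{pod}") for pod in pods}

def build_shared(pods: list) -> dict:
    """Fig. 3 shape: a single component serves every container group,
    reducing demands on the card's processing capability."""
    shared = FlowControlComponent("fc-shared")
    return {pod: shared for pod in pods}
```

The per-pod shape trades more on-card resources for closer compatibility with the legacy sidecar layout; the shared shape minimizes hardware cost, as the text above notes.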
In addition, the foregoing description has taken the second physical network card 40 as an example; it should be understood that the processing work of the related data plane may be sunk into the first physical network card 30 in the calling end 10 in the same implementation manner.
In summary, in this implementation manner, the processing work of the data plane in the service grid may be sunk into the physical network card, and the processing work of the data plane such as flow interception, 4-7 layer network packet parsing, and routing forwarding is performed by one or more flow control components configured in the physical network card, which may effectively reduce the processing pressure of the communication end.
In addition to the processing work of the data plane, other work in the service grid also needs to be performed, such as rule subscription work.
In another implementation, the rule subscription job may also be sunk into the physical network card. In this implementation, fig. 4 is a logic diagram of an exemplary implementation of sinking rule subscription functions in a service grid into a physical network card, and referring to fig. 4, this implementation may be: configuring a rule subscription component on the second physical network card 40 for each container group on the target server 20; the target rule subscription component corresponding to the target container group can be used for acquiring the traffic forwarding rule subscribed for the target micro-service.
Based on this, in the case where a common flow control component is configured on the second physical network card 40, the target rule subscription component may provide the flow forwarding rule subscribed for the target micro service to the common flow control component. In this case, rule subscription components are configured for each container group on the communication end in the physical network card, each rule subscription component performs its own role, and the shared flow control component can acquire the flow forwarding rule from the corresponding rule subscription component as required, so as to support the shared flow control component to perform flow control on a plurality of micro services on the communication end, and the interaction efficiency under this condition is higher.
In the case that the second physical network card 40 is configured with a flow control component corresponding to each container group, the target rule subscription component may provide the flow forwarding rule subscribed for the target micro service to the target flow control component corresponding to the target container group where the target micro service is located. In this case, the deployment structure of the sidecar agent in the traditional service grid is inherited, that is, the sidecar agent in the traditional service grid is completely sunk into the physical network card in a 1:1 mode; in the physical network card, a rule subscription component and a flow control component are respectively configured for each container group on the communication end. The ecological universality of the implementation scheme is better, and the modification to the functions is smaller in the sinking process.
Fig. 5 is a logic diagram of another exemplary implementation of sinking rule subscription functionality in a service grid into a physical network card, referring to fig. 5, the implementation may be: configuring a rule subscription component shared by a plurality of container groups on the second physical network card 40; the rule subscription component may obtain traffic forwarding rules for each micro-service subscription on the target server 20.
Based on this, in the case where a common flow control component is configured on the second physical network card 40, the rule subscription component may provide the flow forwarding rule subscribed for each micro service on the target server 20 to the common flow control component. Under the condition, each micro-service on the communication end shares the rule subscription component and the flow control component, the shared rule subscription component and the flow control component can cooperate with each other to support the flow control of a plurality of micro-services on the communication end, the requirement on the hardware of the physical network card is lower, and the hardware cost of the physical network card can be saved.
In the case where the second physical network card 40 is configured with a flow control component corresponding to each container group, the rule subscription component may provide the flow forwarding rule subscribed for the target micro-service to the target flow control component corresponding to the target container group where the target micro-service is located. In this case, a rule subscription component shared on the physical network card may provide forwarding rules required for the plurality of flow control components to support the plurality of flow control components to perform the processing of the data plane.
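Across the variants above, the rule subscription component is essentially a publisher that receives traffic forwarding rules (from the control plane) and pushes them to one or more flow control components. A minimal sketch, with all names assumed by us:

```python
# Hypothetical sketch of a rule subscription component feeding one or more
# flow control components; works for both the 1:1 and shared arrangements.

class FlowControl:
    def __init__(self):
        # traffic forwarding rules currently held by this component
        self.rules: dict = {}

class RuleSubscriber:
    def __init__(self):
        self._listeners = []

    def attach(self, flow_control: FlowControl) -> None:
        """Register a flow control component (one for 1:1, several when a
        shared subscriber serves multiple components)."""
        self._listeners.append(flow_control)

    def on_rule(self, service: str, rule: list) -> None:
        """Called when a traffic forwarding rule is subscribed for `service`;
        distribute it to every attached flow control component."""
        for fc in self._listeners:
            fc.rules[service] = rule
```

Whether the subscriber lives on the card or on the host, and whether it serves one component or many, only changes how `attach` is used; the distribution logic is the same.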
In addition, the foregoing describes the sinking process of the rule subscription work by taking the second physical network card 40 as an example, and it should be understood that the same implementation manner may be adopted to sink the relevant rule subscription work into the first physical network card 30 in the calling end 10.
In summary, in this implementation manner, the rule subscription in the service grid may be sunk into the physical network card, and the rule subscription is performed by one or more rule subscription components configured in the physical network card, which may effectively reduce the processing pressure of the communication end.
In yet another implementation, the rule subscription work may be maintained in the communication end. In this implementation, an exemplary implementation may be: a rule subscription component is configured on the target server 20 for each container group, respectively. Based on the above, the target rule subscription component corresponding to the target container group can be used for acquiring the traffic forwarding rule subscribed for the target micro-service; the traffic forwarding rule is provided to the second physical network card 40.
In the case where a common flow control component is configured on the second physical network card 40, the target rule subscription component on the target server 20 may provide the flow forwarding rule subscribed for the target micro service to the common flow control component on the second physical network card 40. In this case, the communication end configures a rule subscription component for each container group, where each rule subscription component performs its own role, and the shared flow control component on the second physical network card 40 may obtain the flow forwarding rule from the corresponding rule subscription component as required, so as to support the shared flow control component to perform flow control on multiple micro services on the communication end.
In the case that the second physical network card 40 is configured with a flow control component corresponding to each container group, the target rule subscription component on the target service end 20 may provide the traffic forwarding rule subscribed for the target micro service to the target flow control component, on the second physical network card 40, corresponding to the target container group where the target micro service is located. In this case, the rule subscription component and the flow control component of each container group are configured one-to-one. This implementation scheme has better ecological universality and requires fewer changes to the rule subscription component in the communication end.
Another exemplary implementation may be: a rule subscription component common to multiple container groups is configured on the target server 20. Based on this, the rule subscription component may obtain the traffic forwarding rule subscribed for each micro service on the target server 20, and provide the traffic forwarding rule to the second physical network card 40.
In the case where a common flow control component is configured on the second physical network card 40, the common rule subscription component on the target server 20 may provide the flow forwarding rules subscribed for each micro service on the target server 20 to the common flow control component on the second physical network card 40. In this case, the rule subscription component shared by the micro services on the communication end and the shared flow control component on the physical network card may cooperate with each other.
In the case where the second physical network card 40 is configured with a flow control component corresponding to each container group, the common rule subscription component on the target server 20 may provide the flow forwarding rule subscribed for the target micro service to the target flow control component corresponding to the target container group where the target micro service is located on the second physical network card 40. In this case, the common rule subscription component on the communication end can provide the required forwarding rules for the multiple flow control components on the physical network card, so as to support the multiple flow control components on the physical network card to execute the processing work of the data plane.
In addition, the foregoing description has been made with respect to the reservation scheme of the rule subscription work by taking the second physical network card 40 as an example, and it should be understood that the same implementation manner may be adopted to reserve the related rule subscription work in the calling end 10.
The above implementation manner involves communication between the rule subscription component on the communication end and the flow control component on the physical network card. To ensure data security, a secure channel can be established between the rule subscription component on the communication end and the flow control component on the physical network card, over which the traffic forwarding rules can be transmitted securely using schemes such as encryption.
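The application only says "schemes such as encryption" for protecting rules crossing the host-to-card channel; as one assumed concretization, each rule message could be authenticated with a key shared over the secure channel, so the card rejects tampered rules:

```python
# One possible (assumed, not mandated) protection for traffic forwarding
# rules sent from the host-side rule subscription component to the flow
# control component on the card: HMAC authentication with a shared key.

import hashlib
import hmac
import json

def send_rule(rule: dict, key: bytes):
    """Serialize the rule and attach an authentication tag."""
    payload = json.dumps(rule, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, tag

def receive_rule(payload: bytes, tag: str, key: bytes) -> dict:
    """Verify the tag before accepting the rule on the card side."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("rule message failed authentication")
    return json.loads(payload)
```

In a full design the payload would additionally be encrypted (e.g., the channel itself runs over TLS); the sketch shows only the integrity check.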
In summary, in this implementation, the rule subscription in the service grid may be retained in the communication end, and may cooperate with the flow control component in the physical network card to implement flow control.
The flow control scheme provided in this embodiment is exemplarily described below using a shopping application as an example.
For example, the shopping application may be split into multiple core functions, each of which may be deployed as a micro service: for example, a micro service A corresponding to a search function and a micro service B corresponding to a product recommendation function. Communication may need to occur between micro services A and B while a user uses the shopping application.
First, micro service A and micro service B can register with the control plane of the service grid, so that the control plane records service information such as the address and port of the physical network card assembled on the communication end where each micro service is located, the service name, and the traffic forwarding rules. Based on this, micro services A and B can subscribe to each other's service information.
Take micro service A and micro service B as HTTP services as an example.
The micro service a can act as a request initiator to initiate a call request to the micro service B by inputting a URL, and the call request will flow to the physical network card a' assembled on the communication end a where the micro service a is located.
After receiving the call request, the physical network card A' can parse out information such as the called service name and port, and can forward the call request, according to the subscribed traffic forwarding rule of micro service B, to the physical network card B' assembled on the communication end B where micro service B is located.
After the physical network card B' receives the call request, it can initiate a call to micro service B in communication end B, either in pass-through mode or according to the URL and the traffic forwarding rule of micro service B.
Micro service B may respond to the call request and return service response data to micro service A along the original path, thereby enabling communication between micro service A and micro service B.
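The shopping-application flow above can be condensed into a runnable sketch, with each hop modeled as a function (all identifiers are hypothetical; the cards' internal parsing and rule lookup are collapsed into simple dictionary operations):

```python
# End-to-end sketch of the example: micro service A -> card A' -> card B'
# -> micro service B, with the response returning along the same path.

def microservice_b(request: dict) -> dict:
    """Product-recommendation service: responds to the call request."""
    return {"status": 200, "body": f"recommendations for {request['query']}"}

def card_b_prime(request: dict) -> dict:
    """Physical network card B': inbound rule delivers the call to micro
    service B in its container group."""
    return microservice_b(request)

def card_a_prime(request: dict, rules: dict) -> dict:
    """Physical network card A': parse the called service name and forward
    per the subscribed traffic forwarding rule of micro service B."""
    peer = rules[request["service"]]  # resolves to card B'
    return peer(request)

def microservice_a(query: str) -> dict:
    """Search service acting as the request initiator."""
    request = {"service": "svc-b", "query": query}
    return card_a_prime(request, {"svc-b": card_b_prime})
```

Running `microservice_a("running shoes")` walks the request through both cards and returns B's response to A, mirroring the four numbered steps of the example.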
Fig. 6 is a schematic structural diagram of a communication end according to another exemplary embodiment of the present application. As shown in fig. 6, the communication end may include a memory 60 and a processor 61, and is further equipped with a physical network card 62.
A processor 61 coupled to the memory 60 for executing a computer program in the memory 60 for providing the physical network card with traffic forwarding rules corresponding to at least one micro-service running on the communication side;
the physical network card is used for distributing the input flow flowing to the communication terminal to the target container group POD on the communication terminal based on the flow forwarding rule; based on a flow forwarding rule, forwarding output flow sent by a communication terminal to a destination container group (POD) on the communication terminal or other communication terminals;
wherein the micro-services required for incoming traffic or outgoing traffic are run in the destination container group POD.
The communication terminal provided in this embodiment may provide the flow forwarding rule corresponding to the micro service to the physical network card, based on which the flow control job may be sunk into the physical network card, and the physical network card controls the input flow and the output flow. It should be noted that, communication may be performed between micro services running on the communication end, so that in this case, two micro services in communication may share the same physical network card, that is, the physical network card may forward the output traffic initiated by the communication end to the corresponding destination container group POD on the communication end.
In an alternative embodiment, the physical network card 62 may be used to, in forwarding the output traffic sent by the communication end to the destination container group POD on the other communication end:
acquiring a first output flow, wherein the first output flow comprises a first service call request;
determining a target micro-service pointed by the first service call request;
and forwarding the first service call request to a third physical network card assembled on a target communication end capable of providing the target micro service according to the flow forwarding rule corresponding to the target micro service, so that the third physical network card initiates call to the target micro service in the target container group POD on the target communication end according to the resource positioning identifier in the first service call request and the flow forwarding rule corresponding to the target micro service.
The above procedure corresponds to the steps executed when the communication end where the physical network card is located serves as the calling end in the foregoing system embodiment.
In addition, the communication end where the physical network card is located may also be used as the service end in the foregoing system embodiment, where in this case, the physical network card is used to, in a process of distributing the input traffic flowing to the communication end to the destination container group POD on the communication end:
receiving a first input flow, wherein the first input flow comprises a second service call request forwarded by a fourth physical network card;
And according to the resource positioning identification in the second service calling request and the flow forwarding rule corresponding to the requested target micro-service, calling the target micro-service in the target container group POD on the communication terminal.
Correspondingly, in an alternative embodiment, the communication components in the physical network card 62 may include a flow control component for each container group on the communication end where the card is located, based on which the processor 61 may be specifically configured to:
forwarding the second service call request to a target flow control component corresponding to a target container group where the target micro-service is located; and calling the target micro-service by utilizing the target flow control component according to the resource positioning identification in the second service calling request and the flow forwarding rule corresponding to the target service.
In an alternative embodiment, the communication components in the physical network card 62 may include a flow control component that is common to multiple container groups on the communication end where it is located, based on which the processor 61 may be specifically configured to:
forwarding the second service invocation request to the shared flow control component; and calling the target micro-service according to the resource positioning identification in the second service calling request and the flow forwarding rule corresponding to the target service by utilizing the shared flow control component.
In an alternative embodiment, a rule subscription component may be configured separately for each container group on the communication end. Based on this, the processor 61 may be specifically configured to:
acquire the traffic forwarding rule subscribed for the target micro-service from the target rule subscription component corresponding to the target container group where the target micro-service is located on the communication end.
In an alternative embodiment, a common rule subscription component may be configured for multiple container groups on the communication end. Based on this, the processor 61 may be specifically configured to:
acquire the traffic forwarding rule subscribed for the target micro-service from the common rule subscription component on the communication end.
In an alternative embodiment, the physical network card 62 may be deployed with a rule subscription component corresponding to each container group on the communication end. Based on this, the processor 61 may be specifically configured to:
acquire the traffic forwarding rule subscribed for the target micro-service by using the target rule subscription component corresponding to the target container group where the target micro-service is located; when a shared flow control component is configured on the physical network card, provide the traffic forwarding rule subscribed for the target micro-service to the shared flow control component; and when a respective flow control component is configured on the physical network card for each container group, provide the traffic forwarding rule subscribed for the target micro-service to the target flow control component corresponding to the target container group where the target micro-service is located.
In an alternative embodiment, the physical network card 62 may be deployed with a rule subscription component shared by multiple container groups on the communication end where the card is located. Based on this, the processor 61 may be specifically configured to:
acquire, by using the shared rule subscription component, the traffic forwarding rule subscribed for each micro-service on the communication end;
when a shared flow control component is configured on the physical network card, provide the traffic forwarding rule subscribed for each micro-service to the shared flow control component; and
when a respective flow control component is configured on the physical network card for each container group, provide the traffic forwarding rule subscribed for the target micro-service to the target flow control component corresponding to the target container group where the target micro-service is located.
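By way of illustration only, the distribution of subscribed rules from a rule subscription component to the flow control components, in either configuration above, may be sketched as follows; the rule contents and all names are hypothetical:

```python
# Hypothetical sketch: a shared rule subscription component acquires the
# traffic forwarding rule subscribed for each micro-service and provides it
# either to a shared flow control component, or to the target flow control
# component of the container group hosting that micro-service, depending on
# which flow control layout the physical network card is configured with.

# rules subscribed for each micro-service (contents illustrative)
subscribed_rules = {"orders": {"pod": "pod-a", "weight": 100},
                    "billing": {"pod": "pod-b", "weight": 100}}

def provide_rules(shared_fc_rules, per_group_fc_rules, shared_fc_configured):
    for service, rule in subscribed_rules.items():
        if shared_fc_configured:
            # the shared flow control component receives every subscribed rule
            shared_fc_rules[service] = rule
        else:
            # per-group layout: the rule goes only to the target flow control
            # component of the container group where the micro-service runs
            per_group_fc_rules.setdefault(rule["pod"], {})[service] = rule

shared_rules, per_group_rules = {}, {}
provide_rules(shared_rules, per_group_rules, shared_fc_configured=True)
provide_rules(shared_rules, per_group_rules, shared_fc_configured=False)
print(sorted(shared_rules), sorted(per_group_rules))
```

Either way, the rules reach the card before the traffic does, so rule lookup at forwarding time stays local to the card.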
In an alternative embodiment, the service invocation request may employ a seven-layer (application-layer) network protocol.
In an alternative embodiment, the processor 61 may be further configured to return service response data generated by the target micro-service to the initiator micro-service that initiated the second service invocation request via the third physical network card.
Further, as shown in fig. 6, the communication end further includes: a power component 63, and the like. Only some components are schematically shown in fig. 6, which does not mean that the communication end includes only the components shown in fig. 6.
It should be noted that, for the technical details of the embodiments of the communication end, reference may be made to the related descriptions of the calling end and the service end in the foregoing system embodiments, which are not repeated here for brevity; this omission should not be construed as limiting the protection scope of the present application.
Fig. 7 is a flowchart of a service-grid-based flow control method according to another exemplary embodiment of the present application. The method may be performed by a flow control device, which may be implemented as software, hardware, or a combination thereof, and which may be integrated in a communication end. Referring to fig. 7, the communication end is equipped with a first physical network card, where the first physical network card includes a traffic forwarding rule corresponding to at least one micro-service running on the communication end, and the method may include:
step 700, when input traffic is received, distributing the input traffic to a destination container group POD on the local communication end by using the first physical network card;
step 701, when output traffic is to be sent out, forwarding the output traffic to a destination container group POD on the local communication end or on another communication end by using the first physical network card;
wherein the micro-service required by the input traffic or the output traffic runs in the destination container group POD.
In an alternative embodiment, the step of forwarding the output traffic to the destination container group POD on another communication end by using the first physical network card may include:
acquiring first output traffic, where the first output traffic comprises a first service invocation request;
determining the target micro-service to which the first service invocation request is directed; and
forwarding the first service invocation request, according to the traffic forwarding rule corresponding to the target micro-service, to a second physical network card assembled on a target communication end capable of providing the target micro-service, so that the second physical network card initiates a call to the target micro-service in the destination container group POD on the target communication end according to the resource locator identifier in the first service invocation request and the traffic forwarding rule corresponding to the target micro-service.
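By way of illustration only, the outbound path just described (step 701 on the calling side) may be sketched as follows; the peer addresses and field names are hypothetical:

```python
# Hypothetical sketch of the outbound path: the first physical network card
# determines the target micro-service of the first service invocation request
# and, per that micro-service's traffic forwarding rule, forwards the request
# to the second physical network card of a communication end able to provide
# the service. The peer card then completes the call locally using the
# resource locator identifier. All names/addresses are illustrative.

# traffic forwarding rules: target micro-service -> address of the second
# physical network card on the target communication end
forwarding_rules = {"inventory": "nic-host-2:7000"}

def forward_outbound(request: dict) -> tuple:
    service = request["target_service"]      # determine the target micro-service
    peer_nic = forwarding_rules[service]     # rule subscribed for that service
    # hand off (peer card address, resource locator) for remote invocation
    return peer_nic, request["resource_locator"]

print(forward_outbound({"target_service": "inventory",
                        "resource_locator": "/inventory/42"}))
```

Because both the caller-side lookup and the callee-side dispatch run on the cards, the host CPUs on both ends see only the micro-service payloads, not the mesh routing work.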
The flowchart shown in fig. 7 illustrates the steps performed when the communication end acts as the calling end in the foregoing system embodiment.
In addition, the communication end may also serve as the service end in the foregoing system embodiment. In this case, the step of distributing the input traffic to the destination container group POD on the local communication end by using the first physical network card may include:
receiving first input traffic, where the first input traffic comprises a second service invocation request initiated by a third physical network card; and
invoking the target micro-service in the target container group POD on the communication end according to the resource locator identifier in the second service invocation request and the traffic forwarding rule corresponding to the requested target micro-service.
Accordingly, in an alternative embodiment, the first physical network card may be deployed with a flow control component corresponding to each container group on the communication end. In this case, the method may specifically include:
forwarding the second service invocation request to the target flow control component corresponding to the target container group where the target micro-service is located; and invoking the target micro-service by means of the target flow control component according to the resource locator identifier in the second service invocation request and the traffic forwarding rule corresponding to the target micro-service.
In an alternative embodiment, the first physical network card may be deployed with a flow control component shared by multiple container groups on the communication end where the card is located. Based on this, the method may specifically include:
forwarding the second service invocation request to the shared flow control component; and invoking the target micro-service by means of the shared flow control component according to the resource locator identifier in the second service invocation request and the traffic forwarding rule corresponding to the target micro-service.
In an alternative embodiment, a rule subscription component may be configured on the communication end for each container group. Based on this, the method may specifically include:
acquiring the traffic forwarding rule subscribed for the target micro-service from the target rule subscription component corresponding to the target container group where the target micro-service is located on the communication end.
In an alternative embodiment, a common rule subscription component may be configured for multiple container groups on the communication end. Based on this, the method may specifically include:
acquiring the traffic forwarding rule subscribed for the target micro-service from the common rule subscription component on the communication end.
In an alternative embodiment, a rule subscription component corresponding to each container group on the communication end may be deployed in the first physical network card. Based on this, the method may specifically include:
acquiring the traffic forwarding rule subscribed for the target micro-service by using the target rule subscription component corresponding to the target container group where the target micro-service is located; when a shared flow control component is configured on the physical network card, providing the traffic forwarding rule subscribed for the target micro-service to the shared flow control component; and when a respective flow control component is configured on the physical network card for each container group, providing the traffic forwarding rule subscribed for the target micro-service to the target flow control component corresponding to the target container group where the target micro-service is located.
In an alternative embodiment, a rule subscription component shared by multiple container groups on the communication end where the card is located may be deployed in the first physical network card. Based on this, the method may specifically include:
acquiring, by using the shared rule subscription component, the traffic forwarding rule subscribed for each micro-service on the communication end;
when a shared flow control component is configured on the physical network card, providing the traffic forwarding rule subscribed for each micro-service to the shared flow control component; and
when a respective flow control component is configured on the physical network card for each container group, providing the traffic forwarding rule subscribed for the target micro-service to the target flow control component corresponding to the target container group where the target micro-service is located.
In an alternative embodiment, the service invocation request may employ a seven-layer (application-layer) network protocol.
In an alternative embodiment, the method may further include returning, by the first physical network card, service response data generated by the target micro-service to the initiator micro-service that initiated the second service invocation request via the third physical network card.
It should be noted that the execution subjects of the steps of the method provided in the above embodiment may be the same device, or the method may be executed by different devices. For example, the execution subject of steps 700 and 701 may be device A; for another example, the execution subject of step 700 may be device A and the execution subject of step 701 may be device B; and so on.
In addition, some of the flows described in the above embodiments and drawings include multiple operations appearing in a specific order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein, or may be performed in parallel. Sequence numbers such as 700 and 701 are merely used to distinguish the operations and do not by themselves represent any order of execution. Further, the flows may include more or fewer operations, and these operations may be performed sequentially or in parallel. It should also be noted that the words "first" and "second" herein are used to distinguish different requests, physical network cards, and the like; they do not denote an order, nor do they require the "first" and "second" items to be of different types.
Accordingly, embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed, can implement the steps of the above method embodiment that are executable by the communication end.
The memory in fig. 6 is used to store a computer program and may be configured to store various other data to support operations on the computing platform. Examples of such data include instructions for any application or method operating on the computing platform, contact data, phonebook data, messages, pictures, videos, and the like. The memory may be implemented by any type of volatile or nonvolatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The communication component in fig. 6 is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, a 2G, 3G, 4G/LTE, or 5G mobile communication network, or a combination thereof. In one exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component further comprises a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The power component shown in fig. 6 supplies power to the various components of the device in which it is located. The power component may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device in which it is located.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.