CN117834739A - Service calling method and device

Service calling method and device

Info

Publication number
CN117834739A
Authority
CN
China
Prior art keywords
service
discovery data
cache
calling
grid
Legal status
Pending
Application number
CN202211184674.2A
Other languages
Chinese (zh)
Inventor
李来
杨奕
Current Assignee
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Cloud Computing Technologies Co Ltd
Application filed by Huawei Cloud Computing Technologies Co Ltd filed Critical Huawei Cloud Computing Technologies Co Ltd
Priority to CN202211184674.2A
Priority to PCT/CN2023/101382 (WO2024066503A1)
Publication of CN117834739A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/5682 Policies or rules for updating, deleting or replacing the stored data
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/63 Routing a service request depending on the request content or context

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to a service calling method and a device. The method is applied to a first service, where the first service is a microservice running in a service grid, and includes: judging, when the first service calls a second service, whether service discovery data of the second service exists in a cache of the first service; intercepting a call request sent by the first service when the service discovery data of the second service does not exist in the cache; acquiring the service discovery data of the second service from a control plane of the service grid, and storing the service discovery data of the second service into the cache; and executing the call of the first service to the second service based on the service discovery data of the second service. The embodiments of the application can reduce resource consumption and improve processing efficiency during service calls, thereby improving service performance.

Description

Service calling method and device
Technical Field
The present disclosure relates to the field of cloud computing technologies, and in particular, to a service calling method and device.
Background
The micro-service architecture is a cloud native architecture. Based on a microservice architecture, an application may be made up of many loosely coupled and independently deployable smaller components (commonly referred to as microservices), each with a respective area of responsibility. When processing a user request, a microservice-based application may invoke multiple internal microservices to collectively generate its response.
In a microservice architecture, service discovery data is typically loaded through a network proxy implemented as a sidecar when service calls are made. In this approach, the sidecar occupies a separate container process, which increases resource consumption; moreover, because the sidecar and the service it is mounted to belong to different containers, inter-process communication between the sidecar and the service introduces latency, which lowers the processing efficiency of service calls and therefore degrades service performance.
Disclosure of Invention
In view of this, a service calling method and apparatus are provided.
In a first aspect, embodiments of the present application provide a service invocation method applied to a first service, where the first service is a microservice running in a service grid. The method includes: judging, when the first service calls a second service, whether service discovery data of the second service exists in a cache of the first service; intercepting a call request sent by the first service when the service discovery data of the second service does not exist in the cache; acquiring the service discovery data of the second service from a control plane of the service grid, and storing the service discovery data of the second service into the cache; and executing the call of the first service to the second service based on the service discovery data of the second service.
According to the service calling method, when the first service calls the second service, it is first judged whether service discovery data of the second service exists in the cache of the first service. When the service discovery data of the second service does not exist in the cache, the call request sent by the first service is intercepted, the service discovery data of the second service is obtained from the control plane of the service grid and stored into the cache of the first service, and the call of the first service to the second service is then executed based on the service discovery data of the second service.
In this way, when the first service calls the second service and the cache of the first service does not contain service discovery data of the second service, the call request is intercepted and the service discovery data of the second service is obtained directly from the control plane of the service grid. This realizes lazy loading of service discovery data and reduces the resources consumed in loading it. Moreover, no sidecar is needed as a network proxy when loading the service discovery data, so cross-process communication is avoided, the resource consumption of service calls is reduced, the processing efficiency of service calls is improved, and service performance is thereby improved.
In a first possible implementation manner of the service invocation method according to the first aspect, the method further includes: and when the service discovery data of the second service exists in the cache, calling the second service according to the service discovery data of the second service in the cache.
In this embodiment, when service discovery data of a second service exists in a cache of a first service, the first service can directly call the second service according to the service discovery data of the second service in the cache of the first service, so that processing efficiency in service call can be improved.
In a second possible implementation manner of the service invocation method according to the first possible implementation manner of the first aspect, the service discovery data includes an IP address and load information of an instance of a service, and invoking the second service according to the service discovery data of the second service in the cache includes: selecting a target instance from the instances of the second service according to a preset load balancing rule and load information of the instances of the second service; and calling the target instance according to the IP address of the target instance.
In this embodiment, when the second service is invoked according to the service discovery data of the second service in the cache, a target instance may be selected from the instances of the second service according to a preset load balancing rule and load information of the instances of the second service; and then calling the target instance according to the IP address of the target instance, thereby realizing the calling of the first service to the second service. In this way, the processing efficiency at the time of service call can be improved.
In a third possible implementation manner of the service invocation method according to the first aspect or the first possible implementation manner of the first aspect or the second possible implementation manner of the first aspect, the method further includes: and in the running process of the first service, recording the link information of the first service through the link tracking service in the service grid.
In this embodiment, during the operation of the first service, the link information of the first service is recorded through the link tracking service in the service grid, so that when the first service is restarted, the service discovery data is preloaded according to the historical link information of the first service.
In a fourth possible implementation manner of the service invocation method according to the third possible implementation manner of the first aspect, the method further includes: during the restarting of the first service, acquiring history call information of the first service by calling the link tracking service, where the history call information includes identifiers of services called by the first service, as determined by the link tracking service according to the historical link information; after the first service is started, acquiring service discovery data of each service included in the history call information from the control plane of the service grid; and storing the acquired service discovery data into the cache.
In this embodiment, in the process of restarting the first service, the link tracking service is invoked to obtain the history call information of the first service, and after the first service is started, service discovery data of each service included in the history call information is obtained from the control plane of the service grid, and the obtained service discovery data is stored in the cache. By the method, the service discovery data can be preloaded when the first service is started, so that the time cost for acquiring the service discovery data for the first time can be reduced, the processing efficiency when the service is called is improved, and the service performance is further improved.
In a fifth possible implementation manner of the service invocation method according to the third possible implementation manner of the first aspect, the method further includes: and in the running process of the first service, periodically updating the service discovery data in the cache by periodically calling the link tracking service.
In this embodiment, in the running process of the first service, the link tracking service is periodically invoked to periodically update the service discovery data in the cache, so that dynamic lazy loading of the service discovery data can be realized.
In a sixth possible implementation manner of the service invocation method according to the first aspect or any one of the first to fifth possible implementation manners of the first aspect, the method further includes: subscribing the service discovery data in the cache to the control plane of the service grid, so that the control plane of the service grid pushes updated service discovery data to the first service when the service discovery data is updated.
In this embodiment, the service discovery data in the cache of the first service may be subscribed to the control plane of the service grid, so that when the service discovery data is updated, the control plane of the service grid pushes the updated service discovery data to the first service, so that the service discovery data in the cache of the first service may be kept dynamically updated, and accuracy of the service discovery data in the cache of the first service may be improved.
In a seventh possible implementation form of the service invocation method according to the first aspect as such or according to any of the first possible implementation forms of the first aspect to the sixth possible implementation form of the first aspect, the method is implemented in a plug-in manner based on a Java agent, the plug-in being mounted to the first service.
In this embodiment, the method is implemented as a plug-in based on a Java agent, and the plug-in is mounted to the first service. In this way, the plug-in implementing the embodiment of the application can be dynamically loaded into the first service without intrusion, and the plug-in and the first service belong to the same process, which reduces inter-process communication. Therefore, during service calls the Java plug-in can replace the sidecar in an existing service grid, so that the service consumes fewer resources and performs better.
In a second aspect, embodiments of the present application provide a service invocation apparatus applied to a first service, the first service being a microservice running in a service grid. The apparatus includes: a judging module, configured to judge, when the first service calls a second service, whether service discovery data of the second service exists in a cache of the first service; an interception module, configured to intercept a call request sent by the first service when the service discovery data of the second service does not exist in the cache; a first acquisition module, configured to acquire the service discovery data of the second service from a control plane of the service grid and store the service discovery data of the second service into the cache; and a first calling module, configured to execute the call of the first service to the second service based on the service discovery data of the second service.
When the first service calls the second service, the service invocation apparatus first judges whether service discovery data of the second service exists in the cache of the first service. When the service discovery data of the second service does not exist in the cache, the apparatus intercepts the call request sent by the first service, acquires the service discovery data of the second service from the control plane of the service grid, stores it into the cache of the first service, and then executes the call of the first service to the second service based on the service discovery data of the second service.
In this way, when the first service calls the second service and the cache of the first service does not contain service discovery data of the second service, the call request is intercepted and the service discovery data of the second service is obtained directly from the control plane of the service grid. This realizes lazy loading of service discovery data and reduces the resources consumed in loading it. Moreover, no sidecar is needed as a network proxy when loading the service discovery data, so cross-process communication is avoided, the resource consumption of service calls is reduced, the processing efficiency of service calls is improved, and service performance is thereby improved.
In a first possible implementation manner of the service invocation apparatus according to the second aspect, the apparatus further comprises: and the second calling module is used for calling the second service according to the service discovery data of the second service in the cache when the service discovery data of the second service exists in the cache.
In this embodiment, when service discovery data of a second service exists in a cache of a first service, the first service can directly call the second service according to the service discovery data of the second service in the cache of the first service, so that processing efficiency in service call can be improved.
In a second possible implementation manner of the service invocation apparatus according to the first possible implementation manner of the second aspect, the service discovery data includes an IP address and load information of an instance of a service, and the second invocation module includes: the target instance selecting sub-module is used for selecting a target instance from the instances of the second service according to a preset load balancing rule and the load information of the instances of the second service; and the target instance calling sub-module is used for calling the target instance according to the IP address of the target instance.
In this embodiment, when the second service is invoked according to the service discovery data of the second service in the cache, a target instance may be selected from the instances of the second service according to a preset load balancing rule and load information of the instances of the second service; and then calling the target instance according to the IP address of the target instance, thereby realizing the calling of the first service to the second service. In this way, the processing efficiency at the time of service call can be improved.
In a third possible implementation manner of the service invocation apparatus according to the second aspect or the first possible implementation manner of the second aspect or the second possible implementation manner of the second aspect, the apparatus further includes: and the link information recording module is used for recording the link information of the first service through the link tracking service in the service grid in the running process of the first service.
In this embodiment, during the operation of the first service, the link information of the first service is recorded through the link tracking service in the service grid, so that when the first service is restarted, the service discovery data is preloaded according to the historical link information of the first service.
In a fourth possible implementation manner of the service invocation apparatus according to the third possible implementation manner of the second aspect, the apparatus further includes: a history call information acquisition module, configured to acquire history call information of the first service by calling the link tracking service during the restarting of the first service, where the history call information includes identifiers of services called by the first service, as determined by the link tracking service according to the historical link information; a second acquisition module, configured to acquire service discovery data of each service included in the history call information from the control plane of the service grid after the first service is started; and a storage module, configured to store the acquired service discovery data into the cache.
In this embodiment, in the process of restarting the first service, the link tracking service is invoked to obtain the history call information of the first service, and after the first service is started, service discovery data of each service included in the history call information is obtained from the control plane of the service grid, and the obtained service discovery data is stored in the cache. By the method, the service discovery data can be preloaded when the first service is started, so that the time cost for acquiring the service discovery data for the first time can be reduced, the processing efficiency when the service is called is improved, and the service performance is further improved.
In a fifth possible implementation manner of the service invocation apparatus according to the third possible implementation manner of the second aspect, the apparatus further includes: and the updating module is used for periodically updating the service discovery data in the cache by periodically calling the link tracking service in the running process of the first service.
In this embodiment, in the running process of the first service, the link tracking service is periodically invoked to periodically update the service discovery data in the cache, so that dynamic lazy loading of the service discovery data can be realized.
In a sixth possible implementation form of the service invocation apparatus according to the second aspect as such or according to any one of the first to fifth possible implementation forms of the second aspect, the apparatus further comprises: a subscription module, configured to subscribe the service discovery data in the cache to the control plane of the service grid, so that the control plane of the service grid pushes updated service discovery data to the first service when the service discovery data is updated.
In this embodiment, the service discovery data in the cache of the first service may be subscribed to the control plane of the service grid, so that when the service discovery data is updated, the control plane of the service grid pushes the updated service discovery data to the first service, so that the service discovery data in the cache of the first service may be kept dynamically updated, and accuracy of the service discovery data in the cache of the first service may be improved.
In a seventh possible implementation form of the service invocation apparatus according to the second aspect as such or according to any of the first possible implementation form of the second aspect to the sixth possible implementation form of the second aspect, the apparatus is implemented in a plug-in manner based on a Java agent, the plug-in being mounted to the first service.
In this embodiment, the apparatus is implemented as a plug-in based on a Java agent, and the plug-in is mounted to the first service. In this way, the plug-in implementing the embodiment of the application can be dynamically loaded into the first service without intrusion, and the plug-in and the first service belong to the same process, which reduces inter-process communication. Therefore, during service calls the Java plug-in can replace the sidecar in an existing service grid, so that the service consumes fewer resources and performs better.
In a third aspect, embodiments of the present application provide a cluster of computing devices, including at least one computing device, each computing device including a processor and a memory; the processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device to cause the cluster of computing devices to perform the service invocation method of the first aspect or one or more of the plurality of possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product comprising instructions that, when executed by a cluster of computing devices, cause the cluster of computing devices to perform the service invocation method of the first aspect or one or more of the plurality of possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium comprising computer program instructions which, when executed by a cluster of computing devices, perform the service invocation method of the first aspect or one or more of the plurality of possible implementations of the first aspect.
These and other aspects of the application will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present application and together with the description, serve to explain the principles of the present application.
Fig. 1 shows a schematic diagram of an application scenario of a service invocation method according to an embodiment of the present application.
Fig. 2 shows a flowchart of a service invocation method according to an embodiment of the present application.
Fig. 3 is a schematic diagram showing a processing procedure of a service invocation method according to an embodiment of the present application.
Fig. 4 is a schematic diagram showing a processing procedure of a service invocation method according to an embodiment of the present application.
Fig. 5 is a schematic diagram showing a processing procedure of a service invocation method according to an embodiment of the present application.
Fig. 6 is a schematic diagram showing a processing procedure of a service invocation method according to an embodiment of the present application.
Fig. 7 shows a block diagram of a service invocation apparatus according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits have not been described in detail as not to unnecessarily obscure the present application.
For ease of understanding, related terms related to embodiments of the present application will first be described:
service management: micro services are managed and administered, such as service registration, service discovery, load balancing, application routing, etc.
Service discovery: refers to the ability of a service as a client in a micro-service architecture to automatically discover a list of service addresses. The process of service discovery may be regarded as a process in which a service as a client automatically loads service discovery data. By means of automated service discovery, communication between micro services can be achieved without the need to perceive the opposite end location and IP address.
Service grid: i.e., service mesh, is an infrastructure layer in the micro-service architecture that handles communication between services, supporting reliable delivery of network requests of cloud native applications in complex topology environments. The service grid is deployed with the application, but transparent to the application. The service grid typically includes a data plane (data plane) and a control plane (control plane).
Service visibility: refers to the service discovery data that the current service can obtain from the control plane of the service grid for the services it calls.
The service grid is exemplarily described below by taking an open source service grid Istio as an example.
The open source service grid Istio includes a data plane and a control plane. The data plane of Istio is the layer of Envoy proxies: Istio injects an Envoy proxy as a sidecar container beside each service container, and the Envoy proxy then intercepts all inbound and outbound traffic of the service. The control plane of Istio is the layer of Istiod components, which provides functions such as service discovery (discovery), configuration (configuration), and certificate management (certificates).
In Istio, the control plane passes service information and governance rules to the data plane based on the service discovery (x discovery service, xDS) protocol; the data plane uses the xDS protocol as its application programming interface (API) and communicates with the control plane based on the xDS protocol.
The "x" in xDS does not refer to a specific protocol; xDS is a generic term for a set of service discovery protocols based on different data sources, including the listener discovery service (LDS), cluster discovery service (CDS), endpoint discovery service (EDS), route discovery service (RDS), aggregated discovery service (ADS), health discovery service (HDS), secret discovery service (SDS), metric discovery service (MS), rate limit service (RLS), and so on.
In a microservice architecture, service discovery data is typically loaded through a network proxy implemented as a sidecar when a service call is made. For example, in the open source service grid Istio, a sidecar container implementing the Envoy proxy sits beside each service container, and service discovery data is loaded through that sidecar container when a service call is made. However, the sidecar occupies a separate container process, which increases resource consumption; moreover, the sidecar and the service it is mounted to belong to different containers, so inter-process communication between the sidecar and the service introduces latency, which lowers the processing efficiency of service calls and therefore degrades service performance.
In addition, sidecar-based loading of service discovery data (i.e., loading through a network proxy implemented as a sidecar) is ill-suited to large-scale scenarios and complicates deployment, operation, and maintenance. In one example, Istio distributes service discovery data with a full-push policy, i.e., every sidecar process in the service grid holds all of the service discovery data of the entire grid, even though most of that data is useless to a given workload; in a large-scale scenario, full distribution of service discovery data leads to severe resource consumption, so the full-push policy is not suitable for large-scale scenarios.
In another example, service discovery data is loaded with an on-demand loading policy. For instance, the dependency relationships between services may be configured in advance, and service discovery data is then loaded according to these pre-configured dependencies during service calls; however, this approach generally requires manual configuration, is not suitable for large-scale scenarios, and is difficult to maintain. As another instance, a dedicated intermediate gateway service and a controller service may be deployed in Istio to load service discovery data on demand, but this requires starting additional intermediate gateway and controller services in the service grid, which not only consumes resources but also makes deployment and operation of the services more complex.
To solve the above technical problem, an embodiment of the present application provides a service invocation method applied to a first service, where the first service is a microservice running in a service grid. The method includes: judging, when the first service calls a second service, whether service discovery data of the second service exists in a cache of the first service; intercepting a call request sent by the first service when the service discovery data of the second service does not exist in the cache; acquiring the service discovery data of the second service from a control plane of the service grid, and storing the service discovery data of the second service into the cache; and executing the call of the first service to the second service based on the service discovery data of the second service.
According to the service calling method, when the first service calls the second service, it is first judged whether service discovery data of the second service exists in the cache of the first service. When the service discovery data of the second service does not exist in the cache, the call request sent by the first service is intercepted, the service discovery data of the second service is obtained from the control plane of the service grid and stored into the cache of the first service, and the call of the first service to the second service is then executed based on the service discovery data of the second service.
In this way, when the first service calls the second service and the cache of the first service does not contain service discovery data of the second service, the call request is intercepted and the service discovery data of the second service is obtained directly from the control plane of the service grid. This realizes lazy loading of service discovery data and reduces the resources consumed in loading it. Moreover, no sidecar is needed as a network proxy when loading the service discovery data, so cross-process communication is avoided, the resource consumption of service calls is reduced, the processing efficiency of service calls is improved, and service performance is thereby improved.
Here, lazy loading is a strategy in which resources that are not yet needed are not preloaded and are loaded only when required; it is a form of on-demand loading that reduces resource occupancy.
The service calling method can be applied to service governance scenarios in a microservice architecture. In the service grid of a real application scenario, hundreds of thousands or even millions of service instances may be deployed. In such a large-scale scenario, performing service calls with the service calling method of the embodiments of the application removes the need for sidecar-based loading of service discovery data, which reduces resource consumption during service calls and improves processing efficiency, thereby improving service performance.
Fig. 1 shows a schematic diagram of an application scenario of a service invocation method according to an embodiment of the present application. As shown in fig. 1, the terminal device 130 is connected to the cloud computing center 110 through a network (e.g., a wired network, a wireless network, etc.), and the cloud computing center 110 includes a server cluster 120, where the server cluster 120 provides services for applications on the terminal device 130 based on a microservice architecture. The terminal device 130 may be a smartphone, a netbook, a tablet computer, a notebook computer, a wearable electronic device (such as a smart bracelet or a smart watch), a television, a virtual reality device, a speaker, an electronic ink device, and so on. The specific type of the terminal device is not limited in the present application.
When a user uses an application on terminal device 130, server cluster 120 invokes multiple internal micro services to collectively generate a response for the application. When the server cluster 120 calls a plurality of internal micro services, there may be calls between the micro services, for example, a call from one micro service to another micro service, in which case, the service call method of the embodiment of the present application may be used to implement a call from one micro service to another micro service.
Fig. 2 shows a flowchart of a service invocation method according to an embodiment of the present application. The service calling method can be applied to a first service, where the first service is a microservice running in a service grid. For example, a service grid runs on the server cluster 120 shown in fig. 1, and the first service is a microservice running in the service grid as the call initiator.
As shown in fig. 2, the service invocation method in the embodiment of the present application includes:
step S210, when the first service invokes the second service, determining whether there is service discovery data of the second service in the cache of the first service.
Wherein the first service and the second service are both microservices running in the service grid, and each has a corresponding cache. The service discovery data loaded by the first service may be stored in the cache of the first service, and the service discovery data loaded by the second service may be stored in the cache of the second service.
In a scenario where a first service invokes a second service, the second service may be considered the service provider and the first service the service consumer or client. When the first service invokes the second service, it may first be determined whether service discovery data of the second service exists in the cache of the first service. For example, assuming that the first service invokes the second service through an identifier of the second service (for example, an ID, a name, or anything else that can uniquely identify the second service), whether service discovery data of the second service exists in the cache of the first service can be determined by looking up and comparing the identifier of the second service against the cache.
In one possible implementation, the service visibility list of the first service may be set in a cache of the first service. The service visibility list of the first service may be used to store an identification of the service to which the service discovery data stored in the cache of the first service belongs. For example, assuming that the service discovery data of the service S1, the service S2, the service S3, the service S4, and the service S5 is stored in the cache of the first service S0, the identifiers of the service S1, the service S2, the service S3, the service S4, and the service S5 may be stored in the service visibility list of the first service S0.
Whether service discovery data of the second service exists in the cache of the first service can then be judged by checking whether the identifier of the second service exists in the service visibility list of the first service. If the identifier of the second service exists in the service visibility list of the first service, the service discovery data of the second service may be considered present in the cache of the first service; if it does not, the service discovery data of the second service may be considered absent from the cache of the first service.
It should be noted that, those skilled in the art may determine whether the service discovery data of the second service exists in the cache of the first service in other manners, which is not limited in this application.
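For illustration only, a minimal Java sketch of such a cache with a service visibility list is given below. The class and member names (ServiceDiscoveryCache, visibilityList, and the placeholder ServiceDiscoveryData type) are assumptions made for this sketch and are not taken from the embodiments above.

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ServiceDiscoveryCache {

    // Minimal placeholder for the discovery data of one service (identifier, instances, etc.).
    public static class ServiceDiscoveryData {
        public final String serviceId;
        public ServiceDiscoveryData(String serviceId) { this.serviceId = serviceId; }
    }

    // Service visibility list: identifiers of the services whose discovery data is cached.
    private final Set<String> visibilityList = ConcurrentHashMap.newKeySet();
    // Cached discovery data keyed by service identifier.
    private final Map<String, ServiceDiscoveryData> entries = new ConcurrentHashMap<>();

    // The check of step S210: is discovery data for this service already in the cache?
    public boolean contains(String serviceId) {
        return visibilityList.contains(serviceId);
    }

    public ServiceDiscoveryData get(String serviceId) {
        return entries.get(serviceId);
    }

    // Stores discovery data and records the service identifier in the visibility list.
    public void put(String serviceId, ServiceDiscoveryData data) {
        entries.put(serviceId, data);
        visibilityList.add(serviceId);
    }

    public void remove(String serviceId) {
        entries.remove(serviceId);
        visibilityList.remove(serviceId);
    }
}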
Step S220, intercepting a call request sent by the first service when the service discovery data of the second service does not exist in the cache.
When the service discovery data of the second service does not exist in the cache of the first service, the call request sent by the first service can be intercepted, so that the service discovery data of the second service can be obtained from the control plane of the service grid.
Step S230, obtaining service discovery data of the second service from the control plane of the service grid, and storing the service discovery data of the second service in the cache.
After intercepting the call request sent by the first service, the service discovery data of the second service can be obtained from the control plane of the service grid according to the identifier of the second service. The service discovery data of the second service may include the identifier of the second service (e.g., an ID, a name, etc., which can uniquely identify the second service), the number of instances of the second service, and, for each instance, its identifier (e.g., an ID, a name, etc., which can uniquely identify the instance), IP address, load information, and so on. It should be noted that the service discovery data of the second service may further include other information, and those skilled in the art may set the specific contents of the service discovery data according to actual situations, which is not limited in this application.
After obtaining the service discovery data of the second service from the control plane of the service grid, the obtained service discovery data of the second service may be stored in the cache of the first service.
In one possible implementation, where the service visibility list of the first service is set in the cache of the first service, the identity of the second service may be stored into the service visibility list of the first service.
In one possible implementation, the service discovery data of the second service may be subscribed to the control plane of the service grid at the same time as it is obtained from the control plane of the service grid. After the subscription, when the control plane of the service grid detects that the service discovery data of the second service has been updated, it pushes the updated service discovery data of the second service to the first service. Upon receiving the updated (i.e., new) service discovery data of the second service pushed by the control plane of the service grid, the first service replaces the old service discovery data of the second service in the cache with the new service discovery data. In this way, the service discovery data of the second service in the cache of the first service is kept dynamically updated, which improves its accuracy.
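For illustration only, the following Java sketch shows the push-update path described above: a subscription is registered with the control plane, and pushed updates overwrite the stale cache entry. The ControlPlaneClient interface and its callback are assumptions made for this sketch, not an actual xDS client API.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;

public class DiscoveryDataSubscriber {

    // Assumed abstraction over the control plane of the service grid (e.g. an xDS connection).
    public interface ControlPlaneClient {
        // Registers interest in a service; the callback receives (serviceId, newData) on each update.
        void subscribe(String serviceId, BiConsumer<String, Object> onUpdate);
    }

    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private final ControlPlaneClient controlPlane;

    public DiscoveryDataSubscriber(ControlPlaneClient controlPlane) {
        this.controlPlane = controlPlane;
    }

    // Subscribes to a service so that pushed (new) discovery data replaces the old cache entry.
    public void subscribe(String serviceId) {
        controlPlane.subscribe(serviceId, (id, newData) -> cache.put(id, newData));
    }
}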
And step S240, executing the call of the first service to the second service based on the service discovery data of the second service.
After storing the service discovery data of the second service in the cache of the first service, the invocation of the second service by the first service may be performed according to the service discovery data of the second service in the cache of the first service. Specifically, for example, service discovery data of the second service may be obtained from a cache of the first service, and load information of an instance of the second service may be obtained from the service discovery data; then selecting a target instance from the instances of the second service according to a preset load balancing rule (such as lowest load priority and the like) and the load information of the instances of the second service; and then, the IP address of the target instance can be acquired from the service discovery data of the second service, and the target instance is called according to the IP address of the target instance, so that the first service can call the second service.
It should be noted that the second service may have more than one instance, and the specific number of instances of the second service is not limited in this application.
In one possible implementation manner, when the first service calls the second service, when service discovery data of the second service exists in a cache of the first service, the second service can be directly called according to the service discovery data of the second service in the cache of the first service, so that processing efficiency in service call can be improved.
In one possible implementation manner, according to service discovery data of a second service in a cache of a first service, when the second service is called, service discovery data of the second service can be obtained from the cache of the first service, and load information of an instance of the second service is obtained from the service discovery data; then selecting a target instance from the instances of the second service according to a preset load balancing rule (such as lowest load priority and the like) and the load information of the instances of the second service; and then, the IP address of the target instance can be acquired from the service discovery data of the second service, and the target instance is called according to the IP address of the target instance, so that the first service can call the second service.
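For illustration only, the following Java sketch selects a target instance with a "lowest load first" rule and then exposes its IP address for the call. The ServiceInstance fields and the concrete rule are example assumptions; the embodiments above do not prescribe a particular data format or load balancing rule.

import java.util.Comparator;
import java.util.List;

public class LowestLoadBalancer {

    // Minimal view of one instance taken from the service discovery data.
    public static class ServiceInstance {
        public final String id;
        public final String ipAddress;
        public final double load; // load information, e.g. current utilisation

        public ServiceInstance(String id, String ipAddress, double load) {
            this.id = id;
            this.ipAddress = ipAddress;
            this.load = load;
        }
    }

    // Picks the instance with the lowest reported load (the preset load balancing rule).
    public static ServiceInstance selectTarget(List<ServiceInstance> instances) {
        return instances.stream()
                .min(Comparator.comparingDouble(i -> i.load))
                .orElseThrow(() -> new IllegalStateException("no instances in discovery data"));
    }

    public static void main(String[] args) {
        List<ServiceInstance> instances = List.of(
                new ServiceInstance("inst-1", "10.0.0.11", 0.7),
                new ServiceInstance("inst-2", "10.0.0.12", 0.2),
                new ServiceInstance("inst-3", "10.0.0.13", 0.5));
        ServiceInstance target = selectTarget(instances);
        // The call is then issued to target.ipAddress (inst-2 in this example).
        System.out.println("calling instance at " + target.ipAddress);
    }
}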
Fig. 3 is a schematic diagram showing a processing procedure of a service invocation method according to an embodiment of the present application. As shown in fig. 3, the cache 311 of the first service 310 stores service discovery data of the second service 320 (service320), an example of which is as follows:
xds{
cds[service320]
eds[service320]
……
}
When the first service 310 invokes the second service 320, it is first determined whether service discovery data of the second service 320 exists in the cache 311. Since the service discovery data of the second service 320 is determined to exist in the cache 311, the first service 310 may acquire the service discovery data of the second service 320 from the cache 311 and directly call the second service 320 according to the acquired service discovery data.
Fig. 4 is a schematic diagram showing a processing procedure of a service invocation method according to an embodiment of the present application. As shown in fig. 4, the cache 411 of the first service 410 stores service discovery data of a service F (serviceF), an example of which is as follows:
xds{
cds[serviceF]
eds[serviceF]
……
}
During the operation of the first service 410, the first service 410 develops a dependency on the second service 420 due to actions such as hot updates. When the first service 410 invokes the second service 420, it is first determined whether service discovery data of the second service 420 (service420) exists in the cache 411. Since the service discovery data of the second service 420 is determined not to exist in the cache 411, the call request sent by the first service 410 may be intercepted, the service discovery data of the second service 420 may be acquired from the control plane 430 of the service grid, and the acquired service discovery data may be stored in the cache 411; in this process, the service discovery data of the second service 420 may also be subscribed to from the control plane 430 of the service grid, so as to obtain updates of the service discovery data of the second service 420.
After the service discovery data of the second service 420 is stored in the cache 411, the service discovery data in the cache 411 may, for example, be as follows:
xds{
cds[serviceF]
eds[serviceF]
……
cds[service420]
eds[service420]
……
}
Thereafter, the invocation of the second service 420 by the first service 410 may continue, specifically: the first service 410 acquires the service discovery data of the second service 420 from the cache 411 and invokes the second service 420 according to the acquired service discovery data.
In one possible implementation, during the operation of the first service, the link information of the first service may also be recorded through the link tracking service in the service grid. The link information may include service dependency information (e.g., information about the services called by the first service). In some examples, the link information may also include other information such as call links and call request volumes, which is not limited in this application. The link tracking service may provide functions such as call link restoration, call request volume statistics, link topology, and dependency analysis.
During the running of the first service, the link information of the first service is tracked and recorded through the link tracking service in the service grid, so that when the first service is restarted, service discovery data can be preloaded according to the historical link information of the first service.
In one possible implementation, during the restart of the first service, history call information of the first service may be obtained by invoking the link tracking service in the service grid. The history call information may include identifiers of the services called by the first service, as determined by the link tracking service from the historical link information of the first service. After the first service is started, the service discovery data of each service called by the first service that is included in the history call information can be obtained from the control plane of the service grid, and the obtained service discovery data can be stored in the cache of the first service.
For example, assume that during the start-up of the first service S6, the history call information of the first service S6 is obtained by calling the link tracking service in the service grid, and that this history call information includes the identifiers of the services called by the first service S6 within a preset time period before the current time, namely the identifiers of service S7, service S8, and service S9. After the first service S6 is started, the service discovery data of service S7, service S8, and service S9 may be obtained from the control plane of the service grid according to their identifiers, and the obtained service discovery data may be stored in the cache of the first service S6.
By the method, when the first service is restarted, the historical call information of the first service can be obtained through the link tracking service, and the preloading of the service discovery data is realized according to the historical call information of the first service, so that the time cost for obtaining the service discovery data for the first time is reduced, the processing efficiency of the service call is improved, and the service performance is further improved.
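For illustration only, the following Java sketch shows the preload step: during restart, the link tracking service returns the identifiers of services the first service has called before, and their discovery data is fetched, cached, and subscribed before traffic arrives. The two client interfaces are assumptions made for this sketch.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DiscoveryDataPreloader {

    // Assumed client of the link tracking service.
    public interface LinkTrackingClient {
        List<String> historicalCallees(String serviceId); // identifiers of services called historically
    }

    // Assumed client of the control plane of the service grid.
    public interface ControlPlaneClient {
        Object fetchDiscoveryData(String serviceId);
        void subscribe(String serviceId);
    }

    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private final LinkTrackingClient linkTracking;
    private final ControlPlaneClient controlPlane;

    public DiscoveryDataPreloader(LinkTrackingClient linkTracking, ControlPlaneClient controlPlane) {
        this.linkTracking = linkTracking;
        this.controlPlane = controlPlane;
    }

    // Called once after the first service has started.
    public void preload(String firstServiceId) {
        for (String calleeId : linkTracking.historicalCallees(firstServiceId)) {
            cache.put(calleeId, controlPlane.fetchDiscoveryData(calleeId));
            controlPlane.subscribe(calleeId); // receive pushed updates for the preloaded data
        }
    }
}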
In one possible implementation, in a case where a service visibility list of the first service is set in a cache of the first service, an identifier of each service called by the first service included in the history call information may be stored in the service visibility list of the first service.
In one possible implementation, while the service discovery data of each service called by the first service included in the history call information is obtained from the control plane of the service grid, the service discovery data of each such service (referred to herein as a third service) may also be subscribed to the control plane of the service grid.
After the service discovery data of each third service has been subscribed to the control plane of the service grid, for any third service, when the control plane of the service grid detects that the service discovery data of that third service has been updated, it pushes the updated service discovery data to the first service. Upon receiving the updated (i.e., new) service discovery data of the third service pushed by the control plane of the service grid, the first service replaces the old service discovery data of the third service in the cache with the new service discovery data. In this way, the service discovery data of the third service in the cache of the first service is kept dynamically updated, which improves its accuracy.
Fig. 5 is a schematic diagram showing a processing procedure of a service invocation method according to an embodiment of the present application. As shown in fig. 5, during the restart of the first service 510, history call information of the first service 510 may be obtained by invoking the link tracking service 520. After the first service 510 is started, the service discovery data of each service included in the history call information of the first service 510 may be acquired from the control plane 530 of the service grid, and the acquired service discovery data may be stored in the cache 511; in this process, the service discovery data of each service included in the history call information of the first service 510 may also be subscribed to from the control plane 530 of the service grid, so as to obtain updates of that service discovery data.
In one possible implementation, during the operation of the first service, the service discovery data in the cache may also be updated periodically by periodically invoking the link tracking service in the service grid. Specifically, in the running process of the first service, the link tracking service in the service grid can be called according to a preset period, the history calling information of the first service in the last period is obtained, and then the service discovery data in the cache of the first service is updated according to the history calling information in the last period.
For example, assume that the cache of the first service D1 holds the service discovery data of service D2, service D3, and service D4. During the operation of the first service D1, the link tracking service in the service grid is invoked according to the preset period, and the history call information of the first service D1 obtained for the last period includes the identifiers of service D2, service D3, service D5, and service D6. The service discovery data in the cache of the first service D1 can then be updated according to this history call information, specifically as follows:
deleting service discovery data of the service D4 from the cache of the first service D1;
acquiring service discovery data of the service D5 from the control plane of the service grid, and storing the acquired service discovery data of the service D5 into the cache of the first service D1;
service discovery data of the service D6 is acquired from a control plane of the service grid, and the acquired service discovery data of the service D6 is stored in a cache of the first service D1.
In this way, in the running process of the first service, the link tracking service is periodically invoked to periodically update the service discovery data in the cache, so that dynamic lazy loading of the service discovery data is realized.
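For illustration only, the following Java sketch reconciles the cache against the callees reported by the link tracking service for the last period, evicting services that were not called (service D4 in the example above) and loading discovery data for newly observed callees (services D5 and D6). The interfaces, names, and scheduling mechanism are assumptions made for this sketch.

import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicCacheRefresher {

    public interface LinkTrackingClient {
        Set<String> calleesInLastPeriod(String serviceId); // assumed link tracking query
    }

    public interface ControlPlaneClient {
        Object fetchDiscoveryData(String serviceId); // assumed pull from the control plane
    }

    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private final LinkTrackingClient linkTracking;
    private final ControlPlaneClient controlPlane;
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public PeriodicCacheRefresher(LinkTrackingClient linkTracking, ControlPlaneClient controlPlane) {
        this.linkTracking = linkTracking;
        this.controlPlane = controlPlane;
    }

    // Refreshes the cache once every period.
    public void start(String firstServiceId, long periodSeconds) {
        scheduler.scheduleAtFixedRate(() -> refresh(firstServiceId),
                periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    private void refresh(String firstServiceId) {
        Set<String> latestCallees = linkTracking.calleesInLastPeriod(firstServiceId);
        // Evict discovery data of services that were not called in the last period.
        for (String cachedId : new HashSet<>(cache.keySet())) {
            if (!latestCallees.contains(cachedId)) {
                cache.remove(cachedId);
            }
        }
        // Load discovery data for callees that are not yet cached.
        for (String calleeId : latestCallees) {
            cache.computeIfAbsent(calleeId, controlPlane::fetchDiscoveryData);
        }
    }
}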
In one possible implementation manner, when the service visibility list of the first service is set in the cache of the first service, the service visibility list of the first service may be updated according to the history call information in the last period, so as to improve accuracy of the service visibility list of the first service.
In one possible implementation, the service calling method of the embodiments of the present application may be implemented as a plug-in based on a Java Agent, and the plug-in is mounted to the first service. Loading a Java Agent at start-up is a feature introduced in Java Development Kit (Java Development Kit, JDK) 1.5. After the Java virtual machine (Java virtual machine, JVM) reads a bytecode file into memory, and before it uses the corresponding byte stream to generate a Class object in the Java heap, the Java Agent allows the user to modify the bytecode, so that the JVM creates the Class object from the user-modified bytecode. In this way, program logic can be dynamically enhanced.
According to the service calling method of the embodiments of the present application, because the method is implemented as a plug-in based on a Java Agent and the plug-in is mounted to the first service, the plug-in implementing the service calling method can be dynamically loaded into the first service without intrusion, and the plug-in and the first service belong to the same process, which reduces inter-process communication. Therefore, when a service is called, the Java plug-in can replace the sidecar in an existing service grid, so that the service consumes fewer resources and achieves better performance.
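A minimal sketch of how such a plug-in could be mounted is shown below, using only the standard java.lang.instrument API. The class name com/example/FirstServiceClient and the agent class itself are hypothetical, and the actual bytecode rewriting (for example with a bytecode library such as ASM or Byte Buddy) is left as a comment rather than implemented here.

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

/** Minimal sketch of a Java Agent that could host the service calling plug-in. */
public class ServiceMeshAgent {

    /** Called by the JVM before main() when started with -javaagent:agent.jar. */
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // Only instrument the hypothetical outbound-call class of the host service.
                if (!"com/example/FirstServiceClient".equals(className)) {
                    return null; // null means "leave the class unchanged"
                }
                // Here the interception logic (cache lookup, lazy loading) would be woven
                // into the bytecode; returning null keeps the original bytes in this sketch.
                return null;
            }
        });
    }
}
```

The agent jar would declare the Premain-Class attribute in its manifest and be attached with the -javaagent JVM option; both are standard JDK mechanisms, while everything specific to the service grid above is an assumption of this sketch.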
In one possible implementation, when the service calling method of the embodiments of the present application is implemented as a plug-in based on a Java Agent, the plug-in may include a service discovery data subscription module and a service call interception module. The service discovery data subscription module is mainly used to establish a connection with the control plane of the service grid and to acquire and subscribe to service discovery data; it may also be used to implement service governance functions such as load balancing, flow control, rate limiting, and degradation. During the restart of the first service, the service discovery data subscription module may invoke the link tracking service in the service grid to preload service discovery data. During the running of the first service, the service discovery data subscription module may also periodically invoke the link tracking service in the service grid to periodically update the service discovery data in the cache. The service call interception module may intercept a call request of the first service and notify the service discovery data subscription module to acquire the service discovery data of the second service when the first service calls the second service and the service discovery data of the second service does not exist in the cache of the first service.
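A minimal sketch of how the two modules could cooperate on a cache miss is given below; SubscriptionModule and fetchAndSubscribe() are assumed names introduced for illustration only, not a published interface of any service grid.

```java
import java.util.Map;

/** Sketch of the cooperation between the call interception and subscription modules. */
public class CallInterceptionModule {

    /** Stands in for the service discovery data subscription module. */
    interface SubscriptionModule {
        /** Fetches the discovery data from the control plane and subscribes to its updates. */
        Object fetchAndSubscribe(String serviceId);
    }

    private final Map<String, Object> cache;          // cache of the first service
    private final SubscriptionModule subscriptionModule;

    public CallInterceptionModule(Map<String, Object> cache, SubscriptionModule subscriptionModule) {
        this.cache = cache;
        this.subscriptionModule = subscriptionModule;
    }

    /** Invoked around each outbound call of the host (first) service. */
    public Object beforeCall(String targetService) {
        Object data = cache.get(targetService);
        if (data == null) {
            // Cache miss: the call request is intercepted, and the subscription module
            // is asked to load the discovery data lazily from the control plane.
            data = subscriptionModule.fetchAndSubscribe(targetService);
            cache.put(targetService, data);
        }
        return data; // the intercepted call then proceeds using this discovery data
    }
}
```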
Fig. 6 is a schematic diagram showing a processing procedure of a service invocation method according to an embodiment of the present application. As shown in fig. 6, during the start-up of the service A, the history call information of the service A is obtained by invoking the link tracking service; specifically, the service A has called the service B. After the service A is started, the service discovery data of the service B is obtained from the control plane of the service grid and stored in the cache of the service A, thereby preloading the service discovery data. While obtaining the service discovery data of the service B from the control plane, the service A may also subscribe to the service discovery data of the service B with the control plane, so that updates of that data can be obtained in time.
During the running of the service A, when the service A calls the service B, it may be determined whether the service discovery data of the service B exists in the cache of the service A. If it is determined that the service discovery data of the service B exists in the cache of the service A, the service A directly calls the service B according to the service discovery data of the service B in the cache.
During the running of the service A, when the service A calls the service C, it may be determined whether the service discovery data of the service C exists in the cache of the service A. If it is determined that the service discovery data of the service C does not exist in the cache of the service A, the call request sent by the service A may be intercepted; the service discovery data of the service C is then obtained from the control plane of the service grid and stored in the cache of the service A, thereby realizing lazy loading of the service discovery data. The call of the service A to the service C may then be executed based on the service discovery data of the service C in the cache of the service A. In addition, while the service discovery data of the service C is acquired from the control plane of the service grid, the service A may also subscribe to the service discovery data of the service C with the control plane, so that updates of that data can be obtained in time.
During the running of the service A, the link information of the service A can be recorded by invoking the link tracking service, and the service discovery data in the cache of the service A can be periodically updated by periodically invoking the link tracking service.
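As a sketch of how the periodic invocation could be wired inside the host service, a standard ScheduledExecutorService can run a refresh task at a fixed period. The 30-second period and the Runnable passed in (for example, a refresh cycle like the one sketched earlier) are illustrative assumptions, not values prescribed by the embodiments.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Sketch of scheduling the periodic cache refresh inside the host service. */
public class RefreshScheduler {

    /** Runs the given refresh task once per period (assumed here to be 30 seconds). */
    public static ScheduledExecutorService start(Runnable refreshTask) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Invoke the link tracking service once per period and update the cache accordingly.
        scheduler.scheduleAtFixedRate(refreshTask, 30, 30, TimeUnit.SECONDS);
        return scheduler;
    }
}
```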
According to the service calling method of the embodiments of the present application, the method is implemented as a plug-in based on a Java Agent and the plug-in is mounted in the host service (i.e., the first service). This replaces the data interaction between the sidecar and the control plane of the service grid, avoids frequent cross-process communication, and effectively improves service performance in high-concurrency, large-scale, and similar scenarios.
The service calling method of the embodiments of the present application realizes preloading of service discovery data in the service start-up stage and lazy loading in the service running stage, and achieves a minimal service visibility configuration without affecting the host service's own business, thereby significantly reducing resource consumption.
Compared with the related art, the service calling method of the embodiments of the present application can use a dependency analysis and subscription push mechanism instead of the sidecar to realize communication and data interaction with the control plane of the service grid, and can employ the link tracking service to preload service discovery data in the start-up stage and lazily load service discovery data in the running stage.
Fig. 7 shows a block diagram of a service invocation apparatus according to an embodiment of the present application. The service invocation apparatus is applied to a first service, which is a micro-service running in a service grid.
As shown in fig. 7, the service invocation apparatus includes:
a judging module 710, configured to judge, when the first service invokes a second service, whether service discovery data of the second service exists in a cache of the first service;
an interception module 720, configured to intercept a call request sent by the first service when service discovery data of the second service does not exist in the cache;
a first obtaining module 730, configured to obtain service discovery data of the second service from a control plane of the service grid, and store the service discovery data of the second service in the cache;
a first calling module 740, configured to execute the calling of the second service by the first service based on the service discovery data of the second service.
In one possible implementation, the apparatus further includes: and the second calling module is used for calling the second service according to the service discovery data of the second service in the cache when the service discovery data of the second service exists in the cache.
In one possible implementation manner, the service discovery data includes an IP address and load information of an instance of a service, and the second calling module includes: the target instance selecting sub-module is used for selecting a target instance from the instances of the second service according to a preset load balancing rule and the load information of the instances of the second service; and the target instance calling sub-module is used for calling the target instance according to the IP address of the target instance.
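As an illustrative sketch of such a target instance selection (assuming a simple "least load" balancing rule, which the embodiments do not mandate), the submodule could be implemented as follows; the map of instance IP to load value mirrors the IP-and-load shape of the service discovery data.

```java
import java.util.Map;
import java.util.Optional;

/** Sketch of a target-instance selection submodule using a "least load" rule. */
public class TargetInstanceSelector {

    /** Returns the IP of the least-loaded instance of the second service, if any is known. */
    public Optional<String> selectTarget(Map<String, Integer> instanceLoadByIp) {
        return instanceLoadByIp.entrySet().stream()
                .min(Map.Entry.comparingByValue())   // smallest load value wins
                .map(Map.Entry::getKey);             // return that instance's IP address
    }
}
```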
In one possible implementation, the apparatus further includes: and the link information recording module is used for recording the link information of the first service through the link tracking service in the service grid in the running process of the first service.
In one possible implementation, the apparatus further includes: the history call information acquisition module is used for acquiring history call information of the first service by calling the link tracking service in the restarting process of the first service, wherein the history call information comprises an identifier of the service called by the first service, which is determined by the link tracking service according to the history link information; the second acquisition module is used for acquiring service discovery data of each service included in the history calling information from the control surface of the service grid after the first service is started; and the storage module is used for storing the acquired service discovery data into the cache.
In one possible implementation, the apparatus further includes: and the updating module is used for periodically updating the service discovery data in the cache by periodically calling the link tracking service in the running process of the first service.
In one possible implementation, the apparatus further includes: and the subscription module is used for subscribing the service discovery data in the cache to the control surface of the service grid so as to push the updated service discovery data to the first service when the service discovery data is updated by the control surface of the service grid.
In one possible implementation, the apparatus is implemented in a plug-in manner based on a Java agent, the plug-in being mounted to the first service.
Embodiments of the present application provide a cluster of computing devices, including at least one computing device, each computing device including a processor and a memory; the processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device to cause the cluster of computing devices to perform the method described above.
Embodiments of the present application provide a computer program product comprising instructions that, when executed by a cluster of computing devices, cause the cluster of computing devices to perform the above-described method.
Embodiments of the present application provide a computer readable storage medium comprising computer program instructions which, when executed by a cluster of computing devices, perform the above-described method.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer disk, a hard disk, a random access memory (Random Access Memory, RAM), a read-only memory (Read-Only Memory, ROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM, or flash memory), a static random access memory (Static Random Access Memory, SRAM), a portable compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), a digital versatile disc (Digital Versatile Disc, DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or a raised structure in a groove having instructions stored thereon, and any suitable combination of the foregoing.
The computer readable program instructions or code described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present application may be assembly instructions, instruction set architecture (Instruction Set Architecture, ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk or C++ and conventional procedural programming languages, such as the "C" language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN) or a wide area network (Wide Area Network, WAN), or it may be connected to an external computer (e.g., through the internet using an internet service provider). In some embodiments, aspects of the present application are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (Field-Programmable Gate Array, FPGA), or programmable logic arrays (Programmable Logic Array, PLA), with state information of computer readable program instructions.
Various aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by hardware (e.g., circuits or ASICs (Application Specific Integrated Circuit, application specific integrated circuits)) which perform the corresponding functions or acts, or combinations of hardware and software, such as firmware, etc.
Although the invention is described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The embodiments of the present application have been described above, the foregoing description is exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (19)

1. A service invocation method applied to a first service, the first service being a micro-service running in a service grid, the method comprising:
judging whether service discovery data of a second service exist in a cache of the first service when the first service calls the second service;
intercepting a call request sent by the first service when service discovery data of the second service does not exist in the cache;
acquiring service discovery data of the second service from a control surface of the service grid, and storing the service discovery data of the second service into the cache;
and executing the call of the first service to the second service based on the service discovery data of the second service.
2. The method according to claim 1, wherein the method further comprises:
and when the service discovery data of the second service exists in the cache, calling the second service according to the service discovery data of the second service in the cache.
3. The method of claim 2, wherein the service discovery data includes IP addresses and load information for instances of the service,
The calling the second service according to the service discovery data of the second service in the cache comprises the following steps:
selecting a target instance from the instances of the second service according to a preset load balancing rule and load information of the instances of the second service;
and calling the target instance according to the IP address of the target instance.
4. A method according to any one of claims 1-3, characterized in that the method further comprises:
and in the running process of the first service, recording the link information of the first service through the link tracking service in the service grid.
5. The method according to claim 4, wherein the method further comprises:
in the restarting process of the first service, acquiring history calling information of the first service by calling the link tracking service, wherein the history calling information comprises an identifier of the service called by the first service, which is determined by the link tracking service according to the history link information;
after the first service is started, service discovery data of each service included in the history calling information is obtained from a control surface of the service grid;
And storing the acquired service discovery data into the cache.
6. The method according to claim 4, wherein the method further comprises:
and in the running process of the first service, periodically updating the service discovery data in the cache by periodically calling the link tracking service.
7. The method according to any one of claims 1-6, further comprising:
and subscribing the service discovery data in the cache to the control surface of the service grid, so that the control surface of the service grid pushes the updated service discovery data to the first service when the service discovery data is updated.
8. The method according to any of claims 1-7, characterized in that the method is implemented in a plug-in based on a Java agent, the plug-in being mounted to the first service.
9. A service invocation apparatus is characterized by being applied to a first service, the first service being a micro-service running in a service grid,
the device comprises:
the judging module is used for judging whether service discovery data of the second service exist in a cache of the first service when the first service calls the second service;
The interception module is used for intercepting a call request sent by the first service when the service discovery data of the second service does not exist in the cache;
the first acquisition module is used for acquiring service discovery data of the second service from the control surface of the service grid and storing the service discovery data of the second service into the cache;
and the first calling module is used for executing the calling of the first service to the second service based on the service discovery data of the second service.
10. The apparatus of claim 9, wherein the apparatus further comprises:
and the second calling module is used for calling the second service according to the service discovery data of the second service in the cache when the service discovery data of the second service exists in the cache.
11. The apparatus of claim 10, wherein the service discovery data comprises IP addresses and load information for instances of services,
the second calling module comprises:
the target instance selecting sub-module is used for selecting a target instance from the instances of the second service according to a preset load balancing rule and the load information of the instances of the second service;
And the target instance calling sub-module is used for calling the target instance according to the IP address of the target instance.
12. The apparatus according to any one of claims 9-11, wherein the apparatus further comprises:
and the link information recording module is used for recording the link information of the first service through the link tracking service in the service grid in the running process of the first service.
13. The apparatus of claim 12, wherein the apparatus further comprises:
the history call information acquisition module is used for acquiring history call information of the first service by calling the link tracking service in the restarting process of the first service, wherein the history call information comprises an identifier of the service called by the first service, which is determined by the link tracking service according to the history link information;
the second acquisition module is used for acquiring service discovery data of each service included in the history calling information from the control surface of the service grid after the first service is started;
and the storage module is used for storing the acquired service discovery data into the cache.
14. The apparatus of claim 12, wherein the apparatus further comprises:
And the updating module is used for periodically updating the service discovery data in the cache by periodically calling the link tracking service in the running process of the first service.
15. The apparatus according to any one of claims 9-14, wherein the apparatus further comprises:
and the subscription module is used for subscribing the service discovery data in the cache to the control surface of the service grid so as to push the updated service discovery data to the first service when the service discovery data is updated by the control surface of the service grid.
16. The apparatus according to any of claims 9-15, wherein the apparatus is implemented in a plug-in based on a Java agent, the plug-in being mounted to the first service.
17. A cluster of computing devices, comprising at least one computing device, each computing device comprising a processor and a memory;
the processor of the at least one computing device is configured to execute instructions stored in a memory of the at least one computing device to cause the cluster of computing devices to perform the method of any of claims 1-8.
18. A computer program product containing instructions that, when executed by a cluster of computing devices, cause the cluster of computing devices to perform the method of any of claims 1-8.
19. A computer readable storage medium comprising computer program instructions which, when executed by a cluster of computing devices, perform the method of any of claims 1-8.
CN202211184674.2A 2022-09-27 2022-09-27 Service calling method and device Pending CN117834739A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211184674.2A CN117834739A (en) 2022-09-27 2022-09-27 Service calling method and device
PCT/CN2023/101382 WO2024066503A1 (en) 2022-09-27 2023-06-20 Service invocation method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211184674.2A CN117834739A (en) 2022-09-27 2022-09-27 Service calling method and device

Publications (1)

Publication Number Publication Date
CN117834739A true CN117834739A (en) 2024-04-05

Family

ID=90475946

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211184674.2A Pending CN117834739A (en) 2022-09-27 2022-09-27 Service calling method and device

Country Status (2)

Country Link
CN (1) CN117834739A (en)
WO (1) WO2024066503A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110401696B (en) * 2019-06-18 2020-11-06 华为技术有限公司 Decentralized processing method, communication agent, host and storage medium
CN113055421B (en) * 2019-12-27 2022-06-21 南京亚信软件有限公司 Service grid management method and system
CN112346871A (en) * 2020-11-24 2021-02-09 深圳前海微众银行股份有限公司 Request processing method and micro-service system
CN113364885B (en) * 2021-06-29 2022-11-22 天翼云科技有限公司 Micro-service calling method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
WO2024066503A1 (en) 2024-04-04

Similar Documents

Publication Publication Date Title
CN109547570B (en) Service registration method, device, registration center management equipment and storage medium
US10637817B2 (en) Managing messaging protocol communications
CN110311983B (en) Service request processing method, device and system, electronic equipment and storage medium
US10452372B2 (en) Method and deployment module for managing a container to be deployed on a software platform
EP3664372A1 (en) Network management method and related device
CN113039763B (en) NF Service Consumer Restart Detection Using Direct Signaling Between NFs
CN106331065B (en) Proxy application and system for host system with service container
US11432137B2 (en) Service notification method for mobile edge host and apparatus
CN116018788A (en) Configuring service grid networking resources for dynamically discovered peers or network functions
WO2021196597A1 (en) Service plug-in loading implementation method and apparatus, and terminal device
CN111010304A (en) Method for integrating Dubbo service and Kubernetes system
CN110601981A (en) Service routing method, service provider cloud domain and service calling cloud domain
CN106713469B (en) Dynamic loading method, device and system for distributed container
WO2023011274A1 (en) Communication protocol conversion method, and device, system, and gateway device
CN111064626B (en) Configuration updating method, device, server and readable storage medium
CN114448895B (en) Application access method, device, equipment and medium
US11647103B1 (en) Compression-as-a-service for data transmissions
CN111245634A (en) Virtualization management method and device
CN117834739A (en) Service calling method and device
CN115061796A (en) Execution method and system for calling between subtasks and electronic equipment
US11366648B2 (en) Compiling monoglot function compositions into a single entity
CN114461424A (en) Inter-unit service discovery method, device and system under unitized deployment architecture
CN114625479A (en) Cloud edge collaborative application management method in edge computing and corresponding device
CN113556370A (en) Service calling method and device
US20150282121A1 (en) Local resource sharing method of machine to machine component and apparatus thereof

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication