CN108055322B - Request message processing method and device - Google Patents
- Publication number
- CN108055322B CN108055322B CN201711321264.7A CN201711321264A CN108055322B CN 108055322 B CN108055322 B CN 108055322B CN 201711321264 A CN201711321264 A CN 201711321264A CN 108055322 B CN108055322 B CN 108055322B
- Authority
- CN
- China
- Prior art keywords
- api
- gateway
- state
- service instance
- request message
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/63—Routing a service request depending on the request content or context
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer And Data Communications (AREA)
- Information Transfer Between Computers (AREA)
Abstract
Embodiments of the invention provide a request message processing method and device, wherein the method comprises the following steps: a gateway receives a request message sent by a client; the gateway determines a first Application Programming Interface (API) corresponding to the request message; the gateway determines the running state of the first API according to the publishing step currently being executed in a preset publishing scheme, wherein the running state is either a release state or a non-release state, and the publishing scheme comprises a plurality of publishing steps and the identifiers of the APIs in the release state corresponding to each step; the gateway determines a target service instance according to the running state of the first API, wherein the version of the API loaded by the target service instance corresponds to the running state of the first API; and the gateway sends the request message to the target service instance so that the target service instance processes the request message according to the loaded first API. This improves the efficiency of service publishing.
Description
Technical Field
Embodiments of the invention relate to the field of communications technologies, and in particular to a request message processing method and device.
Background
At present, in a Client/Server (C/S) architecture, a server generally provides services to clients. In practical applications, a new version of a service may be released in order to upgrade the service provided by the server.
In the prior art, after a new version of a service is determined, in order to ensure a smooth and stable transition between versions, the new version is usually released to only a part of the users, while the remaining users continue to use the old version. That is, after receiving a request message sent by a user, a gateway forwards the request message to either a new-version service instance or an old-version service instance according to the identifier of the user. One service instance usually includes multiple Application Programming Interfaces (APIs): the APIs loaded by a new-version service instance are new-version APIs, and the APIs loaded by an old-version service instance are old-version APIs.
However, in the prior art, if a failure occurs while a service is provided to a user through a new-version service instance, the failed API in the new service instance cannot be located, so the entire service release fails, resulting in low service release efficiency.
Disclosure of Invention
The embodiment of the invention provides a request message processing method and device, which improve the service publishing efficiency.
In a first aspect, an embodiment of the present invention provides a method for processing a request message, including:
the gateway receives a request message sent by a client;
the gateway determines a first Application Programming Interface (API) corresponding to the request message;
the gateway determines the running state of the first API according to the publishing step currently being executed in a preset publishing scheme, wherein the running state is either a release state or a non-release state, and the publishing scheme comprises a plurality of publishing steps and the identifiers of the APIs in the release state corresponding to each step;
the gateway determines the target service instance according to the running state of the first API, wherein the version of the API loaded by the target service instance corresponds to the running state of the first API;
the gateway sends the request message to the target service instance so that the target service instance processes the request message according to the first API loaded by the target service instance.
In a possible implementation manner, the determining, by the gateway, the first API corresponding to the request message includes:
the gateway judges whether the request message includes an API identifier;
if so, the gateway determines the API corresponding to the API identifier as the first API;
if not, the gateway obtains a first correspondence and the type of the request message, and determines the first API according to the first correspondence and the type of the request message, wherein the first correspondence includes the identifier of at least one API and the request-message type corresponding to each API identifier.
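By way of non-limiting illustration only, the branch above could be sketched in Python as follows; the function name, dictionary, and message fields (resolve_first_api, api_id, type) are assumptions made for this sketch and are not part of the claimed method:

```python
# Hypothetical sketch of the API-resolution branch described above;
# all names are illustrative only.

# First correspondence: request-message type -> API identifier (cf. Table 2).
FIRST_CORRESPONDENCE = {
    "first_type": "API1",
    "second_type": "API2",
    "third_type": "API3",
}

def resolve_first_api(message):
    """Return the first API for a request message.

    If the message already carries an API identifier, use it directly;
    otherwise fall back to the type-based first correspondence.
    """
    api_id = message.get("api_id")
    if api_id is not None:
        return api_id
    return FIRST_CORRESPONDENCE.get(message.get("type"))

# A message carrying an explicit identifier bypasses the lookup.
assert resolve_first_api({"api_id": "API2", "type": "first_type"}) == "API2"
# Otherwise the request-message type decides.
assert resolve_first_api({"type": "third_type"}) == "API3"
```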
In another possible implementation manner, the determining, by the gateway, the operating state of the first API according to the currently executed publishing step in the preset publishing scheme includes:
the gateway acquires the identifier of the API in the release state in the internal cache of the gateway;
the gateway judges whether the identifier of the first API is the same as the API identifier in the internal cache;
if so, the gateway determines that the running state of the first API is a release state;
if not, the gateway determines that the running state of the first API is an unpublished state.
In another possible embodiment, the internal cache comprises a master cache and a slave cache; the gateway acquiring the identifier of the API in the release state from the internal cache of the gateway comprises:
the gateway acquires the processing states of the master cache and the slave cache, wherein the processing states include a valid state and an invalid state; at any given time, one of the master cache and the slave cache is in the valid state and the other is in the invalid state;
and the gateway acquires the identifier of the API in the release state from the cache in the valid state.
In another possible implementation, before the gateway obtains the identifier of the API in the release state from the cache in the valid state, the method further includes:
the gateway receives the identifier of the API in the release state sent by a first server;
the gateway stores the identifier of the API in the release state in the cache in the invalid state;
and the gateway swaps the processing states of the master cache and the slave cache.
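As a minimal, non-limiting sketch of the master/slave (double-buffered) internal cache described above, assuming illustrative class and method names of our own choosing:

```python
# Minimal sketch of the master/slave internal cache described above;
# the class and method names are illustrative assumptions.

class DoubleBufferedCache:
    """Two buffers; exactly one is in the valid state at any time.

    The identifiers for a new publishing step are written into the buffer
    in the invalid state, and only then are the valid/invalid roles
    swapped, so readers never observe a half-written identifier set.
    """

    def __init__(self):
        self._buffers = [set(), set()]  # master cache and slave cache
        self._valid = 0                 # index of the buffer in the valid state

    def released_api_ids(self):
        # Reads always come from the buffer currently in the valid state.
        return self._buffers[self._valid]

    def update(self, api_ids):
        invalid = 1 - self._valid
        self._buffers[invalid] = set(api_ids)  # write into the invalid buffer
        self._valid = invalid                  # swap the processing states

cache = DoubleBufferedCache()
cache.update({"API1"})
assert cache.released_api_ids() == {"API1"}
cache.update({"API1", "API2"})  # identifiers of the next publishing step
assert cache.released_api_ids() == {"API1", "API2"}
```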
In another possible implementation manner, the determining, by the gateway, the target service instance according to the operating state of the first API includes:
the gateway judges whether the running state of the first API is a release state;
if yes, the gateway determines the target service instance according to a second corresponding relation in an external cache and the identifier of the first API, wherein the second corresponding relation comprises the identifiers of the plurality of APIs and the identifier of the target service instance corresponding to the identifier of each API;
if not, the gateway determines a default service instance as the target service instance, wherein the default service instance is a service instance which is not loaded with the API of the latest version.
In another possible implementation, the sending, by the gateway, of the request message to the target service instance includes:
if the first API is in the non-release state, the gateway acquires default path information from the internal cache of the gateway and sends the request message to the target service instance according to the default path information, wherein the default path information is the path information of the default service instance, i.e., a service instance that is not loaded with the latest-version API;
and if the first API is in the release state, the gateway requests the path information of the target service instance from a registry and sends the request message to the target service instance according to that path information.
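The routing decision above could be sketched as follows; this is an illustration only, in which plain dictionaries stand in for the internal cache, the external cache (second correspondence), and the registry, and all identifiers and paths are made up for the sketch:

```python
# Hedged sketch of the gateway's routing decision; the caches and the
# registry are stubbed with dicts, which is an assumption of this sketch.

RELEASED_APIS = {"API1"}                             # internal cache (valid buffer)
SECOND_CORRESPONDENCE = {"API1": "instance-new-1"}   # external cache (cf. Table 1)
REGISTRY = {"instance-new-1": "10.0.0.5:8080"}       # paths of latest-version instances
DEFAULT_PATH = "10.0.0.9:8080"                       # path of the default instance

def route(api_id):
    """Return the path the gateway forwards the request message to."""
    if api_id in RELEASED_APIS:
        instance = SECOND_CORRESPONDENCE[api_id]
        return REGISTRY[instance]  # release state -> latest-version instance
    return DEFAULT_PATH            # non-release state -> default instance

assert route("API1") == "10.0.0.5:8080"
assert route("API2") == DEFAULT_PATH
```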
In another possible embodiment, the method further comprises:
the gateway receives a response message sent by the target service instance;
the gateway adds the identifier of the first API to the response message;
and the gateway sends the response message including the identifier of the first API to the client, so that the client carries the identifier of the first API the next time it sends a request message.
In a second aspect, an embodiment of the present invention provides a method for processing a request message, including:
the method comprises the steps that a first server determines a first issuing step currently executed in an issuing scheme and an Application Programming Interface (API) issued in the first issuing step, wherein the issuing scheme comprises a plurality of issuing steps and corresponding identifications of the APIs in issuing states;
the first server sends the identifier of the API to a gateway, so that the gateway stores the identifier of the API in an internal cache of the gateway.
In another possible embodiment, the method further comprises:
the first server determines a second corresponding relation according to the publishing scheme, wherein the second corresponding relation comprises the identifiers of the plurality of APIs and the identifier of the target service instance corresponding to the identifier of each API;
the first server stores the second correspondence in an external cache.
In a third aspect, an embodiment of the present invention provides a request message processing apparatus, including a receiving module, a first determining module, a second determining module, a third determining module, and a sending module, where,
the receiving module is used for receiving a request message sent by a client;
the first determining module is used for determining a first Application Programming Interface (API) corresponding to the request message;
the second determining module is configured to determine an operating state of the first API according to a currently executed publishing step in a preset publishing scheme, where the operating state includes a publishing state and a non-publishing state, and the publishing scheme includes multiple publishing steps and corresponding identifiers of APIs in the publishing state;
the third determining module is configured to determine the target service instance according to the running state of the first API, where a version of the API loaded by the target service instance corresponds to the running state of the first API;
the sending module is configured to send the request message to the target service instance, so that the target service instance processes the request message according to the loaded first API.
In a possible implementation manner, the first determining module is specifically configured to:
judging whether the request message includes an API identifier;
if so, determining the API corresponding to the API identifier as the first API;
if not, acquiring a first corresponding relation and the type of the request message, and determining the first API according to the first corresponding relation and the type of the request message, wherein the first corresponding relation comprises at least one API identifier and the type of the request message corresponding to each API identifier.
In another possible implementation manner, the second determining module is specifically configured to:
acquiring an identifier of the API in a release state in an internal cache of the gateway;
judging whether the identifier of the first API is the same as the API identifier in the internal cache or not;
if so, determining that the running state of the first API is a release state;
and if not, determining that the running state of the first API is an unpublished state.
In another possible embodiment, the internal cache comprises a master cache and a slave cache; the second determining module is specifically configured to:
acquire the processing states of the master cache and the slave cache, wherein the processing states include a valid state and an invalid state; at any given time, one of the master cache and the slave cache is in the valid state and the other is in the invalid state;
and acquire the identifier of the API in the release state from the cache in the valid state.
In another possible embodiment, the device further comprises a storage module and a replacement module, wherein,
the receiving module is further configured to receive the identifier of the API in the release state sent by the first server before the second determining module obtains the identifier of the API in the release state in the cache in the valid state;
the storage module is used for storing the identifier of the API in the release state in the cache in the invalid state;
the replacing module is configured to swap the processing states of the master cache and the slave cache.
In another possible implementation manner, the third determining module is specifically configured to:
judging whether the running state of the first API is a release state;
if yes, determining the target service instance according to a second corresponding relation in an external cache and the identifier of the first API, wherein the second corresponding relation comprises the identifiers of the plurality of APIs and the identifier of the target service instance corresponding to the identifier of each API;
if not, determining a default service instance as the target service instance, wherein the default service instance is a service instance which is not loaded with the API of the latest version.
In another possible implementation manner, the sending module is specifically configured to:
if the first API is in the non-release state, acquire default path information from the internal cache of the gateway and send the request message to the target service instance according to the default path information, wherein the default path information is the path information of the default service instance, i.e., a service instance that is not loaded with the latest-version API;
and if the first API is in the release state, request the path information of the target service instance from a registry and send the request message to the target service instance according to that path information.
In another possible embodiment, the apparatus further comprises an adding module, wherein,
the receiving module is further configured to receive a response message sent by the target service instance;
the adding module is used for adding the identifier of the first API in the response message;
the sending module is further configured to send a response message including the identifier of the first API to the client, so that the client carries the identifier of the first API when sending the request message next time.
In a fourth aspect, an embodiment of the present invention provides a request message processing apparatus, including a first determining module and a sending module, where,
the first determining module is used for determining a first publishing step currently executed in a publishing scheme and an Application Programming Interface (API) published in the first publishing step, wherein the publishing scheme comprises a plurality of publishing steps and corresponding identifications of the APIs in a publishing state;
and the sending module is used for sending the identifier of the API to a gateway so that the gateway stores the identifier of the API in an internal cache of the gateway.
In a possible implementation, the apparatus further comprises a second determining module and a storing module, wherein,
the second determining module is configured to determine a second corresponding relationship according to the publishing scheme, where the second corresponding relationship includes identifiers of multiple APIs and an identifier of a target service instance corresponding to each API identifier;
the storage module is configured to store the second correspondence in an external cache.
In the request message processing method and device provided by the embodiments of the invention, the pre-established publishing scheme comprises a plurality of publishing steps and the identifiers of the APIs in the release state corresponding to each step; each publishing step corresponds to one latest-version API, that is, during service publishing, each publishing step releases only one latest-version API. Accordingly, after the gateway receives the request message sent by the client, it obtains the first API corresponding to the client and determines the target service instance according to the running state of the first API, where the version of the first API loaded by the target service instance corresponds to that running state: for example, when the first API is in the release state, the API loaded by the target service instance is the latest version, and when the first API is in the non-release state, it is not. In this way, different latest-version APIs can be released to the corresponding clients in different publishing steps. Because the service is published at the granularity of individual APIs, the publishing granularity is reduced; if a failure occurs during publishing, the failed API can be located in time, which improves the efficiency of service publishing.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is an architecture diagram of a request message processing method according to an embodiment of the present invention;
fig. 2 is a first schematic diagram illustrating a request message processing method according to an embodiment of the present invention;
fig. 3 is a second schematic diagram illustrating a request message processing method according to an embodiment of the present invention;
fig. 4 is a first schematic structural diagram of a request message processing apparatus according to an embodiment of the present invention;
fig. 5 is a second schematic diagram of a request message processing apparatus according to an embodiment of the present invention;
fig. 6 is a first schematic structural diagram of another request message processing apparatus according to an embodiment of the present invention;
fig. 7 is a second schematic structural diagram of another request message processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is an architecture diagram of a request message processing method according to an embodiment of the present invention. Referring to fig. 1, the system includes a client 101, a gateway 102, a service cluster 103, a first server 104, a container management server 105, an external cache 106, and a registry 107.
The client 101 may be a device such as a mobile phone or a computer. The client 101 may send a request message to the service cluster 103 through the gateway 102 and receive, through the gateway 102, a response message sent by the service cluster 103.
The gateway 102 is provided with an internal cache, which stores the identifier of the API in the release state and default path information. The first server 104 may determine the API in the release state based on the publishing step currently being executed in the publishing scheme and update that identifier in the internal cache. The default path information refers to the path information of the default service instance, i.e., a service instance that is not loaded with the latest-version API. Of course, the internal cache may also store the running state of each API, which is either the release state or the non-release state.
The service cluster 103 includes a plurality of service instances, each of which may provide services to the client 101 through the gateway 102. The first server 104 may control the container management server 105 to deploy service instances according to the publishing step currently being executed in the publishing scheme.
The first server 104 is loaded with a publishing scheme, which indicates the publishing steps for the APIs of a service and the switching conditions between the publishing steps, where each publishing step indicates that one API is to be published to a preset user group. For example, the first publishing step of a publishing scheme may be to publish API1 to a first group of users; the second publishing step may be executed one day after the first publishing step and may be to publish API2 to a second group of users.
The container management server 105 may perform the deployment of the service instance under the control of the first server 104. For example, the number of service instances deployed by the container management server 105 may be different under different publishing steps.
The external cache 106 includes a correspondence between the identifiers of APIs and the identifiers of service instances. Optionally, the first server 104 may store the correspondence in the external cache 106 during initialization.
The registry 107 includes the path information of the service instances. Alternatively, the registry 107 may include only the path information of the service instances loaded with the latest-version API; that is, the registry 107 may not include the path information of the default service instance.
In the present application, the publishing scheme includes a plurality of publishing steps, each of which indicates that a latest-version API is to be published to a preset user group; that is, different APIs are published in different periods during service publishing. Accordingly, after the gateway receives the request message sent by the client, the target service instance to which the request message is forwarded may be determined according to the publishing step currently being executed in the publishing scheme. For example, assume the client corresponds to the first API: after the gateway receives the request message, if the first API is in the non-release state, the request message is forwarded to a service instance loaded with the old version of the first API; if the first API is in the release state, the request message is forwarded to a service instance loaded with the new version of the first API. Different latest-version APIs are thereby published to the corresponding clients in different periods. Because the service is published at the granularity of individual APIs, the publishing granularity is reduced; if a failure occurs during publishing, the failed API can be located in time, which further improves the efficiency of service publishing.
The technical means shown in the present application will be described in detail below with reference to specific examples. It should be noted that the following embodiments may be combined with each other, and the description of the same or similar contents in different embodiments is not repeated.
Fig. 2 is a first schematic diagram of a request message processing method according to an embodiment of the present invention. Please refer to fig. 2, which includes:
s201, the first server determines a first publishing step currently executed in a publishing scheme and an API published in the first publishing step.
The publishing scheme comprises a plurality of publishing steps and the identifiers of the APIs in the release state corresponding to each step. Each publishing step indicates that a corresponding API is to be published, and only one publishing step is executed at a time, so the API of the publishing step currently being executed is the API in the release state.
Optionally, the first publishing step is any one step in the publishing scheme. The first server executes the publishing scheme step by step in the order of its publishing steps.
Optionally, the publishing scheme further includes switching conditions between the publishing steps. For example, a switching condition may be that the execution duration exceeds a preset duration, that the execution success rate exceeds a preset success rate, and the like. Of course, in practice the switching conditions may be set according to actual needs, which is not specifically limited in the embodiments of the present invention.
For example, a publishing scheme may be as follows: after startup, a first publishing step is executed, in which API1 is published to client 1 through client 1000; after the first publishing step has been executed for more than 10 hours, a second publishing step is executed, in which API2 is published to client 1 through client 2000; when the execution success rate of the second publishing step exceeds 90%, a third publishing step is executed, and so on, until the last publishing step is executed.
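The example scheme above could be sketched as an ordered list of steps, each naming the API it releases, its client range, and a switching condition; the field names and the data structure are assumptions of this sketch, not the patent's own schema:

```python
# Illustrative sketch of a publishing scheme; field names are assumptions.

PUBLISHING_SCHEME = [
    {"api": "API1", "clients": range(1, 1001),
     "switch": lambda m: m["hours_running"] > 10},
    {"api": "API2", "clients": range(1, 2001),
     "switch": lambda m: m["success_rate"] > 0.90},
    {"api": "API3", "clients": range(1, 3001),
     "switch": lambda m: False},  # last step: no further switch
]

def current_released_api(step_index):
    # Only the API of the step currently being executed is in the release state.
    return PUBLISHING_SCHEME[step_index]["api"]

def next_step(step_index, metrics):
    # Advance to the next publishing step when the switching condition is met.
    if PUBLISHING_SCHEME[step_index]["switch"](metrics):
        return min(step_index + 1, len(PUBLISHING_SCHEME) - 1)
    return step_index

assert current_released_api(0) == "API1"
assert next_step(0, {"hours_running": 12}) == 1   # 10-hour condition met
assert next_step(1, {"success_rate": 0.50}) == 1  # 90% condition not yet met
```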
It should be noted that, after S201, the first server further sends an instance deployment notification to the container management server, so that the container management server deploys the service instances.
Optionally, the number of service instances may also be different in different publishing steps. In an actual application process, the number of service instances in each publishing step may be set according to actual needs, which is not specifically limited in the embodiment of the present invention.
Optionally, the service instances deployed by the container management server include at least one default service instance and at least one new service instance. The APIs loaded by a default service instance are all old-version APIs, and the APIs loaded by a new service instance are all latest-version APIs.
It should be further noted that, before S201, the first server may determine a second correspondence according to the publishing scheme, wherein the second correspondence includes the identifiers of a plurality of APIs and the identifier of the target service instance corresponding to each API identifier, and may store the second correspondence in an external cache.
For example, the second correspondence may be as shown in table 1:
TABLE 1
| Identifier of API | Identifier of service instance |
| --- | --- |
| API1 | Service instance 1 |
| API2 | Service instance 2 |
| API3 | Service instance 3 |
| …… | …… |
It should be noted that Table 1 illustrates the second correspondence by way of example only and does not limit the second correspondence.
S202, the first server sends the identifier of the API in the release state to the gateway.
S203, the gateway stores the identification of the API in the release state in the internal cache of the gateway.
Optionally, the gateway may determine whether an API identifier already exists in the internal cache of the gateway; if so, the gateway updates the identifier in the internal cache to the received identifier; if not, the gateway stores the received identifier of the API in the internal cache.
S204, the client sends a request message to the gateway.
S205, the gateway determines a first API corresponding to the request message.
In the embodiment of the present invention, different types of request messages require different APIs to process them; that is, there is a preset first correspondence between the type of the request message and the API, wherein the first correspondence includes the identifier of at least one API and the request-message type corresponding to each API identifier. For example, the first correspondence may be as shown in Table 2:
TABLE 2
| Type of request message | Identifier of API |
| --- | --- |
| First type request message | API1 |
| Second type request message | API2 |
| Third type request message | API3 |
| …… | …… |
Optionally, an API may be published to only a part of the users. In that case, the first correspondence may further include the identifiers of clients, and accordingly, in S205, the gateway needs to determine the first API according to both the identifier of the client and the type of the request message. For example, the first correspondence may be as shown in Table 3:
TABLE 3
| Identifier of client | Type of request message | Identifier of API |
| --- | --- | --- |
| Client 1-client 1000 | First type request message | API1 |
| Client 200-client 1000 | Second type request message | API2 |
| Client 500-client 2000 | Third type request message | API3 |
| …… | …… | …… |
It should be noted that Table 2 and Table 3 illustrate the first correspondence between the type of the request message and the API by way of example only and do not limit the first correspondence. In practice, the first correspondence may be set according to actual needs, which is not specifically limited in the embodiment of the present invention.
Optionally, after the client sends a request message to the gateway for the first time, if the gateway determines according to the first correspondence that the API corresponding to the client is the first API, the gateway carries the identifier of the first API in the response message sent to the client. When the client sends its next request message, it can carry the identifier of the first API in the request, so that after receiving the request the gateway can directly determine the API corresponding to that identifier as the first API without searching the first correspondence, thereby improving the processing efficiency of the gateway.
It should be noted that, during service publishing, some clients may not have any latest-version API published to them. If such a client does not correspond to any API, the gateway cannot obtain a first API for it; in this case, the gateway sends the client's request message to a service instance that has not loaded the latest version of the API.
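The API-resolution logic of S205 can be sketched as a minimal Python model. All names (`resolve_first_api`, the dictionary shape of a request) are illustrative assumptions, and the client-ID ranges simply mirror Table 3:

```python
# First correspondence: (client-ID range, request-message type) -> API identifier.
# The sample ranges follow Table 3 and are purely illustrative.
FIRST_CORRESPONDENCE = [
    (range(1, 1001), "first", "API1"),
    (range(200, 1001), "second", "API2"),
    (range(500, 2001), "third", "API3"),
]

def resolve_first_api(request):
    """Return the first API for a request, or None if the client has no entry."""
    # If a previous response already told the client its API, the request
    # carries the identifier and no table lookup is needed.
    if request.get("api_id"):
        return request["api_id"]
    for client_range, msg_type, api_id in FIRST_CORRESPONDENCE:
        if request["client_id"] in client_range and request["type"] == msg_type:
            return api_id
    return None  # client not covered by any published API

print(resolve_first_api({"client_id": 300, "type": "second"}))   # API2
print(resolve_first_api({"client_id": 5000, "type": "first"}))   # None -> default instance
```

A `None` result corresponds to the fallback above: the gateway routes the request to a service instance that has not loaded the latest API version.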
S206, the gateway determines the running state of the first API according to a preset issuing scheme.
The running state includes a published state and an unpublished state. When the API published in the publishing step currently being executed in the publishing scheme is the first API, the running state of the first API is the published state; when it is not the first API, the running state of the first API is the unpublished state.
Optionally, the identifier of the API in the published state may be stored in the internal cache of the gateway. The gateway then obtains this identifier from the internal cache and determines whether the identifier of the first API is the same as it: if so, the state of the first API is the published state; if not, the state of the first API is the unpublished state.
Since the gateway can access its internal cache quickly, it can quickly determine the running state of the first API by this method. Furthermore, by storing the identifier of the published API in the internal cache of the gateway, when the publishing service starts or the publishing step is switched, only that identifier needs to be updated, and the gateway does not need to be restarted.
Of course, the running state of each API may also be stored in the internal cache of the gateway. For example, the operating state of each API stored in the internal cache may be as shown in table 4:
TABLE 4
Identification of API | Operating state
---|---
API1 | Published state
API2 | Unpublished state
API3 | Unpublished state
…… | ……
It should be noted that table 4 illustrates the operating state of each API by way of example only, and does not limit the operating state of the API.
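The state check of S206 reduces to a single comparison against the internal cache. The sketch below is a hypothetical model (the cache key `"published_api"` and function name are assumptions, not from the patent):

```python
def running_state(first_api_id, internal_cache):
    """S206: compare the first API's identifier with the published-API
    identifier held in the gateway's internal cache."""
    published = internal_cache.get("published_api")
    return "published" if first_api_id == published else "unpublished"

# The first server writes the published identifier at each publishing step.
cache = {"published_api": "API1"}
print(running_state("API1", cache))  # published
print(running_state("API2", cache))  # unpublished
```

Because only this one cache entry changes between publishing steps, switching steps never requires a gateway restart.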
S207, the gateway determines a target service instance according to the running state of the first API.
And the version of the first API loaded by the target service instance corresponds to the running state of the first API. That is, when the running state of the first API is the published state, the version of the API loaded by the target service instance is the latest version, and when the state of the first API is the unpublished state, the version of the API loaded by the target service instance is not the latest version.
Optionally, the gateway may determine whether the running state of the first API is the release state; if so, the gateway determines a target service instance according to the second corresponding relation in the external cache and the identifier of the first API; if not, the gateway determines the default service instance as the target service instance, and the default service instance is the service instance which is not loaded with the API of the latest version.
The gateway can rapidly access the external cache, so that the gateway can rapidly determine the target service instance according to the second corresponding relation in the external cache, and the efficiency of determining the target service instance is improved.
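The routing decision of S207 can be sketched as follows; the function name and the dictionary used to stand in for the external cache's second correspondence are illustrative assumptions:

```python
def select_target_instance(first_api_id, state, external_cache, default_instance):
    """S207: route to the newly deployed instance only when the first API is
    in the published state; otherwise fall back to the default instance,
    which has not loaded the latest API version."""
    if state == "published":
        # Second correspondence: API identifier -> target service instance.
        return external_cache[first_api_id]
    return default_instance

second_correspondence = {"API1": "instance-new-1"}  # illustrative external cache
print(select_target_instance("API1", "published", second_correspondence, "instance-default"))
print(select_target_instance("API2", "unpublished", second_correspondence, "instance-default"))
```

Only published traffic ever reaches the new instances, so a faulty API affects just the clients covered by the current publishing step.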
S208, the gateway sends a request message to the target service instance.
After the gateway selects the target service instance, it can obtain the path information of the target service instance and send the request message to it according to that path information, so that the target service instance processes the request message.
Optionally, the path information of the target service instance may be represented by an Internet Protocol (IP) address and/or a Medium Access Control (MAC) address of the target service instance.
Optionally, when the first API is in the unpublished state, the path information of the target service instance may be obtained through the following feasible implementation manners: and acquiring default path information in an internal cache of the gateway, and determining the default path information as the path information of the target service instance.
Optionally, when the first API is in the release state, the path information of the target service instance may be obtained through the following feasible implementation manners: and requesting the registration center to acquire the path information of the target service instance.
S209, the target service instance sends a response message corresponding to the request message to the gateway.
The target service instance may process the request message according to the loaded first API to obtain the response message.
When the first API is in the published state, the API loaded in the target service instance is the latest version, that is, the first API loaded in the target service instance is the latest version, so the target service instance can process the request message through the latest version of the first API.
S210, the gateway sends a response message to the client.
In the request message processing method provided by the embodiment of the present invention, the preset publishing scheme includes multiple publishing steps, and each publishing step publishes only one latest-version API. Correspondingly, after the gateway receives a request message sent by a client, it obtains the first API corresponding to the client and determines the target service instance according to the running state of the first API, where the version of the first API loaded by the target service instance corresponds to that running state: when the first API is in the published state, the target service instance has loaded the latest version of the API; when the first API is in the unpublished state, it has not. In this way, different latest-version APIs can be published to the corresponding clients in different publishing steps. Because the service is published at the granularity of individual APIs, the publishing granularity is reduced; if a failure occurs during publishing, the faulty API can be located in time, further improving the efficiency of service publishing.
On the basis of the embodiment shown in fig. 2, the embodiment shown in fig. 2 is described in further detail below with reference to the embodiment shown in fig. 3.
Fig. 3 is a second schematic diagram of a request message processing method according to an embodiment of the present invention. Please refer to fig. 3, which includes:
S301, the first server determines a first publishing step currently executed in the publishing scheme and a second API published in the first publishing step.
S302, the first server determines the number of the service instances according to the first publishing step.
Optionally, the first server may determine, according to the first publishing step, the number of default service instances that need to be deployed and the number of new service instances.
For example, if the API published in the first publishing step has a weak load-bearing capability, it may be determined that multiple new service instances are needed; if the API has a strong load-bearing capability, it may be determined that one new service instance is enough.
S303, the first server sends an instance deployment request to the container management server, wherein the instance deployment request comprises the number of the service instances.
S304, the container management server deploys the service instances according to the number of the service instances.
It should be noted that after S304, each service instance in the service cluster sends its path information to the registry (not shown in the figure).
S305, the first server sends the identifier of the second API to the gateway.
S306, the gateway stores the identifier of the second API in the internal cache of the gateway.
It should be noted that the execution process of S306 may refer to S203, and details are not described here.
S307, the client sends a request message to the gateway.
S308, the gateway determines a first API corresponding to the request message.
Optionally, the gateway may determine whether the request message includes an API identifier; if so, the gateway determines the API corresponding to the API identification as the first API; if not, the gateway obtains the first corresponding relation and the type of the request message, and determines the first API according to the first corresponding relation and the type of the request message.
S309, the gateway determines the running state of the first API according to a preset issuing scheme.
Optionally, the running state of the first API may be determined according to the identifier of the first API and the identifier of the second API in the internal cache of the gateway: when the two identifiers are the same, the running state of the first API is the published state; otherwise, it is the unpublished state.
It should be noted that in S309 the gateway needs to read the content of the internal cache, while in S306 content is written into the internal cache. To avoid read-write conflicts, a master cache and a slave cache may be set, and in different publishing steps one of them is set as the valid cache. Correspondingly, read operations are performed on the valid cache, and write operations are performed on the invalid cache.
Optionally, the internal cache includes a master cache and a slave cache; the gateway may obtain the identification of the API in the published state in the internal cache of the gateway by a possible implementation as follows: the gateway acquires the processing states of a master cache and a slave cache, wherein the processing states comprise an effective state and an invalid state; at the same time, the processing state of one cache is an effective state, and the processing state of the other cache is an invalid state; and the gateway acquires the identifier of the API in the release state in the cache in the valid state.
Further, before the gateway acquires the identifier of the API in the release state from the cache in the valid state, the gateway also receives the identifier of the API in the release state sent by the first server; the gateway stores the identifier of the API in the release state in the cache in the invalid state; the gateway replaces the processing state of the master cache and the slave cache.
For example, in the first publishing step, the identifier of the published API (assume API1) is written into both the master cache and the slave cache, so both caches store API1. The processing state of the master cache is set to the valid state, so that in the first publishing step the gateway reads data from the master cache, and the obtained identifier of the published API is API1.
After the first publishing step is finished and the second publishing step needs to be executed, assuming the published API in the second publishing step is API2, API2 is written into the slave cache; at this point the master cache stores API1 and the slave cache stores API2. The processing state of the slave cache is set to the valid state, so that in the second publishing step the gateway reads data from the slave cache, and the obtained identifier of the published API is API2.
The above-described process is repeatedly performed in subsequent steps.
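The master/slave arrangement above can be modeled with a small sketch. This is a simplified illustration (class and attribute names are assumptions; in particular, it writes only one cache per step, whereas the first publishing step above seeds both):

```python
class DualCache:
    """Minimal model of the master/slave internal cache: reads go to the
    cache in the valid state, writes go to the one in the invalid state,
    and the processing states are swapped after each write."""

    def __init__(self):
        self.caches = [None, None]  # [master, slave]
        self.valid = 0              # index of the cache in the valid state

    def read(self):
        # The gateway always reads the published-API identifier
        # from the cache whose processing state is valid.
        return self.caches[self.valid]

    def write(self, api_id):
        # The update lands in the invalid cache, so a concurrent
        # gateway read never observes a half-written value.
        invalid = 1 - self.valid
        self.caches[invalid] = api_id
        self.valid = invalid  # swap the processing states

dc = DualCache()
dc.write("API1")   # first publishing step
print(dc.read())   # API1
dc.write("API2")   # second publishing step
print(dc.read())   # API2
```

Because reads and writes never touch the same cache at the same time, the publishing step can be switched without pausing the gateway.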
S310, the gateway determines a target service instance according to the running state of the first API.
It should be noted that the execution process of S310 may refer to S207, and details are not described here.
S311, the gateway acquires the path information of the target service instance.
It should be noted that the execution process of S311 may refer to S208, and details are not repeated here.
S312, the gateway sends a request message to the target service instance according to the path information of the target service instance.
S313, the target service instance sends a response message to the gateway.
S314, the gateway adds the identifier of the first API to the response message.
S315, the gateway sends the response message including the identifier of the first API to the client.
It should be noted that, if the response message sent by the gateway to the client includes the identifier of the first API, the client carries the identifier of the first API in the request message when sending the request message next time.
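This client-side behavior can be sketched as follows; the `Client` class and the dictionary shapes of requests and responses are illustrative assumptions:

```python
class Client:
    """Illustrative client that remembers the API identifier returned in a
    response and attaches it to every subsequent request, letting the
    gateway skip the first-correspondence lookup."""

    def __init__(self, client_id):
        self.client_id = client_id
        self.api_id = None  # learned from the first response

    def build_request(self, msg_type):
        req = {"client_id": self.client_id, "type": msg_type}
        if self.api_id:
            req["api_id"] = self.api_id  # carried on the next request
        return req

    def on_response(self, response):
        # Remember the identifier the gateway placed in the response.
        self.api_id = response.get("api_id", self.api_id)

c = Client(300)
print(c.build_request("second"))   # first request: no api_id yet
c.on_response({"api_id": "API2"})
print(c.build_request("second"))   # next request carries API2
```

From the second request onward the gateway resolves the first API directly from the carried identifier, which is the efficiency gain described above.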
Through the embodiment shown in fig. 3, different latest-version APIs can be published to the corresponding clients in different publishing steps, and only one API is published in each publishing step; therefore, if a failure occurs during service publishing, the faulty API can be located in time, further improving the efficiency of service publishing.
Fig. 4 is a first schematic structural diagram of a request message processing apparatus according to an embodiment of the present invention. Referring to fig. 4, the apparatus may include a receiving module 11, a first determining module 12, a second determining module 13, a third determining module 14, and a transmitting module 15, wherein,
the receiving module 11 is configured to receive a request message sent by a client;
the first determining module 12 is configured to determine a first application programming interface API corresponding to the request message;
the second determining module 13 is configured to determine an operating state of the first API according to a currently executed publishing step in a preset publishing scheme, where the operating state includes a publishing state and a non-publishing state, and the publishing scheme includes multiple publishing steps and corresponding identifiers of APIs in the publishing state;
the third determining module 14 is configured to determine the target service instance according to the running state of the first API, where a version of the API loaded by the target service instance corresponds to the running state of the first API;
the sending module 15 is configured to send the request message to the target service instance, so that the target service instance processes the request message according to the loaded first API.
The request message processing apparatus provided in the embodiment of the present invention may execute the technical solutions shown in the above method embodiments, and the implementation principles and beneficial effects thereof are similar, and are not described herein again.
In a possible implementation, the first determining module 12 is specifically configured to:
judging whether the request message includes an API identifier;
if yes, determining the API corresponding to the API identification as the first API;
if not, acquiring a first corresponding relation and the type of the request message, and determining the first API according to the first corresponding relation and the type of the request message, wherein the first corresponding relation comprises at least one API identifier and the type of the request message corresponding to each API identifier.
In another possible implementation, the second determining module 13 is specifically configured to:
acquiring an identifier of the API in a release state in an internal cache of the gateway;
judging whether the identifier of the first API is the same as the API identifier in the internal cache or not;
if so, determining that the running state of the first API is a release state;
and if not, determining that the running state of the first API is an unpublished state.
In another possible embodiment, the internal cache comprises a master cache and a slave cache; the second determining module 13 is specifically configured to:
acquiring processing states of the master cache and the slave cache, wherein the processing states comprise a valid state and an invalid state; at the same time, the processing state of one cache in the main cache and the slave cache is an effective state, and the processing state of the other cache is an invalid state;
and acquiring the identifier of the API in the release state in a cache in the valid state.
Fig. 5 is a schematic structural diagram of a request message processing apparatus according to an embodiment of the present invention. On the basis of the embodiment shown in fig. 4, referring to fig. 5, the apparatus further comprises a storage module 16 and a replacement module 17, wherein,
the receiving module 11 is further configured to receive the identifier of the API in the release state sent by the first server before the second determining module 13 obtains the identifier of the API in the release state in the cache in the valid state;
the storage module 16 is configured to store, in the cache in the invalid state, an identifier of the API in the release state;
the replacement module 17 is configured to replace the processing states of the master cache and the slave cache.
In another possible implementation, the third determining module 14 is specifically configured to:
judging whether the running state of the first API is a release state;
if yes, determining the target service instance according to a second corresponding relation in an external cache and the identifier of the first API, wherein the second corresponding relation comprises the identifiers of the plurality of APIs and the identifier of the target service instance corresponding to the identifier of each API;
if not, determining a default service instance as the target service instance, wherein the default service instance is a service instance which is not loaded with the API of the latest version.
In another possible implementation, the sending module 15 is specifically configured to:
if the first API is in a non-release state, acquiring default path information from an internal cache of the gateway, and sending the request message to the target service instance according to the default path information, wherein the default path information is the path information of the default service instance, and the default service instance is a service instance without loading the API of the latest version;
and if the first API is in a release state, requesting a registration center to acquire the path information of the target service instance, and sending the request message to the target service instance according to the path information of the target service instance.
In another possible embodiment, the apparatus further comprises an adding module 18, wherein,
the receiving module 11 is further configured to receive a response message sent by the target service instance;
the adding module 18 is configured to add the identifier of the first API in the response message;
the sending module 15 is further configured to send a response message including the identifier of the first API to the client, so that the client carries the identifier of the first API when sending the request message next time.
The request message processing apparatus provided in the embodiment of the present invention may execute the technical solutions shown in the above method embodiments, and the implementation principles and beneficial effects thereof are similar, and are not described herein again.
Fig. 6 is a schematic structural diagram of another request message processing apparatus according to an embodiment of the present invention, referring to fig. 6, the apparatus may include a first determining module 21 and a sending module 22, wherein,
the first determining module 21 is configured to determine a first publishing step currently executed in a publishing scheme and an application programming interface API published in the first publishing step, where the publishing scheme includes a plurality of publishing steps and identifiers of APIs in a publishing state corresponding to the publishing steps;
the sending module 22 is configured to send the identifier of the API to the gateway, so that the gateway stores the identifier of the API in an internal cache of the gateway.
The request message processing apparatus provided in the embodiment of the present invention may execute the technical solutions shown in the above method embodiments, and the implementation principles and beneficial effects thereof are similar, and are not described herein again.
Fig. 7 is a schematic structural diagram of another request message processing apparatus according to an embodiment of the present invention. On the basis of the embodiment shown in fig. 6, please refer to fig. 7, the apparatus further comprises a second determining module 23 and a storing module 24, wherein,
the second determining module 23 is configured to determine a second corresponding relationship according to the publishing scheme, where the second corresponding relationship includes identifiers of multiple APIs and an identifier of a target service instance corresponding to each API identifier;
the storage module 24 is configured to store the second corresponding relationship in an external cache.
The request message processing apparatus provided in the embodiment of the present invention may execute the technical solutions shown in the above method embodiments, and the implementation principles and beneficial effects thereof are similar, and are not described herein again.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the embodiments of the present invention, and are not limited thereto; although embodiments of the present invention have been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the embodiments of the present invention.
Claims (12)
1. A method for processing a request message, comprising:
the gateway receives a request message sent by a client;
the gateway determines a first Application Programming Interface (API) corresponding to the request message;
the gateway determines the running state of a first API according to the currently executed issuing step in a preset issuing scheme, wherein the running state comprises an issuing state and a non-issuing state, and the issuing scheme comprises a plurality of issuing steps and the corresponding identifications of the APIs in the issuing state;
the gateway determines a target service instance according to the running state of the first API, wherein the version of the API loaded by the target service instance corresponds to the running state of the first API;
the gateway sends the request message to the target service instance so that the target service instance processes the request message according to the first API loaded by the target service instance.
2. The method of claim 1, wherein the determining, by the gateway, the first API to which the request message corresponds comprises:
the gateway judges whether the request message includes an API identifier;
if so, the gateway determines the API corresponding to the API identification as the first API;
if not, the gateway obtains a first corresponding relationship and the type of the request message, and determines the first API according to the first corresponding relationship and the type of the request message, wherein the first corresponding relationship comprises at least one API identifier and the type of the request message corresponding to each API identifier.
3. The method according to claim 1 or 2, wherein the gateway determines the running state of the first API according to the currently executed publishing step in the preset publishing scheme, and comprises:
the gateway acquires the identifier of the API in the release state in the internal cache of the gateway;
the gateway judges whether the identifier of the first API is the same as the API identifier in the internal cache;
if so, the gateway determines that the running state of the first API is a release state;
if not, the gateway determines that the running state of the first API is an unpublished state.
4. The method of claim 3, wherein the internal cache comprises a master cache and a slave cache; the gateway obtains the identifier of the API in the release state in the internal cache of the gateway, and the method comprises the following steps:
the gateway acquires processing states of the main cache and the auxiliary cache, wherein the processing states comprise a valid state and an invalid state; at the same time, the processing state of one cache in the main cache and the slave cache is an effective state, and the processing state of the other cache is an invalid state;
and the gateway acquires the identifier of the API in the release state from the cache in the effective state.
5. The method of claim 4, wherein before the gateway obtains the identity of the API in the published state from the cache in the valid state, the method further comprises:
the gateway receives the identifier of the API in the release state, which is sent by a first server;
the gateway stores the identifier of the API in the release state in the cache in the invalid state;
the gateway replaces the processing state of the master cache and the slave cache.
6. The method of claim 1 or 2, wherein the determining, by the gateway, the target service instance according to the running state of the first API comprises:
the gateway judges whether the running state of the first API is a release state;
if yes, the gateway determines the target service instance according to a second corresponding relation in an external cache and the identifier of the first API, wherein the second corresponding relation comprises the identifiers of the plurality of APIs and the identifier of the target service instance corresponding to the identifier of each API;
if not, the gateway determines a default service instance as the target service instance, wherein the default service instance is a service instance which is not loaded with the API of the latest version.
7. The method of claim 1 or 2, wherein the gateway sending the request message to the target service instance comprises:
if the first API is in a non-release state, acquiring default path information from an internal cache of the gateway, and sending the request message to the target service instance according to the default path information, wherein the default path information is the path information of the default service instance, and the default service instance is a service instance without loading the API of the latest version;
and if the first API is in a release state, requesting a registration center to acquire the path information of the target service instance, and sending the request message to the target service instance according to the path information of the target service instance.
8. The method according to claim 1 or 2, characterized in that the method further comprises:
the gateway receives a response message sent by the target service instance;
the gateway adds the identification of the first API in the response message;
and the gateway sends a response message including the identifier of the first API to the client so that the client carries the identifier of the first API when sending a request message next time.
9. A method for processing a request message, comprising:
the method comprises the steps that a first server determines a first issuing step currently executed in an issuing scheme and an Application Programming Interface (API) issued in the first issuing step, wherein the issuing scheme comprises a plurality of issuing steps and corresponding identifications of the APIs in issuing states; enabling a gateway to determine an operation state of a first API according to a currently executed issuing step in a preset issuing scheme, wherein the operation state comprises an issuing state and a non-issuing state, and the gateway determines a target service instance according to the operation state of the first API, wherein the version of the API loaded by the target service instance corresponds to the operation state of the first API, so that the target service instance processes the request message according to the loaded first API;
the first server sends the identifier of the API to a gateway, so that the gateway stores the identifier of the API in an internal cache of the gateway.
10. The method of claim 9, further comprising:
the first server determines a second corresponding relation according to the publishing scheme, wherein the second corresponding relation comprises the identifiers of the plurality of APIs and the identifier of the target service instance corresponding to the identifier of each API;
the first server stores the second correspondence in an external cache.
11. A request message processing device is characterized by comprising a receiving module, a first determining module, a second determining module, a third determining module and a sending module, wherein,
the receiving module is used for receiving a request message sent by a client;
the first determining module is used for determining a first Application Programming Interface (API) corresponding to the request message;
the second determining module is used for determining the running state of the first API according to the currently executed publishing step in a preset publishing scheme, wherein the running state comprises a publishing state and a non-publishing state, and the publishing scheme comprises a plurality of publishing steps and the corresponding identifications of the APIs in the publishing state;
the third determining module is configured to determine a target service instance according to the running state of the first API, where a version of the API loaded by the target service instance corresponds to the running state of the first API;
the sending module is configured to send the request message to the target service instance, so that the target service instance processes the request message according to the loaded first API.
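The gateway flow in the device claim above (determine the API for a request, derive its running state from the currently executed publishing step, and select the instance whose loaded API version matches that state) can be sketched as follows. The scheme layout, instance identifiers, and the `route_request` helper are illustrative assumptions, not part of the patent.

```python
# Publishing scheme: each step lists the API identifiers that are in the
# publishing state once that step has executed.
PUBLISH_SCHEME = [
    {"step": 1, "published_apis": {"api-login"}},
    {"step": 2, "published_apis": {"api-login", "api-order"}},
]

# Version of the API loaded by each service instance: the "new" instance has
# loaded the published version, the "old" one keeps the non-published version.
INSTANCES = {"new": "instance-v2", "old": "instance-v1"}


def route_request(api_id, current_step):
    """Return the target service instance for a request to api_id."""
    step = next(s for s in PUBLISH_SCHEME if s["step"] == current_step)
    # Running state of the first API: publishing vs. non-publishing state.
    published = api_id in step["published_apis"]
    # Target instance's loaded API version corresponds to the running state.
    return INSTANCES["new"] if published else INSTANCES["old"]
```

During step 1 a request to `api-order` still reaches the old instance; once step 2 executes, the same request is routed to the instance that loaded the published version.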
12. A request message processing device is characterized by comprising a first determining module and a sending module, wherein,
the first determining module is used for determining a first publishing step currently executed in a publishing scheme and an Application Programming Interface (API) published in the first publishing step, wherein the publishing scheme comprises a plurality of publishing steps and the corresponding identifications of the APIs in the publishing state; enabling a gateway to determine a running state of a first API according to the currently executed publishing step in the preset publishing scheme, wherein the running state comprises a publishing state and a non-publishing state, and the gateway determines a target service instance according to the running state of the first API, wherein the version of the API loaded by the target service instance corresponds to the running state of the first API, so that the target service instance processes the request message according to the loaded first API;
and the sending module is used for sending the identifier of the API to a gateway so that the gateway stores the identifier of the API in an internal cache of the gateway.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711321264.7A CN108055322B (en) | 2017-12-12 | 2017-12-12 | Request message processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108055322A CN108055322A (en) | 2018-05-18 |
CN108055322B true CN108055322B (en) | 2020-12-25 |
Family
ID=62131948
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711321264.7A Active CN108055322B (en) | 2017-12-12 | 2017-12-12 | Request message processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108055322B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108932121B (en) * | 2018-05-22 | 2021-12-07 | 哈尔滨工业大学(威海) | Module and method oriented to multi-tenant distributed service component research and development |
CN111092811B (en) * | 2018-10-24 | 2021-11-26 | 北京金山云网络技术有限公司 | Request processing method and device, API gateway and readable storage medium |
CN111090449A (en) * | 2018-10-24 | 2020-05-01 | 北京金山云网络技术有限公司 | API service access method and device and electronic equipment |
CN109672558B (en) * | 2018-11-30 | 2021-12-07 | 哈尔滨工业大学(威海) | Aggregation and optimal matching method, equipment and storage medium for third-party service resources |
CN113783914A (en) * | 2020-09-01 | 2021-12-10 | 北京沃东天骏信息技术有限公司 | Data processing method, device and equipment |
CN112788099A (en) * | 2020-11-11 | 2021-05-11 | 中移雄安信息通信科技有限公司 | Method, device and equipment for loading back-end service and computer storage medium |
CN112612508B (en) * | 2020-12-24 | 2024-08-06 | 新华三云计算技术有限公司 | API version control method, device and storage medium in API gateway |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102833109A (en) * | 2012-08-30 | 2012-12-19 | 华为技术有限公司 | Positional information processing method and equipment of fault point |
CN104216724A (en) * | 2013-06-03 | 2014-12-17 | 阿里巴巴集团控股有限公司 | Method and system for updating network application program interface |
WO2016050034A1 (en) * | 2014-09-30 | 2016-04-07 | 中兴通讯股份有限公司 | Group addressing processing method, device, mtc intercommunicating gateway and api gw |
CN105786531A (en) * | 2014-12-19 | 2016-07-20 | 江苏融成嘉益信息科技有限公司 | Cooperative work method for online software update and data encryption |
CN106792923A (en) * | 2017-02-09 | 2017-05-31 | 华为软件技术有限公司 | A kind of method and device for configuring qos policy |
Also Published As
Publication number | Publication date |
---|---|
CN108055322A (en) | 2018-05-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108055322B (en) | Request message processing method and device | |
CN109688235B (en) | Virtual network service processing method, device and system, controller, and storage medium | |
US20210176310A1 (en) | Data synchronization method and system | |
CN102333029B (en) | Routing method in server cluster system | |
CN112260876A (en) | Dynamic gateway route configuration method, platform, computer equipment and storage medium | |
CN107733957B (en) | Distributed service configuration system and version number distribution method | |
CN112418794B (en) | Service circulation method and device | |
US10191732B2 (en) | Systems and methods for preventing service disruption during software updates | |
CN111143023B (en) | Resource changing method and device, equipment and storage medium | |
US20130238799A1 (en) | Access control method, access control apparatus, and access control program | |
CN102402441A (en) | System and method for configuring multiple computers | |
CN112882738A (en) | Configuration information updating method and device under micro-service architecture and electronic equipment | |
CN112672420B (en) | Method, system, device and storage medium for positioning terminal in communication network | |
CN112860787A (en) | Method for switching master nodes in distributed master-slave system, master node device and storage medium | |
CN110569124A (en) | Task allocation method and device | |
EP2416526B1 (en) | Task switching method, server node and cluster system | |
CN113900842B (en) | Message consumption method and device, electronic equipment and computer storage medium | |
CN111858050A (en) | Server cluster mixed deployment method, cluster management node and related system | |
CN108259578A (en) | The upgrade method and device of clustered node | |
CN110213213B (en) | Timing task processing method and system for application | |
CN108509296B (en) | Method and system for processing equipment fault | |
US10637748B2 (en) | Method and apparatus for establishing interface between VNFMS, and system | |
CN110417876A (en) | Session method in a distributed system, node server and main control device | |
CN109714328B (en) | Capacity adjustment method and device for game cluster | |
JP5691306B2 (en) | Information processing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20201119 | Address after: 266100 Songling Road No. 399, Laoshan District, Qingdao, Shandong Province | Applicant after: Qingdao Haishi Information Technology Co., Ltd. | Address before: 266061 Songling Road No. 399, Laoshan District, Qingdao, Shandong Province | Applicant before: QINGDAO HISENSE INTELLIGENT COMMERCIAL SYSTEM Co., Ltd. |
GR01 | Patent grant | ||