CN108055322A - Request message processing method and processing device - Google Patents

Request message processing method and processing device

Info

Publication number
CN108055322A
CN108055322A CN201711321264.7A CN201711321264A
Authority
CN
China
Prior art keywords
api
gateway
mark
request message
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711321264.7A
Other languages
Chinese (zh)
Other versions
CN108055322B (en)
Inventor
黄显晖
马映辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haishi Information Technology Co., Ltd
Original Assignee
Qingdao Hisense Intelligent Business Systems Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Intelligent Business Systems Co., Ltd.
Priority to CN201711321264.7A priority Critical patent/CN108055322B/en
Publication of CN108055322A publication Critical patent/CN108055322A/en
Application granted granted Critical
Publication of CN108055322B publication Critical patent/CN108055322B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/60Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63Routing a service request depending on the request content or context

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Embodiments of the present invention provide a request message processing method and device. The method includes: a gateway receives a request message sent by a client; the gateway determines a first application programming interface (API) corresponding to the request message; the gateway determines the operating state of the first API according to the issuing step currently being executed in a preset distribution scheme, where the operating state is either an issued state or a non-issued state, and the distribution scheme includes multiple issuing steps and, for each step, the identifier of the API in the issued state; the gateway determines a target service instance according to the operating state of the first API, where the version of the API loaded by the target service instance corresponds to the operating state of the first API; and the gateway sends the request message to the target service instance, so that the target service instance processes the request message with the first API it has loaded. The method and device are used to improve the efficiency of service release.

Description

Request message processing method and processing device
Technical field
Embodiments of the present invention relate to the field of communication technology, and in particular to a request message processing method and device.
Background technology
At present, in the client/server (Client/Server, CS for short) architecture, a server usually provides services to clients. In practical applications, new versions of the services provided by the server are also released from time to time in order to upgrade those services.
In the prior art, after a new version of a service is determined, in order to ensure a stable transition between service versions, the new version is usually first released to only a part of the users, while the remaining users continue to use the old version. That is, after the gateway receives a request message sent by a user, it forwards the request message, according to the user's identifier, either to a service instance of the new version or to a service instance of the old version. A service instance generally includes multiple application programming interfaces (APIs); the service instance of the new version loads the new-version APIs, and the service instance of the old version loads the old-version APIs.
However, in the prior art, if a failure occurs while the new-version service instance is providing the service to users, the failing API within the new service instance cannot be located, so the entire service release fails, which makes service release inefficient.
Summary of the invention
Embodiments of the present invention provide a request message processing method and device that improve the efficiency of service release.
In a first aspect, an embodiment of the present invention provides a request message processing method, including:
a gateway receives a request message sent by a client;
the gateway determines a first application programming interface (API) corresponding to the request message;
the gateway determines the operating state of the first API according to the issuing step currently being executed in a preset distribution scheme, where the operating state includes an issued state and a non-issued state, and the distribution scheme includes multiple issuing steps and the identifier of the API in the issued state corresponding to each step;
the gateway determines a target service instance according to the operating state of the first API, where the version of the API loaded by the target service instance corresponds to the operating state of the first API;
the gateway sends the request message to the target service instance, so that the target service instance processes the request message with the first API it has loaded.
In a possible embodiment, the gateway determining the first API corresponding to the request message includes:
the gateway judges whether the request message includes an API identifier;
if so, the gateway determines the API indicated by that identifier as the first API;
if not, the gateway obtains a first correspondence and the type of the request message, and determines the first API according to the first correspondence and the type of the request message, where the first correspondence includes the identifier of at least one API and, for each API identifier, the type of request message to which it corresponds.
In another possible embodiment, the gateway determining the operating state of the first API according to the issuing step currently being executed in the preset distribution scheme includes:
the gateway obtains, from the gateway's inner cache, the identifier of the API in the issued state;
the gateway judges whether the identifier of the first API is identical to the API identifier in the inner cache;
if so, the gateway determines that the operating state of the first API is the issued state;
if not, the gateway determines that the operating state of the first API is the non-issued state.
In another possible embodiment, the inner cache includes a master cache and a slave cache; the gateway obtaining, from the gateway's inner cache, the identifier of the API in the issued state includes:
the gateway obtains the processing states of the master cache and the slave cache, where the processing state includes a valid state and an invalid state; at any moment, one of the master cache and the slave cache is in the valid state and the other is in the invalid state;
the gateway obtains the identifier of the API in the issued state from the cache that is in the valid state.
In another possible embodiment, before the gateway obtains the identifier of the API in the issued state from the cache in the valid state, the method further includes:
the gateway receives the identifier of the API in the issued state sent by a first server;
the gateway stores the identifier of the API in the issued state into the cache that is in the invalid state;
the gateway swaps the processing states of the master cache and the slave cache.
In another possible embodiment, the gateway determining the target service instance according to the operating state of the first API includes:
the gateway judges whether the operating state of the first API is the issued state;
if so, the gateway determines the target service instance according to a second correspondence in an external cache and the identifier of the first API, where the second correspondence includes the identifiers of multiple APIs and the identifier of the target service instance corresponding to each API identifier;
if not, the gateway determines a default service instance as the target service instance, where the default service instance is a service instance that does not load the latest-version API.
In another possible embodiment, the gateway sending the request message to the target service instance includes:
if the first API is in the non-issued state, the gateway obtains default path information from the gateway's inner cache and sends the request message to the target service instance according to the default path information, where the default path information is the routing information of the default service instance, and the default service instance is a service instance that does not load the latest-version API;
if the first API is in the issued state, the gateway requests the routing information of the target service instance from a registration center and sends the request message to the target service instance according to that routing information.
In another possible embodiment, the method further includes:
the gateway receives a response message sent by the target service instance;
the gateway adds the identifier of the first API to the response message;
the gateway sends the response message containing the identifier of the first API to the client, so that the client carries the identifier of the first API the next time it sends a request message.
In a second aspect, an embodiment of the present invention provides a request message processing method, including:
a first server determines the first issuing step currently being executed in a distribution scheme and the application programming interface (API) released in the first issuing step, where the distribution scheme includes multiple issuing steps and the identifier of the API in the issued state corresponding to each step;
the first server sends the identifier of the API to a gateway, so that the gateway stores the identifier of the API in the gateway's inner cache.
In another possible embodiment, the method further includes:
the first server determines a second correspondence according to the distribution scheme, where the second correspondence includes the identifiers of multiple APIs and the identifier of the target service instance corresponding to each API identifier;
the first server stores the second correspondence in an external cache.
In a third aspect, an embodiment of the present invention provides a request message processing device, including a receiving module, a first determining module, a second determining module, a third determining module and a sending module, wherein,
the receiving module is configured to receive a request message sent by a client;
the first determining module is configured to determine a first application programming interface (API) corresponding to the request message;
the second determining module is configured to determine the operating state of the first API according to the issuing step currently being executed in a preset distribution scheme, where the operating state includes an issued state and a non-issued state, and the distribution scheme includes multiple issuing steps and the identifier of the API in the issued state corresponding to each step;
the third determining module is configured to determine a target service instance according to the operating state of the first API, where the version of the API loaded by the target service instance corresponds to the operating state of the first API;
the sending module is configured to send the request message to the target service instance, so that the target service instance processes the request message with the first API it has loaded.
In a possible embodiment, the first determining module is specifically configured to:
judge whether the request message includes an API identifier;
if so, determine the API indicated by that identifier as the first API;
if not, obtain a first correspondence and the type of the request message, and determine the first API according to the first correspondence and the type of the request message, where the first correspondence includes the identifier of at least one API and, for each API identifier, the type of request message to which it corresponds.
In another possible embodiment, the second determining module is specifically configured to:
obtain, from the gateway's inner cache, the identifier of the API in the issued state;
judge whether the identifier of the first API is identical to the API identifier in the inner cache;
if so, determine that the operating state of the first API is the issued state;
if not, determine that the operating state of the first API is the non-issued state.
In another possible embodiment, the inner cache includes a master cache and a slave cache; the second determining module is specifically configured to:
obtain the processing states of the master cache and the slave cache, where the processing state includes a valid state and an invalid state; at any moment, one of the master cache and the slave cache is in the valid state and the other is in the invalid state;
obtain the identifier of the API in the issued state from the cache that is in the valid state.
In another possible embodiment, the device further includes a storage module and a swap module, wherein,
the receiving module is further configured to receive the identifier of the API in the issued state sent by a first server before the second determining module obtains the identifier of the API in the issued state from the cache in the valid state;
the storage module is configured to store the identifier of the API in the issued state into the cache that is in the invalid state;
the swap module is configured to swap the processing states of the master cache and the slave cache.
In another possible embodiment, the third determining module is specifically configured to:
judge whether the operating state of the first API is the issued state;
if so, determine the target service instance according to a second correspondence in an external cache and the identifier of the first API, where the second correspondence includes the identifiers of multiple APIs and the identifier of the target service instance corresponding to each API identifier;
if not, determine a default service instance as the target service instance, where the default service instance is a service instance that does not load the latest-version API.
In another possible embodiment, the sending module is specifically configured to:
if the first API is in the non-issued state, obtain default path information from the gateway's inner cache and send the request message to the target service instance according to the default path information, where the default path information is the routing information of the default service instance, and the default service instance is a service instance that does not load the latest-version API;
if the first API is in the issued state, request the routing information of the target service instance from a registration center and send the request message to the target service instance according to that routing information.
In another possible embodiment, the device further includes an adding module, wherein,
the receiving module is further configured to receive a response message sent by the target service instance;
the adding module is configured to add the identifier of the first API to the response message;
the sending module is further configured to send the response message containing the identifier of the first API to the client, so that the client carries the identifier of the first API the next time it sends a request message.
In a fourth aspect, an embodiment of the present invention provides a request message processing device, including a first determining module and a sending module, wherein,
the first determining module is configured to determine the first issuing step currently being executed in a distribution scheme and the application programming interface (API) released in the first issuing step, where the distribution scheme includes multiple issuing steps and the identifier of the API in the issued state corresponding to each step;
the sending module is configured to send the identifier of the API to a gateway, so that the gateway stores the identifier of the API in the gateway's inner cache.
In a possible embodiment, the device further includes a second determining module and a storage module, wherein,
the second determining module is configured to determine a second correspondence according to the distribution scheme, where the second correspondence includes the identifiers of multiple APIs and the identifier of the target service instance corresponding to each API identifier;
the storage module is configured to store the second correspondence in an external cache.
In the request message processing method and device provided by the embodiments of the present invention, the pre-established distribution scheme includes multiple issuing steps and the identifier of the API in the issued state corresponding to each step; each issuing step corresponds to one latest-version API, i.e. during service release only one latest-version API is released in each issuing step. Accordingly, after the gateway receives the request message sent by the client, it obtains the first API corresponding to the client and determines the target service instance according to the operating state of the first API, where the version of the first API loaded by the target service instance corresponds to the operating state of the first API. For example, when the first API is in the issued state, the version of the API loaded by the determined target service instance is the latest version; when the first API is in the non-issued state, the version of the API loaded by the determined target service instance is not the latest version. In this way, the latest versions of different APIs can be released to the corresponding clients in different issuing steps. In the above process, the service is released at the granularity of an API, which reduces the release granularity, so that if a failure occurs during the release the failing API can be located in time, thereby improving the efficiency of service release.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings in the following description show only some embodiments of the present invention, and persons of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is an architecture diagram of the request message processing method provided in an embodiment of the present invention;
Fig. 2 is schematic diagram one of the request message processing method provided in an embodiment of the present invention;
Fig. 3 is schematic diagram two of the request message processing method provided in an embodiment of the present invention;
Fig. 4 is structure diagram one of a request message processing device provided in an embodiment of the present invention;
Fig. 5 is structure diagram two of a request message processing device provided in an embodiment of the present invention;
Fig. 6 is structure diagram one of another request message processing device provided in an embodiment of the present invention;
Fig. 7 is structure diagram two of another request message processing device provided in an embodiment of the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is an architecture diagram of the request message processing method provided in an embodiment of the present invention. Referring to Fig. 1, the architecture includes a client 101, a gateway 102, a service cluster 103, a first server 104, a container management server 105, an external cache 106 and a registration center 107.
The client 101 may be a mobile phone, a computer or a similar device. The client 101 can send request messages to the service cluster 103 through the gateway 102 and receive, through the gateway 102, the response messages sent by the service cluster 103.
An inner cache is provided in the gateway 102; the identifier of the API in the issued state and default path information are stored in the inner cache. The first server 104 can determine the API in the issued state according to the issuing step currently being executed in the distribution scheme and update the identifier of the API in the issued state in the inner cache. The default path information is the routing information of the default service instance, and the default service instance is a service instance that does not load the latest-version API. Of course, the inner cache may also store the operating state of each API, which includes the issued state and the non-issued state.
The service cluster 103 includes multiple service instances, and each service instance can provide services to the client 101 through the gateway 102. The first server 104 can control the container management server 105 to deploy service instances according to the issuing step currently being executed in the distribution scheme.
A distribution scheme is loaded in the first server 104. The distribution scheme indicates the issuing steps for releasing the APIs of a service and the switching conditions between the issuing steps; each issuing step indicates that one API is released to a preset user group. For example, the first issuing step of the distribution scheme may be to release API1 to a first user group, and after the first issuing step has run for one day, the second issuing step of the distribution scheme is executed, which may be to release API2 to a second user group.
The container management server 105 can deploy service instances under the control of the first server 104. For example, the number of service instances deployed by the container management server 105 may differ between issuing steps.
The external cache 106 stores the correspondence between API identifiers and service instance identifiers. Optionally, the first server 104 may store this correspondence into the external cache 106 during initialization.
The registration center 107 stores the routing information of service instances. Optionally, the registration center 107 may contain only the routing information of the service instances that load the latest-version API, i.e. the routing information of the default service instance may not be included in the registration center 107.
In this application, the distribution scheme includes multiple issuing steps, and each issuing step indicates that one latest-version API is released to a preset user group, i.e. during service release different APIs are released in different periods. Accordingly, after the gateway receives the request message sent by the client, it can determine, according to the issuing step currently being executed in the distribution scheme, the target service instance to which the request message is forwarded. For example, assume the client corresponds to the first API: after the request message sent by the client is received, if the first API is in the non-issued state the request message is forwarded to the service instance loading the old version of the first API, and if the first API is in the issued state the request message is forwarded to the service instance loading the new version of the first API. In this way, the latest versions of different APIs are released to the corresponding clients in different periods. In the above process, the service is released at the granularity of an API, which reduces the release granularity, so that if a failure occurs during the release the failing API can be located in time, thereby improving the efficiency of service release.
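The forwarding decision just described can be sketched roughly as follows. This is only an illustrative outline under assumed names (InnerCache, ExternalCache, Registry and the other identifiers are not taken from the patent); the detailed steps are described with Fig. 2 and Fig. 3 below.

```java
// Illustrative outline of the dispatch decision described above; every type,
// method and field name here is an assumption made for this sketch.
interface InnerCache    { String issuedApiId(); String defaultRoute(); }
interface ExternalCache { String instanceIdFor(String apiId); }
interface Registry      { String routeOf(String instanceId); }

record Request(String clientId, String type, String apiId) {}

final class GatewayDispatcher {
    private final InnerCache innerCache;       // identifier of the API in the issued state
    private final ExternalCache externalCache; // API identifier -> target service instance
    private final Registry registry;           // routes of instances loading the latest API

    GatewayDispatcher(InnerCache ic, ExternalCache ec, Registry r) {
        this.innerCache = ic;
        this.externalCache = ec;
        this.registry = r;
    }

    // Decide where a request should be forwarded.
    String chooseRoute(Request request) {
        String firstApiId = request.apiId();   // simplified: assume the request carries it
        boolean issued = firstApiId != null
                && firstApiId.equals(innerCache.issuedApiId());
        if (issued) {
            // issued state: forward to the instance loading the new version of the API
            String instanceId = externalCache.instanceIdFor(firstApiId);
            return registry.routeOf(instanceId);
        }
        // non-issued state: forward to the default instance loading the old version
        return innerCache.defaultRoute();
    }
}
```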
The technical solutions of this application are described in detail below through specific embodiments. The following specific embodiments may be combined with one another, and the same or similar content is not repeated in different embodiments.
Fig. 2 is schematic diagram one of the request message processing method provided in an embodiment of the present invention. Referring to Fig. 2, the method includes:
S201: the first server determines the first issuing step currently being executed in the distribution scheme and the API released in the first issuing step.
The distribution scheme includes multiple issuing steps and the identifier of the API in the issued state corresponding to each step. Each issuing step indicates the release of one corresponding API, and only one issuing step is being executed at any moment; therefore, the API of the issuing step currently being executed is the API in the issued state.
Optionally, the first issuing step is any one step in the distribution scheme. The first server executes the distribution scheme step by step according to the order of the steps.
Optionally, the distribution scheme further includes the switching conditions between issuing steps. For example, a switching condition may be that the execution duration exceeds a preset duration, or that the execution success rate exceeds a preset success rate. Of course, in practical applications the switching conditions can be set as needed, and the embodiments of the present invention do not specifically limit them.
For example, the distribution scheme may be as follows: after start-up, the first issuing step is executed, which releases API1 to client 1 to client 1000; after the execution duration of the first issuing step exceeds 10 hours, the second issuing step is executed, which releases API2 to client 1 to client 2000; when the execution success rate of the second issuing step exceeds 90%, the third issuing step is executed, and so on, until the last issuing step is executed.
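As an illustration only, such a plan could be represented by a structure like the following minimal sketch; all class and field names here are assumptions made for the example, not terms from the patent.

```java
import java.time.Duration;
import java.util.List;

final class ReleaseStep {
    final String apiId;            // API released in this step, e.g. "API1"
    final List<String> clientIds;  // preset user group the API is released to
    final Duration maxDuration;    // switch once the step has run this long
    final double minSuccessRate;   // or once the success rate reaches this value

    ReleaseStep(String apiId, List<String> clientIds,
                Duration maxDuration, double minSuccessRate) {
        this.apiId = apiId;
        this.clientIds = clientIds;
        this.maxDuration = maxDuration;
        this.minSuccessRate = minSuccessRate;
    }
}

final class ReleasePlan {
    private final List<ReleaseStep> steps;
    private int current = 0;

    ReleasePlan(List<ReleaseStep> steps) { this.steps = steps; }

    // The API of the step currently being executed is the API in the issued state.
    String issuedApiId() { return steps.get(current).apiId; }

    // Move to the next issuing step once either switching condition is met.
    void advanceIfSatisfied(Duration elapsed, double successRate) {
        ReleaseStep step = steps.get(current);
        boolean longEnough = elapsed.compareTo(step.maxDuration) >= 0;
        boolean successfulEnough = successRate >= step.minSuccessRate;
        if ((longEnough || successfulEnough) && current < steps.size() - 1) {
            current++;
        }
    }
}
```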
It should be noted that after S201 the first server also sends an instance deployment notification to the container management server, so that the container management server deploys the service instances.
Optionally, the number of service instances may differ between issuing steps. In practical applications, the number of service instances in each issuing step can be set as needed; the embodiments of the present invention do not specifically limit it.
Optionally, the service instances deployed by the container management server include at least one default service instance and at least one new service instance. The API loaded by the default service instance is the old-version API, and the API loaded by the new service instance is the latest-version API.
It should also be noted that before S201 the first server may determine the second correspondence according to the distribution scheme, where the second correspondence includes the identifiers of multiple APIs and the identifier of the target service instance corresponding to each API identifier, and store the second correspondence in the external cache.
For example, the second correspondence can be as shown in table 1:
Table 1
The mark of API The mark of Service Instance
API1 Service Instance 1
API2 Service Instance 2
API3 Service Instance 3
…… ……
It should be noted that Table 1 merely illustrates the second correspondence by way of example and does not limit it.
S202: the first server sends the identifier of the API in the issued state to the gateway.
S203: the gateway stores the identifier of the API in the issued state in the gateway's inner cache.
Optionally, the gateway may judge whether an API identifier already exists in the gateway's inner cache; if so, the API identifier in the inner cache is updated to the received identifier; if not, the received identifier is stored in the inner cache.
S204: the client sends a request message to the gateway.
S205: the gateway determines the first API corresponding to the request message.
In the embodiments of the present invention, different types of request messages need to be processed by different APIs, i.e. there is a preset first correspondence between request message types and APIs, and the first correspondence includes the identifier of at least one API and, for each API identifier, the type of request message to which it corresponds. For example, the first correspondence can be as shown in Table 2:
Table 2
The type of request message The mark of API
First kind request message API1
Second Type request message API2
3rd type request message API3
…… ……
Optionally, when an API is released, it may also be released to only some of the clients. Accordingly, the first correspondence may also include client identifiers, and in S205 the gateway determines the first API according to both the client's identifier and the type of the request message. For example, the first correspondence can be as shown in Table 3:
Table 3
The mark of client The type of request message The mark of API
Client 1- clients 1000 First kind request message API1
Client 200- clients 1000 Second Type request message API2
Client 500- clients 2000 3rd type request message API3
…… …… ……
It should be noted that Table 2 and Table 3 merely illustrate the first correspondence between request message types and APIs by way of example and do not limit it. In practical applications, the first correspondence can be set as needed, and the embodiments of the present invention do not specifically limit it.
Optionally, after the client sends a request message to the gateway, if the gateway determines from the above first correspondence that the API corresponding to the client is the first API, the gateway carries the identifier of the first API in the response message sent to the client. In this way, when the client sends a request message the next time, it can carry the identifier of the first API in the request message; after receiving that request message, the gateway directly determines the API indicated by the carried identifier as the first API, without having to look up the first correspondence, which improves the processing efficiency of the gateway.
It should be noted that during the service release, some clients may not have any latest API released to them; such clients do not correspond to any API, and the gateway cannot obtain a first API for them. In that case, the gateway can send the request messages of those clients to a service instance that does not load the latest-version API.
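A minimal sketch of this lookup (S205), assuming the Table 3 form of the first correspondence, is given below; the class and method names are illustrative assumptions, not part of the patent. An empty result corresponds to a client to which no latest API has been released, whose requests go to an instance that does not load the latest-version API.

```java
import java.util.Map;
import java.util.Optional;

record RequestMessage(String clientId, String type, String apiId) {}

final class ApiResolver {
    // First correspondence as in Table 3: client identifier -> (request type -> API identifier).
    private final Map<String, Map<String, String>> firstCorrespondence;

    ApiResolver(Map<String, Map<String, String>> firstCorrespondence) {
        this.firstCorrespondence = firstCorrespondence;
    }

    Optional<String> resolveFirstApi(RequestMessage request) {
        // 1. If the request already carries an API identifier, use it directly.
        if (request.apiId() != null) {
            return Optional.of(request.apiId());
        }
        // 2. Otherwise look the API up by client identifier and request type.
        Map<String, String> byType = firstCorrespondence.get(request.clientId());
        if (byType == null) {
            return Optional.empty();   // no latest API has been released to this client
        }
        return Optional.ofNullable(byType.get(request.type()));
    }
}
```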
S206: the gateway determines the operating state of the first API according to the preset distribution scheme.
The operating state includes the issued state and the non-issued state. When the API released in the issuing step currently being executed in the distribution scheme is the first API, the operating state of the first API is the issued state; when the API released in the issuing step currently being executed is not the first API, the operating state of the first API is the non-issued state.
Optionally, the identifier of the API in the issued state can be stored in the gateway's inner cache. In this way, the gateway can obtain the identifier of the API in the issued state from the inner cache and judge whether the identifier of the first API is identical to it; if so, the gateway determines that the state of the first API is the issued state, and if not, the non-issued state.
Since the gateway can access its inner cache quickly, the gateway can quickly determine the operating state of the first API in this way. Furthermore, because the identifier of the API in the issued state is stored in the gateway's inner cache, only that identifier needs to be updated when the service release starts or when the issuing steps switch, and the gateway does not need to be restarted.
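The comparison against the inner cache (S206) amounts to a single equality check, roughly as in the following sketch; the names are assumptions made for illustration.

```java
enum ApiState { ISSUED, NOT_ISSUED }

final class ApiStateChecker {
    // Identifier of the API in the issued state, updated when the issuing step switches.
    private volatile String issuedApiId;

    void onIssuingStepSwitch(String apiId) { this.issuedApiId = apiId; }

    ApiState stateOf(String firstApiId) {
        return firstApiId != null && firstApiId.equals(issuedApiId)
                ? ApiState.ISSUED
                : ApiState.NOT_ISSUED;
    }
}
```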
Of course, the operating state of each API can also be stored in the gateway's inner cache. For example, the operating states of the APIs stored in the inner cache can be as shown in Table 4:
Table 4
The mark of API Operating status
API1 Issued state
API2 Non- issued state
API3 Non- issued state
…… ……
It should be noted that Table 4 merely illustrates the operating states of the APIs by way of example and does not limit them.
S207: the gateway determines the target service instance according to the operating state of the first API.
The version of the first API loaded by the target service instance corresponds to the operating state of the first API. That is, when the operating state of the first API is the issued state, the version of the API loaded by the target service instance is the latest version; when the state of the first API is the non-issued state, the version of the API loaded by the target service instance is not the latest version.
Optionally, the gateway may judge whether the operating state of the first API is the issued state; if so, the gateway determines the target service instance according to the second correspondence in the external cache and the identifier of the first API; if not, the gateway determines the default service instance as the target service instance, where the default service instance is a service instance that does not load the latest-version API.
The gateway can access the external cache quickly, so it can quickly determine the target service instance according to the second correspondence in the external cache, which improves the efficiency of determining the target service instance.
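A sketch of this selection (S207), with the second correspondence modelled as a simple map, follows; the fallback to the default instance when an API is missing from the map is an added assumption for the example.

```java
import java.util.Map;

final class TargetSelector {
    private final Map<String, String> secondCorrespondence; // API id -> instance id, from the external cache
    private final String defaultInstanceId;                 // instance that does not load the latest API

    TargetSelector(Map<String, String> secondCorrespondence, String defaultInstanceId) {
        this.secondCorrespondence = secondCorrespondence;
        this.defaultInstanceId = defaultInstanceId;
    }

    String selectTargetInstance(String firstApiId, boolean issued) {
        if (issued) {
            // Issued state: look the target instance up via the second correspondence.
            return secondCorrespondence.getOrDefault(firstApiId, defaultInstanceId);
        }
        // Non-issued state: fall back to the default instance loading the old version.
        return defaultInstanceId;
    }
}
```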
S208: the gateway sends the request message to the target service instance.
After the gateway has selected the target service instance, it can obtain the routing information of the target service instance and send the request message to the target service instance according to that routing information, so that the target service instance processes the request message.
Optionally, the routing information of the target service instance can be represented by the Internet Protocol (IP) address and/or the Media Access Control (MAC) address of the target service instance.
Optionally, when the first API is in the non-issued state, the routing information of the target service instance can be obtained in the following feasible way: the gateway obtains the default path information from its inner cache and determines the default path information as the routing information of the target service instance.
Optionally, when the first API is in the issued state, the routing information of the target service instance can be obtained in the following feasible way: the gateway requests the routing information of the target service instance from the registration center.
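The two cases can be captured in a small helper like the following; RegistryClient is an assumed interface written for this sketch, not an existing library API.

```java
interface RegistryClient {
    String lookupRoute(String instanceId);   // e.g. returns "10.0.0.5:8080"
}

final class RouteResolver {
    private final String defaultRoute;     // default path information kept in the inner cache
    private final RegistryClient registry; // client of the registration center

    RouteResolver(String defaultRoute, RegistryClient registry) {
        this.defaultRoute = defaultRoute;
        this.registry = registry;
    }

    String routeFor(String targetInstanceId, boolean issued) {
        if (!issued) {
            // Non-issued: the default instance's route comes from the inner cache.
            return defaultRoute;
        }
        // Issued: ask the registration center for the new instance's route.
        return registry.lookupRoute(targetInstanceId);
    }
}
```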
S209: the target service instance sends the response message corresponding to the request message to the gateway.
The target service instance can process the request message according to the first API it has loaded and obtain the response message.
When the first API is in the issued state, the API loaded by the determined target service instance is the latest-version API, i.e. the first API loaded by the target service instance is the latest version, so the target service instance processes the request message with the latest version of the first API.
S210: the gateway sends the response message to the client.
In the request message processing method provided by this embodiment of the present invention, the pre-established distribution scheme includes multiple issuing steps, and each issuing step corresponds to one latest-version API, i.e. during service release only one latest-version API is released in each issuing step. Accordingly, after the gateway receives the request message sent by the client, it obtains the first API corresponding to the client and determines the target service instance according to the operating state of the first API, where the version of the first API loaded by the target service instance corresponds to the operating state of the first API. For example, when the first API is in the issued state, the version of the API loaded by the determined target service instance is the latest version; when the first API is in the non-issued state, it is not the latest version. In this way, the latest versions of different APIs are released to the corresponding clients in different issuing steps. In the above process, the service is released at the granularity of an API, which reduces the release granularity, so that if a failure occurs during the release the failing API can be located in time, thereby improving the efficiency of service release.
On the basis of the embodiment shown in Fig. 2, the following further describes it through the embodiment shown in Fig. 3.
Fig. 3 is schematic diagram two of the request message processing method provided in an embodiment of the present invention. Referring to Fig. 3, the method includes:
S301: the first server determines the first issuing step currently being executed in the distribution scheme and the second API released in the first issuing step.
S302: the first server determines the number of service instances according to the first issuing step.
Optionally, the first server can determine, according to the first issuing step, the number of default service instances and the number of new service instances that need to be deployed.
For example, if the load capacity of the API released in the first issuing step is limited, it can be determined that multiple new service instances are needed; if the load capacity of the API released in the first issuing step is strong, the number of new service instances can be determined as 1.
S303: the first server sends an instance deployment request to the container management server, where the instance deployment request includes the number of service instances.
S304: the container management server deploys service instances according to the number of service instances.
It should be noted that after S304 the service instances in the service cluster send their routing information to the registration center (not shown in the figure).
S305: the first server sends the identifier of the second API to the gateway.
S306: the gateway stores the identifier of the second API in the gateway's inner cache.
It should be noted that the execution of S306 can be found in S203 and is not repeated here.
S307: the client sends a request message to the gateway.
S308: the gateway determines the first API corresponding to the request message.
Optionally, the gateway may judge whether the request message includes an API identifier; if so, the gateway determines the API indicated by that identifier as the first API; if not, the gateway obtains the first correspondence and the type of the request message, and determines the first API according to the first correspondence and the type of the request message.
S309: the gateway determines the operating state of the first API according to the preset distribution scheme.
Optionally, the operating state of the first API can be determined according to the identifier of the first API and the identifier of the second API in the gateway's inner cache. When the identifier of the first API is identical to the identifier of the second API, the operating state of the first API is determined to be the issued state; otherwise, it is determined to be the non-issued state.
It should be noted that in S309 and S311 the gateway needs to read content from the inner cache, while in S306 the content sent by the first server needs to be written into the inner cache. To avoid read/write conflicts, a master cache and a slave cache can be set, and in different issuing steps one of the master cache and the slave cache is set as the valid cache. Accordingly, when data needs to be read it is read from the valid cache, and when data needs to be written it is written into the invalid cache.
Optionally, the inner cache includes a master cache and a slave cache, and the gateway can obtain the identifier of the API in the issued state from its inner cache in the following feasible way: the gateway obtains the processing states of the master cache and the slave cache, where the processing state includes a valid state and an invalid state; at any moment, one of the master cache and the slave cache is in the valid state and the other is in the invalid state; the gateway obtains the identifier of the API in the issued state from the cache that is in the valid state.
Further, before the gateway obtains the identifier of the API in the issued state from the cache in the valid state, the gateway also receives the identifier of the API in the issued state sent by the first server, stores it into the cache in the invalid state, and then swaps the processing states of the master cache and the slave cache.
For example, in the first issuing step, the identifier of the API in the issued state (assume it is API1) is written into both the master cache and the slave cache, so both caches store API1. The processing state of the master cache is set to the valid state, so that during the first issuing step the gateway reads data from the master cache, and the identifier of the API in the issued state that it reads is API1.
After the first issuing step finishes and the second issuing step is to be executed, assume that the API in the issued state in the second issuing step is API2. API2 is then written into the slave cache, so the master cache stores API1 and the slave cache stores API2. The processing state of the slave cache is set to the valid state, so that during the second issuing step the gateway reads data from the slave cache, and the identifier of the API in the issued state that it reads is API2.
The above process is repeated in the subsequent issuing steps.
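The master/slave arrangement is essentially a double buffer. A minimal sketch under assumed names is given below; a real cache would hold more than a single identifier.

```java
import java.util.concurrent.atomic.AtomicInteger;

final class DoubleBufferedApiCache {
    private final String[] buffers = new String[2];          // [0] master cache, [1] slave cache
    private final AtomicInteger valid = new AtomicInteger(0); // index of the cache in the valid state

    // Reads always come from the cache in the valid state.
    String issuedApiId() {
        return buffers[valid.get()];
    }

    // The first server's update is written into the invalid cache, then the states are swapped,
    // so readers never observe a half-written update and the gateway needs no restart.
    void update(String newIssuedApiId) {
        int invalid = 1 - valid.get();
        buffers[invalid] = newIssuedApiId;
        valid.set(invalid);
    }
}
```

For example, writing API2 into the slave cache during the second issuing step and then swapping the states makes subsequent reads return API2, matching the sequence described above.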
S310: the gateway determines the target service instance according to the operating state of the first API.
It should be noted that the execution of S310 can be found in S207 and is not repeated here.
S311: the gateway obtains the routing information of the target service instance.
It should be noted that the execution of S311 can be found in S208 and is not repeated here.
S312: the gateway sends the request message to the target service instance according to the routing information of the target service instance.
S313: the target service instance sends a response message to the gateway.
S314: the gateway adds the identifier of the first API to the response message.
S315: the gateway sends the response message containing the identifier of the first API to the client.
It should be noted that if the response message sent by the gateway to the client contains the identifier of the first API, the client carries the identifier of the first API in the request message the next time it sends a request message.
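One plausible way to carry the identifier is a response header that the client echoes on its next request; the header name and the header-map representation below are assumptions for this sketch, since the patent does not specify the carrying mechanism.

```java
import java.util.HashMap;
import java.util.Map;

final class ApiTagging {
    static final String API_HEADER = "X-Api-Id";   // hypothetical header name

    // Gateway side (S314/S315): add the first API's identifier before returning the response.
    static Map<String, String> tagResponse(Map<String, String> responseHeaders, String firstApiId) {
        Map<String, String> tagged = new HashMap<>(responseHeaders);
        tagged.put(API_HEADER, firstApiId);
        return tagged;
    }

    // Client side: echo the identifier on the next request so the gateway can skip the lookup.
    static Map<String, String> nextRequestHeaders(Map<String, String> lastResponseHeaders) {
        Map<String, String> headers = new HashMap<>();
        String apiId = lastResponseHeaders.get(API_HEADER);
        if (apiId != null) {
            headers.put(API_HEADER, apiId);
        }
        return headers;
    }
}
```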
Through the embodiment shown in Fig. 3, the latest versions of different APIs can be released to the corresponding clients in different issuing steps. Since only one API is released in each issuing step, if a failure occurs during the service release, the failing API can be located in time, thereby improving the efficiency of service release.
Fig. 4 is structure diagram one of a request message processing device provided in an embodiment of the present invention. Referring to Fig. 4, the device may include a receiving module 11, a first determining module 12, a second determining module 13, a third determining module 14 and a sending module 15, wherein,
the receiving module 11 is configured to receive a request message sent by a client;
the first determining module 12 is configured to determine a first application programming interface (API) corresponding to the request message;
the second determining module 13 is configured to determine the operating state of the first API according to the issuing step currently being executed in a preset distribution scheme, where the operating state includes an issued state and a non-issued state, and the distribution scheme includes multiple issuing steps and the identifier of the API in the issued state corresponding to each step;
the third determining module 14 is configured to determine a target service instance according to the operating state of the first API, where the version of the API loaded by the target service instance corresponds to the operating state of the first API;
the sending module 15 is configured to send the request message to the target service instance, so that the target service instance processes the request message with the first API it has loaded.
The request message processing device provided in this embodiment of the present invention can execute the technical solution shown in the above method embodiments; its implementation principle and beneficial effects are similar and are not repeated here.
In a possible embodiment, the first determining module 12 is specifically configured to:
judge whether the request message includes an API identifier;
if so, determine the API indicated by that identifier as the first API;
if not, obtain a first correspondence and the type of the request message, and determine the first API according to the first correspondence and the type of the request message, where the first correspondence includes the identifier of at least one API and, for each API identifier, the type of request message to which it corresponds.
In another possible embodiment, the second determining module 13 is specifically configured to:
obtain, from the gateway's inner cache, the identifier of the API in the issued state;
judge whether the identifier of the first API is identical to the API identifier in the inner cache;
if so, determine that the operating state of the first API is the issued state;
if not, determine that the operating state of the first API is the non-issued state.
In another possible embodiment, the inner cache includes a master cache and a slave cache; the second determining module 13 is specifically configured to:
obtain the processing states of the master cache and the slave cache, where the processing state includes a valid state and an invalid state; at any moment, one of the master cache and the slave cache is in the valid state and the other is in the invalid state;
obtain the identifier of the API in the issued state from the cache that is in the valid state.
Fig. 5 is structure diagram two of a request message processing device provided in an embodiment of the present invention. On the basis of the embodiment shown in Fig. 4, referring to Fig. 5, the device further includes a storage module 16 and a swap module 17, wherein,
the receiving module 11 is further configured to receive the identifier of the API in the issued state sent by a first server before the second determining module 13 obtains the identifier of the API in the issued state from the cache in the valid state;
the storage module 16 is configured to store the identifier of the API in the issued state into the cache that is in the invalid state;
the swap module 17 is configured to swap the processing states of the master cache and the slave cache.
In another possible embodiment, the third determining module 14 is specifically configured to:
judge whether the operating state of the first API is the issued state;
if so, determine the target service instance according to a second correspondence in an external cache and the identifier of the first API, where the second correspondence includes the identifiers of multiple APIs and the identifier of the target service instance corresponding to each API identifier;
if not, determine a default service instance as the target service instance, where the default service instance is a service instance that does not load the latest-version API.
In another possible embodiment, the sending module 15 is specifically configured to:
if the first API is in the non-issued state, obtain default path information from the gateway's inner cache and send the request message to the target service instance according to the default path information, where the default path information is the routing information of the default service instance, and the default service instance is a service instance that does not load the latest-version API;
if the first API is in the issued state, request the routing information of the target service instance from a registration center and send the request message to the target service instance according to that routing information.
In another possible embodiment, the device further includes an adding module 18, wherein,
the receiving module 11 is further configured to receive a response message sent by the target service instance;
the adding module 18 is configured to add the identifier of the first API to the response message;
the sending module 15 is further configured to send the response message containing the identifier of the first API to the client, so that the client carries the identifier of the first API the next time it sends a request message.
The request message processing device provided in this embodiment of the present invention can execute the technical solution shown in the above method embodiments; its implementation principle and beneficial effects are similar and are not repeated here.
Fig. 6 is structure diagram one of another request message processing device provided in an embodiment of the present invention. Referring to Fig. 6, the device may include a first determining module 21 and a sending module 22, wherein,
the first determining module 21 is configured to determine the first issuing step currently being executed in a distribution scheme and the application programming interface (API) released in the first issuing step, where the distribution scheme includes multiple issuing steps and the identifier of the API in the issued state corresponding to each step;
the sending module 22 is configured to send the identifier of the API to a gateway, so that the gateway stores the identifier of the API in the gateway's inner cache.
The request message processing device provided in this embodiment of the present invention can execute the technical solution shown in the above method embodiments; its implementation principle and beneficial effects are similar and are not repeated here.
Fig. 7 is structure diagram two of another request message processing device provided in an embodiment of the present invention. On the basis of the embodiment shown in Fig. 6, referring to Fig. 7, the device further includes a second determining module 23 and a storage module 24, wherein,
the second determining module 23 is configured to determine a second correspondence according to the distribution scheme, where the second correspondence includes the identifiers of multiple APIs and the identifier of the target service instance corresponding to each API identifier;
the storage module 24 is configured to store the second correspondence in an external cache.
The request message processing device provided in this embodiment of the present invention can execute the technical solution shown in the above method embodiments; its implementation principle and beneficial effects are similar and are not repeated here.
Persons of ordinary skill in the art can understand that all or part of the steps of the above method embodiments can be implemented by hardware related to program instructions. The foregoing program can be stored in a computer-readable storage medium; when the program is executed, the steps of the above method embodiments are performed. The foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk or an optical disc.
Finally, it should be noted that the above embodiments are only intended to illustrate, rather than limit, the technical solutions of the embodiments of the present invention. Although the embodiments of the present invention have been described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments, or make equivalent replacements of some or all of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the embodiments of the present invention.

Claims (12)

1. a kind of request message processing method, which is characterized in that including:
Gateway receives the request message that client is sent;
The gateway determines the corresponding first application programming interface API of the request message;
The gateway determines the operating status of the first API according to currently performed issuing steps in default distribution scheme, The operating status includes issued state and non-issued state, and the distribution scheme includes multiple issuing steps and its corresponding The mark of API in issued state;
The gateway determines the destination service example according to the operating status of the first API, wherein, the destination service The version of the API of example loading is corresponding with the operating status of the first API;
The gateway sends the request message to the destination service example, so that the destination service example is loaded according to it The first API handle the request message.
2. according to the method described in claim 1, it is characterized in that, the gateway determines the request message corresponding first API, including:
The gateway judges that whether including API in the request message identifies;
If so, the API is identified corresponding API by the gateway is determined as the first API;
If it is not, then the gateway obtains the type of the first correspondence and the request message, and correspond to and close according to described first The type of system and the request message determines the first API, and first correspondence includes the mark of at least one API The type of the corresponding request message of mark of knowledge and each API.
3. The method according to claim 1 or 2, characterized in that the gateway determining the operating status of the first API according to the currently executed issuing step in the preset distribution scheme comprises:
the gateway obtains, from the internal cache of the gateway, the identifiers of the APIs in the issued state;
the gateway judges whether the identifier of the first API is identical to an API identifier in the internal cache;
if so, the gateway determines that the operating status of the first API is the issued state;
if not, the gateway determines that the operating status of the first API is the non-issued state.
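The status check in claim 3 reduces to a set-membership test on the internal cache; a short sketch, with assumed state labels, follows.

```python
# Illustrative sketch of claim 3: the first API is "issued" only if its
# identifier appears in the gateway's internal cache. Names are assumptions.
ISSUED, NON_ISSUED = "issued", "non-issued"


def operating_status(api_id: str, internal_cache: set) -> str:
    return ISSUED if api_id in internal_cache else NON_ISSUED


internal_cache = {"order.create", "order.query"}
print(operating_status("order.create", internal_cache))  # issued
print(operating_status("order.cancel", internal_cache))  # non-issued
```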
4. The method according to claim 3, characterized in that the internal cache includes a master cache and a slave cache, and the gateway obtaining, from the internal cache of the gateway, the identifiers of the APIs in the issued state comprises:
the gateway obtains the processing states of the master cache and the slave cache, the processing state including a valid state and an invalid state, wherein at any moment one of the master cache and the slave cache is in the valid state and the other is in the invalid state;
the gateway obtains the identifiers of the APIs in the issued state from the cache in the valid state.
5. The method according to claim 4, characterized in that before the gateway obtains the identifiers of the APIs in the issued state from the cache in the valid state, the method further comprises:
the gateway receives the identifiers of the APIs in the issued state sent by a first server;
the gateway stores the identifiers of the APIs in the issued state in the cache in the invalid state;
the gateway swaps the processing states of the master cache and the slave cache.
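A double-buffered cache along the lines of claims 4 and 5 might look like the sketch below: new identifiers are written into whichever buffer is currently invalid, and the valid/invalid roles are then swapped, so readers never observe a half-written set. The class and method names are assumptions.

```python
# Illustrative master/slave (double-buffered) cache per claims 4-5.
import threading


class DoubleBufferedCache:
    def __init__(self) -> None:
        self._buffers = [set(), set()]
        self._valid = 0                  # index of the buffer in the valid state
        self._lock = threading.Lock()

    def read_issued_api_ids(self) -> frozenset:
        # Readers always see the buffer currently marked valid.
        return frozenset(self._buffers[self._valid])

    def update(self, issued_api_ids) -> None:
        with self._lock:
            invalid = 1 - self._valid
            self._buffers[invalid] = set(issued_api_ids)  # write the invalid buffer
            self._valid = invalid                          # swap valid/invalid roles


cache = DoubleBufferedCache()
cache.update(["order.create"])
print(cache.read_issued_api_ids())
cache.update(["order.create", "order.cancel"])
print(cache.read_issued_api_ids())
```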
6. The method according to claim 1 or 2, characterized in that the gateway determining the destination service instance according to the operating status of the first API comprises:
the gateway judges whether the operating status of the first API is the issued state;
if so, the gateway determines the destination service instance according to a second correspondence in an external cache and the identifier of the first API, wherein the second correspondence includes the identifiers of multiple APIs and the identifier of the destination service instance corresponding to each API identifier;
if not, the gateway determines a default service instance as the destination service instance, the default service instance being a service instance that has not loaded the latest version of the API.
7. The method according to claim 1 or 2, characterized in that the gateway sending the request message to the destination service instance comprises:
if the first API is in the non-issued state, the gateway obtains default routing information from the internal cache of the gateway and sends the request message to the destination service instance according to the default routing information, wherein the default routing information is the routing information of a default service instance, and the default service instance is a service instance that has not loaded the latest version of the API;
if the first API is in the issued state, the gateway requests the routing information of the destination service instance from a registration center and sends the request message to the destination service instance according to the routing information of the destination service instance.
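The route resolution in claim 7 could be sketched as below, with the registration center reduced to a stub lookup; the stub class, its method, and the example addresses are assumptions.

```python
# Illustrative sketch of claim 7: use cached default routing information for a
# non-issued API, or query a registration center (service registry) for the
# destination instance of an issued API. Names are assumptions.
class RegistrationCenterStub:
    def __init__(self, routes):
        self._routes = routes

    def lookup(self, instance_id: str) -> str:
        return self._routes[instance_id]


def resolve_route(issued: bool, instance_id: str, default_route: str,
                  registry: RegistrationCenterStub) -> str:
    if not issued:
        return default_route             # default routing info from internal cache
    return registry.lookup(instance_id)  # ask the registration center


registry = RegistrationCenterStub({"order-service-v2-0": "http://10.0.0.12:8080"})
print(resolve_route(False, "order-service-v1-9", "http://10.0.0.11:8080", registry))
print(resolve_route(True, "order-service-v2-0", "", registry))
```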
8. The method according to claim 1 or 2, characterized in that the method further comprises:
the gateway receives a response message sent by the destination service instance;
the gateway adds the identifier of the first API to the response message;
the gateway sends the response message including the identifier of the first API to the client, so that the client carries the identifier of the first API the next time it sends a request message.
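A brief sketch of the round trip in claim 8 follows; the header name "X-Api-Id" and the message structures are assumptions introduced only for the example.

```python
# Illustrative sketch of claim 8: attach the first API's identifier to the
# response so the client can echo it on its next request. Names are assumptions.
def annotate_response(response: dict, api_id: str) -> dict:
    response.setdefault("headers", {})["X-Api-Id"] = api_id  # assumed header name
    return response


def next_request(previous_response: dict, body: dict) -> dict:
    # Carrying the identifier lets the gateway skip the first-correspondence
    # lookup on subsequent calls from the same client.
    return {"api_id": previous_response["headers"]["X-Api-Id"], "body": body}


resp = annotate_response({"status": 200}, "order.create")
print(next_request(resp, {"sku": "A-100"}))
```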
9. A request message processing method, characterized by comprising:
a first server determines the currently executed first issuing step in a distribution scheme and the application programming interface (API) issued in the first issuing step, wherein the distribution scheme includes a plurality of issuing steps and, for each issuing step, the identifier of the corresponding API in the issued state;
the first server sends the identifier of the API to a gateway, so that the gateway stores the identifier of the API in the internal cache of the gateway.
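As a non-limiting sketch of the server side in claim 9, the first server could walk the distribution scheme step by step and push the issued API identifiers to the gateway roughly as follows; the push mechanism, the stub gateway, and all names are assumptions.

```python
# Illustrative sketch of claim 9: the first server executes issuing steps and
# notifies the gateway of the identifiers issued in each step.
class GatewayStub:
    def __init__(self):
        self.internal_cache = set()

    def receive_issued_api_ids(self, api_ids):
        self.internal_cache.update(api_ids)


def execute_issuing_step(step: dict, gateway: GatewayStub) -> None:
    # ... deploy the new API version to its destination instance here ...
    gateway.receive_issued_api_ids(step["issued_api_ids"])  # notify the gateway


gateway = GatewayStub()
scheme = [{"issued_api_ids": ["order.create"]},
          {"issued_api_ids": ["order.query", "order.cancel"]}]
for step in scheme:
    execute_issuing_step(step, gateway)
print(gateway.internal_cache)
```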
10. The method according to claim 9, characterized in that the method further comprises:
the first server determines a second correspondence according to the distribution scheme, wherein the second correspondence includes the identifiers of multiple APIs and the identifier of the destination service instance corresponding to each API identifier;
the first server stores the second correspondence in an external cache.
11. A request message processing device, characterized by comprising a receiving module, a first determining module, a second determining module, a third determining module, and a sending module, wherein:
the receiving module is configured to receive a request message sent by a client;
the first determining module is configured to determine a first application programming interface (API) corresponding to the request message;
the second determining module is configured to determine an operating status of the first API according to a currently executed issuing step in a preset distribution scheme, wherein the operating status includes an issued state and a non-issued state, and the distribution scheme includes a plurality of issuing steps and, for each issuing step, the identifier of the corresponding API in the issued state;
the third determining module is configured to determine a destination service instance according to the operating status of the first API, wherein the version of the API loaded by the destination service instance corresponds to the operating status of the first API;
the sending module is configured to send the request message to the destination service instance, so that the destination service instance processes the request message according to the first API it has loaded.
12. A request message processing device, characterized by comprising a first determining module and a sending module, wherein:
the first determining module is configured to determine the currently executed first issuing step in a distribution scheme and the application programming interface (API) issued in the first issuing step, wherein the distribution scheme includes a plurality of issuing steps and, for each issuing step, the identifier of the corresponding API in the issued state;
the sending module is configured to send the identifier of the API to a gateway, so that the gateway stores the identifier of the API in the internal cache of the gateway.
CN201711321264.7A 2017-12-12 2017-12-12 Request message processing method and device Active CN108055322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711321264.7A CN108055322B (en) 2017-12-12 2017-12-12 Request message processing method and device

Publications (2)

Publication Number Publication Date
CN108055322A true CN108055322A (en) 2018-05-18
CN108055322B CN108055322B (en) 2020-12-25

Family

ID=62131948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711321264.7A Active CN108055322B (en) 2017-12-12 2017-12-12 Request message processing method and device

Country Status (1)

Country Link
CN (1) CN108055322B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102833109A (en) * 2012-08-30 2012-12-19 华为技术有限公司 Positional information processing method and equipment of fault point
CN104216724A (en) * 2013-06-03 2014-12-17 阿里巴巴集团控股有限公司 Method and system for updating network application program interface
WO2016050034A1 (en) * 2014-09-30 2016-04-07 中兴通讯股份有限公司 Group addressing processing method, device, mtc intercommunicating gateway and api gw
CN105786531A (en) * 2014-12-19 2016-07-20 江苏融成嘉益信息科技有限公司 Cooperative work method for online software update and data encryption
CN106792923A (en) * 2017-02-09 2017-05-31 华为软件技术有限公司 A kind of method and device for configuring qos policy

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932121A (en) * 2018-05-22 2018-12-04 哈尔滨工业大学(威海) A kind of module and method towards multi-tenant Distributed Services component
WO2020083189A1 (en) * 2018-10-24 2020-04-30 北京金山云网络技术有限公司 Request processing method and device, api gateway, and readable storage medium
CN111090449A (en) * 2018-10-24 2020-05-01 北京金山云网络技术有限公司 API service access method and device and electronic equipment
CN109672558A (en) * 2018-11-30 2019-04-23 哈尔滨工业大学(威海) A kind of polymerization and Method of Optimal Matching towards third party's service resource, equipment and storage medium
CN109672558B (en) * 2018-11-30 2021-12-07 哈尔滨工业大学(威海) Aggregation and optimal matching method, equipment and storage medium for third-party service resources
CN113783914A (en) * 2020-09-01 2021-12-10 北京沃东天骏信息技术有限公司 Data processing method, device and equipment
CN112788099A (en) * 2020-11-11 2021-05-11 中移雄安信息通信科技有限公司 Method, device and equipment for loading back-end service and computer storage medium
CN112612508A (en) * 2020-12-24 2021-04-06 新华三云计算技术有限公司 API version control method and device in API gateway and storage medium

Also Published As

Publication number Publication date
CN108055322B (en) 2020-12-25

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201119

Address after: 266100 Shandong Province, Qingdao city Laoshan District Songling Road No. 399

Applicant after: Qingdao Haishi Information Technology Co., Ltd

Address before: 266061 Shandong Province, Qingdao city Laoshan District Songling Road No. 399

Applicant before: QINGDAO HISENSE INTELLIGENT COMMERCIAL SYSTEM Co.,Ltd.

GR01 Patent grant