CN114390109B - Service processing method, micro-service gateway and data center system - Google Patents


Publication number
CN114390109B (application CN202111515015.8A)
Authority
CN
China
Prior art keywords
data center, service, data, micro service gateway
Prior art date
Legal status (assumed, not a legal conclusion; Google has not performed a legal analysis)
Active
Application number
CN202111515015.8A
Other languages
Chinese (zh)
Other versions
CN114390109A
Inventor
秦有祥
丰朋
吴丰科
石力
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis)
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Priority date (assumed, not a legal conclusion; Google has not performed a legal analysis)
Filing date
Publication date
Application filed by China Unionpay Co Ltd
Priority claimed from CN202111515015.8A
Publication of CN114390109A
Application granted
Publication of CN114390109B
Legal status: Active
Anticipated expiration

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 — Protocols
    • H04L 67/10 — Protocols in which an application is distributed across nodes in the network

Abstract

A service processing method, a micro-service gateway and a data center system. The method comprises: setting a micro-service gateway in each data center and connecting it to the service nodes in every data center; when a service needs to be processed, obtaining, through the micro-service gateway, flow information containing a user center identifier; determining the target data center corresponding to the user center identifier according to a preset distribution rule; and then sending the flow information directly to a service node of the target data center. The micro-service gateway thus interacts directly with service nodes in other data centers without waiting for data synchronization between the two data centers, which saves the delay such synchronization would cause and improves the efficiency of service processing.

Description

Service processing method, micro-service gateway and data center system
Technical Field
The present application relates to the field of computer technologies, and in particular to a service processing method, a micro-service gateway, and a data center system.
Background
The establishment of multiple Cloud QuickPass (UnionPay) data centers makes remote payment possible. Remote payment refers to the exchange of goods or services among consumers, merchants, third-party payment service providers and financial institutions, using the Internet as the carrier and a secure online payment tool. Remote payment not only lets consumers purchase goods in any region, but also greatly simplifies settlement between merchants and financial institutions.
In the prior art, when a remote payment is executed, data synchronization is performed between the data center accessed by the merchant and the data center accessed by the consumer so as to jointly complete the processing of one transaction. However, these two data centers may be geographically far apart, in which case data synchronization incurs a large delay, so the processing of the remote-payment transaction falls out of step across the data centers. This hinders the normal progress of the transaction and increases the probability of transaction failure.
A service processing method is therefore needed to solve the technical problem of transaction failures caused by large data synchronization delays between different data centers in the prior art.
Disclosure of Invention
In a first aspect, the present application provides a service processing method adapted to a data center system, where the data center system includes at least two data centers, each of which includes a micro-service gateway and a service node, and the micro-service gateway is connected to the service nodes in the at least two data centers. The micro-service gateway obtains flow information, where the flow information includes a user center identifier; the micro-service gateway determines a target data center corresponding to the user center identifier according to a preset distribution rule, where the preset distribution rule includes a correspondence between at least one user center identifier and at least one data center; and the micro-service gateway sends the flow information to the service node in the target data center.
In this design, the micro-service gateway interacts directly with service nodes in other data centers: the flow information can be sent straight to the service node in the target data center through the micro-service gateway, without waiting for data synchronization between the data layers of the two data centers. This avoids the technical problem of transaction failure caused by data synchronization delay between the two data centers and improves the transaction success rate.
In one possible implementation, the flow information is for the service node in the target data center to send to a target application (APP), and the preset distribution rule comes from a distribution service whose rule is consistent with the distribution rule of the content delivery network (CDN) node; the CDN node's distribution rule is obtained after the CDN node has distributed at least one user to the at least two data centers in advance, according to a request initiated by that user on the target APP.
In this way, the preset distribution rule is delivered to the micro-service gateway through the CDN, so the distribution rule used on the micro-service gateway side stays consistent with the one used on the CDN side. Service requests from service requesters and requests initiated by users on the target APP can therefore be distributed reasonably, which improves the accuracy of data processing, the utilization of the data centers, and ultimately the service processing efficiency of the data center system.
In one possible implementation, the flow information may further include a generation time. After obtaining the flow information, the micro-service gateway may further determine whether the time difference between the generation time in the flow information and the current time is smaller than a preset time delay. If so, it determines the target data center corresponding to the user center identifier according to the preset distribution rule; if not, it sends the flow information to the service node in its own data center, which then synchronizes the flow information to the target data center by means of data synchronization.
In this way, the micro-service gateway calls service nodes in other data centers according to the distribution rule only when the interval between the moment the data center starts processing the current service and the moment the flow ID was generated is small; otherwise it calls the service node of its own data center directly. Cross-center interaction thus happens only when necessary, which reduces its cost and further improves the service processing efficiency of the data center system.
In a second aspect, the present application provides a micro-service gateway adapted for use in a data center system, where the data center system includes at least two data centers, each of which includes a micro-service gateway and a service node, and the micro-service gateway is connected to the service nodes in the at least two data centers. The micro-service gateway in any data center comprises: an acquisition unit for obtaining flow information, where the flow information includes a user center identifier; a determining unit for determining a target data center corresponding to the user center identifier according to a preset distribution rule, where the preset distribution rule includes a correspondence between at least one user center identifier and at least one data center; and a sending unit for sending the flow information to the service node in the target data center.
In one possible implementation, the flow information is for the service node in the target data center to send to the target application (APP). In this case the preset distribution rule comes from a distribution service whose rule is consistent with the distribution rule of the CDN node; the CDN node's distribution rule is obtained after the CDN node has distributed at least one user to the at least two data centers in advance, according to a request initiated by that user on the target APP.
In one possible implementation, the flow information further includes a generation time. In this case, before determining the target data center corresponding to the user center identifier according to the preset distribution rule, the determining unit is further configured to determine that the time difference between the generation time in the flow information and the current time is smaller than the preset time delay.
In a possible implementation, the determining unit is further configured to: if the time difference between the generation time in the flow information and the current time is not smaller than the preset time delay, send the flow information to the service node of the data center where the micro-service gateway is located; that service node then synchronizes the flow information to the target data center by means of data synchronization.
In a third aspect, the present application provides a data center system comprising at least two data centers, each of which includes a micro-service gateway and a service node, the micro-service gateway being connected to the service nodes in the at least two data centers and being adapted to perform the method according to any design of the first aspect above.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program which, when executed, performs the method of any one of the designs described above in the first aspect.
In a fifth aspect, the present application provides a computing device comprising: a memory for storing program instructions; a processor for invoking program instructions stored in the memory and executing the method according to any of the above designs according to the obtained program.
In a sixth aspect, the present application provides a computer program product for implementing a method as designed in any one of the first aspects above, when the computer program product is run on a processor.
The advantages of the second to sixth aspects may be specifically referred to the advantages achieved by any of the designs of the first aspect, and will not be described in detail herein.
Drawings
FIG. 1 schematically illustrates one possible system architecture provided by embodiments of the present application;
fig. 2 schematically illustrates a flow chart of a service processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the structure of flow information according to an embodiment of the present application;
fig. 4 is a schematic flow chart illustrating another service processing method according to an embodiment of the present application;
fig. 5 schematically illustrates a structural diagram of a micro service gateway according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
Fig. 1 schematically illustrates a possible system architecture provided by the embodiments of the present application. As shown in fig. 1, the system corresponds to a data center system that may include at least two data centers, such as the illustrated first data center and second data center, which may be deployed at different geographic locations; data synchronization and data sharing between the two data centers can be implemented through their data layers.
Illustratively, the internal architecture of each data center is described by taking the first data center as an example:
as shown in fig. 1, the first data center includes: the application background, data layer and service node may also include, for example, a service assembly layer and an access gateway. The application background is used for connecting external applications, and the access gateway is used for connecting the CDN and then connecting the APP through the CDN. In an implementation, the external application may send the service to be processed to the application background, which sends the received service to be processed to the service node connected thereto. The service node is located in the service layer, and the service node can process the service to be processed sent by the application background, send the data generated in the service processing process to the data layer, store the data by the data layer, and synchronize the data to the data layer in other connected data centers. Further, if the other data center determines that the APP needs to query the service request through the own service node, the other data center processes the synchronous data received by the data layer to generate a service processing message. In addition, the service node in another data center can also transmit the service processing message to the APP served by the data center system through the service assembly layer and the access gateway.
Illustratively, the traffic generated by communication between the first and second data centers is east-west traffic, and the traffic generated by communication between the APP and the data center system is north-south traffic.
The overall flow of service processing using the data center system is described below, taking payment verification as an example. In this embodiment the data center system is a Cloud QuickPass data center system, the application backend may be a JD application backend, the APP may be the Cloud QuickPass APP, and the service node may be a verification service layer. In practice, verification proceeds in several steps:
step one: firstly, carrying out the previous preparation of verification transaction, submitting an order by a user on a Beijing dong payment interface, entering a Beijing dong cash register, triggering a payment option, and selecting white strip payment, weChat payment and cloud flash payment in the page. Assuming that the user selects cloud pay, the jingdong cash desk is connected to the jingdong application background of the data center system, and the service system of the jingdong cash desk establishes a geographic position close to the geographic position of the first data center, so that the jingdong cash desk is firstly connected to the jingdong application background in the first data center.
It should be noted that which data center's JD application backend the JD checkout connects to may be chosen by geographic location, or may be set according to an agreement between the Cloud QuickPass operator and JD; the embodiments of the present application do not specifically limit this choice.
Step two: the JD application backend transmits a verification request carrying verification information to the service node, i.e. the verification service layer. Illustratively, the verification information may include the verification means and the verification sequence, where the verification means may be: SMS, payment password, fingerprint, face, graphic verification code, slider, login password, bank card information, and so on. One of these may be selected for single-factor verification, or any several of them for multi-factor verification, in which case a verification sequence is also determined. In this embodiment, SMS and face are selected for multi-factor verification, with face verification performed first and SMS verification second.
Step three: the verification service layer of the first data center processes the verification request requiring SMS verification and face verification into a corresponding message and transmits it to the data layer of the first data center.
Step four: the data layer of the first data center synchronizes the message to the data layer of the second data center, which transmits it to the verification service layer of the second data center.
Step five: the user initiates the verification process through the Cloud QuickPass APP. When the user registered an ID with the Cloud QuickPass APP, that ID was assigned to a unique corresponding data center; since in this example the user center identifier corresponding to the user ID is the second data center, the verification process initiated by the user calls the second data center for processing.
Step six: the Cloud QuickPass APP queries the verification means and verification sequence in the verification service layer of the second data center, namely face verification first, then SMS verification.
Step seven: the Cloud QuickPass APP transmits the user's face information and SMS verification code to the verification service layer of the second data center. Verification succeeds there, and the successful verification result is transmitted to the data layer of the first data center and on to the verification service layer of the first data center.
Step eight: the JD checkout queries the successful verification result in the verification service layer of the first data center through the JD application backend, and the verification process ends.
In steps one to eight, because the data center called by the JD checkout differs from the data center called by the Cloud QuickPass APP, data between the two data centers must be synchronized so that one verification transaction can proceed normally. However, if the two data centers are geographically far apart, data synchronization may be delayed, causing problems in transaction processing. For example, in step six, when the Cloud QuickPass APP queries the verification means and sequence in the verification service layer of the second data center, a very large synchronization delay means the first data center cannot, for a long time, synchronize the verification request requiring SMS and face verification to the verification service layer of the second data center.
In addition, in other scenarios the data centers called by the merchant and the user may also end up different. Continuing the example of steps one to eight: suppose the JD checkout and the Cloud QuickPass APP originally both call the same data center, the first data center, but the data center called by the Cloud QuickPass APP is manually switched to the second data center, or part of its traffic is diverted to the second data center, while the JD checkout still calls the first data center. In this case the two called data centers again differ, and data between the two data centers needs to be synchronized.
Based on the above, the present application provides a service processing method that can perform cross-center calls in the two scenarios above and also solves the problems caused by synchronization delay.
Specifically, with continued reference to fig. 1, in this method a micro-service gateway is set up in each data center in advance, and the micro-service gateway in each data center is also connected to the service nodes in the other data centers (to simplify the figure, fig. 1 only shows the micro-service gateway connected to the service node of its own data center; its connections to service nodes in other data centers are not drawn). When a cross-data-center call must be executed, the micro-service gateway sends the corresponding information directly to the service node in the data center to be invoked, so no data synchronization through the data layers of the two data centers is required. This avoids the excessive delay of executing data synchronization between two data centers and improves service processing efficiency.
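The gateway's direct cross-center delivery can be sketched minimally as follows (illustrative only; the class, method and field names are hypothetical, and falling back to the local center for an unknown identifier is an assumption the patent does not state):

```python
from typing import Callable, Dict

FlowInfo = Dict[str, str]  # e.g. {"user_center_id": "...", "payload": "..."}

class MicroServiceGateway:
    """Routes each piece of flow information straight to the service node
    of the data center named by the preset distribution rule."""

    def __init__(self, local_center: str,
                 service_nodes: Dict[str, Callable[[FlowInfo], None]],
                 distribution_rule: Dict[str, str]):
        self.local_center = local_center
        self.service_nodes = service_nodes          # center name -> service-node endpoint
        self.distribution_rule = distribution_rule  # user center id -> center name

    def handle(self, flow: FlowInfo) -> str:
        # Look up the target center; fall back to the local center when the
        # rule has no entry (an assumption for illustration).
        target = self.distribution_rule.get(flow["user_center_id"], self.local_center)
        self.service_nodes[target](flow)  # direct call, no data-layer sync
        return target
```

The key point of the design is the last two lines of `handle`: the gateway holds an endpoint for every center's service node, so a remote center is invoked with one direct call instead of a data-layer round trip.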
The service processing method provided in the embodiment of the present application is further described below based on the system architecture illustrated in fig. 1.
Fig. 2 schematically illustrates a flow chart of a service processing method provided in the present application, and as shown in fig. 2, the service processing method includes:
in step 201, the micro service gateway obtains the flow information, where the flow information may include a user center identifier.
In step 201, after a service requester accesses the application backend of a data center and has a service to process, it sends a service request containing the user center identifier to the application backend. On receiving the service request, the application backend generates flow information from the user center identifier in the request and assigns the flow information a unique flow number (identity document, ID); all subsequent service flows related to this service request are performed under that flow ID.
Fig. 3 illustrates the structure of a piece of flow information provided in an embodiment of the present application. As shown in fig. 3, in this example the flow information may include the user center identifier, and may also include an identifier of the application backend that sent the flow information, the generation time of the flow information, a machine code, and a random number. The machine code is the program language into which a computer directly converts the flow information …; the random number is generated randomly when the flow ID is allocated and represents the uniqueness of the flow ID. The items of the flow information may be arranged in the order illustrated in fig. 3 or in other orders; this is not specifically limited.
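The flow-information layout described above can be sketched as follows (an illustrative model; the field and function names are hypothetical, and composing the flow ID by concatenating the fields is an assumption based on the fig. 3 layout, not a rule stated in the patent):

```python
import random
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowInfo:
    """One flow record; field order follows the layout sketched in fig. 3."""
    user_center_id: str   # identifies the data center assigned to the user
    app_backend_id: str   # identifies the application backend that created it
    generated_at_ms: int  # generation time, epoch milliseconds
    machine_code: str     # code of the machine that created the record
    nonce: int            # random number that makes the flow ID unique

    @property
    def flow_id(self) -> str:
        # Concatenate the fields in the illustrated order to form the flow ID.
        return (f"{self.user_center_id}-{self.app_backend_id}-"
                f"{self.generated_at_ms}-{self.machine_code}-{self.nonce:06d}")

def new_flow_info(user_center_id: str, app_backend_id: str,
                  machine_code: str) -> FlowInfo:
    # The random number is drawn when the flow ID is allocated.
    return FlowInfo(user_center_id, app_backend_id,
                    int(time.time() * 1000), machine_code,
                    random.randrange(1_000_000))
```

All subsequent service flows for the request would then carry this `flow_id`, matching the description in step 201.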
Step 202, the micro service gateway determines a target data center corresponding to the user center identifier according to a preset distribution rule.
In step 202, the preset distribution rule may include the correspondence between user center identifiers and data centers. The rule comes from a distribution service whose rule is consistent with the distribution rule of the content delivery network (CDN) node, so after receiving any piece of flow information, the micro-service gateway can extract the user center identifier from it and query the preset distribution rule to determine the corresponding target data center.
Illustratively, the preset distribution rule may be obtained by the CDN node as follows: when a user registers on the APP, the APP allocates the user a unique user ID (which serves as the user's user center identifier) and sends it to the CDN node; the CDN node matches that user ID to a unique data center among all the data centers, then establishes the preset distribution rule from the data centers allocated to the user IDs of all users, and sends the rule to the micro-service gateway in each data center for storage and use.
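This allocation can be sketched with a deterministic hash (an illustrative scheme; the patent does not specify how the CDN node matches a user ID to a data center, so the hashing choice and function name here are assumptions):

```python
import hashlib
from typing import Dict, Sequence

def build_distribution_rule(user_ids: Sequence[str],
                            data_centers: Sequence[str]) -> Dict[str, str]:
    """Match each user ID to exactly one data center and return the
    user-center-id -> data-center mapping shared with every gateway."""
    rule: Dict[str, str] = {}
    for uid in user_ids:
        # A stable digest keeps the choice identical on every node that
        # recomputes it, unlike Python's per-process-randomized hash().
        digest = int(hashlib.sha256(uid.encode()).hexdigest(), 16)
        rule[uid] = data_centers[digest % len(data_centers)]
    return rule
```

Because the digest is stable, recomputing the rule anywhere yields the same mapping, which is what keeps the gateway side and the CDN side consistent.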
In step 203, the micro-service gateway sends the flow information to the service node in the target data center.
In step 203, assuming the micro-service gateway determines from the distribution rule that the data center corresponding to a service request is the second data center, the micro-service gateway distributes all service requests corresponding to that flow ID to the second data center for processing.
Further, as shown in fig. 1, assume the service request is a verification request and the flow information received by the micro-service gateway in the first data center includes the verification means and verification sequence, namely face recognition first and SMS verification second. Then:
the micro-service gateway in the first data center can directly send the received flow information to the verification service layer of the second data center, and the verification service layer processes the verification mode and the verification sequence into corresponding messages for the cloud flash APP to query. After the cloud flash payment APP checks the verification mode and the verification sequence in the verification service layer of the second data center, the face verification is performed firstly, then the short message verification is performed, and then the cloud flash payment APP transmits the face information and the short message verification code of the user to the verification service layer of the second data center, and the verification of the verification service layer is successful. Further, the Beijing east application background queries the successful verification result in the verification service layer of the second data center through the micro service gateway of the second data center, and thus the verification service request is completed.
The above embodiment provides a service processing method that sets a micro-service gateway in each data center, connects it to the service nodes of every data center, obtains flow information containing a user center identifier through the micro-service gateway when a service needs processing, determines the target data center corresponding to that identifier according to a preset distribution rule, and then sends the flow information directly to the service node of the target data center. Because the micro-service gateway interacts directly with service nodes in other data centers, there is no need to wait for data synchronization between the two data centers, which avoids transaction failures caused by synchronization delay and raises the probability of transaction success.
In the embodiments of the present application, the micro-service gateway can also use the generation time of the flow information to decide whether to call another data center for service processing, avoiding cross-center calls where possible. This implementation is described in detail below.
Fig. 4 schematically illustrates a flow chart of another service processing method provided in the present application, as shown in fig. 4, where the method includes:
in step 401, the micro service gateway obtains the running water information, where the running water information includes the generation time of the running water information.
Step 402: the micro-service gateway determines whether the time difference between the generation time of the flow information and the current time is smaller than the preset time delay; if so, step 403 is executed, otherwise step 404 is executed.
Step 403: the micro-service gateway determines the target data center corresponding to the user center identifier according to the preset distribution rule and sends the flow information to the service node in the target data center.
Step 404: the micro-service gateway sends the flow information to the service node of the data center where it is located.
The embodiment of steps 201 to 203 above is again used as the example:
after the Beijing east cash register is accessed to the Beijing east application background of the first data center, the Beijing east cash register sends a service request to the Beijing east application background of the first data center, and the Beijing east application background generates running water information according to the service request and the time of receiving the service request and sends the running water information to a micro-service gateway in the first data center. After receiving the streaming information, the micro-service gateway in the first data center acquires the current time, determines the time difference between the current time and the generation time in the streaming information, and compares the time difference with a preset time delay. Wherein the preset time delay may be set between 10 and 30ms, preferably the preset time delay may be set to 20ms. In this way, the current time acquired by the micro service gateway is actually used for indicating the time when the current service starts to be processed in the first data center, so when the time difference between the acquired current time and the generation time in the streaming information is less than 20ms, the interval between the generation time of the streaming information and the time when the current service starts to be processed in the first data center is small, and in such a small interval, data synchronization is difficult to complete between the data layers between the first data center and the second data center, in which case the micro service gateway in the first data center can send the streaming information to the verification service layer of the second data center, so that other data centers can be called through the micro service gateway in time when the data synchronization of the data center takes longer, so that the service can be completed in time by processing across the center. 
Conversely, when the time difference is not less than 20 ms, the interval between the generation of the flow information and the start of processing in the first data center is long enough for data synchronization between the data layers of the first data center and the second data center to complete. In this case, the micro-service gateway in the first data center can send the flow information directly to the verification service layer in the first data center; the flow information is then propagated to the verification service layer in the second data center through the data layers of the first and second data centers in turn. Service processing is thus completed by data synchronization, which reduces the number of cross-data-center calls made through the micro-service gateway and saves unnecessary communication overhead.
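The routing decision described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name and the millisecond clock are assumptions, and the 20 ms threshold follows the example value in the text (the patent suggests 10 to 30 ms).

```python
import time
from typing import Optional

PRESET_DELAY_MS = 20  # preset time delay; example value from the text

def route_flow_info(generation_time_ms: int, now_ms: Optional[int] = None) -> str:
    """Return which data center's verification service layer receives the flow information."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)  # the gateway's current time
    time_diff = now_ms - generation_time_ms
    if time_diff < PRESET_DELAY_MS:
        # Too little time has passed for data-layer synchronization to have
        # completed: call the second data center through the gateway directly.
        return "second data center"
    # Enough time has passed; send locally and rely on data synchronization.
    return "first data center"
```

Injecting `now_ms` makes the branch deterministic in tests; in production the clock defaults to the gateway's own current time.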
By this method, the micro-service gateway calls service nodes in other data centers according to the distribution rule only when the interval between the current processing time in its own data center and the generation time of the flow information is small; otherwise it calls the service nodes of its own data center directly. Cross-center interaction therefore occurs only when necessary and is avoided when it is not, reducing the cost of cross-center interaction and further improving the service processing efficiency of the data center system.
Based on the same technical concept, an embodiment of the present application further provides a micro-service gateway, which can execute the flow of the service processing method provided in the foregoing embodiments.
Fig. 5 schematically illustrates the structure of a micro-service gateway provided in an embodiment of the present application. The micro-service gateway is applied to a data center system that includes at least two data centers, each of which includes a micro-service gateway and a service node; the micro-service gateway is connected to the service nodes in the at least two data centers. The micro-service gateway in any data center includes: an obtaining unit 501, configured to obtain flow information, where the flow information includes a user center identifier; a determining unit 502, configured to determine, according to a preset distribution rule, a target data center corresponding to the user center identifier, where the preset distribution rule includes a correspondence between at least one user center identifier and at least one data center; and a sending unit 503, configured to send the flow information to a service node in the target data center.
In one possible implementation, the flow information is used by the service node in the target data center to send to a target application (APP). The preset distribution rule comes from a distribution service, and the distribution rule of the distribution service is consistent with the distribution rule of the content delivery network (CDN) nodes. The distribution rule of the CDN node is obtained after the CDN node distributes at least one user to the at least two data centers in advance according to requests initiated by the at least one user on the target APP.
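The preset distribution rule can be pictured as a lookup table from user center identifiers to data centers, kept consistent with the rule the CDN nodes applied when users were first distributed. A hypothetical sketch, with all identifiers made up for illustration:

```python
# Distribution rule mirroring the CDN nodes' user-to-data-center assignment.
# The identifiers and data center names below are illustrative only.
DISTRIBUTION_RULE = {
    "user-center-01": "data-center-A",
    "user-center-02": "data-center-B",
}

def target_data_center(user_center_id: str) -> str:
    """Resolve the target data center for the user center identifier carried in the flow information."""
    return DISTRIBUTION_RULE[user_center_id]
```

Because the gateway and the CDN share the same rule, a user's requests and the corresponding flow information land in the same data center.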
In one possible implementation, the flow information further includes a generation time; the determining unit 502 is further configured to determine that the time difference between the generation time in the flow information and the current time is less than a preset time delay.
In one possible implementation, the determining unit 502 is further configured to: if the time difference between the generation time in the flow information and the current time is not less than the preset time delay, send the flow information to the service node of the data center where the micro-service gateway is located, where that service node is configured to synchronize the flow information to the target data center by means of data synchronization.
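The behavior of the units in Fig. 5 can be tied together in one sketch. All class and field names below are assumptions for illustration, and the millisecond clock is injected so the two branches can be exercised deterministically:

```python
from dataclasses import dataclass
from typing import Callable, Dict

PRESET_DELAY_MS = 20  # preset time delay, per the example in the text

@dataclass
class FlowInfo:
    user_center_id: str      # user center identifier
    generation_time_ms: int  # generation time carried in the flow information

class MicroServiceGateway:
    def __init__(self, local_dc: str, rule: Dict[str, str], now_ms: Callable[[], int]):
        self.local_dc = local_dc  # data center where this gateway is located
        self.rule = rule          # preset distribution rule
        self.now_ms = now_ms      # clock function returning the current time in ms

    def handle(self, info: FlowInfo) -> str:
        """Return the data center whose service node receives the flow information."""
        if self.now_ms() - info.generation_time_ms < PRESET_DELAY_MS:
            # Data synchronization cannot have finished: route to the target
            # data center directly through the gateway.
            return self.rule[info.user_center_id]
        # Otherwise send to the local service node; data synchronization will
        # propagate the flow information to the target data center.
        return self.local_dc
```

For example, with a clock fixed at 100 ms, flow information generated at 95 ms (difference 5 ms) is routed to the target data center from the rule, while flow information generated at 50 ms (difference 50 ms) stays with the local service node.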
Based on the same technical concept, an embodiment of the invention further provides a data center system, which includes at least two data centers, each of which includes a micro-service gateway and a service node; the micro-service gateway is connected to the service nodes in the at least two data centers and is configured to execute the method shown in any of the embodiments of Fig. 2 or Fig. 4.
Based on the same technical concept, an embodiment of the invention further provides a computing device, which includes: a memory for storing program instructions;
and a processor for calling the program instructions stored in the memory and executing, according to the obtained program, the method shown in any of the embodiments of Fig. 2 or Fig. 4.
Based on the same technical concept, an embodiment of the invention further provides a computer-readable storage medium storing a computer program which, when run on a processor, implements the method shown in any of the embodiments of Fig. 2 or Fig. 4.
Based on the same technical concept, an embodiment of the present invention further provides a computer program product which, when run on a processor, implements the method shown in any of the embodiments of Fig. 2 or Fig. 4.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (7)

1. A service processing method, which is characterized by being applied to a data center system, wherein the data center system comprises at least two data centers, each of the at least two data centers comprises a micro service gateway and a service node, and the micro service gateway is connected with the service node in the at least two data centers;
the micro-service gateway obtains flow information, wherein the flow information comprises a user center identifier;
the flow information further comprises a generation time; the micro-service gateway determines that the time difference between the generation time in the flow information and the current time is less than a preset time delay, and determines a target data center corresponding to the user center identifier according to a preset distribution rule, wherein the preset distribution rule comprises a correspondence between at least one user center identifier and at least one data center; the preset distribution rule comes from a distribution service, and the distribution rule of the distribution service is consistent with the distribution rule of a content delivery network (CDN) node;
the micro service gateway sends the flow information to a service node in the target data center;
the flow information is used for a service node in the target data center to send to a target application program APP;
the distribution rule of the CDN node is obtained after the CDN node distributes the at least one user to the at least two data centers in advance according to a request initiated by the at least one user on the target APP.
2. The method of claim 1, wherein the method further comprises:
if the time difference between the generation time in the flow information and the current time is not less than the preset time delay, sending the flow information to a service node in the data center where the micro-service gateway is located, wherein the service node in the data center where the micro-service gateway is located is configured to synchronize the flow information to the target data center by means of data synchronization.
3. A micro service gateway, adapted to a data center system, wherein the data center system comprises at least two data centers, each of the at least two data centers comprises a micro service gateway and a service node, and the micro service gateway is connected with the service node in the at least two data centers; wherein the micro service gateway in any data center comprises:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring flow information, and the flow information comprises a user center identifier;
a determining unit, configured to determine that the time difference between the generation time in the flow information and the current time is less than a preset time delay, and to determine a target data center corresponding to the user center identifier according to a preset distribution rule, wherein the preset distribution rule comprises a correspondence between at least one user center identifier and at least one data center; the preset distribution rule comes from a distribution service, and the distribution rule of the distribution service is consistent with the distribution rule of a content delivery network (CDN) node;
a sending unit, configured to send the flow information to a service node in the target data center;
the flow information is used for a service node in the target data center to send to a target application program APP;
the distribution rule of the CDN node is obtained after the CDN node distributes the at least one user to the at least two data centers in advance according to a request initiated by the at least one user on the target APP.
4. The micro service gateway of claim 3, wherein the determining unit is further configured to:
if the time difference between the generation time in the flow information and the current time is not less than the preset time delay, send the flow information to a service node in the data center where the micro-service gateway is located, wherein the service node in the data center where the micro-service gateway is located is configured to synchronize the flow information to the target data center by means of data synchronization.
5. A data center system comprising at least two data centers, each of the at least two data centers comprising a micro service gateway and a service node, and the micro service gateway connecting the service nodes in the at least two data centers, the micro service gateway being adapted to perform the method of any of the preceding claims 1 to 2.
6. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when run, performs the method according to any one of claims 1 to 2.
7. A computing device, comprising:
a memory for storing program instructions;
a processor for invoking program instructions stored in said memory and for performing the method according to any of claims 1-2 in accordance with the obtained program.
CN202111515015.8A 2021-12-13 2021-12-13 Service processing method, micro-service gateway and data center system Active CN114390109B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111515015.8A CN114390109B (en) 2021-12-13 2021-12-13 Service processing method, micro-service gateway and data center system

Publications (2)

Publication Number Publication Date
CN114390109A CN114390109A (en) 2022-04-22
CN114390109B true CN114390109B (en) 2024-02-20

Family

ID=81195529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111515015.8A Active CN114390109B (en) 2021-12-13 2021-12-13 Service processing method, micro-service gateway and data center system

Country Status (1)

Country Link
CN (1) CN114390109B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102437964A (en) * 2010-11-17 2012-05-02 华为技术有限公司 Method and device for issuing business as well as communication system
EP3041254A1 (en) * 2014-12-30 2016-07-06 Telefonica Digital España, S.L.U. Method for providing information on network status from telecommunication networks
CN109961204A (en) * 2017-12-26 2019-07-02 中国移动通信集团浙江有限公司 Quality of service analysis method and system under a kind of micro services framework
CN109995713A (en) * 2017-12-30 2019-07-09 华为技术有限公司 Service processing method and relevant device in a kind of micro services frame
CN110913025A (en) * 2019-12-31 2020-03-24 中国银联股份有限公司 Service calling method, device, equipment and medium
CN112671882A (en) * 2020-12-18 2021-04-16 上海安畅网络科技股份有限公司 Same-city double-activity system and method based on micro-service
CN113285888A (en) * 2021-04-30 2021-08-20 中国银联股份有限公司 Multi-service system multi-data center shunting method, device, equipment and medium
WO2021179493A1 (en) * 2020-03-09 2021-09-16 平安科技(深圳)有限公司 Microservice-based load balancing method, apparatus and device, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7889649B2 (en) * 2006-12-28 2011-02-15 Ebay Inc. Method and system for gateway communication
US10440043B2 (en) * 2016-02-26 2019-10-08 Cable Television Laboratories, Inc. System and method for dynamic security protections of network connected devices
US10771582B2 (en) * 2018-03-04 2020-09-08 Netskrt Systems, Inc. System and apparatus for intelligently caching data based on predictable schedules of mobile transportation environments
US10805213B2 (en) * 2018-11-19 2020-10-13 International Business Machines Corporation Controlling data communication between microservices

Also Published As

Publication number Publication date
CN114390109A (en) 2022-04-22

Similar Documents

Publication Publication Date Title
WO2020258848A1 (en) Method and apparatus for cross-chain transmission of resources
CN108470298B (en) Method, device and system for transferring resource numerical value
CN111091429A (en) Electronic bill identification distribution method and device and electronic bill generation system
WO2020173287A1 (en) Systems and methods for determining network shards in blockchain network
WO2016177285A1 (en) Data pushing method and device
CN111614709B (en) Partition transaction method and system based on block chain
CN111047321A (en) Service processing method and device, electronic equipment and storage medium
CN111698315B (en) Data processing method and device for block and computer equipment
CN115859343A (en) Transaction data processing method and device and readable storage medium
CN112866421B (en) Intelligent contract operation method and device based on distributed cache and NSQ
CN111460504A (en) Service processing method, device, node equipment and storage medium
CN109819023B (en) Distributed transaction processing method and related product
CN105096122A (en) Fragmented transaction matching method and fragmented transaction matching device
CN110351362A (en) Data verification method, equipment and computer readable storage medium
CN114390109B (en) Service processing method, micro-service gateway and data center system
CN113194143A (en) Block chain account creating method and device and electronic equipment
CN112527901A (en) Data storage system, method, computing device and computer storage medium
US20230259930A1 (en) Cross-chain transaction processing method and apparatus, electronic device, and storage medium
CN113222575A (en) Deposit certificate opening method and device
CN113612732A (en) Resource calling method and device and multi-party secure computing system
CN111866171B (en) Message processing method, device, electronic equipment and medium
CN112994894B (en) Gateway-based single-thread request processing method and information verification AGENT
CN109901936B (en) Service cooperation method and device applied to distributed system
US20230091864A1 (en) Device for constructing neural block rapid-propagation protocol-based blockchain and operation method thereof
CN115098528B (en) Service processing method, device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant