CN114390109A - Service processing method, micro-service gateway and data center system - Google Patents


Info

Publication number
CN114390109A
CN114390109A
Authority
CN
China
Prior art keywords
data center, service, data, gateway, micro
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111515015.8A
Other languages
Chinese (zh)
Other versions
CN114390109B (en)
Inventor
秦有祥
丰朋
吴丰科
石力
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Unionpay Co Ltd filed Critical China Unionpay Co Ltd
Priority to CN202111515015.8A priority Critical patent/CN114390109B/en
Publication of CN114390109A publication Critical patent/CN114390109A/en
Application granted granted Critical
Publication of CN114390109B publication Critical patent/CN114390109B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A service processing method, a micro-service gateway and a data center system are provided. In the method, a micro-service gateway is deployed in a data center and connected to the service nodes in each data center. When a service needs to be processed, the micro-service gateway obtains flow information containing a user center identifier, determines the target data center corresponding to the user center identifier according to a preset distribution rule, and then sends the flow information directly to the service node of the target data center. In this way, the micro-service gateway interacts directly with the service nodes in other data centers without waiting for data synchronization between the two data centers, which avoids the delay caused by data synchronization between the two data centers and improves the efficiency of service processing.

Description

Service processing method, micro-service gateway and data center system
Technical Field
The present application relates to the field of computer technologies, and in particular, to a service processing method, a micro service gateway, and a data center system.
Background
The construction of the multi-data-center architecture for cloud flash payment makes remote payment possible. Remote payment means that consumers, merchants, third-party payment service providers and financial institutions, using the Internet as a carrier, exchange goods or services through a secure online payment tool. Remote payment not only enables consumers to purchase commodities in any region, but also greatly facilitates business settlement for merchants and financial institutions.
In the prior art, when remote payment is executed, data synchronization is performed between the data center accessed by the merchant and the data center accessed by the consumer, so that the transaction is processed jointly. However, the two data centers may be geographically far apart, in which case data synchronization incurs a large delay. As a result, the service processing of a remote payment transaction is no longer synchronized across the data centers, which hinders the normal progress of the transaction and increases the probability of transaction failure.
Based on this, a service processing method is needed to solve the technical problem of transaction failure caused by large data synchronization delay between different data centers in the prior art.
Disclosure of Invention
In a first aspect, the present application provides a service processing method applicable to a data center system, where the data center system includes at least two data centers, each of the at least two data centers includes a micro-service gateway and a service node, and the micro-service gateway is connected to the service nodes in the at least two data centers. The method includes: the micro-service gateway obtains flow information, where the flow information includes a user center identifier; the micro-service gateway determines a target data center corresponding to the user center identifier according to a preset distribution rule, where the preset distribution rule includes a correspondence between at least one user center identifier and at least one data center; and the micro-service gateway sends the flow information to a service node in the target data center.
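As a sketch, the three steps of the first aspect (obtain the flow information, look up the distribution rule, forward to the target service node) could look like the following; all class, field and function names here are illustrative assumptions, not taken from the application:

```python
from dataclasses import dataclass

@dataclass
class FlowInfo:
    user_center_id: str   # identifies which data center serves this user
    payload: dict         # remaining flow information (verification data, etc.)

class MicroServiceGateway:
    def __init__(self, distribution_rule, service_nodes):
        # distribution_rule: user center identifier -> data center name
        self.distribution_rule = distribution_rule
        # service_nodes: data center name -> callable service node
        # (may be in this data center or in a remote one)
        self.service_nodes = service_nodes

    def handle(self, flow_info: FlowInfo):
        # Determine the target data center from the preset rule, then send
        # the flow information directly to that data center's service node,
        # bypassing data-layer synchronization.
        target_dc = self.distribution_rule[flow_info.user_center_id]
        return self.service_nodes[target_dc](flow_info)
```

A gateway configured with the rule `{"user-42": "dc2"}`, for example, would route the flow information of that user straight to the second data center's service node.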
With this design, the micro-service gateway interacts directly with the service nodes in other data centers: the flow information can be sent directly to the service node in the target data center through the micro-service gateway, without waiting for data synchronization between the data layers of the two data centers. This solves the technical problem of transaction failure caused by the data synchronization delay between two data centers and improves the transaction success rate.
In a possible implementation, the flow information is used by the service node in the target data center to respond to a target application program (APP); the preset distribution rule comes from a distribution service, and the distribution rule of the distribution service is consistent with the distribution rule of a Content Delivery Network (CDN) node. The distribution rule of the CDN node is obtained by the CDN node assigning at least one user to the at least two data centers in advance according to requests initiated by the at least one user on the target APP.
In this way, the preset distribution rule is delivered to the micro-service gateway via the CDN, so that the distribution rule used on the micro-service gateway side stays consistent with the one used on the CDN side. Service requests from the service requester and requests initiated by users on the target APP can therefore be distributed reasonably, which improves the accuracy of data processing, raises the utilization of the data centers, and further improves the efficiency with which the data centers process services.
In a possible implementation, the flow information may further include a generation time. After the micro-service gateway obtains the flow information, it may further determine whether the time difference between the generation time in the flow information and the current time is smaller than a preset delay. If so, the target data center corresponding to the user center identifier is determined according to the preset distribution rule; if not, the flow information is sent to a service node in the data center where the micro-service gateway is located, and that service node synchronizes the flow information to the target data center through data synchronization.
In this way, the micro-service gateway calls the service nodes in other data centers according to the distribution rule only when the interval between the time at which the data center starts processing the current service and the generation time of the flow ID is small; otherwise, it directly calls the service node of its own data center. Cross-center interaction is thus performed only when necessary and avoided otherwise, which reduces the cost of cross-center interaction and further improves the service processing efficiency of the data center system.
In a second aspect, the present application provides a micro-service gateway applicable to a data center system, where the data center system includes at least two data centers, each of the at least two data centers includes a micro-service gateway and a service node, and the micro-service gateway is connected to the service nodes in the at least two data centers. The micro-service gateway in any data center includes: an obtaining unit, configured to obtain flow information, where the flow information includes a user center identifier; a determining unit, configured to determine a target data center corresponding to the user center identifier according to a preset distribution rule, where the preset distribution rule includes a correspondence between at least one user center identifier and at least one data center; and a sending unit, configured to send the flow information to the service node in the target data center.
In a possible implementation, the flow information is used by the service node in the target data center to respond to a target APP. In this case, the preset distribution rule comes from a distribution service, and the distribution rule of the distribution service is consistent with the distribution rule of a CDN node; the distribution rule of the CDN node is obtained by the CDN node assigning at least one user to the at least two data centers in advance according to requests initiated by the at least one user on the target APP.
In a possible implementation, the flow information further includes a generation time. In this case, before determining the target data center corresponding to the user center identifier according to the preset distribution rule, the determining unit is further configured to determine that the time difference between the generation time in the flow information and the current time is smaller than the preset delay.
In a possible implementation, the determining unit is further configured to: if the time difference between the generation time in the flow information and the current time is not smaller than the preset delay, send the flow information to a service node of the data center where the micro-service gateway is located, where that service node synchronizes the flow information to the target data center through data synchronization.
In a third aspect, the present application provides a data center system comprising at least two data centers, each of the at least two data centers comprising a micro service gateway and a service node, and the micro service gateway connecting the service nodes in the at least two data centers, the micro service gateway being configured to perform the method as set forth in any one of the above first aspects.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program which, when executed, performs the method as set forth in any of the first aspects above.
In a fifth aspect, the present application provides a computing device, including: a memory, configured to store program instructions; and a processor, configured to call the program instructions stored in the memory and execute, according to the obtained program, the method designed in any one of the first aspects.
In a sixth aspect, the present application provides a computer program product for carrying out the method as designed in any one of the first aspects above, when the computer program product is run on a processor.
The advantageous effects of the second aspect to the sixth aspect can be found in any design of the first aspect, and are not described in detail herein.
Drawings
Fig. 1 schematically illustrates a possible system architecture provided by an embodiment of the present application;
fig. 2 is a schematic flowchart illustrating a service processing method provided in an embodiment of the present application;
FIG. 3 is a schematic structural diagram illustrating flow information provided by an embodiment of the present application;
fig. 4 is a schematic flow chart illustrating another service processing method provided in the embodiment of the present application;
fig. 5 schematically illustrates a structural diagram of a micro service gateway provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 illustrates a schematic diagram of a possible system architecture provided by an embodiment of the present application. As shown in Fig. 1, the system corresponds to a data center system, and the data center system may include at least two data centers, such as the first data center and the second data center illustrated in Fig. 1. The first data center and the second data center may be deployed at different geographic locations, and data synchronization and data sharing can be implemented between the two data centers through the data layer.
Illustratively, the internal architecture of each data center is described by taking a first data center as an example:
as shown in fig. 1, the first data center includes: the system comprises an application background, a data layer and a service node, and illustratively, the system also comprises a service assembly layer and an access gateway. The application background is used for being connected with external applications, and the access gateway is used for being connected with the CDN and further being connected with the APP through the CDN. In implementation, the external application may send the pending service to the application background, and the application background sends the received pending service to the service node connected to the application background. The service node is located in the service layer, the service node can process the service to be processed sent by the application background, send the data generated in the service processing process to the data layer, and then the data layer stores the data and synchronizes the data to the data layers in other connected data centers. Further, if it is determined that the APP needs to query the service request through its service node, another data center processes the synchronous data received by the data layer, and generates a service processing packet. In addition, the service node in another data center can also transmit the service processing message to the APP served by the data center system through the service assembly layer and the access gateway.
Illustratively, traffic resulting from communication between the first data center and the second data center is east-west traffic, and traffic resulting from communication of the APP with the data center system is north-south traffic.
The following describes the overall flow of processing a service with the data center system, taking a payment verification service as an example. In this embodiment, the data center system is the cloud flash payment data center system, the application background may be the Jingdong (JD.com) application background, and the APP may be the cloud flash payment APP, so the service node may be a verification service layer. In implementation, the verification process can be divided into the following steps:
the method comprises the following steps: firstly, the prior preparation of the verification transaction is carried out, a user submits an order on a payment interface of the Jingdong, enters a Jingdong cashier desk, triggers a payment option, and can select the white payment, the WeChat payment and the cloud flash payment in the page. Assuming that the user selects cloud flash payment, the jingdong cashier desk is connected to the jingdong application background of the data center system, and the geographical position set by the service system of the jingdong cashier desk is closer to the geographical position of the first data center, so that the jingdong application background in the first data center is firstly accessed by the jingdong cashier desk.
It should be noted that the embodiment of the present application does not specifically limit which data center's Jingdong application background the Jingdong cashier desk accesses; the choice may be made according to geographic location or set according to an agreement between the cloud flash payment operator and Jingdong.
Step two: the Jingdong application background transmits a verification request carrying verification information to the service node, namely the verification service layer. Illustratively, the verification information may include a verification manner and a verification order. The verification manner may be: short message (SMS), payment password, fingerprint, face, graphic verification code, slider, login password, bank card information, and so on. During verification, one of these may be selected for single-factor verification, or any number of verification manners may be selected for multi-factor verification, with the verification order determined accordingly. In the embodiment of the present application, multi-factor verification with short message and face is taken as an example, with face verification performed first and short message verification second.
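The verification information described in step two (a set of supported manners plus an ordered selection of them) can be encoded, for example, as follows; the names and the dictionary format are illustrative assumptions only:

```python
# Supported verification manners listed in step two.
VERIFICATION_MANNERS = {
    "sms", "payment_password", "fingerprint", "face",
    "graphic_captcha", "slider", "login_password", "bank_card",
}

def make_verification_request(manners):
    """Build a verification request: one manner means single-factor
    verification, several manners mean multi-factor verification in
    the given order."""
    for m in manners:
        if m not in VERIFICATION_MANNERS:
            raise ValueError(f"unsupported verification manner: {m}")
    return {"manners": list(manners), "multi_factor": len(manners) > 1}

# The example used in this embodiment: face first, then SMS.
example_request = make_verification_request(["face", "sms"])
```

The list order carries the verification order, so the verification service layer can simply iterate over `manners` when processing the request.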
Step three: and the verification service layer of the first data center processes the verification request needing short message verification and face verification into a corresponding message and transmits the message to the data layer of the first data center.
Step four: the data layer of the first data center synchronizes the message to the data layer of the second data center, and the data layer of the second data center transmits the message to the verification service layer of the second data center.
Step five: the user initiates the verification process through the cloud flash payment APP. When the user registers an ID on the cloud flash payment APP, the user is assigned to a corresponding data center by that ID. Since the user center identifier corresponding to this user's ID points to the second data center in this service, the verification process initiated by the user calls the second data center for processing.
Step six: the cloud flash payment APP queries the verification manner and verification order from the verification service layer of the second data center, namely face verification first and then short message verification.
Step seven: the cloud flash payment APP transmits the user's face information and short message verification code to the verification service layer of the second data center. After the verification service layer verifies them successfully, the successful verification result is transmitted through the data layer to the first data center and passed to the verification service layer of the first data center.
Step eight: the Jingdong cashier desk queries the successful verification result from the verification service layer of the first data center through the Jingdong application background, and the verification process ends.
In steps one to eight above, since the data center called by the Jingdong cashier desk and the data center called by the cloud flash payment APP are different, data between the two data centers needs to be synchronized to ensure that the verification transaction proceeds normally. However, if the two data centers are geographically far apart, data synchronization may be delayed, causing problems in transaction processing. For example, in step six, when the cloud flash payment APP queries the verification manner and verification order from the verification service layer of the second data center, a large synchronization delay means that the first data center cannot synchronize the verification request requiring short message and face verification to the verification service layer of the second data center for a long time. In this case, the cloud flash payment APP cannot query the verification manner and order, steps seven and eight cannot be executed, and the entire verification process may fail.
In addition, in other scenarios the data centers called by the merchant and the user may also differ. Taking the example in steps one to eight: the data center called by the Jingdong cashier desk and the data center called by the cloud flash payment APP were originally the same, both calling the first data center; however, the data center called by the cloud flash payment APP is manually switched to the second data center, or part of the traffic is distributed to the second data center, while the data center called by the Jingdong cashier desk is still the first data center. In this case, the data center called by the Jingdong cashier desk and the data center called by the cloud flash payment APP are different, and data between the two data centers needs to be synchronized.
Based on this, the application provides a service processing method, which can be used in the two scenarios to perform cross-center calling and can also solve the problem caused by synchronous delay.
Specifically, as shown in Fig. 1, the method deploys a micro-service gateway in each data center in advance, and the micro-service gateway in each data center is also connected to the service nodes in the other data centers (to simplify the figure, Fig. 1 only illustrates the connection between the micro-service gateway and the service node within its own data center; the connections to service nodes in other data centers are not shown). When a cross-data-center call is needed, the micro-service gateway sends the corresponding information directly to the service node in the data center to be called, and data synchronization through the data layers of the two data centers is no longer required. This avoids the excessive delay of data synchronization between two data centers and improves the efficiency of service processing.
The service processing method provided in the embodiment of the present application is further described below based on the system architecture illustrated in fig. 1.
Fig. 2 exemplarily shows a flow diagram of a service processing method provided by the present application, and as shown in fig. 2, the service processing method includes:
step 201, the micro service gateway obtains the flow information, wherein the flow information may include a user center identifier.
In step 201, after the service requester accesses the application background of a data center and has a service to process, it sends a service request to the application background, where the service request includes a user center identifier. After receiving the service request, the application background generates flow information according to the user center identifier in the service request and assigns an identity document (ID) to the flow information; all subsequent service processes related to this service request are then performed under this flow ID.
Fig. 3 illustrates a schematic structural diagram of flow information provided in an embodiment of the present application. As shown in Fig. 3, in this example the flow information may include the user center identifier, and may also include the identifier of the application background that sends the flow information, the generation time of the flow information, a machine code, and a random number. The random number is generated when the flow ID is allocated and, together with the machine code, is used to guarantee the uniqueness of the flow ID. The pieces of information in the flow information may be arranged in the order illustrated in Fig. 3 or in any other order; this is not specifically limited.
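The fields listed for Fig. 3 can be modelled, for instance, as a small record; the field names, widths and the uniqueness scheme shown here are illustrative assumptions, not the patent's wire format:

```python
import random
import time
from dataclasses import dataclass, field

@dataclass
class FlowID:
    user_center_id: str        # used by the gateway to pick the target center
    app_background_id: str     # identifies the sending application background
    # Generation time, later compared against the preset delay.
    generation_time_ms: int = field(
        default_factory=lambda: int(time.time() * 1000))
    # Machine code plus random number together make the flow ID unique.
    machine_code: int = 0
    random_part: int = field(
        default_factory=lambda: random.randrange(10 ** 6))
```

All subsequent processing for one service request would then be keyed on a single `FlowID` instance.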
Step 202: the micro-service gateway determines the target data center corresponding to the user center identifier according to a preset distribution rule.
In step 202, the preset distribution rule may include the correspondence between user center identifiers and data centers. The preset distribution rule comes from a distribution service, and the distribution rule of the distribution service is consistent with the distribution rule of a Content Delivery Network (CDN) node. Therefore, after receiving any flow information, the micro-service gateway can obtain the user center identifier in the flow information and query the preset distribution rule to determine the target data center corresponding to the user center identifier.
Illustratively, the preset distribution rule may be obtained by the CDN node as follows: when any user registers on the APP, the APP allocates a unique user ID to the user (the user ID is the user center identifier of the user) and sends the user ID to the CDN node; the CDN node matches a unique data center for the user ID from among the data centers, then establishes the preset distribution rule according to the data center allocated to each user's ID, and sends the preset distribution rule to the micro-service gateway in each data center, so that the micro-service gateway in each data center stores and uses the rule.
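The CDN-side construction of the rule could be sketched as below. The round-robin assignment policy is purely an assumption for illustration; the application does not specify how the CDN node matches a user ID to a data center:

```python
from itertools import cycle

def build_distribution_rule(user_ids, data_centers):
    """Assign each registered user ID a unique data center (round-robin
    here as a stand-in policy) and return the resulting rule."""
    assignment = cycle(data_centers)
    return {uid: next(assignment) for uid in user_ids}

def push_rule_to_gateways(rule, gateways):
    # Each micro-service gateway stores its own copy of the rule, so the
    # gateway side stays consistent with the CDN side.
    for gw in gateways:
        gw["distribution_rule"] = dict(rule)
```

Pushing the same mapping to every gateway is what keeps the gateway-side and CDN-side distribution decisions consistent, as the text above requires.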
Step 203: the micro-service gateway sends the flow information to a service node in the target data center.
In step 203, assuming the micro-service gateway determines according to the distribution rule that the data center corresponding to a service request is the second data center, the micro-service gateway may direct all service requests corresponding to that flow ID to the second data center for processing.
Further, as shown in Fig. 1, assuming the service request is a verification request, the flow information received by the micro-service gateway in the first data center includes a verification manner and a verification order, the verification manner is face identification and short message verification, and the verification order is face identification first and then short message verification, then:
the micro service gateway in the first data center can directly send the received flow information to the verification service layer of the second data center, and then the verification service layer processes the verification mode and the verification sequence into corresponding messages so as to enable the cloud flash payment APP to inquire. And after the cloud flash payment APP checks the checking mode and the checking sequence in the checking service layer of the second data center, the person face is firstly checked, then the short message is checked, then the cloud flash payment APP transmits the face information and the short message check code of the user to the checking service layer of the second data center, and the checking service layer is checked successfully. Further, the jingdong application background queries a result of successful verification in a verification service layer of the second data center through the micro service gateway of the second data center, so that the verification service request is completed.
This embodiment provides a service processing method in which a micro-service gateway is deployed in each data center and connected to the service nodes in every data center. When a service needs to be processed, the micro-service gateway obtains flow information including a user center identifier, determines the target data center corresponding to the user center identifier according to a preset distribution rule, and then sends the flow information directly to the service node of the target data center. The micro-service gateway thus interacts directly with the service nodes in other data centers without waiting for data synchronization between the two data centers, which avoids transaction failure caused by data synchronization delay between the two data centers and increases the probability of transaction success.
In the embodiment of the present application, the micro-service gateway may also decide, based on the generation time of the flow information, whether to call another data center to execute the service processing, so as to avoid cross-center calls as much as possible. This implementation is described in detail below.
Fig. 4 exemplarily shows a flow diagram of another service processing method provided by the present application, and as shown in fig. 4, the method includes:
step 401, the micro service gateway obtains the running water information, and the running water information includes the generation time of the running water information.
Step 402: the microservice gateway determines whether the time difference between the generation time of the pipeline information and the current time is less than a preset time delay, if so, the step 403 is executed, and if not, the step 404 is executed.
And step 403, the micro service gateway determines a target data center corresponding to the user center identifier according to a preset flow distribution rule, and sends the flow information to a service node in the target data center.
Step 404, the micro service gateway sends the flow information to a service node in the data center where the micro service gateway is located.
The embodiment in steps 201 to 203 above is still taken as an example:
after the Jingdong cashier desk is accessed to the Jingdong application background of the first data center, the Jingdong cashier desk sends a service request to the Jingdong application background of the first data center, and the Jingdong application background generates flow information according to the service request and the time of receiving the service request and sends the flow information to the micro-service gateway in the first data center. And after receiving the flow information, the micro service gateway in the first data center acquires the current time, determines the time difference between the current time and the generation time in the flow information, and compares the time difference with the preset time delay. The preset time delay can be set to be 10-30 ms, and preferably, the preset time delay can be set to be 20 ms. In this way, the current time obtained by the micro service gateway is actually used for indicating the time when the current service starts to be processed in the first data center, so that when the time difference between the obtained current time and the generation time in the pipeline information is less than 20ms, it indicates that the interval between the generation time of the pipeline information and the time when the current service starts to be processed in the first data center is small, and in such a small time interval, it is difficult to complete data synchronization between the data layers between the first data center and the second data center, in this case, the micro service gateway in the first data center can send the pipeline information to the verification service layer of the second data center, so as to invoke other data centers through the micro service gateway in time when the data synchronization of the data centers consumes long time, so as to complete the service in time through cross-center processing. 
Conversely, when the time difference is not less than 20 ms, the interval between the generation time of the flow information and the start of processing of the current service in the first data center is large enough for the data layers of the first and second data centers to have completed data synchronization.
In this way, the micro service gateway invokes service nodes in other data centers according to the distribution rule only when the interval between the time its own data center starts processing the current service and the generation time of the flow information is small; otherwise it directly invokes the service nodes of its own data center. Cross-center interaction thus occurs only when necessary and is avoided when unnecessary, which reduces cross-center interaction overhead and further improves the service processing efficiency of the data center system.
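The delay-based routing decision described above can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the field name generated_at, the millisecond units, and the return of a data center name are all assumptions for the example.

```python
import time

PRESET_DELAY_MS = 20  # preset time delay; the text gives a 10-30 ms range, 20 ms preferred

def route_flow_info(flow_info, local_center, remote_center):
    """Decide which data center should handle the flow information.

    flow_info is assumed to carry a 'generated_at' timestamp (epoch
    milliseconds) set by the application background when the service
    request arrived.
    """
    now_ms = time.time() * 1000  # time at which the local center starts processing
    elapsed = now_ms - flow_info["generated_at"]
    if elapsed < PRESET_DELAY_MS:
        # Too soon for data-layer synchronization to have completed:
        # route to the remote center's verification service layer so the
        # service can finish via cross-center processing.
        return remote_center
    # Enough time has passed for data synchronization; stay local.
    return local_center
```

A freshly generated flow record is routed cross-center, while one older than the preset delay stays in the local data center.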
Based on the same technical concept, an embodiment of the present application further provides a micro service gateway, which can execute the flow of the service processing method provided by the foregoing embodiments.
Fig. 5 exemplarily illustrates a structural diagram of a micro service gateway provided by an embodiment of the present application, applicable to a data center system. The data center system includes at least two data centers, each of which includes a micro service gateway and a service node, the micro service gateway being connected to the service nodes in the at least two data centers. The micro service gateway in any data center includes: an obtaining unit 501, configured to obtain flow information, where the flow information includes a user center identifier; a determining unit 502, configured to determine, according to a preset distribution rule, the target data center corresponding to the user center identifier, where the preset distribution rule includes a correspondence between at least one user center identifier and at least one data center; and a sending unit 503, configured to send the flow information to a service node in the target data center.
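The three units of Fig. 5 can be sketched as a minimal gateway class. The class and method names are illustrative, and the distribution rule is modeled as a plain dictionary from user center identifier to data center name; none of this is prescribed by the embodiment.

```python
class MicroServiceGateway:
    """Minimal sketch of the gateway of Fig. 5 (names are illustrative)."""

    def __init__(self, distribution_rule, service_nodes):
        # distribution_rule: user center identifier -> data center name
        self.distribution_rule = distribution_rule
        # service_nodes: data center name -> callable service node
        self.service_nodes = service_nodes

    def obtain(self, flow_info):
        # obtaining unit 501: receive the flow information
        return flow_info

    def determine_target(self, flow_info):
        # determining unit 502: look up the target data center
        # corresponding to the user center identifier
        return self.distribution_rule[flow_info["user_center_id"]]

    def send(self, flow_info):
        # sending unit 503: forward the flow information to the
        # service node in the target data center
        target = self.determine_target(flow_info)
        return self.service_nodes[target](flow_info)
```

For example, a gateway configured with the rule {"u1": "dc2"} forwards any flow record carrying user center identifier "u1" to the service node registered for data center "dc2".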
In one possible implementation, the flow information is used for being sent by a service node in the target data center to a target application (APP). The determining unit 502 is further configured to obtain the preset distribution rule from a distribution service, where the distribution rule of the distribution service is consistent with the distribution rule of a Content Delivery Network (CDN) node. The distribution rule of the CDN node is obtained by the CDN node distributing at least one user to the at least two data centers in advance according to requests initiated by the at least one user on the target APP.
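One way for the distribution service and the CDN node to stay consistent is for both to derive the user-to-center assignment from the same deterministic hash of the user identifier. The embodiment does not specify this mechanism; the sketch below is an assumption showing only that any shared deterministic mapping keeps the two rule sets in agreement.

```python
import hashlib

def assign_center(user_center_id, centers):
    """Deterministically map a user center identifier to a data center.

    Because the result depends only on the identifier and the ordered
    list of data centers, the CDN node and the distribution service can
    each compute it independently and always agree.
    """
    digest = hashlib.sha256(user_center_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(centers)
    return centers[index]
```

Any component holding the same ordered list of data centers obtains the same assignment for a given user, without the two rule sets ever being exchanged.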
In one possible implementation, the flow information further includes a generation time; the determining unit 502 is further configured to determine that the time difference between the generation time in the flow information and the current time is less than the preset time delay.
In one possible implementation, the determining unit 502 is further configured to: if the time difference between the generation time in the flow information and the current time is not less than the preset time delay, send the flow information to a service node in the data center where the micro service gateway is located, where that service node synchronizes the flow information to the target data center through data synchronization.
Based on the same technical concept, an embodiment of the present invention further provides a data center system, which includes at least two data centers, where each of the at least two data centers includes a micro service gateway and a service node, and the micro service gateway connects the service nodes in the at least two data centers, and is configured to execute the method shown in any one of fig. 2 or fig. 4.
Based on the same technical concept, an embodiment of the present invention further provides a computing device, including: a memory for storing program instructions;
and the processor is used for calling the program instructions stored in the memory and executing the method shown in any one of the embodiments in the figure 2 or the figure 4 according to the obtained program.
Based on the same technical concept, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when run by a processor, implements the method shown in any one of fig. 2 or fig. 4.
Based on the same technical concept, an embodiment of the present invention further provides a computer program product which, when run on a processor, implements the method shown in any one of the embodiments of fig. 2 or fig. 4.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (11)

1. A service processing method, applicable to a data center system, wherein the data center system comprises at least two data centers, each of the at least two data centers comprises a micro service gateway and a service node, and the micro service gateway is connected with the service nodes in the at least two data centers; the method comprising:
the micro service gateway acquires flow information, wherein the flow information comprises a user center identifier;
the micro service gateway determines a target data center corresponding to the user center identifier according to a preset distribution rule, wherein the preset distribution rule comprises a corresponding relation between at least one user center identifier and at least one data center;
and the micro service gateway sends the flow information to a service node in the target data center.
2. The method of claim 1, wherein the flow information is used for being sent by a service node in the target data center to a target application (APP);
the preset distribution rule is obtained from a distribution service, and the distribution rule of the distribution service is consistent with a distribution rule of a CDN node;
the distribution rule of the CDN node is obtained by the CDN node distributing the at least one user to the at least two data centers in advance according to a request initiated by the at least one user on the target APP.
3. The method of claim 1, wherein the flow information further includes a generation time;
before the micro service gateway determines the target data center corresponding to the user center identifier according to a preset distribution rule, the method further includes:
determining that the time difference between the generation time in the flow information and the current time is less than a preset time delay.
4. The method of claim 3, wherein the method further comprises:
if the time difference between the generation time in the flow information and the current time is not less than the preset time delay, sending the flow information to a service node in the data center where the micro service gateway is located, wherein the service node in the data center where the micro service gateway is located is configured to synchronize the flow information to the target data center through data synchronization.
5. A micro service gateway, applicable to a data center system, wherein the data center system comprises at least two data centers, each of the at least two data centers comprises a micro service gateway and a service node, and the micro service gateway is connected with the service nodes in the at least two data centers; wherein the micro service gateway in any data center comprises:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring running information which comprises a user center identifier;
the determining unit is used for determining a target data center corresponding to the user center identifier according to a preset distribution rule, wherein the preset distribution rule comprises a corresponding relation between at least one user center identifier and at least one data center;
a sending unit, configured to send the flow information to a service node in the target data center.
6. The micro service gateway of claim 5, wherein the flow information is used for being sent by a service node in the target data center to a target application (APP);
the preset distribution rule is obtained from a distribution service, and the distribution rule of the distribution service is consistent with a distribution rule of a CDN node;
the distribution rule of the CDN node is obtained by the CDN node distributing the at least one user to the at least two data centers in advance according to a request initiated by the at least one user on the target APP.
7. The microservice gateway of claim 5, wherein the flow information further comprises a generation time;
the determining unit is further configured to determine that the time difference between the generation time in the flow information and the current time is less than a preset time delay.
8. The micro-services gateway of claim 7, wherein the determining unit is further to:
if the time difference between the generation time in the flow information and the current time is not less than the preset time delay, send the flow information to a service node in the data center where the micro service gateway is located, wherein the service node in the data center where the micro service gateway is located is configured to synchronize the flow information to the target data center through data synchronization.
9. A data center system comprising at least two data centers, each of the at least two data centers comprising a micro service gateway and a service node, the micro service gateway connecting the service nodes of the at least two data centers, the micro service gateway being configured to perform the method of any of claims 1 to 4.
10. A computer-readable storage medium, characterized in that it stores a computer program which, when executed, performs the method of any one of claims 1 to 4.
11. A computing device, comprising:
a memory for storing program instructions;
a processor for calling program instructions stored in said memory to execute the method of any of claims 1 to 4 in accordance with the obtained program.
CN202111515015.8A 2021-12-13 2021-12-13 Service processing method, micro-service gateway and data center system Active CN114390109B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111515015.8A CN114390109B (en) 2021-12-13 2021-12-13 Service processing method, micro-service gateway and data center system


Publications (2)

Publication Number Publication Date
CN114390109A 2022-04-22
CN114390109B 2024-02-20


Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080159134A1 (en) * 2006-12-28 2008-07-03 Ebay Inc. Method and system for gateway communication
CN102437964A (en) * 2010-11-17 2012-05-02 华为技术有限公司 Method and device for issuing business as well as communication system
EP3041254A1 (en) * 2014-12-30 2016-07-06 Telefonica Digital España, S.L.U. Method for providing information on network status from telecommunication networks
US20170250823A1 (en) * 2016-02-26 2017-08-31 Cable Television Laboratories, Inc. System and method for dynamic security protections of network connected devices
CN109961204A (en) * 2017-12-26 2019-07-02 中国移动通信集团浙江有限公司 Quality of service analysis method and system under a kind of micro services framework
CN109995713A (en) * 2017-12-30 2019-07-09 华为技术有限公司 Service processing method and relevant device in a kind of micro services frame
US20190273804A1 (en) * 2018-03-04 2019-09-05 Netskrt Systems, Inc. System and apparatus for intelligently caching data based on predictable schedules of mobile transportation environments
CN110913025A (en) * 2019-12-31 2020-03-24 中国银联股份有限公司 Service calling method, device, equipment and medium
US20200162380A1 (en) * 2018-11-19 2020-05-21 International Business Machines Corporation Controlling data communication between microservices
CN112671882A (en) * 2020-12-18 2021-04-16 上海安畅网络科技股份有限公司 Same-city double-activity system and method based on micro-service
CN113285888A (en) * 2021-04-30 2021-08-20 中国银联股份有限公司 Multi-service system multi-data center shunting method, device, equipment and medium
WO2021179493A1 (en) * 2020-03-09 2021-09-16 平安科技(深圳)有限公司 Microservice-based load balancing method, apparatus and device, and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant