CN112825045B - Payment request processing method, system and storage medium - Google Patents

Payment request processing method, system and storage medium

Info

Publication number
CN112825045B
CN112825045B (application CN201911141171.5A)
Authority
CN
China
Prior art keywords
payment
requests
gateway
payment request
request processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911141171.5A
Other languages
Chinese (zh)
Other versions
CN112825045A (en)
Inventor
张凯
刘国栋
陈军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd
Priority to CN201911141171.5A
Publication of CN112825045A
Application granted
Publication of CN112825045B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 Payment architectures, schemes or protocols
    • G06Q 20/02 Payment architectures, schemes or protocols involving a neutral party, e.g. certification authority, notary or trusted third party [TTP]
    • G06Q 20/027 Payment architectures, schemes or protocols involving a neutral party, e.g. certification authority, notary or trusted third party [TTP], involving a payment switch or gateway
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU], to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system

Abstract

The invention relates to the technical field of online payment and discloses a payment request processing method, system and storage medium. The method is applied to a system comprising a front gateway, a load balancer and a payment cluster, and comprises the following steps: the front gateway receives payment requests and sorts them using a front gateway rule; the load balancer receives the sorted payment requests and distributes them to the payment cluster; and the payment cluster processes the payment requests distributed by the load balancer. The technical effects of high concurrency, high performance, high availability and data consistency can thereby be achieved.

Description

Payment request processing method, system and storage medium
Technical Field
The invention relates to the technical field of the internet, in particular to the field of online payment, and specifically to a payment request processing method, a payment request processing system and a storage medium.
Background
More and more users are choosing to pay online, so payment scenarios require a solution that can handle a large number of payment requests. In a typical prior-art payment system, requests are routed directly to containers through the company's SLB (Server Load Balancer) equipment for load balancing; the SLB equipment can only perform simple load balancing and rate-limiting operations and cannot support the expected growth of the payment service.
Disclosure of Invention
It is an object of the present invention to overcome the above-mentioned disadvantages of the prior art and to provide a payment request processing method with high concurrency, high performance, high availability and data consistency, mainly applied to online payment scenarios.
In order to achieve the above object, the present invention adopts the following technical solutions:
A payment request processing method is applied to a system comprising a front gateway, a load balancer and a payment cluster, and comprises the following steps: the front gateway receives payment requests and sorts them using a front gateway rule; the load balancer receives the sorted payment requests and distributes them to the payment cluster; and the payment cluster processes the payment requests distributed by the load balancer.
Preferably, the front gateway sorting the payment requests using a front gateway rule specifically comprises: the front gateway rate-limits the payment requests using a rate-limiting priority rule, where the rule, from highest to lowest priority, is: rate limiting associated with an interface core field, rate limiting by interface requester ID, and unified interface rate limiting.
When the number of payment requests is higher than a preset threshold, the rules applied to core interfaces remain unchanged, while the non-core interfaces are configured with the rate-limiting priority rule and are degraded when the number of payment requests exceeds the preset threshold.
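By way of illustration only, the following Python sketch shows one possible realization of the tiered rate-limiting check described above. The request fields, the set of core interfaces and the numeric thresholds are assumptions made for the example and are not specified by the patent.

```python
from dataclasses import dataclass

# Illustrative request model; the field names are assumptions, not the patent's schema.
@dataclass
class PaymentRequest:
    interface: str                  # e.g. "create_order" (core) or "order_query" (non-core)
    core_field: str | None = None   # value checked by core-field associated rate limiting
    requester_id: str | None = None

CORE_INTERFACES = {"create_order", "pay_confirm"}   # assumed set of core interfaces
THROTTLED_CORE_FIELDS = {"Y"}                       # core-field values currently throttled
THROTTLED_REQUESTER_IDS = {"abc"}                   # requester IDs currently throttled
UNIFIED_QPS_LIMIT = 8000                            # assumed unified interface limit (requests/s)
DEGRADE_THRESHOLD = 10000                           # assumed preset threshold for degrading non-core interfaces

def should_throttle(req: PaymentRequest, interface_qps: int) -> bool:
    """Apply the rate-limiting priority rule, highest priority first."""
    if req.core_field in THROTTLED_CORE_FIELDS:       # 1) core-field associated limiting
        return True
    if req.requester_id in THROTTLED_REQUESTER_IDS:   # 2) requester-ID limiting
        return True
    return interface_qps > UNIFIED_QPS_LIMIT          # 3) unified interface limiting

def handle(req: PaymentRequest, interface_qps: int, total_requests: int) -> str:
    # Above the preset threshold, core interfaces keep their original rules,
    # while non-core interfaces are degraded.
    if req.interface not in CORE_INTERFACES and total_requests > DEGRADE_THRESHOLD:
        return "degraded"
    return "throttled" if should_throttle(req, interface_qps) else "accepted"
```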
Optionally, data is collected at the interfaces associated with the payment requests to form buried-point cluster information, which is stored in a database; this buried-point cluster information can later participate in sorting under the routing priority rule described below. Here, the database may use HBase together with Redis. Redis is fast to read but limited by memory; HBase is slower to read but can store far larger volumes of data and is therefore suitable for persistent storage of big data. The invention accordingly uses HBase as the data warehouse and Redis as the cache database, achieving both speed and scalability.
Preferably, the front gateway sorting the payment requests using a front gateway rule further comprises: the front gateway sorts the rate-limited payment requests using a routing priority rule, where the routing priority rule, from highest to lowest priority, is: payment order ID, payment acquiring scene, the above buried-point cluster information, user ID, and random.
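As an illustration only, a routing-priority sort of this kind can be expressed as a composite sort key; the request model and the set of prioritised acquiring scenes below are assumptions made for the example.

```python
import random
from dataclasses import dataclass
from typing import Optional

# Hypothetical request model; field names are illustrative only.
@dataclass
class PaymentRequest:
    order_id: Optional[str] = None         # payment order ID
    acquiring_scene: Optional[str] = None  # payment acquiring scene
    buried_point: Optional[dict] = None    # buried-point cluster information read from the database
    user_id: Optional[str] = None

PRIORITY_SCENES = {"takeaway platform"}    # assumed set of prioritised acquiring scenes

def routing_priority(req: PaymentRequest) -> tuple:
    """Smaller tuples sort first; priority from high to low:
    payment order ID, acquiring scene, buried-point information, user ID, random."""
    return (
        req.order_id is None,
        req.acquiring_scene not in PRIORITY_SCENES,
        req.buried_point is None,
        req.user_id is None,
        random.random(),                   # final tie-break is random
    )

def sort_requests(requests: list) -> list:
    return sorted(requests, key=routing_priority)
```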
If the front gateway calls a backend interface within the specified time and obtains a definite error code, it retries the call according to the routing priority rule.
Preferably, in the step in which the payment cluster processes the payment requests distributed by the load balancer, one payment cluster is composed of a plurality of payment partitions, and each payment partition of the payment cluster is deployed from the same codebase, with a different configuration, as a configuration file, a data source, middleware and a server. The number of configuration files, data sources, middleware instances and servers is not limited.
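The sketch below, offered only as an illustration, shows how identical code might be parameterised per payment partition; every name, address and value in the configurations is hypothetical.

```python
# Each payment partition runs the same codebase; only the configuration differs.
# All names, hosts and ports below are hypothetical.
PARTITION_CONFIGS = [
    {
        "partition": "cluster1-partition-a",
        "config_file": "conf/partition_a.yaml",
        "data_source": "mysql://pay-db-a:3306/payment",
        "middleware": {"mq": "kafka://mq-a:9092", "cache": "redis://cache-a:6379"},
        "servers": ["pay-a-01", "pay-a-02"],   # two servers so one can be upgraded while the other serves
    },
    {
        "partition": "cluster1-partition-b",
        "config_file": "conf/partition_b.yaml",
        "data_source": "mysql://pay-db-b:3306/payment",
        "middleware": {"mq": "kafka://mq-b:9092", "cache": "redis://cache-b:6379"},
        "servers": ["pay-b-01", "pay-b-02"],
    },
]

def deploy_partition(cfg: dict) -> None:
    """Placeholder for handing the configuration to the orchestration system
    that starts the shared payment codebase with it."""
    print(f"deploying {cfg['partition']} with {cfg['config_file']} on {cfg['servers']}")

for cfg in PARTITION_CONFIGS:
    deploy_partition(cfg)
```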
Preferably, the payment clusters are independent of one another and provide disaster recovery for each other. When one payment cluster fails, requests are automatically routed to other payment clusters, improving disaster-recovery capability.
Preferably, the number of payment clusters is adjusted according to the number of payment requests. This improves the horizontal scalability of the payment system: as the volume of business grows, the payment system does not need large-scale refactoring, and the increased volume can be absorbed by adding payment clusters.
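A minimal sketch of such volume-driven scaling, assuming a per-cluster capacity figure that the patent does not specify:

```python
import math

CLUSTER_CAPACITY_RPS = 5000   # assumed requests per second that one payment cluster can handle
MIN_CLUSTERS = 2              # assumed floor kept for availability

def desired_cluster_count(observed_rps: float) -> int:
    """Scale the number of payment clusters with the observed request rate."""
    return max(MIN_CLUSTERS, math.ceil(observed_rps / CLUSTER_CAPACITY_RPS))

print(desired_cluster_count(42_000))   # 42k requests/s -> 9 clusters under the assumed capacity
```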
It is yet another object of the present invention to provide a front gateway that receives payment requests and sorts them using the front gateway rule described above.
It is a further object of the present invention to provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a payment request processing method as recited in any one of the above.
Another object of the present invention is to provide a payment request processing system comprising a user terminal, a front gateway and a payment server, wherein the payment server receives a payment instruction generated based on the interaction between the user terminal and the front gateway and processes it using the payment request processing method described in any one of the above.
As described above, the payment request processing method of the present application is applied to a system comprising a front gateway, a load balancer and a payment cluster, and comprises: the front gateway receives the payment requests and sorts them using a front gateway rule; the load balancer receives the sorted payment requests and distributes them to the payment cluster; and the payment cluster processes the payment requests distributed by the load balancer. When the number of payment requests is high, the requests are first sorted by the front gateway rule and then distributed to the payment cluster by the load balancer. This prevents any single server from bearing an excessive access load, reduces the risk of server downtime during access peaks, significantly improves high-concurrency processing capacity, enhances system stability, and improves user experience.
Drawings
FIG. 1 is a flow chart illustrating a payment request processing method according to an embodiment of the invention;
FIG. 2 is a schematic block diagram of a payment request processing method;
FIG. 3 is a schematic diagram of the relationship between the data collection that forms the buried-point cluster information, the front gateway and the payment server;
FIG. 4 is a flow diagram illustrating the process of sorting payment requests using the front gateway rule according to one embodiment of the present invention;
FIG. 5 is a schematic block diagram of a payment request processing system in accordance with one aspect of the present invention.
Detailed Description
Hereinafter, the payment request processing method according to an embodiment of the present invention will be further described.
The payment request processing method of this embodiment is applied to a system comprising a front gateway, a load balancer and a payment cluster, and comprises the following steps: the front gateway receives payment requests and sorts them using a front gateway rule; the load balancer receives the sorted payment requests and distributes them to the payment cluster; and the payment cluster processes the payment requests distributed by the load balancer.
Fig. 1 is a schematic flowchart of the payment request processing method of this embodiment, and fig. 2 is a schematic block diagram of the method. A load balancer receives payment requests from, for example, a caller, a service, a client or a browser and distributes them to the front gateway, after which the payment request processing method of this embodiment is performed: the front gateway receives the payment requests and sorts them using a front gateway rule; the load balancer receives the sorted payment requests and distributes them to the payment cluster; and the payment cluster processes the payment requests distributed by the load balancer. When the number of payment requests is high, the requests are sorted by the front gateway rule and then distributed to the payment cluster by the load balancer, which prevents any single server from bearing an excessive access load, reduces the risk of server downtime during access peaks, significantly improves high-concurrency processing capacity, enhances system stability, and improves user experience.
Since a general load balancer can only perform simple load balancing and rate-limiting operations, and a further rate-limiting means is required when the number of payment requests exceeds a preset threshold, the front gateway in this embodiment sorts the payment requests using a front gateway rule, specifically: the front gateway rate-limits the payment requests using a rate-limiting priority rule, where the rule, from highest to lowest priority, is: rate limiting associated with an interface core field, rate limiting by interface requester ID, and unified interface rate limiting. On the one hand, traffic can be limited in a targeted manner through this screening, improving the processing capacity available to important payment requests; on the other hand, by setting the rate-limiting priority rule, the degree of rate limiting can be flexibly adjusted according to the number of payment requests.
When the number of payment requests exceeds the preset threshold, the non-core interfaces are degraded. By distinguishing core interfaces from non-core interfaces and configuring the rate-limiting priority rule on the non-core interfaces, the processing capacity available to important payment requests is further improved, and the flexibility of rate limiting is further increased.
Fig. 3 is a schematic diagram of the relationship between the data collection that forms the buried-point cluster information, the front gateway and the payment server. Data is collected at selected interfaces associated with the payment requests to form buried-point cluster information, which is stored in a database so as to maintain data consistency. For example, the buried-point data comes from the legacy interfaces that remain compatible after the user pays for an order (such as order query and order close): key fields of these interfaces are saved to the database when the user pays for the order, and these fields can later be read and participate in sorting under the routing priority rule described below. The database in this embodiment uses HBase and Redis. Redis stores data in memory, so reads are fast and it is suitable as a cache; HBase reads are slower, but it can store far larger volumes of data and is suitable for persistent storage of big data. This embodiment therefore uses HBase as the data warehouse and Redis as the cache database, balancing speed and scalability.
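For illustration only, the sketch below stores buried-point data in both Redis (cache) and HBase (persistent warehouse) using the redis-py and happybase client libraries. The hosts, table name, column family and the one-hour cache TTL are assumptions, and an HBase table with a "cf" column family is presumed to exist.

```python
import json
import redis        # redis-py client
import happybase    # Thrift-based HBase client (requires an HBase Thrift server)

r = redis.Redis(host="redis-host", port=6379, decode_responses=True)   # cache: fast, memory-bound
hb = happybase.Connection("hbase-thrift-host")                         # warehouse: slower, scales to big data
points = hb.table("buried_points")                                     # hypothetical table with column family "cf"

def save_buried_point(order_id: str, fields: dict) -> None:
    """Write buried-point data to both stores: Redis for the fast reads needed by the
    routing-priority sort, HBase for persistent large-scale storage."""
    payload = json.dumps(fields)
    r.setex(f"bp:{order_id}", 3600, payload)                            # assumed 1-hour cache TTL
    points.put(order_id.encode(), {b"cf:fields": payload.encode()})

def load_buried_point(order_id: str) -> dict | None:
    """Cache-aside read: try Redis first, fall back to HBase and refill the cache."""
    cached = r.get(f"bp:{order_id}")
    if cached:
        return json.loads(cached)
    row = points.row(order_id.encode())
    if not row:
        return None
    fields = json.loads(row[b"cf:fields"].decode())
    r.setex(f"bp:{order_id}", 3600, json.dumps(fields))
    return fields
```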
Further, the front gateway sorting the payment requests using the front gateway rule also comprises: the front gateway sorts the rate-limited payment requests using a routing priority rule, which, from highest to lowest priority, is: payment order ID, payment acquiring scene, buried-point cluster information, user ID, and random. On the one hand, processing capacity can be devoted to important payment requests; on the other hand, by setting the routing priority rule, the routing level can be flexibly adjusted according to the number of payment requests, improving the user's payment experience in a targeted manner.
If the front gateway calls a backend interface within the specified time and obtains a definite error code, it retries the call according to the routing priority rule. This reduces the probability of failure when processing a payment request and reduces the failures users experience during payment, further improving user experience.
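Purely as a sketch, this retry behaviour can be written as follows; the error codes, the two-second time limit and the retry cap are assumptions, and the list of backends is presumed to already be ordered by the routing priority rule.

```python
import time

DEFINITE_ERROR_CODES = {"BACKEND_BUSY", "CHANNEL_TIMEOUT"}   # assumed "definite" (retryable) codes
CALL_TIMEOUT_SECONDS = 2                                     # assumed specified time
MAX_RETRIES = 3                                              # assumed retry cap

def call_with_retry(request: dict, backends: list) -> dict:
    """Call backend interfaces in routing-priority order; if a definite error code is
    obtained within the specified time, retry with the next backend in that order."""
    result = {"code": "NO_BACKEND"}
    for backend in backends[:MAX_RETRIES]:
        start = time.monotonic()
        result = backend(request)                            # each backend is a callable returning {"code": ...}
        within_time = time.monotonic() - start <= CALL_TIMEOUT_SECONDS
        if result["code"] == "OK":
            return result
        if not (within_time and result["code"] in DEFINITE_ERROR_CODES):
            return result                                    # indefinite error or timeout: stop retrying
    return result                                            # definite errors exhausted the retry budget
```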
Thus, in this embodiment, different interfaces can be configured differently through the front gateway, so that payment requests are judged and sorted accordingly, preventing the payment processing system from being overwhelmed by an excessive number of highly concurrent requests.
In the method of this embodiment, the load balancer receives the sorted payment requests and distributes them to the payment clusters. One payment cluster is composed of a plurality of payment partitions, and each payment partition of the cluster is deployed from the same codebase, with a different configuration, as a configuration file, a data source, middleware and a server. The number of configuration files, data sources, middleware instances and servers is not limited; for example, there can be multiple servers, so that when one server is being upgraded or fails, another server can still handle payments, achieving high availability. Moreover, the payment clusters can be independent of each other and provide disaster recovery for each other when necessary. In addition, when business demand increases, the number of payment clusters can be adjusted according to the number of payment requests.
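The cross-cluster disaster-recovery routing can be sketched as a simple failover loop; the cluster names and the send() stand-in below are hypothetical, for illustration only.

```python
CLUSTERS = ["pay-cluster-1", "pay-cluster-2", "pay-cluster-3"]   # assumed independent payment clusters
DOWN = {"pay-cluster-1"}                                         # pretend one cluster is currently failing

class ClusterUnavailable(Exception):
    pass

def send(cluster: str, request: dict) -> dict:
    """Stand-in for the real dispatch of a request to a payment cluster."""
    if cluster in DOWN:
        raise ClusterUnavailable(cluster)
    return {"cluster": cluster, "order_id": request.get("order_id"), "status": "accepted"}

def route_with_failover(request: dict) -> dict:
    for cluster in CLUSTERS:
        try:
            return send(cluster, request)
        except ClusterUnavailable:
            continue                         # disaster recovery: try the next independent cluster
    raise RuntimeError("all payment clusters unavailable")

print(route_with_failover({"order_id": "123456"}))   # served by pay-cluster-2 in this sketch
```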
As shown in fig. 5, an embodiment of the present invention can provide a payment request processing system comprising a user terminal, a front gateway and a payment server. The front gateway receives the payment requests sent by the user terminal, sorts them according to the front gateway rule, and then sends them to the payment server, which processes the payment requests.
To more clearly illustrate the method of sorting payment requests using the front gateway rule according to an embodiment of the present invention, fig. 4 is a schematic flowchart of this process, which comprises the following steps:
The front gateway presets a front gateway rule, which comprises: a. a rate-limiting priority rule that rate-limits the payment requests and, from highest to lowest priority, consists of: rate limiting associated with an interface core field, rate limiting by interface requester ID, and unified interface rate limiting; b. when the number of payment requests exceeds the preset threshold, the rules applied to core interfaces remain unchanged, while the non-core interfaces are configured with the rate-limiting priority rule and are degraded; c. a routing priority rule that sorts the payment requests and, from highest to lowest priority, consists of: payment order ID, payment acquiring scene, buried-point cluster information, user ID, and random; d. if the front gateway obtains a definite error code when calling a backend interface within the specified time, it retries the call according to the routing priority rule.
S1: The front gateway receives payment requests.
S2: Data is collected through the legacy interfaces that remain compatible after the user places an order, forming buried-point cluster information, which is stored in the database.
S3: The rate-limiting priority rule in the front gateway rule is used to judge whether a payment request meets a rate-limiting condition; if so, the request is rate-limited; if not, proceed to S4.
S4: The payment requests are sorted using the routing priority rule in the front gateway rule.
S5: Judge whether the front gateway has obtained an error code; if so, return to S4; if not, proceed to S6.
S6: The front gateway queues and routes the payment requests.
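As an illustration of how steps S1 to S6 fit together, the compressed Python sketch below uses simplified stand-ins for the components described above; the dictionary fields, error codes and the bounded retry count are assumptions, not part of the patent.

```python
import random

DEFINITE_ERROR_CODES = {"BACKEND_BUSY"}   # assumed retryable error codes
THROTTLED_USER_IDS = {"abc"}              # assumed rate-limiting condition for this sketch

def hits_rate_limit(req: dict) -> bool:                        # S3 condition (simplified)
    return req.get("user_id") in THROTTLED_USER_IDS

def routing_key(req: dict) -> tuple:                           # S4 key: order ID > scene > buried point > user ID > random
    return (req.get("order_id") is None, req.get("scene") is None,
            req.get("buried_point") is None, req.get("user_id") is None, random.random())

def call_backend(req: dict) -> dict:                           # stand-in for the real backend call
    return {"code": "OK", "order_id": req.get("order_id")}

def process_at_front_gateway(requests: list, buried_db: dict) -> list:
    accepted = []
    for req in requests:                                       # S1: receive payment requests
        if req.get("order_id"):
            buried_db[req["order_id"]] = req                   # S2: store buried-point data
        if hits_rate_limit(req):                               # S3: rate-limiting check
            continue
        accepted.append(req)
    results = []
    for req in sorted(accepted, key=routing_key):              # S4: sort by routing priority
        result = call_backend(req)
        for _ in range(3):                                     # S5: bounded retry on definite error codes
            if result["code"] not in DEFINITE_ERROR_CODES:
                break
            result = call_backend(req)
        results.append(result)                                 # S6: queue and route onward
    return results
```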
The following example further illustrates the payment request processing method and system of the present invention.
Taking the purchase of goods online as an example, a plurality of users operate mobile phones (user terminals) to access an internet shopping mall (service side) and select goods to purchase, generating a plurality of payment requests: a payment request with payment order ID "123456", a payment request with payment acquiring scene "takeaway platform", a payment request with user ID "abc", an arbitrary payment request "X" and an arbitrary payment request "Y". These requests are sent to the front gateway.
After receiving the payment requests, the front gateway sorts them using a front gateway rule, which comprises: a. the rate-limiting priority rule, from highest to lowest: payment request interfaces containing the core field "Y" are rate-limited, payment request interfaces containing the requester ID "abc" are rate-limited, and unified rate limiting is applied to payment request interfaces otherwise; b. when the number of payment requests exceeds the preset threshold, the rules applied to core interfaces remain unchanged and the non-core interfaces are degraded (an interface containing the core field "Y" is still rate-limited, while an interface containing the requester ID "abc" is no longer rate-limited); c. the routing priority rule, from highest to lowest: payment order ID, payment acquiring scene, buried-point cluster information, user ID, and random; d. if the front gateway obtains a definite error code, it retries the call according to the routing priority rule.
The key field "IHK" is collected through the order query interface that remains compatible after the user pays for an order, forming buried-point data, which is stored in the database and participates in the front gateway's routing-priority sorting.
The front gateway judges the requests using the front gateway rule: the arbitrary payment request "Y" is rate-limited under the rate-limiting priority rule, and the payment request with user ID "abc" is likewise rate-limited. The front gateway then sorts the remaining, non-limited payment requests according to the routing priority rule, in the following order: the payment request with payment order ID "123456", the payment request with payment acquiring scene "takeaway platform", the payment request with buried-point data "IHK", and the arbitrary payment request "X".
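The ordering in this example can be reproduced with the short sketch below. It is for illustration only, and the buried-point data "IHK" is attached to a separate request purely so that its place in the ordering is visible.

```python
import random

# The requests from the example above, in arbitrary arrival order.
requests = [
    {"name": "Y",            "core_field": "Y"},     # rate-limited by core-field associated limiting
    {"name": "X"},                                   # arbitrary request with no priority fields
    {"name": "user-abc",     "user_id": "abc"},      # rate-limited by requester-ID limiting
    {"name": "buried-IHK",   "buried_point": "IHK"},
    {"name": "takeaway",     "scene": "takeaway platform"},
    {"name": "order-123456", "order_id": "123456"},
]

THROTTLED_CORE_FIELDS, THROTTLED_USER_IDS = {"Y"}, {"abc"}

def throttled(r: dict) -> bool:
    return r.get("core_field") in THROTTLED_CORE_FIELDS or r.get("user_id") in THROTTLED_USER_IDS

def key(r: dict) -> tuple:   # order ID > acquiring scene > buried point > user ID > random
    return (r.get("order_id") is None, r.get("scene") is None,
            r.get("buried_point") is None, r.get("user_id") is None, random.random())

print([r["name"] for r in sorted((r for r in requests if not throttled(r)), key=key)])
# -> ['order-123456', 'takeaway', 'buried-IHK', 'X']
```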
The load balancer receives the sorted payment requests and distributes them to a payment cluster, which processes them. The payment cluster has a plurality of payment partitions, each deployed from the same codebase with a different configuration as a configuration file, a data source, middleware and two servers, which improves payment request processing performance. When the access volume is high, the number of configuration files, data sources, middleware instances and/or servers can be increased according to business needs, improving high-concurrency processing capacity; in addition, the payment clusters can provide disaster recovery for each other when necessary, further improving payment request processing performance.
Here, the payment request processing system comprises a user terminal, a front gateway configured with the front gateway rule, and a payment server that receives a payment instruction generated based on the interaction between the user terminal and the front gateway. The system likewise has a strong capacity for handling high concurrency and better payment request processing performance.
In summary, the method and system of the present application can cope with application scenarios involving a high volume of payment requests, improve the processing capacity of the payment service, and thereby effectively improve user experience.
In this specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (10)

1. A payment request processing method is characterized in that the method is applied to a system comprising a front gateway, a load balancer and a payment cluster; the payment request processing method comprises the following steps:
the front gateway receives payment requests and sorts the payment requests using a front gateway rule;
the load balancer receives the sorted payment requests and distributes them to the payment cluster;
the payment cluster processes the payment requests distributed by the load balancer;
wherein the front gateway sorting the payment requests using a front gateway rule specifically comprises:
the front gateway rate-limiting the payment requests using a rate-limiting priority rule.
2. The payment request processing method of claim 1,
wherein the rate-limiting priority rule, from highest to lowest priority, is:
rate limiting associated with an interface core field, rate limiting by interface requester ID, and unified interface rate limiting.
3. The payment request processing method of claim 2,
wherein, when the number of payment requests is higher than a preset threshold, the rules applied to all core interfaces remain unchanged, the non-core interfaces are configured with the rate-limiting priority rule, and the non-core interfaces are degraded when the number of payment requests is higher than the preset threshold.
4. The payment request processing method of claim 2, wherein data is collected at an interface associated with the payment requests to form buried-point cluster information, and the buried-point cluster information is stored in a database.
5. The payment request processing method of claim 4,
wherein the front gateway sorting the payment requests using a front gateway rule further comprises:
the front gateway sorting the rate-limited payment requests using a routing priority rule,
wherein the routing priority rule, from highest to lowest priority, is:
payment order ID, payment acquiring scene, the buried-point cluster information, user ID, and random.
6. The payment request processing method of claim 5,
wherein, if the front gateway calls a backend interface within a specified time and obtains a definite error code, the call is retried according to the routing priority rule.
7. The payment request processing method of claim 1, wherein one payment cluster is composed of a plurality of payment partitions, and each payment partition of the payment cluster is deployed from the same codebase, with a different configuration, as a configuration file, a data source, middleware and a server.
8. The payment request processing method of claim 7, wherein the number of payment clusters is adjusted according to the number of payment requests.
9. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
10. A payment request processing system comprising a user terminal, a front gateway and a payment server, wherein the payment server receives a payment instruction generated based on an interaction between the user terminal and the front gateway and processes the payment instruction by using the method of any one of claims 1 to 8.
CN201911141171.5A 2019-11-20 2019-11-20 Payment request processing method, system and storage medium Active CN112825045B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911141171.5A CN112825045B (en) 2019-11-20 2019-11-20 Payment request processing method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911141171.5A CN112825045B (en) 2019-11-20 2019-11-20 Payment request processing method, system and storage medium

Publications (2)

Publication Number Publication Date
CN112825045A CN112825045A (en) 2021-05-21
CN112825045B true CN112825045B (en) 2022-12-30

Family

ID=75906912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911141171.5A Active CN112825045B (en) 2019-11-20 2019-11-20 Payment request processing method, system and storage medium

Country Status (1)

Country Link
CN (1) CN112825045B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113890853B (en) * 2021-09-27 2024-04-19 北京字跳网络技术有限公司 Current limiting method and device, storage medium and electronic equipment


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8645272B2 (en) * 2011-06-24 2014-02-04 Western Union Financial Services, Inc. System and method for loading stored value accounts

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101826186A (en) * 2009-02-13 2010-09-08 Bank of America Corp System, method and program for managing payment processing in an aggregate payment concentrator
CN105208133A (en) * 2015-10-20 2015-12-30 上海斐讯数据通信技术有限公司 Server, load balancer as well as server load balancing method and system
CN106453564A (en) * 2016-10-18 2017-02-22 北京京东尚科信息技术有限公司 Elastic cloud distributed massive request processing method, device and system
CN109360066A (en) * 2018-10-25 2019-02-19 广元量知汇科技有限公司 B2C e-commerce system Internet-based
CN110335030A (en) * 2019-06-27 2019-10-15 上海数禾信息科技有限公司 Pay route system, method
CN110417671A (en) * 2019-07-31 2019-11-05 中国工商银行股份有限公司 The current-limiting method and server of data transmission

Also Published As

Publication number Publication date
CN112825045A (en) 2021-05-21

Similar Documents

Publication Publication Date Title
CN101147130B (en) Method and system for selecting a resource manager to satisfy a service request
CN107545338B (en) Service data processing method and service data processing system
US9009599B2 (en) Technique for handling URLs for different mobile devices that use different user interface platforms
US8832648B2 (en) Managing dynamic configuration data for a set of components
CN108492068B (en) Method and device for path planning
CN102880956A (en) Payment server and payment channel integration method
CN109597643A (en) Using gray scale dissemination method, device, electronic equipment and storage medium
CN110650209B (en) Method and device for realizing load balancing
CN112884405A (en) Inquiry system and scheduling method thereof
CN112654003A (en) Method, device, storage medium and electronic equipment for sending message
CN106952085B (en) Method and device for data storage and service processing
CN114118888A (en) Order ex-warehouse method and device
CN112825045B (en) Payment request processing method, system and storage medium
KR20140031429A (en) Item recommend system and method thereof, apparatus supporting the same
CN109271438A (en) A kind of data bank access method and its system
CN113704295A (en) Service request processing method and system and electronic equipment
CN110930101A (en) Method, device, electronic equipment and readable medium for determining delivery time of order
CN104954496A (en) Cloud resource allocation method and device
CN111835570B (en) Global state persistence decentralized blockchain network node device and working method
CN104579793B (en) The dispatching method and system of Internet resources
CN109412873B (en) Configuration updating method and device, terminal equipment and computer storage medium
CN112488803A (en) Favorite storage access method and device, equipment and medium thereof
CN109905446B (en) Service processing method, server and computer storage medium
CN112785358A (en) Order fulfillment merchant access method and device
US10019248B2 (en) System and method for service matching of instant message software

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant