CN110765109A - Service request response method, device, equipment and storage medium


Info

Publication number
CN110765109A
Authority
CN
China
Prior art keywords
service request
data center
database
target data
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911016888.7A
Other languages
Chinese (zh)
Inventor
陈艺辉
陈文极
林震宇
徐立宇
林晨
林智泓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Bank Corp
Original Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp, CCB Finetech Co Ltd filed Critical China Construction Bank Corp
Priority to CN201911016888.7A
Publication of CN110765109A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21Design, administration or maintenance of databases
    • G06F16/211Schema design and management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The embodiments of the present invention disclose a method, an apparatus, a device, and a storage medium for responding to a service request. The method comprises the following steps: receiving a service request, where the service request includes user identity information of the service request; determining a target data center according to the user identity information of the service request; reading the weight of each database in the target data center; and determining a target database according to the weight of each database in the target data center, and responding to the service request according to the target database. The technical solution provided by the present application improves the capacity for handling the traffic generated by service requests and reduces server costs.

Description

Service request response method, device, equipment and storage medium
Technical Field
The present invention relates to computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for responding to a service request.
Background
With the development of computer technology, the number of network users keeps growing and the volume of requests for network services increases, so the amount of data to be transmitted over the Internet, that is, network traffic, keeps growing as well.
In the prior art, network traffic is usually processed by a single device, and when the traffic exceeds the device's load capacity, the device's hardware is upgraded.
However, hardware upgrades not only increase cost; even devices with excellent performance eventually cannot keep up with the ever-growing traffic processing requirements.
Disclosure of Invention
The invention provides a method, an apparatus, a device and a storage medium for responding to a service request, which improve the capacity for handling the traffic generated by service requests and reduce server costs.
In a first aspect, an embodiment of the present invention provides a method for responding to a service request, where the method includes:
receiving a service request; the service request comprises user identity information of the service request;
determining a target data center according to the user identity information of the service request;
reading the weight of each database in the target data center;
and determining a target database according to the weight of each database in the target data center, and responding to the service request according to the target database.
In a second aspect, an embodiment of the present invention further provides a device for responding to a service request, where the device includes:
a service request receiving module, configured to receive a service request; the service request comprises user identity information of the service request;
the target data center determining module is used for determining a target data center according to the user identity information of the service request;
the weight reading module is used for reading the weight of each database in the target data center;
and the target database determining module is used for determining a target database according to the weight of each database in the target data center and responding to the service request according to the target database.
In a third aspect, an embodiment of the present invention further provides an apparatus, where the apparatus includes:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for responding to a service request as described above.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for responding to the service request as described above.
In the embodiment of the present invention, a service request is received, where the service request includes user identity information of the service request; a target data center is determined according to the user identity information of the service request; the weight of each database in the target data center is read; a target database is determined according to these weights, and the service request is responded to according to the target database. This solves the problem of excessive load when processing network traffic, improves the capacity for handling the traffic generated by service requests, and reduces server costs.
Drawings
Fig. 1 is a flowchart of a method for responding to a service request according to an embodiment of the present invention;
fig. 2 is a flowchart of a service request response according to a second embodiment of the present invention;
fig. 3 is a flowchart of a response to a service request according to a third embodiment of the present invention;
fig. 4 is an architecture diagram of a data center according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of a service request response apparatus according to a fourth embodiment of the present invention;
fig. 6 is a schematic structural diagram of an apparatus according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a method for responding to a service request according to an embodiment of the present invention. This embodiment is applicable to the case where a target database is looked up according to a service request. The method can be executed by the service request response apparatus provided by the embodiments of the present invention, and specifically includes the following steps:
s110, receiving a service request; wherein the service request includes user identity information of the service request.
The party responding to the service request may be any object capable of processing the service request, such as a website, an application (APP), or a server. The service request is a request for realizing a service function and may be generated based on a user operation. The user identity information contains information specific to the user who issued the service request, such as a user ID. Each user corresponds to unique identity information.
And S120, determining a target data center according to the user identity information of the service request.
The data center may be a service platform providing Internet services and contains at least a computing unit and a database. The corresponding target data center is found according to the content of the user identity information; for example, if the user identity information contains information of the first data center, the service request is distributed to the first data center.
And S130, reading the weight of each database in the target data center.
The weight of a database reflects the share of service requests that the database handles, i.e., the proportion of service requests assigned to each database, where each database stores the same data. For example, if there are two database groups A and B, where A is weighted sixty percent and B forty percent, group A handles sixty percent of the service requests and group B handles forty percent.
S140, determining a target database according to the weight of each database in the target data center, and responding to the service request according to the target database.
The target database is determined according to the weight of each database in the target data center, and the service request is distributed to the corresponding database according to these weights. For example, with two database groups A and B weighted sixty percent and forty percent respectively, each of five incoming service requests is assigned to group A with sixty percent probability and to group B with forty percent probability. The target database may be a database group or a single database.
Responding to the service request according to the target database means processing the service request according to the content stored in that database.
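The weighted distribution described above can be illustrated with a minimal sketch. The class name, the DatabaseGroup record and the use of ThreadLocalRandom are illustrative assumptions rather than part of the patented implementation; the sketch only shows a cumulative-weight random pick over database groups.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Minimal sketch of weight-based target database selection.
// DatabaseGroup and the concrete weight values are illustrative assumptions.
public class WeightedDatabaseSelector {

    public record DatabaseGroup(String name, int weightPercent) {}

    // Picks a database group with probability proportional to its weight.
    public static DatabaseGroup select(List<DatabaseGroup> groups) {
        int total = groups.stream().mapToInt(DatabaseGroup::weightPercent).sum();
        int point = ThreadLocalRandom.current().nextInt(total); // 0 .. total-1
        int cumulative = 0;
        for (DatabaseGroup group : groups) {
            cumulative += group.weightPercent();
            if (point < cumulative) {
                return group;
            }
        }
        return groups.get(groups.size() - 1); // not reached when weights are consistent
    }

    public static void main(String[] args) {
        // Two database groups A (60%) and B (40%), as in the example above.
        List<DatabaseGroup> groups = List.of(
                new DatabaseGroup("A", 60),
                new DatabaseGroup("B", 40));
        DatabaseGroup target = select(groups);
        System.out.println("Service request routed to database group " + target.name());
    }
}
```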
According to the technical solution provided by the embodiment of the present invention, a service request is received, where the service request includes user identity information of the service request; a target data center is determined according to the user identity information; the weight of each database in the target data center is read; a target database is determined according to these weights, and the service request is responded to according to the target database. This solves the problem of excessive load when processing network traffic, improves the capacity for handling the traffic generated by service requests, and reduces server costs.
On the basis of the above technical solution, optionally, determining the target data center according to the user identity information of the service request includes:
and determining the target data center according to the fragment field in the user identity information and the mapping relation between the fragment field in the preset configuration center and the target data center.
The fragment field in the user identity information is used to assign the service request to the corresponding data center and may be a fixed number of digits in the ID code representing the user identity. The configuration center is used to store and modify configuration-related data and may be implemented with Nacos. When the configuration center is changed, the change is pushed to the computing units, realizing dynamic configuration modification. A mapping relation between fragment fields and target data centers is preset in the configuration center; when a fragment field matches the mapping, the service request is distributed to the corresponding data center. For example, the fifth and sixth digits from the end of the ID code may serve as the fragment information (this embodiment is not limited thereto), where 00-39 corresponds to the first data center and 40-99 corresponds to the second data center. If the user ID is 100000591100, the fragment information is 59, and the service request is assigned to the second data center. The advantage of this arrangement is that requests of the same user always fall into the same data center, which increases the speed of processing data.
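A minimal sketch of the fragment-field routing described above follows. The class and method names and the hard-coded range mapping are illustrative assumptions; in the described scheme the mapping would be read from the configuration center (e.g., Nacos) rather than hard-coded.

```java
// Minimal sketch: route a service request to a data center by the fragment
// field of the user ID (the 5th and 6th digits from the end in this example).
// The hard-coded ranges stand in for the mapping held in the configuration center.
public class FragmentRouter {

    // Extracts the two-digit fragment field from the user ID.
    static int fragmentOf(String userId) {
        int end = userId.length() - 4;            // drop the last four digits
        return Integer.parseInt(userId.substring(end - 2, end));
    }

    // Maps the fragment field to a data center per the preset mapping:
    // 00-39 -> first data center, 40-99 -> second data center.
    static String dataCenterFor(String userId) {
        int fragment = fragmentOf(userId);
        return fragment <= 39 ? "DC-1" : "DC-2";
    }

    public static void main(String[] args) {
        String userId = "100000591100";            // fragment field = 59
        System.out.println(userId + " -> " + dataCenterFor(userId)); // prints DC-2
    }
}
```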
On the basis of the above technical solution, optionally, the target data centers are deployed in two different areas of a first city and are divided into a first data center and a second data center; different services corresponding to the service requests are distributed between the first data center and the second data center, and the two centers back each other up.
The first data center and the second data center are defined relative to a city; for example, Shenzhen has a first data center and a second data center, and Beijing also has a first data center and a second data center. When either the first data center or the second data center fails, the other can take over all service requests in the first city. The advantage of this arrangement is that service interruption is prevented when one data center fails.
On the basis of the above technical solution, optionally, a further target data center is deployed in a second city as a third data center, and the third data center is used for backing up all the services of the first city.
The second city is defined relative to the first city. For example, if Beijing is the first city and Wuhan is the second city, then Beijing has a first data center and a second data center while Wuhan has a third data center; relative to itself, Wuhan also has its own first and second data centers, and Beijing likewise hosts a third data center as a backup for other cities. The first city and the second city thus serve as remote backups for each other, and service is lost only if the data centers of both cities fail at the same time.
The third data center is used for backing up all the services of the first city; it takes over all service requests of the first city only when the first data center and the second data center fail at the same time, and otherwise serves purely as a backup. Synchronization between different cities can use Nacos as the registration center, with the Nacos-Sync component used to realize service synchronization. The advantage of this arrangement is that service interruption is prevented when all data centers in a city fail due to problems such as a city-wide network outage.
On the basis of the above technical solution, optionally, the target database includes at least a main database and a first standby database, where the first standby database is a backup of the main database; the main database and the first standby database are located in the same data center.
When the main database goes down, the virtual IP address bound to the main database is unbound and rebound to the first standby database, and the data source is switched to the first standby database. Replication between the main database and the first standby database is synchronous. This prevents service paralysis when the main database goes down because of hardware failures or similar problems.
On the basis of the above technical solution, optionally, the target database further includes a second standby database and/or a third standby database, both of which are backups of the main database; the second standby database is located in the second data center, and the third standby database is located in the third data center.
When the first standby database goes down, there are three cases:
1) If the main database is still down, the data source is switched to the second standby database; data between the second standby database and the main database is replicated semi-synchronously.
2) If the second standby database also goes down, the data source is switched to the third standby database; data between the third standby database and the main database is replicated asynchronously.
3) If the main database has recovered, the data source is switched back to the main database.
The advantage of setting up the second and third standby databases is that service interruption is prevented when both the main database and the first standby database go down due to hardware failure or similar problems.
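The failover order described above can be summarized in a small sketch. The enum values, method names and health-check callback are illustrative assumptions; the actual switching of the virtual IP address and of the data source would be carried out by the database and middleware layers.

```java
import java.util.List;
import java.util.function.Predicate;

// Minimal sketch of the data-source failover order:
// main -> first standby (synchronous) -> second standby (semi-synchronous)
// -> third standby (asynchronous), switching back to main once it recovers.
public class DataSourceFailover {

    public enum Node { MAIN, FIRST_STANDBY, SECOND_STANDBY, THIRD_STANDBY }

    private static final List<Node> ORDER = List.of(
            Node.MAIN, Node.FIRST_STANDBY, Node.SECOND_STANDBY, Node.THIRD_STANDBY);

    // Returns the data source to use, given a health check for each node.
    public static Node currentDataSource(Predicate<Node> isHealthy) {
        // If the main database has recovered, always switch back to it.
        if (isHealthy.test(Node.MAIN)) {
            return Node.MAIN;
        }
        for (Node node : ORDER) {
            if (isHealthy.test(node)) {
                return node;
            }
        }
        throw new IllegalStateException("no healthy database available");
    }

    public static void main(String[] args) {
        // Example: main and first standby are down, second standby is healthy.
        Node target = currentDataSource(node ->
                node == Node.SECOND_STANDBY || node == Node.THIRD_STANDBY);
        System.out.println("Data source switched to " + target); // SECOND_STANDBY
    }
}
```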
Optionally, among all cities where data centers are deployed, a global set is deployed in one data center of one fixed city and is used for storing global configuration parameters; city sets are deployed in all other data centers and are used for synchronizing the global configuration parameters.
The fixed city is the single city selected from all cities where data centers are deployed. For example, if data centers are deployed in Beijing, Shanghai and Shenzhen, the global set is deployed only in one data center in Beijing and contains global public data such as the configuration center, user information, user accounts, permissions, state management, and monitoring and analysis data.
The global set is readable and writable, i.e., parameters can only be modified through the global set, which guarantees the safety of the public data. The city sets are used for synchronizing the global configuration parameters and are read-only. When a data center needs to read public parameters, the data is fetched from the city set of its own city.
On the basis of the above embodiments, setting the global set and the city sets in this way improves the data reading speed and reduces server pressure.
Example two
Fig. 2 is a flowchart of a service request response according to a second embodiment of the present invention. This technical solution supplements the process of reading the weight of each database in the target data center. Compared with the above solution, reading the weight of each database in the target data center in this solution includes:
generating service information according to the user identity information of the service request and service configuration in a preset configuration center;
determining the weight of each database in the target data center according to the weight field in the service information; or,
and determining the weight of each database in the target data center according to a weight rule in the preset configuration center.
Specifically, the flow of the service request response method is shown in fig. 2:
s210, receiving a service request; wherein the service request includes user identity information of the service request.
S220, determining a target data center according to the user identity information of the service request.
And S230, generating service information according to the user identity information of the service request and service configuration in a preset configuration center.
The service configuration in the preset configuration center provides the configuration corresponding to the service request, for example, the configuration of the database corresponding to the service request. The service information is generated per user request, that is, each time a user sends a request, a piece of service information is generated for it. Part of the information, such as the fragment information, is provided by the user identity information; another part, such as the database configuration information, is provided by the service configuration in the preset configuration center; together they form the service information. The service configuration provided by the configuration center can be modified in the configuration center. For example, the service information may take the form of a service ID 20190000451101, where 45 is the fragment information and 01 is the database configuration information, and 01 can be adjusted to 02, 03, etc. in the configuration center.
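A minimal sketch of how such a service ID could be decomposed is shown below. The field positions follow the 20190000451101 example above; the record and method names are illustrative assumptions.

```java
// Minimal sketch: split a service ID such as 20190000451101 into its parts,
// following the example above (fragment field "45", database configuration "01").
// The field positions and names are illustrative assumptions.
public class ServiceIdParser {

    public record ServiceInfo(String serviceId, String fragmentField, String weightField) {}

    static ServiceInfo parse(String serviceId) {
        int len = serviceId.length();
        String weightField = serviceId.substring(len - 2);            // last two digits
        String fragmentField = serviceId.substring(len - 6, len - 4); // two digits before that block
        return new ServiceInfo(serviceId, fragmentField, weightField);
    }

    public static void main(String[] args) {
        ServiceInfo info = parse("20190000451101");
        System.out.println("fragment=" + info.fragmentField() + ", weight field=" + info.weightField());
        // prints: fragment=45, weight field=01
    }
}
```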
S240, determining the weight of each database in the target data center according to the weight field in the service information; or determining the weight of each database in the target data center according to a weight rule in the preset configuration center.
The weight field in the service information is the database configuration part provided by the service configuration of the preset configuration center; it is embodied as the weight field of the service information and is used to determine the weight of each database in the target data center. It may be a fixed number of digits in the ID code carrying the service information. For example, if the service ID is 20190000451101 and the last two digits are the weight field, then 01 means the service request enters database 01, and the weight of database 01 is one hundred percent.
When the weight field is XX, no target database is specified, and the weight of each database in the target data center is determined according to the weight rule in the preset configuration center. The weight rules are preset and govern the database weights; they can be grouped by the fragment digits, e.g., service requests whose fragment digits are 00-21 form a first group, those with fragment digits 22-41 form a second group, and so on. Taking one group of weight rules as an example, suppose there are three database groups 00, 01 and 02, and the weight rule is: database 00 has a weight of twenty percent, database 01 thirty percent and database 02 fifty percent. Then, for ten incoming service requests, each is assigned to database 00 with twenty percent probability, to database 01 with thirty percent probability and to database 02 with fifty percent probability. The weight rules of different groups may be the same or different, and the concrete weight rules can be changed outside the program through the configuration center, which guards against situations such as a target database going down.
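The two-way decision described above, i.e. use the weight field when it names a concrete database group and fall back to the configuration-center weight rule when it is XX, could look roughly like the sketch below. The hard-coded weight rule, the database names and the cumulative-weight pick are illustrative assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

// Minimal sketch: choose the target database group from the weight field of
// the service information, falling back to a configuration-center weight rule
// (here hard-coded) when the weight field is "XX".
public class TargetDatabaseResolver {

    // Illustrative stand-in for a weight rule fetched from the configuration center:
    // database 00 -> 20%, database 01 -> 30%, database 02 -> 50%.
    static Map<String, Integer> weightRuleFromConfigCenter() {
        Map<String, Integer> rule = new LinkedHashMap<>();
        rule.put("00", 20);
        rule.put("01", 30);
        rule.put("02", 50);
        return rule;
    }

    static String resolve(String weightField) {
        if (!"XX".equals(weightField)) {
            return weightField;                    // field names a concrete database group
        }
        Map<String, Integer> rule = weightRuleFromConfigCenter();
        int total = rule.values().stream().mapToInt(Integer::intValue).sum();
        int point = ThreadLocalRandom.current().nextInt(total);
        int cumulative = 0;
        for (Map.Entry<String, Integer> entry : rule.entrySet()) {
            cumulative += entry.getValue();
            if (point < cumulative) {
                return entry.getKey();
            }
        }
        return rule.keySet().iterator().next();    // not reached with consistent weights
    }

    public static void main(String[] args) {
        System.out.println(resolve("01")); // always database group 01
        System.out.println(resolve("XX")); // 00, 01 or 02 per the weight rule
    }
}
```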
And S250, determining a target database according to the weight of each database in the target data center, and responding to the service request according to the target database.
On the basis of the above embodiments, in this embodiment the configuration part of the service information is generated from the service configuration provided by the configuration center, i.e., the configuration center can change the service information at any time, which achieves dynamic configuration. Meanwhile, weights are assigned to the target databases and service requests are distributed to the corresponding databases, which reduces the load of any single database group and improves the capacity for handling the traffic generated by service requests.
EXAMPLE III
Fig. 3 is a flowchart of a service request response according to a third embodiment of the present invention. This technical solution supplements the process after the target data center is determined according to the user identity information of the service request. Compared with the above solution, after the target data center is determined according to the user identity information of the service request, the method further includes:
if the target data center comprises at least two computing units, acquiring a computing unit distribution rule of the target data center;
and determining a target computing unit in the at least two computing units according to the computing unit distribution rule.
Specifically, the flow of the service request response method is shown in fig. 3:
s310, receiving a service request; the service request comprises user identity information of the service request;
s320, determining a target data center according to the user identity information of the service request;
s330, if the target data center comprises at least two computing units, obtaining a computing unit distribution rule of the target data center;
the computing unit is used for performing logic computation in the service execution process. The calculation unit distribution rule is used for distributing the calculation units corresponding to the services. The calculation unit allocation rules are associated with the target data centers, and the calculation unit allocation rules for each target data center are not necessarily the same.
Fig. 4 is an architecture diagram of a data center. As shown in fig. 4, the computing units include a computing unit A and a computing unit B, which are identical, serve externally at the same time, and point to the same database set; the database set contains the database groups, such as database group 00, database group 01, database group 02, up to database group N. When the system is upgraded, one computing unit, e.g., computing unit A, is upgraded first while computing unit B continues to provide services; when computing unit A finishes upgrading, computing unit B is upgraded and computing unit A resumes providing services. When computing unit B finishes upgrading, all computing units have been upgraded, and the external service is almost never interrupted during the upgrade. This realizes zero-downtime upgrades and improves the processing efficiency of the service application.
S340, determining a target calculation unit in at least two calculation units according to the calculation unit distribution rule.
The target computing unit among the at least two computing units is determined, and the service request is distributed to that target computing unit according to the computing unit allocation rule. The allocation rule may designate a fixed computing unit, or may allocate randomly according to a preset rule, which is not limited in this embodiment.
S350, reading the weight of each database in the target data center;
s360, determining a target database according to the weight of each database in the target data center, and responding to the service request according to the target database.
On the basis of the foregoing embodiment, in this embodiment, if the target data center includes at least two computing units, the computing unit allocation rule of the target data center is acquired, and the target computing unit among the at least two computing units is determined according to that rule. This solves the problem that a single processing unit cannot meet the load requirement as the volume of service applications keeps growing, reduces the time users spend waiting for a response, improves the capacity for handling the traffic generated by service requests, and reduces server costs.
On the basis of the above technical solution, optionally, the computing unit allocation rule includes a preset ratio of the service request processing quantities of the computing units;
correspondingly, determining a target computing unit of the at least two computing units according to the computing unit allocation rule includes:
and determining a target computing unit from the at least two computing units according to the relation between the current service request processing quantity ratio of the at least two computing units included in the target data center and the preset service request processing quantity ratio.
For example, if the computing unit allocation rule specifies that computing unit A takes thirty percent and computing unit B takes seventy percent, then a service application has a thirty percent probability of being processed by computing unit A, in which case the target computing unit is computing unit A, and a seventy percent probability of being processed by computing unit B, in which case the target computing unit is computing unit B.
The concrete computing unit allocation rule can be changed through the configuration center, which guards against situations such as the target computing unit failing.
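A minimal sketch of this allocation logic follows, assuming a simple in-memory counter per computing unit; the counters, unit names and the 30/70 preset ratio are illustrative assumptions.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch: send the next service request to computing unit A whenever
// A's current share of processed requests is at or below its preset share,
// otherwise to computing unit B. Counters and the 30/70 ratio are assumptions.
public class ComputeUnitAllocator {

    static final int PRESET_A_PERCENT = 30;   // computing unit A: 30%

    static final AtomicLong processedByA = new AtomicLong();
    static final AtomicLong processedByB = new AtomicLong();

    static String allocate() {
        long a = processedByA.get();
        long b = processedByB.get();
        long total = a + b;
        // Compare A's current share against the preset ratio; start with A when empty.
        boolean pickA = total == 0 || a * 100 <= total * PRESET_A_PERCENT;
        if (pickA) {
            processedByA.incrementAndGet();
            return "A";
        }
        processedByB.incrementAndGet();
        return "B";
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            System.out.println("request " + i + " -> computing unit " + allocate());
        }
        // roughly 3 requests go to A and 7 to B
    }
}
```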
On the basis of the foregoing embodiment, in this embodiment the computing unit allocation rule includes a preset ratio of the service request processing quantities of the computing units; correspondingly, the target computing unit is determined from the at least two computing units according to the relation between the current ratio of service requests processed by them and the preset ratio. This shares the workload among the computing units, improves the capacity for handling the traffic generated by service requests, and reduces server costs.
Example four
Fig. 5 is a schematic structural diagram of a service request response apparatus according to a fourth embodiment of the present invention. The apparatus can be implemented in hardware and/or software, can execute the service request response method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the executed method. As shown in fig. 5, the apparatus includes:
a service request receiving module 510, configured to receive a service request; the service request comprises user identity information of the service request;
a target data center determining module 520, configured to determine a target data center according to the user identity information of the service request;
a weight reading module 530, configured to read the weight of each database in the target data center;
and a target database determining module 540, configured to determine a target database according to the weight of each database in the target data center, and respond to the service request according to the target database.
According to the technical solution provided by the embodiment of the present invention, a service request is received, where the service request includes user identity information of the service request; a target data center is determined according to the user identity information; the weight of each database in the target data center is read; a target database is determined according to these weights, and the service request is responded to according to the target database. This solves the problem of excessive load when processing network traffic, improves the capacity for handling the traffic generated by service requests, and reduces server costs.
On the basis of the above technical solutions, optionally, the target data center determining module includes:
and the target data center determining submodule is used for determining the target data center according to the fragment field in the user identity information and the mapping relation between the fragment field in the preset configuration center and the target data center.
On the basis of the above technical solutions, optionally, the weight reading module includes:
a service information generating module, configured to generate service information according to the user identity information of the service request and service configuration in a preset configuration center;
the first weight determining submodule is used for determining the weight of each database in the target data center according to the weight field in the service information; or,
and the second weight determining submodule is used for determining the weight of each database in the target data center according to the weight rule in the preset configuration center.
On the basis of the above technical solutions, optionally, the apparatus further includes:
an allocation rule obtaining module, configured to obtain a calculation unit allocation rule of the target data center if the target data center includes at least two calculation units, after the target data center determining module 520 determines the target data center; and
a calculation unit determining module, configured to determine a target calculation unit of the at least two calculation units according to the calculation unit allocation rule.
On the basis of the above technical solutions, optionally, the calculation unit allocation rule includes a ratio of a preset service request processing quantity of the calculation unit;
accordingly, the calculation unit determination module includes:
and the target calculation unit determining unit is used for determining the target calculation unit from the at least two calculation units according to the relation between the current service request processing quantity ratio of the at least two calculation units included in the target data center and the preset service request processing quantity ratio.
EXAMPLE five
Fig. 6 is a schematic structural diagram of an apparatus according to a fifth embodiment of the present invention. As shown in fig. 6, the apparatus includes a processor 60, a memory 61, an input device 62, and an output device 63; the number of processors 60 in the apparatus may be one or more, and one processor 60 is taken as an example in fig. 6; the processor 60, the memory 61, the input device 62 and the output device 63 in the device/terminal/server may be connected by a bus or in other ways, and connection by a bus is taken as an example in fig. 6.
The memory 61 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to a service request responding method in the embodiment of the present invention. The processor 60 executes various functional applications and data processing of the device by executing software programs, instructions and modules stored in the memory 61, namely, implements one of the above-described service request response methods.
The memory 61 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 61 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 61 may further include memory located remotely from the processor 60, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
EXAMPLE six
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a method for responding to a service request, the method including:
receiving a service request; the service request comprises user identity information of the service request;
determining a target data center according to the user identity information of the service request;
reading the weight of each database in the target data center;
and determining a target database according to the weight of each database in the target data center, and responding to the service request according to the target database.
Of course, the storage medium containing the computer-executable instructions provided by the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in a service request response method provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the above apparatus embodiment, the included units and modules are merely divided according to functional logic, but the division is not limited to the above as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for responding to a service request, comprising:
receiving a service request; the service request comprises user identity information of the service request;
determining a target data center according to the user identity information of the service request;
reading the weight of each database in the target data center;
and determining a target database according to the weight of each database in the target data center, and responding to the service request according to the target database.
2. The method of claim 1, wherein determining the target data center according to the user identity information of the service request comprises:
and determining the target data center according to the fragment field in the user identity information and the mapping relation between the fragment field in the preset configuration center and the target data center.
3. The method of claim 1, wherein reading the weight of each database in the target data center comprises:
generating service information according to the user identity information of the service request and service configuration in a preset configuration center;
determining the weight of each database in the target data center according to the weight field in the service information; or,
and determining the weight of each database in the target data center according to a weight rule in the preset configuration center.
4. The method of claim 1, wherein after determining the target data center according to the user identity information of the service request, the method further comprises:
if the target data center comprises at least two computing units, acquiring a computing unit distribution rule of the target data center;
and determining a target computing unit in the at least two computing units according to the computing unit distribution rule.
5. The method of claim 4, wherein the calculation unit allocation rule comprises a preset service request processing number ratio of calculation units;
correspondingly, determining a target computing unit of the at least two computing units according to the computing unit allocation rule includes:
and determining a target computing unit from the at least two computing units according to the relation between the current service request processing quantity ratio of the at least two computing units included in the target data center and the preset service request processing quantity ratio.
6. An apparatus for responding to a service request, comprising:
a service request receiving module, configured to receive a service request; the service request comprises user identity information of the service request;
the target data center determining module is used for determining a target data center according to the user identity information of the service request;
the weight reading module is used for reading the weight of each database in the target data center;
and the target database determining module is used for determining a target database according to the weight of each database in the target data center and responding to the service request according to the target database.
7. The apparatus of claim 6, wherein the target data center determination module comprises:
and the target data center determining submodule is used for determining the target data center according to the fragment field in the user identity information and the mapping relation between the fragment field in the preset configuration center and the target data center.
8. The apparatus of claim 7, wherein the weight reading module comprises:
a service information generating module, configured to generate service information according to the user identity information of the service request and service configuration in a preset configuration center;
the first weight determining submodule is used for determining the weight of each database in the target data center according to the weight field in the service information; or,
and the second weight determining submodule is used for determining the weight of each database in the target data center according to the weight rule in the preset configuration center.
9. An apparatus, characterized in that the apparatus comprises:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement a method of responding to a service request as claimed in any of claims 1-5.
10. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out a method of responding to a service request according to any one of claims 1 to 5.
CN201911016888.7A 2019-10-24 2019-10-24 Service request response method, device, equipment and storage medium Pending CN110765109A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911016888.7A CN110765109A (en) 2019-10-24 2019-10-24 Service request response method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110765109A true CN110765109A (en) 2020-02-07

Family

ID=69333761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911016888.7A Pending CN110765109A (en) 2019-10-24 2019-10-24 Service request response method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110765109A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150302037A1 (en) * 2014-04-18 2015-10-22 Wal-Mart Stores, Inc. System and method for storing and processing database requests
CN107622091A (en) * 2017-08-23 2018-01-23 阿里巴巴集团控股有限公司 A kind of data base query method and device
CN108011929A (en) * 2017-11-14 2018-05-08 平安科技(深圳)有限公司 Data request processing method, apparatus, computer equipment and storage medium
CN109428877A (en) * 2017-09-01 2019-03-05 百度在线网络技术(北京)有限公司 A kind of method and apparatus for by user equipment access operation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220919

Address after: 25 Financial Street, Xicheng District, Beijing 100033

Applicant after: CHINA CONSTRUCTION BANK Corp.

Address before: 25 Financial Street, Xicheng District, Beijing 100033

Applicant before: CHINA CONSTRUCTION BANK Corp.

Applicant before: Jianxin Financial Science and Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200207