CN112769776A - Distributed service response method, system, device and storage medium - Google Patents
- Publication number
- CN112769776A CN112769776A CN202011571078.0A CN202011571078A CN112769776A CN 112769776 A CN112769776 A CN 112769776A CN 202011571078 A CN202011571078 A CN 202011571078A CN 112769776 A CN112769776 A CN 112769776A
- Authority
- CN
- China
- Prior art keywords
- service process
- end service
- client
- service
- response
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/131—Protocols for games, networked simulations or virtual reality
- H04L41/5041—Network service management, e.g. ensuring proper service fulfilment according to agreements, characterised by the time relationship between creation and deployment of a service
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/51—Discovery or management of network services, e.g. service location protocol [SLP] or web services
Abstract
The embodiment of the application discloses a distributed service response method, system, device and storage medium, which relate to the technical field of computers. The method comprises the following steps: a first front-end service process receives a universal service request sent by a first client and sends the universal service request to a first back-end service process, wherein the first front-end service process belongs to a front-end service process group comprising a plurality of front-end service processes, the first back-end service process belongs to a back-end service process group comprising a plurality of back-end service processes, and universal service processing programs of a plurality of applications are deployed in each back-end service process; the first back-end service process selects and runs the corresponding universal service processing program according to the universal service request to obtain a general service response, and sends the general service response to the first front-end service process; and the first front-end service process receives the general service response and feeds it back to the first client. By adopting the method, the prior-art technical problem of repeated development of the back-end services of different mini-games can be solved.
Description
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a distributed service response method, a distributed service response system, a distributed service response device and a storage medium.
Background
With the development of electronic game technology, the mini-game ecosystem has gradually matured. A mini-game can be understood as a game with a small installation size and simple gameplay. After a mini-game is developed, its back-end service becomes an important guarantee of smooth operation: the back-end service handles the various services of the mini-game, such as two-player battles, friend management, points management, and leaderboard viewing. After a mini-game is completed, the corresponding back-end service needs to be deployed quickly, forming a closed loop between the mini-game and its back-end service. However, different mini-games share some services of the same type, which results in repeated development of each back-end service.
Disclosure of Invention
The embodiment of the application provides a distributed service response method, a distributed service response system, a distributed service response device and a storage medium, and aims to solve the technical problem of repeated development of back-end services of different mini-games in the prior art.
In a first aspect, an embodiment of the present application provides a distributed service response method, including:
a first front-end service process receives a universal service request sent by a first client, wherein the first front-end service process belongs to a front-end service process group, and the front-end service process group comprises a plurality of front-end service processes;
the first front-end service process sends the universal service request to a first back-end service process, the first back-end service process belongs to a back-end service process group, the back-end service process group comprises a plurality of back-end service processes, and a plurality of applied universal service processing programs are deployed in the back-end service processes;
the first back-end service process operates a corresponding general service processing program according to the general service request to obtain a general service response;
the first back-end service process sends the general service response to the first front-end service process;
and the first front-end service process receives the general service response and feeds the general service response back to the first client.
In a second aspect, an embodiment of the present application further provides a distributed service response system, including: the system comprises a front-end service process group and a back-end service process group, wherein the front-end service process group comprises a plurality of front-end service processes, the back-end service process group comprises a plurality of back-end service processes, and a plurality of general service processing programs of applications are deployed in the back-end service processes;
a first front-end service process receives a general service request sent by a first client, wherein the first front-end service process belongs to a front-end service process group;
the first front-end service process sends the universal service request to a first back-end service process, and the first back-end service process belongs to a back-end service process group;
the first back-end service process operates a corresponding general service processing program according to the general service request to obtain a general service response;
the first back-end service process sends the general service response to the first front-end service process;
and the first front-end service process receives the general service response and feeds the general service response back to the first client.
In a third aspect, an embodiment of the present application further provides a distributed service response apparatus, including:
the first request receiving module is configured in a first front-end service process and used for receiving a universal service request sent by a first client, wherein the first front-end service process belongs to a front-end service process group, and the front-end service process group comprises a plurality of front-end service processes;
a request sending module, configured to the first front-end service process, configured to send the universal service request to a first back-end service process, where the first back-end service process belongs to a back-end service process group, the back-end service process group includes multiple back-end service processes, and multiple application universal service processing programs are deployed in the back-end service processes;
the request response module is configured in the first back-end service process and used for operating a corresponding general service processing program according to the general service request so as to obtain a general service response;
a response sending module configured to the first back-end service process, and configured to send the generic service response to the first front-end service process;
and the response receiving module is configured in the first front-end service process and used for receiving the general service response and feeding the general service response back to the first client.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the distributed service response method according to the first aspect.
According to the distributed service response method, system, device and storage medium provided above, a first front-end service process receives a universal service request from a first client and forwards it to a first back-end service process; the first back-end service process, in which the universal service processing programs of multiple applications are deployed, runs the corresponding universal service processing program according to the request and sends the resulting general service response to the first front-end service process; the first front-end service process then feeds the general service response back to the first client. By deploying the universal service processing programs of different applications in the back-end service processes, the general services of different applications are handled by the back-end service process group, meeting the need for public service integration. In a mini-game scenario in particular, there is no need to deploy a separate service framework for each mini-game, i.e., no extra development code is required, which avoids repeated development and waste of human resources and effectively guarantees development efficiency and stability. Moreover, the strategy of distributed deployment with separated front-end and back-end service processes effectively avoids the problem that an exception in a single service process renders other services unavailable. With this service-decoupling approach, the front-end service processes maintain the client connections while the back-end service processes handle the specific general services without connecting to clients directly, so the services are more stable and system deployment is more convenient.
Drawings
Fig. 1 is a flowchart of a distributed service response method according to an embodiment of the present application;
fig. 2 is a flowchart of another distributed service response method according to an embodiment of the present application;
fig. 3 is a flowchart of another distributed service response method according to an embodiment of the present application;
fig. 4 is a first topology diagram provided by an embodiment of the present application;
FIG. 5 is a second topology diagram provided by an embodiment of the present application;
fig. 6 is a schematic diagram of a general service data flow provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a distributed service response system according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another distributed service response system provided in the embodiment of the present application;
fig. 9 is a schematic structural diagram of a distributed service response apparatus according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are for purposes of illustration and not limitation. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
It is to be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action or object from another entity or action or object without necessarily requiring or implying any actual such relationship or order between such entities or actions or objects. For example, a "first" and a "second" of a first front-end service process and a second front-end service process are used to distinguish between two different front-end service processes.
The distributed service response method provided by the embodiment of the application can be executed by a distributed service response system, and the distributed service response system can be regarded as a background service system used for providing services to clients. A client is a device used on the user side, which includes but is not limited to: mobile phones, tablet computers, notebooks, desktop computers, etc. At least one type of application is installed in the client. An application can be understood as an application program which, while running, interacts with the background service system through the client to realize various business responses. The embodiment does not limit the rule for dividing application types; for example, applications may be divided by function into games, office, photography, learning, shopping, and the like, or the types may be further refined according to application function, scenario, operation mode, and so on; games, for instance, can be refined into mini-games, large games, etc. Further, the content of a service may be determined according to the type of application; for example, when the application type is mini-game, the services include: friend management, game leaderboard viewing, points management, and the like. It is understood that a client may install multiple applications of the same type. If the type is game, the client may be installed with chess games, shooting games, team games, puzzle games, etc.
In an embodiment, the distributed service response system includes a front-end service process group and a back-end service process group, where the front-end service process group includes a plurality of front-end service processes. The back-end service process group comprises a plurality of back-end service processes. It can be understood that each front-end service process and each back-end service process may be integrated in one physical server or may be distributed in a plurality of physical servers, and in practical application, capacity expansion may be performed on the physical servers in use in combination with the access volume of the client. The access amount of the client can be understood as the service processing demand of the client. Further, each front-end service process and each back-end service process can be understood as a node (node) in the distributed service response system, that is, the distributed service response system is a framework built by using the node. Wherein a node refers to an endpoint of a network connection or a connection point of two (or more) lines. The node may be a processor, a controller or a workstation, and in the embodiment, the node is specifically a service process. It will be appreciated that the functions of the different nodes may be different and that the nodes, when interconnected by links, may act as control points in the network. In an embodiment, the function of the front-end service process includes performing data communication with a client, and the function of the back-end service process includes performing business processing of an application. 
For example, after receiving a service request sent by a client, a front-end service process forwards the service request to a back-end service process, and the back-end service process responds to the service request sent by the front-end service process and sends a service response to the front-end service process, so that the front-end service process feeds the service response back to the client. It should be noted that, in practical applications, besides the back-end service process can respond to the service request, the client and the front-end service process can also process part of the service request. Optionally, the distributed service response system further includes: and the storage service process is used for providing data storage service and can store data generated in the service providing process of the distributed service response system, such as business data storage, log records and the like. Optionally, a master service process may be selected between each front-end service process and each back-end service process in the distributed service response system in an election manner, at this time, the master service process may be understood as a master, and the remaining front-end service processes and back-end service processes may be understood as slaves, where the master plays a role in managing the slaves, such as managing service processes of the slaves, maintaining life cycles of the slaves, and building a communication bridge between the slaves.
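The forwarding flow just described can be sketched as a minimal in-process simulation. The class and field names below are hypothetical; a real deployment would communicate between processes over sockets or RPC rather than direct method calls:

```python
# Minimal sketch of the request flow: the front-end process holds the
# client connection and delegates business processing to a back-end process.

class BackendServiceProcess:
    """Runs the generic service handlers for the applications it supports."""

    def __init__(self, handlers):
        self.handlers = handlers  # maps service request type -> handler

    def respond(self, request):
        handler = self.handlers[request["type"]]
        return {"type": request["type"], "result": handler(request["payload"])}


class FrontendServiceProcess:
    """Communicates with the client and forwards requests to a back end."""

    def __init__(self, backend):
        self.backend = backend

    def handle_client_request(self, request):
        response = self.backend.respond(request)  # forward and wait
        return response                           # feed back to the client


backend = BackendServiceProcess({"echo": lambda payload: payload})
frontend = FrontendServiceProcess(backend)
print(frontend.handle_client_request({"type": "echo", "payload": "hi"}))
```

The point of the split is visible even at this scale: the back-end class knows nothing about clients, and the front-end class knows nothing about business logic.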
When the distributed service response method is executed based on the distributed service response system, the following contents may be included. Specifically, fig. 1 is a flowchart of a distributed service response method provided in the embodiment of the present application.

Step 110, a first front-end service process receives a universal service request sent by a first client, wherein the first front-end service process belongs to a front-end service process group, and the front-end service process group comprises a plurality of front-end service processes.
For example, when the type is a mini-game, services such as match-up, friend management, score (game score) management, game ranking list viewing, operation record and online match-up data real-time synchronization can be provided in different mini-game applications, and the services can be understood as general services in the mini-game applications. The common service may also be denoted as a common service, which specifically refers to the same service provided in different applications. Optionally, the processing logic of the common service in different applications is the same, for example, when the common service is to view a leaderboard, the corresponding processing logic is to search all players of the game (which may be all network players or players in the service area where the game is currently located), and generate the leaderboard according to the points and/or the grades of all players. The generic traffic may be determined by the back-end service process (or other server). In one embodiment, the services which can be responded by the backend service process among different applications are divided, and then the general services among different applications are extracted according to the contents of the services. It should be noted that, in the embodiment, the general service is extracted from the same type of application as an example, and in practical application, the general service may also be extracted from different types of applications. Optionally, after the general service is extracted, each application may determine the general service owned by itself.
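The leaderboard processing logic mentioned above can be sketched as follows; the player records and field names are illustrative, and a real implementation would fetch all players of the game from the storage service:

```python
# Sketch of the generic leaderboard service: rank all players of a game
# by their points, highest first (field names are illustrative).

def build_leaderboard(players):
    """Return the players sorted by points in descending order."""
    return sorted(players, key=lambda p: p["points"], reverse=True)


players = [
    {"name": "ann", "points": 120},
    {"name": "bob", "points": 300},
    {"name": "cat", "points": 210},
]
leaderboard = build_leaderboard(players)
print([p["name"] for p in leaderboard])  # ['bob', 'cat', 'ann']
```

Because this logic is identical across applications, it is exactly the kind of general service that can be deployed once in the back-end service processes.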
Further, the universal service request refers to a request for implementing a universal service, and the content included in the request may be set according to the actual situation. In the embodiment, the universal service request is generated by the client and sent to a front-end service process; the client sending the universal service request is denoted as the first client, and the front-end service process currently receiving the universal service request is denoted as the first front-end service process. Specifically, the first client may directly send the universal service request to the first front-end service process, or the first client may first establish a network connection with the first front-end service process and send the universal service request over that connection. The embodiment takes the latter case as an example: after the connection is established, the universal service request generated by the first client can be sent to the first front-end service process. The embodiment does not limit the connection mode between the first client and the first front-end service process; for example, a socket connection may be used. Optionally, the first client selects the first front-end service process from the front-end service process group at random, or the front-end service process group selects the first front-end service process in a load-balancing manner.
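The two selection strategies just mentioned can be sketched as follows; the process names and the least-connections load metric are illustrative assumptions, since the embodiment does not fix a particular load-balancing algorithm:

```python
import random

# Sketch of front-end process selection: either the client picks one at
# random, or the group load-balances (here: fewest active connections).

def pick_random(frontends):
    """Random selection, as done by the client."""
    return random.choice(frontends)


def pick_least_loaded(frontends, connections):
    """Load-balanced selection: the process with the fewest connections."""
    return min(frontends, key=lambda f: connections[f])


frontends = ["fe-1", "fe-2", "fe-3"]
connections = {"fe-1": 12, "fe-2": 3, "fe-3": 8}
print(pick_least_loaded(frontends, connections))  # fe-2
```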
In one embodiment, the first client generates the universal service request; specifically, the universal service request is generated for an application in the first client. After the first client generates the universal service request, it may first process the request, so as to ensure that the universal service request can be sent to the first front-end service process accurately, stably and in real time. In the embodiment, the universal service request is obtained by the first client performing dictionary compression and format conversion on the original universal service request. The original universal service request refers to the universal service request as generated by the first client; the universal service request sent to the first front-end service process is obtained after dictionary compression and format conversion of the original request. Illustratively, the original universal service request is JSON data, where JSON (JavaScript Object Notation) is a lightweight data exchange format, and the JSON data is generated when an application of the first client issues a universal service request. The first client then performs dictionary compression on the JSON data and format conversion on the compressed data to obtain binary data; this binary data can be regarded as the resulting universal service request, which is then sent to the first front-end service process. Dictionary compression is a compression method whose rules can be set according to the actual situation; through dictionary compression, the storage space of the JSON data can be reduced while its effective content is retained, improving the efficiency of transmitting, storing and processing the JSON data.
Format conversion is realized by Protobuf (Google Protocol Buffers), a tool library providing an efficient protocol data exchange format; in the embodiment, conversion efficiency can be guaranteed when format conversion is performed with Protobuf. It can be understood that, in practical applications, the processing rule for the original universal service request may also be adjusted according to the specific requirements of the application.
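The client-side pipeline can be sketched as follows. Two simplifications are assumed: the key dictionary is a made-up example, and plain UTF-8 JSON bytes stand in for the Protocol Buffers encoding that the embodiment actually uses:

```python
import json

# Sketch of the client pipeline: dictionary-compress the JSON keys using
# a shared key dictionary, then convert the result to binary data.
# KEY_DICT and the UTF-8 encoding are illustrative simplifications.

KEY_DICT = {"application_id": "a", "service_request_type": "t", "payload": "p"}


def dict_compress(obj):
    """Replace long field names with short tokens from the shared dictionary."""
    return {KEY_DICT.get(k, k): v for k, v in obj.items()}


def to_binary(obj):
    """Serialize to compact binary (a stand-in for Protobuf encoding)."""
    return json.dumps(obj, separators=(",", ":")).encode("utf-8")


original = {"application_id": "chess", "service_request_type": "leaderboard", "payload": {}}
wire = to_binary(dict_compress(original))
print(len(wire) < len(to_binary(original)))  # True: compressed form is smaller
```

The effective content is fully retained (the dictionary is reversible), which matches the stated goal of reducing storage and transmission cost without losing information.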
Step 120, the first front-end service process sends the universal service request to a first back-end service process, wherein the first back-end service process belongs to a back-end service process group, the back-end service process group comprises a plurality of back-end service processes, and universal service processing programs of a plurality of applications are deployed in the back-end service processes.
In one embodiment, each backend service process of the backend service process group is a stateless node, and the service provided by the backend service process group is a stateless service. The stateless service separates the life cycle of the service process from the life cycle of the service state, so that the service state cannot be lost when the service process is hung up. Furthermore, each backend service process is deployed with a generic service handler for each application that it supports. The generic service processing program refers to a program used when processing a generic service request, and embodies logic used when processing a generic service request. In one embodiment, each generic service of each application has a corresponding generic service handler, and it is understood that the generic service handlers corresponding to the same generic service may be the same or different. It can be understood that the backend service processes are stateless nodes, and therefore, when the backend service process group is horizontally expanded (i.e., the backend service processes are added), only a general service processing program needs to be deployed in the newly added backend service processes, and the backend service processes do not need to be coupled with specific general services.
In one embodiment, each front-end service process and each back-end service process are connected through the Secure Shell (SSH) protocol, so that data communication can be performed between each front-end service process and each back-end service process.
Specifically, after receiving the universal service request, the first front-end service process sends the universal service request to a back-end service process of the back-end service process group. In the embodiment, the back-end service process receiving the general service request is denoted as the first back-end service process. Optionally, the first front-end service process sends the general service request to a randomly selected first back-end service process, or to an idle first back-end service process. The embodiment takes an idle back-end service process as an example; in this case, the first back-end service process is an idle back-end service process. Specifically, the first front-end service process performs data communication with each back-end service process and determines, in the course of this communication, whether each back-end service process is idle, where idle may also be understood as unoccupied: a back-end service process considers itself idle when it is not processing any general service request. Optionally, when there are multiple idle back-end service processes, the first front-end service process may randomly select one of them as the first back-end service process and send the common service request to it. Optionally, when there is no idle back-end service process, the first front-end service process may randomly select a back-end service process as the first back-end service process and send the general service request to it, or wait until a back-end service process becomes idle, take that process as the first back-end service process, and then send the general service request to it.
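The idle-first selection described above can be sketched as follows (the process names are illustrative, and the fallback of picking a random busy process is one of the two options the embodiment allows):

```python
import random

# Sketch of back-end selection: prefer an idle process; if every process
# is busy, fall back to a random one (the embodiment alternatively allows
# waiting for a process to become idle).

def select_backend(backends, busy):
    idle = [b for b in backends if b not in busy]
    if idle:
        return random.choice(idle)
    return random.choice(backends)


backends = ["be-1", "be-2", "be-3"]
print(select_backend(backends, busy={"be-1", "be-3"}))  # be-2
```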
Specifically, the first front-end service process may directly forward the universal service request to the first back-end service process, or it may perform data processing on the universal service request and send the processed request to the first back-end service process. In the latter case, the step specifically comprises: the first front-end service process performs data processing on the universal service request and sends the processed universal service request to the first back-end service process. The data processing mode can be set according to the actual situation; for example, the first front-end service process splits and compresses the universal service request so as to reduce the traffic consumed in communication between the first front-end service process and the first back-end service process, and encrypts the universal service request so as to prevent it from being tampered with during transmission.
Step 130, the first back-end service process runs a corresponding general service processing program according to the general service request to obtain a general service response.
Each universal service request has a corresponding universal service processing program, and after the first back-end service process receives the universal service request, the corresponding universal service processing program can be determined according to the universal service request. The manner of selecting the corresponding general service processing program according to the general service request may be set according to actual conditions, for example, a corresponding relationship between each general service processing program and the types of the service requests and the application identifiers that can be processed is pre-established, and then when the general service request is received, the types of the service requests and the application identifiers included in the general service request are extracted, and the corresponding general service processing program is searched according to the corresponding relationship. Then, the first backend service process runs the generic service handler, and the process can be understood as a process of responding to the generic service request. And after the general service processing program is operated, acquiring an operation result, wherein the operation result can be regarded as a response result of the general service request, and in the embodiment, the operation result is recorded as a general service response. It will be appreciated that the generic service response may also include content such as an application identification and a service request type. According to the above, the method specifically includes steps 131 to 133:
step 131, the first back-end service process determines the application identifier and the service request type according to the common service request.
Specifically, the first back-end service process parses the general service request and obtains the application identifier and the service request type contained therein. The specific parsing method is not limited in this embodiment.
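The parsing in step 131 might look like the following minimal sketch, assuming purely for illustration that the universal service request is carried as JSON with `app_id` and `request_type` fields (the encoding and field names are not specified in the text):

```python
import json

def parse_generic_request(payload: str) -> tuple:
    """Extract the application identifier and the service request type from a
    universal service request (JSON is an assumed encoding)."""
    request = json.loads(payload)
    return request["app_id"], request["request_type"]
```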
Step 132, the first backend service process selects a corresponding generic service processing program according to the application identifier and the service request type.
Specifically, when a general service processing program is deployed in the back-end service process, an application identifier and a service request type corresponding to the general service processing program are synchronously recorded. It can be understood that, if the same common service of different applications shares one common service processing program, the common service processing program corresponds to a plurality of application identifiers, and each application identifier can be added or deleted in combination with the actual situation.
Illustratively, the first backend service process searches the recorded corresponding relationship according to the obtained application identifier and the service request type to determine the corresponding general service processing program. It can be understood that the selecting of the common service handler by the first backend service process means that the first backend service process calls the common service handler through a corresponding Application Programming Interface (API).
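The recorded correspondence can be modeled as a lookup table keyed by application identifier and service request type; the application identifiers and handler names below are hypothetical:

```python
# Correspondence recorded when each general service handler is deployed.
# One handler may serve the same common service of several applications.
HANDLERS = {
    ("mini_game_a", "rank"): "rank_handler",
    ("mini_game_b", "rank"): "rank_handler",   # shared across applications
    ("mini_game_a", "login"): "login_handler",
}

def select_handler(app_id: str, request_type: str) -> str:
    """Look up the general service handler for this request; the first
    back-end service process then calls it through the corresponding API."""
    try:
        return HANDLERS[(app_id, request_type)]
    except KeyError:
        raise LookupError(f"no handler deployed for ({app_id}, {request_type})")
```

Adding or deleting an application identifier for a shared handler then amounts to inserting or removing one key in the table.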
Step 133, the first backend service process runs the selected generic service handler to obtain a generic service response.
And step 140, the first back-end service process returns the general service response to the first front-end service process.

It can be understood that, when the first back-end service process receives the generic service request, the communication address (e.g., the IP address) of the first front-end service process is also determined, so the first back-end service process can send the generic service response to the first front-end service process according to that communication address.
Optionally, after the first backend service process sends the general service response to the first front-end service process, it is determined that the processing is completed, and then the first backend service process is changed back to the idle backend service process.
And 150, the first front-end service process receives the general service response and feeds the general service response back to the first client.
For example, since the first front-end service process maintains a network connection with the first client, after receiving the universal service response it may feed the response back to the first client over that connection, so that the first client obtains the universal service response. Optionally, the first front-end service process performs dictionary compression and format conversion on the general service response before feeding it back to the first client.
Optionally, when the first front-end service process receives a plurality of common service requests, the requests may be distinguished according to the application identifier and the service request type; when a common service response is received, the common service request corresponding to it is determined according to the application identifier and the service request type. Optionally, when the first front-end service process is connected to multiple clients, a user identifier corresponding to each client may be recorded, and different clients are distinguished through the user identifiers. Here, a user identifier is an identifier corresponding to the user of an application: when the user has logged in to the application, it is an identifier generated at login; when the user has not logged in, it is the application's default identifier. In this case, when receiving the universal service request sent by the first client, the first front-end service process also acquires the user identifier of the first client and associates it with the universal service request; when the universal service response is received, the first client corresponding to the response is determined according to the user identifier, and the response is fed back to the first client.
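The bookkeeping described above might be sketched as follows, where the class and method names are illustrative assumptions rather than the actual implementation:

```python
class FrontEndProcess:
    """Minimal sketch: the front-end service process records which user sent
    each pending request, then routes the matching response back by user id."""

    def __init__(self):
        self.clients = {}   # user identifier -> client connection (a label here)
        self.pending = {}   # (app_id, request_type) -> requesting user identifier

    def on_connect(self, user_id, client):
        # Record the client so it can be distinguished by its user identifier.
        self.clients[user_id] = client

    def on_request(self, user_id, app_id, request_type):
        # Associate the user identifier with the universal service request.
        self.pending[(app_id, request_type)] = user_id

    def on_response(self, app_id, request_type):
        # Determine the client the universal service response belongs to.
        user_id = self.pending.pop((app_id, request_type))
        return self.clients[user_id]
```

Note the simplification: keying pending requests only by application identifier and request type would collide if two clients issued the same request concurrently; a real implementation would carry the user identifier in the response as well.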
In an embodiment, the first client parses the general service response after receiving it; the parsing method is not limited in the embodiment. Optionally, a program capable of parsing the general service response is selected in the first client according to the application identifier and the service request type, and the response is parsed by that program. Generally, when the first client installs the application, a corresponding program for parsing the universal service response is installed synchronously. Further, after the universal service response is parsed, if it is determined that the response needs to be displayed on the display screen of the first client, the first client draws a picture corresponding to the universal service response on the screen, so that the user of the first client can clearly see the specific content of the response.
In the above technical scheme, the first front-end service process receives the general service request of the first client and forwards it to the first back-end service process; the first back-end service process runs the corresponding general service processing program according to the general service request and sends the general service response to the first front-end service process, the general service processing programs of multiple applications being deployed in the back-end service process; the first front-end service process then feeds the general service response back to the first client. This solves the prior-art technical problem of repeatedly developing back-end services for different mini-games. By deploying the universal service processing programs of different applications in the back-end service process, the universal services of different applications are handled by the back-end service process, meeting the requirement of common-service integration. In particular, in a mini-game scenario there is no need to deploy a separate service framework for each mini-game, i.e., no extra development code is needed, which avoids repeated development and wasted manpower and effectively guarantees development efficiency and stability. Moreover, by adopting distributed deployment and a strategy of separating front-end and back-end service processes, the problem that other services become unusable because a single service process is abnormal is effectively avoided. With this service decoupling, in which the front-end service process connects to the client while the back-end service process handles specific general services and need not connect to the client, the service is more stable and system deployment is more convenient.
On the basis of the above embodiment, before the first front-end service process receives the generic service request sent by the first client, it needs to establish a network connection with the first client. Fig. 2 is a flowchart of another distributed service response method according to an embodiment of the present application, showing the flow in which the first front-end service process establishes a network connection with the first client. Referring to fig. 2, before step 110, the method further includes steps 210 to 240:
step 210, the second front-end service process receives the first connection request sent by the first client, and the second front-end service process belongs to the front-end service process group.
Specifically, the first client sends a connection request to a front-end service process of the front-end service process group when the first client has a network connection requirement. In the embodiment, a connection request sent by a first client is recorded as a first connection request, and a front-end service process receiving the first connection request is recorded as a second front-end service process. Wherein, the specific content of the first connection request can be set in combination with the actual situation. For example, the first connection request contains specific requested content (i.e., connection), a user identification of the first client, an application identification, and the like.
In one embodiment, the second front-end service process is a front-end service process randomly selected from the group of front-end service processes by the first client. When selecting, the front-end service process corresponding to a domain name randomly allocated by the Domain Name System (DNS) is used as the second front-end service process. DNS is an internet service that resolves domain names into IP addresses, making it more convenient for clients to access the internet.
And step 220, the second front-end service process selects a first front-end service process from the front-end service process group in a load balancing manner.

In the embodiment, after receiving the first connection request, the second front-end service process determines that the first client initiates a network connection, and then selects the first front-end service process from the front-end service process group in a load balancing manner. In one embodiment, Nginx is integrated in each front-end service process, and the load balancing service is realized by Nginx; Nginx (engine x) is a high-performance HTTP and reverse proxy web server that can act as a load balancer. The purpose of load balancing is to avoid every client connecting to the same front-end service process, so as to improve the response speed and efficiency of the distributed service system.
In one embodiment, when determining the first front-end service process in a load balancing manner, the method specifically includes: and counting the number of the front-end service processes connected with the client, and selecting one front-end service process as a first front-end service process. At this time, step 220 specifically includes steps 221 to 222:
and step 221, the second front-end service process counts the number of client connections of each front-end service process in the front-end service process group.
Specifically, each front-end service process records the number of clients establishing network connection with the front-end service process, which is recorded as the number of client connections in the embodiment, and each front-end service process corresponds to one number of client connections. When the front-end service process establishes a new network connection, the number of the client-side connections is increased by 1, and when the front-end service process stops a network connection, the number of the client-side connections is decreased by 1.
Furthermore, data can be shared among the front-end service processes in the front-end service process group. At this time, after receiving the first connection request, the second front-end service process communicates with each front-end service process in the front-end service process group to determine the number of client connections of each front-end service process. It should be noted that, in practical applications, other manners may also be adopted, for example, when the number of client connections of each front-end service process changes, the number of client connections of each front-end service process is reported to other front-end service processes, and when the second front-end service process receives the first connection request, the number of client connections reported by other front-end service processes is directly obtained.
Step 222, the second front-end service process selects the front-end service process with the minimum number of client connections as the first front-end service process.
Illustratively, the second front-end service process selects the front-end service process with the minimum number of client connections as the first front-end service process. It can be understood that if the second front-end service process is the front-end service process with the minimum number of client connections, the second front-end service process takes itself as the first front-end service process. In one embodiment, if there are a plurality of front-end service processes with the smallest number of connections, the second front-end service process may randomly select one front-end service process from the plurality of front-end service processes as the first front-end service process.
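Steps 221 to 222 can be sketched as follows, assuming the client connection counts have already been shared among the front-end service processes; the process names are hypothetical:

```python
import random

def pick_first_front_end(connection_counts: dict) -> str:
    """Select the front-end service process with the smallest number of client
    connections; ties are broken randomly, as described above."""
    fewest = min(connection_counts.values())
    candidates = [p for p, n in connection_counts.items() if n == fewest]
    return random.choice(candidates)
```

With counts such as `{"fe1": 3, "fe2": 1, "fe3": 2}`, the process `fe2` is selected as the first front-end service process.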
And step 230, the second front-end service process feeds back the first communication address of the first front-end service process to the first client.

Specifically, since the front-end service processes can communicate with one another, the second front-end service process obtains the communication address of the first front-end service process after determining it. In an embodiment, the communication address of the first front-end service process is recorded as a first communication address, and the first communication address is an IP address. The second front-end service process then feeds the acquired IP address back to the first client. Optionally, if the second front-end service process is itself the first front-end service process, it may feed back its own communication address to the first client, or directly establish a network connection with the first client.
And step 240, the first client establishes a network connection with the first front-end service process according to the first communication address.

In an embodiment, after receiving the first communication address, the first client regenerates a connection request, recorded in the embodiment as a second connection request. The content of the second connection request may be set according to the actual situation; optionally, the second connection request and the first connection request may contain the same or different content, which is not limited in the embodiment. For example, the second connection request includes the specific requested content, an application identifier, a user identifier, and a key for the application identifier. The key of the application identifier is used by the first front-end service process for communication verification, to prevent a third-party application from forging a connection request. In practical applications, when the first connection request and the second connection request contain the same content, the first connection request may be used directly as the second connection request. Further, the first client sends the second connection request to the first front-end service process according to the acquired first communication address.
After receiving the second connection request, the first front-end service process determines that the first client requests a network connection, and sends a corresponding response to the first client to indicate that it agrees to establish the network connection; after receiving the response, the first client determines that the network connection with the first front-end service process is established. The method by which the first front-end service process generates the response, and the content of the response, are not limited in this embodiment.
In one embodiment, the first front-end service process records the communication address (IP address), application identifier, and user identifier of the first client to identify the currently connected first client, and other front-end service processes may acquire the above contents when needed.
In an embodiment, the socket connection established between the first client and the first front-end service process is specifically a Websocket connection, where the Websocket is a persistent protocol and can implement long connection between the first client and the first front-end service process.
It can be understood that after the first client establishes the network connection with the first front-end service process, the network connection may be disconnected according to the actual situation; the disconnection conditions and the disconnection method are not limited in this embodiment.
In one embodiment, after step 240, the method further includes: and the first front-end service process updates the client connection number of the first front-end service process. Specifically, after the first front-end service process establishes network connection with the first client, the number of clients currently connected by the first front-end service process is increased, so that the first front-end service process updates the number of client connections of the first front-end service process, wherein the updating process specifically adds 1 to the number of client connections to ensure the real-time performance and accuracy of the number of client connections.
In one embodiment, after the first front-end service process and the first client establish the network connection, they detect at intervals whether the network connection is maintained. At this time, referring to fig. 2, after step 240 the method further includes steps 250 to 280:
and step 250, the first front-end service process sends a heartbeat packet to the first client at intervals.
In an embodiment, the heartbeat packet is a self-defined message by which the first front-end service process notifies the first client of its own state at regular intervals, similar to a heartbeat. The content contained in the heartbeat packet is not limited in this embodiment. After the network connection is established, the first front-end service process generates a heartbeat packet at intervals and sends it to the first client. The specific duration of the interval can be set according to actual conditions.
In an embodiment, the heartbeat packet generated by the first client is recorded as a heartbeat response packet, which may be regarded as a response of the first client to the heartbeat packet sent by the first front-end service process, and an embodiment of a content included in the heartbeat response packet is not limited. And after generating the heartbeat response packet, the first client sends the heartbeat response packet to the first front-end service process. It can be understood that when the network connection is normal, the first client will feed back the corresponding heartbeat response packet after receiving the heartbeat packet.
And step 260, the first front-end service process determines whether a heartbeat response packet fed back by the first client is received; if yes, step 270 is performed, and if not, step 280 is performed.

Further, after the first front-end service process sends a heartbeat packet, it determines whether the heartbeat response packet fed back by the first client is received. If the heartbeat response packet is received, it indicates that the first client can feed back accurately for the heartbeat packet, and it is determined that the network connection between the first front-end service process and the first client is normal, i.e., step 270 is performed. If the heartbeat response packet is not received, it indicates that the first client cannot feed back accurately for the heartbeat packet, and it can be determined that the network connection between the first front-end service process and the first client is abnormal, i.e., step 280 is performed. The reasons why the first client cannot feed back accurately may be that the first client did not receive the heartbeat packet, or that the heartbeat response packet fed back by the first client failed to reach the first front-end service process, and the like. Non-reception of the heartbeat response packet specifically means that, after the first front-end service process sends a heartbeat packet, no heartbeat response packet is received before the next heartbeat packet is sent. Optionally, after failing to receive a heartbeat response packet, the first front-end service process may send one or more further heartbeat packets; if the corresponding heartbeat response packet is still not received, it is concluded that the first client cannot feed back accurately, while if a corresponding heartbeat response packet is received, it is concluded that the first client can feed back accurately. This avoids misjudging the network connection state.
And step 270, it is determined that the network connection is maintained. Here, the network connection being maintained means that the network connection is normal.
And step 280, it is determined that the network connection is not maintained. The network connection not being maintained means that the network connection is abnormal; at this moment, the first front-end service process disconnects the network connection with the first client and modifies its number of client connections. For its part, the first client disconnects the network connection with the first front-end service process. Thereafter, the first client may attempt to connect with the first front-end service process again, i.e., return to performing step 240; alternatively, the first client initiates a new connection, i.e., returns to step 210.
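Steps 250 to 280 can be sketched as the following retry loop, under the simplifying assumption that `send_heartbeat` returns True exactly when the client's heartbeat response packet arrives before the next heartbeat is due:

```python
def check_connection(send_heartbeat, max_retries: int = 3) -> bool:
    """Send a heartbeat packet and, if no heartbeat response packet arrives,
    retry a few times before declaring the network connection abnormal.
    `send_heartbeat` is a callable standing in for one send/wait cycle."""
    for _ in range(1 + max_retries):
        if send_heartbeat():
            return True    # the network connection is maintained (normal)
    return False           # the network connection is not maintained (abnormal)
```

Retrying before deciding is what avoids misjudging the connection state when a single packet is lost.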
By the above method, when the first client has a connection requirement, the first front-end service process is reasonably selected in a load-balancing manner, which avoids all clients connecting to the same front-end service process at the same time; selecting the first front-end service process according to the number of client connections ensures reasonable utilization of front-end service process resources. After the network connection is established, the first front-end service process can confirm whether the current network connection is normal by means of heartbeat packets, so as to ensure that universal service requests are received normally.
On the basis of the above embodiment, the universal service requests further include requests for data communication between clients; for such requests, the front-end service processes only need to establish a network connection between the clients, and no back-end service process needs to respond. At this time, the distributed service response method further includes a process in which the front-end service processes establish a network connection between clients. Specifically, fig. 3 is a flowchart of another distributed service response method provided in the embodiment of the present application; referring to fig. 3, the distributed service response method specifically includes:
Step 310, the first front-end service process receives a matching request sent by the first client.

Wherein the common service request here is a matching request. A matching request is a request by which the first client is matched with other clients of the same application and, after the matching is completed, can perform data communication with those clients. There may be one or more other clients. For example, if the application installed on the first client is a mini-game, the matching request may be a two-player match, a multi-player match, or the like. The content contained in the matching request can be set according to actual conditions.
And step 320, the first front-end service process acquires the application identifier, the first user identifier and the second communication address of the first client according to the matching request.
At this time, the first front-end service process and the first client have already established a network connection; therefore, after receiving the matching request, the first front-end service process may obtain the communication address of the first client. In the embodiment, the communication address of the first client is recorded as a second communication address.
In one embodiment, the matching request further includes a matching number, an application identifier, and a user identifier. The matching number refers to the number of clients that need to be matched. In the embodiment, the user identifier corresponding to the first client is recorded as a first user identifier. After receiving the matching request, the first front-end service process parses it to obtain the corresponding application identifier and first user identifier. At this time, the first front-end service process determines, through the application identifier and the first user identifier, the application and the user for which the match is requested.
And step 330, the first front-end service process searches the corresponding queue to be matched according to the application identifier.
Illustratively, each application corresponds to one queue to be matched, and the queue to be matched stores the user identifier of each client which corresponds to the application currently sending the matching request. In one embodiment, the queue to be matched is stored in a front-end service process. In another embodiment, the distributed business response system further comprises a storage service process, the storage service process mainly providing data storage services. At this time, the queue to be matched may be stored in the storage service process, and each front-end service process may access the queue to be matched in the storage service process. Further, when the distributed service response system provides service for a new application, the queue to be matched of the application is synchronously established, and the corresponding relation between the queue to be matched and the application identifier is stored. At this time, the first front-end service process may search for the queue to be matched corresponding to the first client according to the corresponding relationship between the queue to be matched and the application identifier. Optionally, the same application may also correspond to multiple queues to be matched, for example, two-person matching of the same application corresponds to one queue to be matched, three-person matching of the same application corresponds to one queue to be matched, and so on.
It can be understood that each front-end service process in the front-end service process group can access the queue to be matched. The format used by the queue to be matched when storing user identifiers is not limited. In an embodiment, the queue to be matched is implemented with Redis. Redis (Remote Dictionary Server) is an open-source, network-capable, memory-based and persistable key-value database written in ANSI C, which provides APIs in multiple languages. In this case, the queue to be matched corresponding to each application is stored in Redis.
And step 340, the first front-end service process searches a second user identifier of a second client according to the queue to be matched, wherein the second client is a client matched with the first client.
Specifically, the second client is the client determined to match the first client after the first front-end service process responds to the matching request. There may be one or more second clients, the number being determined by the matching number in the matching request. In the embodiment, the user identifier of the second client is recorded as a second user identifier, and the communication address of the second client is recorded as a third communication address. It can be understood that the second client and the first client have the same application identifier.
Illustratively, the first front-end service process searches the queue to be matched for a second user identifier corresponding to a second client, and thereby determines the second client. The method by which the first front-end service process searches for the second user identifier is not limited in this embodiment. For example, if the queue to be matched adopts a first-in first-out mode, the first front-end service process selects the earliest recorded user identifier as the second user identifier; if no user identifier is stored in the queue to be matched, the first user identifier is stored in the queue for other clients to match against. For another example, the first front-end service process writes the first user identifier into the queue to be matched, and the first user identifier and the corresponding second user identifier are selected through the queue. In this embodiment, the case in which the first front-end service process also writes the first user identifier into the queue to be matched is described as an example. At this time, the step specifically includes steps 341 to 343:
step 341, the first front-end service process writes the first user identifier into the queue to be matched.
The specific implementation of writing the first user identifier into the queue to be matched is not limited in this embodiment.
Step 342, the first front-end service process reads the adjacent user identifier in the queue to be matched according to the first-in first-out rule.
Specifically, after the first front-end service process writes the first user identifier into the queue to be matched, it continuously reads user identifiers from the queue. The number of user identifiers read each time is related to the matching number; for example, if the matching number is 1, i.e., one second client is to be matched, then 2 user identifiers are read. Optionally, if the number of user identifiers stored in the queue to be matched is smaller than the number to be read, the first front-end service process continues reading until enough user identifiers have been read.
Optionally, the user identifiers that the first front-end service process reads from adjacent positions in the queue to be matched are recorded as adjacent user identifiers. When reading adjacent user identifiers, a first-in first-out rule is adopted, that is, the user identifiers stored in the queue to be matched earlier are read out first. It should be noted that first-in first-out is an optional matching algorithm for reading user identifiers; other manners may also be used in practical applications, which is not limited in the embodiment.
Step 343, the first front-end service process obtains the first user identifier and the second user identifier in the adjacent user identifiers.
Specifically, the first front-end service process continuously reads adjacent user identifiers. When the first user identifier exists among the read adjacent user identifiers, the other user identifiers among them are determined as second user identifiers, and the match is determined to be successful.
It can be understood that other front-end service processes in the front-end service process group may also continuously read the neighboring user identifier after receiving the matching request, and at this time, other front-end service processes may read the neighboring user identifier including the first user identifier. Because data sharing can be performed among the front-end service processes, other front-end service processes can notify the read adjacent user identification to the first front-end service process in a data sharing mode, so that the first front-end service process can acquire the adjacent user identification. Similarly, when the first front-end service process reads the adjacent user identifier required by the other front-end service processes, the other front-end service processes are enabled to obtain the adjacent user identifier in a data sharing manner. And the first front-end service process determines whether the adjacent user identification belongs to other front-end service processes in a mode of determining whether the adjacent user identification contains the user identification corresponding to the matching request.
It should be noted that the way the first front-end service process writes the first user identifier and reads adjacent user identifiers can be viewed as a producer/consumer pattern: the producer writes user identifiers into the queue to be matched, and the consumer reads adjacent user identifiers out of it. In other words, writing and reading are decoupled.
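The producer/consumer split and first-in first-out pairing described above can be sketched as follows. This is a minimal in-memory illustration only: a `collections.deque` stands in for the Redis queue to be matched, all names are illustrative, and the data-sharing hand-off between front-end service processes is simplified to re-queuing.

```python
from collections import deque

# In-memory stand-in for the Redis "queue to be matched" (illustrative only).
match_queue = deque()

def write_user(user_id):
    """Producer side: append a user identifier to the tail of the queue."""
    match_queue.append(user_id)

def read_adjacent_pair():
    """Consumer side: pop two adjacent identifiers in FIFO order, or None."""
    if len(match_queue) >= 2:
        return match_queue.popleft(), match_queue.popleft()
    return None

def try_match(own_id):
    """Return the peer identifier if the popped pair contains own_id."""
    pair = read_adjacent_pair()
    if pair is None:
        return None
    first, second = pair
    if own_id == first:
        return second
    if own_id == second:
        return first
    # The pair belongs to another front-end service process; the patent hands
    # it over via data sharing, which this sketch simplifies to re-queuing.
    match_queue.appendleft(second)
    match_queue.appendleft(first)
    return None
```

In a real deployment the queue would live in the shared storage service process (e.g. a Redis list) so that every front-end service process consumes from the same FIFO.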
Step 350, the first front-end service process feeds the second user identifier and the third communication address of the second client back to the first client, and feeds the first user identifier and the second communication address back to a third front-end service process, wherein the third front-end service process belongs to the front-end service process group and has a network connection with the second client.
Specifically, the third front-end service process belongs to a front-end service process group, and the third front-end service process is a front-end service process that establishes a network connection with the second client. When the number of the second clients is multiple, the number of the third front-end service processes can also be multiple. It is understood that the third front-end service process and the first front-end service process may be the same front-end service process or different front-end service processes.
Illustratively, after the first front-end service process acquires the second user identifier, it determines the third front-end service process through data sharing among the front-end service processes and notifies it, so that the third front-end service process learns that the second client is matched with the first client and stops reading adjacent user identifiers in the queue to be matched. The way in which the first front-end service process determines the third front-end service process is not limited in the embodiment. In an embodiment, when the first front-end service process and the third front-end service process share data, the communication address of the second client may be determined; in the embodiment it is recorded as the third communication address. The first front-end service process then feeds back the second user identifier and the third communication address to the first client, so that the first client learns the matched user and address; at this point, the second user identifier and the third communication address can be understood as a general service response fed back by the first front-end service process. The first front-end service process also sends the first user identifier and the second communication address to the third front-end service process, so that the third front-end service process learns the user and address matched with the second client.
And step 360, the third front-end service process feeds the first user identifier and the second communication address back to the second client.
Illustratively, the third front-end service process feeds back the received first user identifier and second communication address to the second client, so that the second client learns the matched user and address.
At this time, the first client and the second client are successfully matched. In one embodiment, after a successful match, the first front-end service process may further store related matching data. For example, for a mini-game application, after the first client and the second client are matched they are placed in the same virtual game room; the first front-end service process then obtains the room number of the game room and stores the room number, the first user identifier and the second user identifier as matching data. Optionally, the first front-end service process stores the matching data into a storage service process, for example into Redis.
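A minimal sketch of storing the matching data just described; a plain dict stands in for the Redis storage service process, and the `room:` key layout is an assumption for illustration, not taken from the source.

```python
# Plain dict standing in for the Redis storage service process (illustrative).
match_store = {}

def store_match(room_number, first_user_id, second_user_id):
    """Persist the room number together with both matched user identifiers."""
    match_store[f"room:{room_number}"] = (first_user_id, second_user_id)

def load_match(room_number):
    """Return the matched user identifiers for a room, or None if absent."""
    return match_store.get(f"room:{room_number}")
```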
In one embodiment, after the first client and the second client are successfully matched, data communication can be performed. At this time, the data communication process between the first client and the second client specifically includes steps 370 to 380:
Step 370, the first front-end service process receives the matching communication data sent by the first client and sends the matching communication data to the third front-end service process.

In the embodiment, the matching communication data refers to communication data sent by the first client to the second client after the two have been successfully matched. The content of the matching communication data is not limited in the embodiment. Specifically, the first client generates the matching communication data and sends it to the first front-end service process, which receives it and forwards it to the third front-end service process.
Step 380, the third front-end service process sends the matching communication data to the second client.
Illustratively, upon receiving the matching communication data sent by the first front-end service process, the third front-end service process sends it to the second client. The second client responds to the matching communication data after receiving it, thereby completing data communication from the first client to the second client.
It will be appreciated that the above process describes only the process of the first client sending matching communication data to the second client. The second client may also send the matching communication data to the first client, and the process is the same as the above process, which is not described herein again.
In one embodiment, after the first client and the second client are successfully matched, for the common service requests sent by the first client and the second client, the first front-end service process and the third front-end service process forward the common service requests of the first client and the second client to the same back-end service process for processing.
Because the matching request is answered by the front-end service process without passing through a back-end service process, the response speed is improved; and because matched clients forward data through the front-end service processes during data communication, the delay problem is effectively avoided and the transmission speed is improved.
On the basis of the above embodiment, the distributed service response method further includes: and the storage service process stores the service data generated by the back-end service process group and the front-end service process group.
Specifically, the service data refers to data that needs to be stored while the front-end service process group and the back-end service process group provide services. The service data corresponding to the two groups may be of the same or different types, and different types of applications may have different service data content. For example, when the application type is a mini-game, the service data corresponding to the front-end service process group includes at least one of the client connection number of each front-end service process, the communication addresses of the clients, the queue to be matched, game room management, and the like. The service data corresponding to the back-end service process group includes at least one of business data, log records, user data and/or hotspot information. The user data includes the user identifier, basic user information (such as gender, age, mobile phone number and mailbox), the login password, the user list, the mapping between game roles and users, and so on; the business data includes point management, leaderboards, friend management and the like; the log records include operation information of the back-end service processes; and the hotspot information is fixed information that is frequently used while the application runs (for example, the hotspot information of a gun-battle mini-game includes equipment information and user data, while that of a chess or card mini-game includes board information and user data).
Further, the service data is stored in the storage service process. The storage service process provides a data storage service, and the type of database it contains can be set according to the actual situation. In one embodiment, the storage service process is composed of Redis and a Solid State Drive (SSD): Redis is used for data caching, and the SSD is used for data backup. The SSD holds MySQL and log data, where MySQL is a relational database management system that can store content such as business data and the user data corresponding to the front-end service processes. Data cached in Redis can also be backed up in MySQL.
It is understood that both the set of front-end service processes and the set of back-end service processes may access service data stored in the storage service processes.
It should be noted that the service data described above is only an exemplary description, and in practical applications, the service data stored in Redis and MySQL may be set according to practical situations.
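The Redis-plus-MySQL division of labor described above, with Redis caching hot data and MySQL keeping a durable copy, can be sketched with two in-memory maps playing the two roles. This is a minimal cache-with-backup sketch under that assumption, not the storage service process's real interface.

```python
class StorageService:
    """Sketch: one dict plays the Redis cache role, another plays MySQL."""

    def __init__(self):
        self.cache = {}    # Redis role: hot data
        self.backup = {}   # MySQL role: durable backup on the SSD

    def write(self, key, value):
        """Cached data is also backed up, mirroring the Redis-to-MySQL backup."""
        self.cache[key] = value
        self.backup[key] = value

    def read(self, key):
        """Serve from the cache; on a miss, fall back to the backup and
        repopulate the cache."""
        if key in self.cache:
            return self.cache[key]
        value = self.backup.get(key)
        if value is not None:
            self.cache[key] = value
        return value
```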
By the storage service process, the storage and backup of the service data can be realized, and the management and the query of the service data are facilitated.
On the basis of the above embodiment, the distributed service response method further includes: the main service process manages a front-end service process group and a back-end service process group, and the main service process is generated by voting of the front-end service process group and the back-end service process group.
Specifically, the distributed service response system adopts a master-slave structure. In the embodiment, the service process acting as the master in the distributed service response system is recorded as the main service process, and the remaining service processes act as slaves. The main service process may be a front-end service process or a back-end service process. The main service process can manage each front-end and back-end service process; its management duties specifically include managing the slaves' service processes, maintaining the slaves' life cycles, serving as a communication bridge between slaves, and the like. There may be one or more main service processes; the embodiment takes one main service process as an example.
Optionally, the main service process is generated by election among the front-end service processes and back-end service processes. In one embodiment, the main service process is elected through ZooKeeper. ZooKeeper is a distributed, open-source coordination service for distributed applications, and provides distribution services for the distributed service response system. Specifically, when ZooKeeper elects the main service process, the main service process is determined by the front-end and back-end service processes through voting; the manner of voting is not limited in the embodiment. After voting, the service process with the most votes is confirmed as the main service process. After the election is completed, the main service process maintains a network connection with each slave through ssh, and each slave and the main service process can determine whether the main service process is running normally by means of heartbeat packets.
Optionally, the total number of front-end and back-end service processes in the distributed service response system is odd, so as to prevent two service processes from tying for the highest number of votes when the main service process is elected.
Further, when the main service process becomes abnormal (e.g., terminates unexpectedly), the remaining front-end and back-end service processes re-elect a new main service process, and the distributed service response system may add a new node during the election (e.g., a new front-end or back-end service process) to ensure that a main service process can be elected normally.
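The vote-count rule above ("the service process with the most votes becomes the main service process") can be illustrated with a simple tally. In practice the election is delegated to ZooKeeper, so this pure-Python sketch only shows the counting rule; with an odd number of voters and two candidates, a tie at the top cannot occur.

```python
from collections import Counter

def elect_master(votes):
    """votes maps each voting service process to the candidate it chose.
    Returns the candidate with the highest vote count (the new master)."""
    tally = Counter(votes.values())
    winner, _count = tally.most_common(1)[0]
    return winner
```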
By setting the main service process, the management of each service process can be facilitated, and the stability of the distributed service response system is improved.
The following describes an exemplary distributed service response method provided in this embodiment.
In this example, the first client-installed application is a mini-game. Specifically, fig. 4 is a first topological graph provided in the embodiment of the present application, which is a topological graph among the first client, the first front-end service process, and the storage service process. Referring to fig. 4, the first client includes a message processing module, a general service module, and an interaction module, the first front-end service process includes a routing management module, a matching service module, and a connection service module, and the storage service process is composed of MySQL and Redis.
Specifically, the message processing module is configured to maintain the websocket (abbreviated ws in fig. 4) connection with the first front-end service process. After the first client generates an original universal service request and passes it to the message processing module, the message processing module performs dictionary compression on the original universal service request and then applies protobuf processing to obtain a binary universal service request, which the message processing module sends to the first front-end service process. The universal service module is used for receiving the universal service response forwarded by the message processing module, selecting a program in the first client (denoted as a controller) capable of parsing the universal service response, parsing it, and sending the parsed result to the interaction module. The interaction module is used for rendering and displaying the parsed result. The routing management module is used for maintaining the websocket connection with the first client, receiving universal service requests sent by the first client, and sending universal service responses back to the first client. The routing management module can also communicate with the storage service process, that is, it stores service data into the storage service process or reads service data from it.
The matching service module is used to respond to the matching request, for example by accessing the queue to be matched in Redis so as to write the first user identifier of the first client into it, and by reading adjacent user identifiers from the queue so as to obtain the second user identifier and the third communication address of the second client. The matching service module then sends the second user identifier and the third communication address to the routing management module, which feeds them back to the first client. The connection service module is used to generate heartbeat packets periodically to determine the connection state with the first client (the connection service module can communicate with the message processing module directly or through the routing management module).
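The dictionary-compression-then-binary-encoding step performed by the message processing module might look like the following sketch. The token table is hypothetical, and JSON-to-bytes stands in for the protobuf encoding described in the source; a real client would use a compiled protobuf schema instead.

```python
import json

# Hypothetical token table for "dictionary compression": frequently used
# field names are replaced with short codes before serialization.
FIELD_CODES = {"game_id": "g", "user_id": "u", "action": "a"}

def compress_request(request):
    """Replace known field names with their short codes."""
    return {FIELD_CODES.get(k, k): v for k, v in request.items()}

def encode_request(request):
    """Serialize the compressed request to bytes (stand-in for protobuf)."""
    return json.dumps(compress_request(request), separators=(",", ":")).encode()
```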
Further, fig. 5 is a second topological diagram provided in the embodiment of the present application, which is a topological diagram among the first front-end service process, the first back-end service process, and the storage service process. Referring to fig. 5, the first front-end service process is based on fig. 4 and further includes a generic service module. Namely, the first front-end service process comprises: the system comprises a route management module, a general service module, a matching service module and a connection service module. The first back-end service process comprises a business management module and a data storage module.
The general service module of the first front-end service process is used for splitting, encrypting and decrypting the general service request and the like, and then the processed general service request is sent to the first back-end service process through the routing management module. The service management module of the first back-end service process is configured to receive the general service request sent by the first front-end service process, call a corresponding general service processing program (denoted as controller) to process the general service request, so as to obtain a corresponding general service response, and then send the general service response to the route management module of the first front-end service process. And the data storage module of the first back-end service process is used for recording service data and carrying out data communication with the storage service process.
Fig. 6 is a schematic diagram of a general service data flow provided in an embodiment of the present application. The first front-end service process, the first back-end service process, the first client and the storage service process in fig. 6 specifically include the modules shown in fig. 4 and fig. 5. Specifically, referring to fig. 6, the first client determines the first communication address of the first front-end service process and then sends a second connection request to it; after receiving the second connection request, the first front-end service process establishes a network connection with the first client and records the game ID (i.e., the application identifier) and the user ID (i.e., the first user identifier). The first client then generates a universal service request and sends it to the first front-end service process, which processes it and forwards it to an idle first back-end service process. The first back-end service process calls the corresponding general service handler to respond to the request, obtains a general service response, and sends it to the first front-end service process; in the process, the first back-end service process stores user data and other content into MySQL. Finally, the first front-end service process forwards the general service response to the first client, which receives and displays it.
On the basis of the above embodiments, the embodiments of the present application further provide a distributed service response system. Fig. 7 is a schematic structural diagram of a distributed service response system according to an embodiment of the present application. Referring to fig. 7, the distributed service response system includes a front-end service process group 41 and a back-end service process group 42, where the front-end service process group includes a plurality of front-end service processes, the back-end service process group 42 includes a plurality of back-end service processes, and a plurality of general service processing programs of applications are deployed in the back-end service processes.
A first front-end service process receives a universal service request sent by a first client, wherein the first front-end service process belongs to a front-end service process group 41; the first front-end service process sends the universal service request to a first back-end service process, which belongs to the back-end service process group 42; the first back-end service process operates a corresponding general service processing program according to the general service request to obtain a general service response; the first back-end service process sends the general service response to the first front-end service process; and the first front-end service process receives the general service response and feeds the general service response back to the first client.
On the basis of the foregoing embodiment, the operation of the corresponding general service processing program by the first backend service process according to the general service request specifically includes: determining an application identifier and a service request type according to the general service request; selecting a corresponding general service processing program according to the application identifier and the service request type; and running the selected general service processing program.
On the basis of the above embodiment, before the first front-end service process receives the universal service request sent by the first client, the method includes: a second front-end service process receives a first connection request sent by a first client, wherein the second front-end service process belongs to a front-end service process group 41; the second front-end service process selects a first front-end service process from the front-end service process group 41 by using load balancing; the second front-end service process feeds back the first communication address of the first front-end service process to the first client, so that the first client sends a second connection request to the first front-end service process according to the first communication address; and the first front-end service process receives a second connection request sent by the first client and establishes network connection with the first client according to the second connection request.
On the basis of the foregoing embodiment, after the first front-end service process establishes a network connection with the first client according to the second connection request, the method includes: a first front-end service process sends heartbeat packets to a first client at intervals; the first front-end service process confirms whether a heartbeat response packet fed back by the first client side is received at intervals; if the heartbeat response packet is received at intervals, the first front-end service process determines to keep network connection with the first client; and if the heartbeat response packet is not received at intervals, the first front-end service process determines that the first client side does not keep network connection.
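The interval-based heartbeat check above can be sketched as a small liveness monitor: a connection counts as kept only if a heartbeat response arrived within the last interval. Class and method names are illustrative; timestamps are passed explicitly so the logic is easy to follow.

```python
import time

class HeartbeatMonitor:
    """Sketch: mark a client disconnected when no heartbeat response
    arrives within `interval` seconds."""

    def __init__(self, interval):
        self.interval = interval
        self.last_response = {}   # client_id -> time of last response

    def on_response(self, client_id, now=None):
        """Record a heartbeat response packet from the client."""
        self.last_response[client_id] = time.monotonic() if now is None else now

    def is_connected(self, client_id, now=None):
        """True if a response arrived within the last interval."""
        now = time.monotonic() if now is None else now
        last = self.last_response.get(client_id)
        return last is not None and now - last <= self.interval
```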
On the basis of the above embodiment, the selecting, by the second front-end service process, the first front-end service process in the front-end service process group by using load balancing specifically includes: counting the client connection number of each front-end service process in the front-end service process group; and selecting the front-end service process with the minimum client connection number as a first front-end service process. After the first front-end service process establishes network connection with the first client according to the second connection request, the method comprises the following steps: and the first front-end service process updates the client connection number of the first front-end service process.
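The least-connections selection and the follow-up connection-count update described above can be sketched in a few lines; the dict-based bookkeeping is illustrative, since in the system the counts would be shared among the front-end service processes.

```python
def pick_front_end(connection_counts):
    """Least-connections load balancing: return the front-end process with
    the fewest client connections, then count the new connection against it."""
    chosen = min(connection_counts, key=connection_counts.get)
    connection_counts[chosen] += 1   # update after the connection is made
    return chosen
```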
On the basis of the foregoing embodiment, the general service request is a matching request, and after the first front-end service process receives the general service request sent by the first client, the method further includes: the first front-end service process obtains the application identifier, the first user identifier and the second communication address of the first client according to the matching request; the first front-end service process searches for the corresponding queue to be matched according to the application identifier; the first front-end service process searches for a second user identifier of a second client according to the queue to be matched, wherein the second client is a client matched with the first client; the first front-end service process feeds back the second user identifier and the third communication address of the second client to the first client, and feeds back the first user identifier and the second communication address to a third front-end service process, wherein the third front-end service process belongs to the front-end service process group 41 and has a network connection with the second client; and the third front-end service process feeds back the first user identifier and the second communication address to the second client.
On the basis of the foregoing embodiment, after the first front-end service process feeds back the second user identifier and the third communication address of the second client to the first client, the method further includes: the first front-end service process receives the matched communication data sent by the first client and sends the matched communication data to the third front-end service process; and the third front-end service process sends the matched communication data to the second client.
On the basis of the foregoing embodiment, the searching, by the first front-end service process, the second user identifier of the second client according to the queue to be matched includes: the first front-end service process writes the first user identification into a queue to be matched; reading adjacent user identifications in a queue to be matched by a first front-end service process according to a first-in first-out rule; the first front-end service process acquires a first user identifier and a second user identifier in adjacent user identifiers.
On the basis of the above embodiment, the first backend service process is an idle backend service process.
On the basis of the above embodiment, the distributed service response system further includes: storing the service process; and the storage service process stores the service data generated by the back-end service process group and the front-end service process group.
On the basis of the above embodiment, the distributed service response system further includes: a main service process; the main service process manages a front-end service process group and a back-end service process group, and the main service process is generated by election of the front-end service process group and the back-end service process group.
The following provides an exemplary description of the distributed service response system provided in the embodiments of the present application. Fig. 8 is a schematic structural diagram of another distributed service response system provided in the embodiment of the present application. Referring to fig. 8, the distributed service response system includes front-end service processes, back-end service processes, Redis, MySQL and a master, where the master is determined through the zookeeper nodes. There are multiple front-end and back-end service processes; each front-end service process is connected with the clients, the back-end service processes, Redis, MySQL and the master, and each back-end service process is connected with Redis, MySQL and the master. When the distributed service response system executes the distributed service response method, the functions of the nodes are as described in the above embodiments and are not repeated here.
The distributed service response system provided by the embodiment of the present application can be used to execute the distributed service response method provided by any of the above embodiments, and has corresponding functions and beneficial effects.
On the basis of the foregoing embodiments, the embodiments of the present application further provide a distributed service response apparatus. Fig. 9 is a schematic structural diagram of a distributed service response apparatus according to an embodiment of the present application. Referring to fig. 9, a distributed service response apparatus provided in an embodiment of the present application includes: a first request receiving module 501, a request sending module 502, a request responding module 503, a response sending module 504 and a response receiving module 505.
The first request receiving module 501 is configured in a first front-end service process, and is configured to receive a universal service request sent by a first client, where the first front-end service process belongs to a front-end service process group, and the front-end service process group includes multiple front-end service processes; a request sending module 502, configured to a first front-end service process, configured to send a universal service request to a first back-end service process, where the first back-end service process belongs to a back-end service process group, the back-end service process group includes multiple back-end service processes, and a universal service processing program for multiple applications is deployed in the back-end service processes; a request response module 503 configured to the first backend service process, and configured to run a corresponding generic service processing program according to the generic service request to obtain a generic service response; a response sending module 504, configured to the first back-end service process, configured to send the universal service response to the first front-end service process; the response receiving module 505 is configured in the first front-end service process, and configured to receive the general service response and feed back the general service response to the first client.
On the basis of the above embodiment, the request response module 503 includes: the type determining unit is used for determining the application identifier and the service request type according to the general service request; the program determining unit is used for selecting a corresponding general service processing program according to the application identifier and the service request type; and the program running unit is used for running the selected general service processing program.
On the basis of the above embodiment, the system further includes a second request receiving module, configured to the second front-end service process, and configured to receive the first connection request sent by the first client before the first front-end service process receives the universal service request sent by the first client, where the second front-end service process belongs to a front-end service process group; the load balancing module is configured on the second front-end service process and used for selecting the first front-end service process from the front-end service process group by utilizing load balancing; the request feedback module is configured in the second front-end service process and used for feeding back the first communication address of the first front-end service process to the first client so that the first client sends a second connection request to the first front-end service process according to the first communication address; and the third request receiving module is configured in the first front-end service process and used for receiving a second connection request sent by the first client and establishing network connection with the first client according to the second connection request.
On the basis of the above embodiment, the system further includes: a heartbeat packet sending module, configured in the first front-end service process and configured to send heartbeat packets to the first client at intervals after the first front-end service process establishes the network connection with the first client according to the second connection request; a heartbeat packet feedback module, configured in the first front-end service process and configured to confirm, at each interval, whether a heartbeat response packet fed back by the first client is received; a first connection determining module, configured in the first front-end service process and configured to determine to keep the network connection with the first client if the heartbeat response packet is received within the interval; and a second connection determining module, configured in the first front-end service process and configured to determine not to keep the network connection with the first client if the heartbeat response packet is not received within the interval.
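The heartbeat keepalive above can be sketched as follows. The interval value and names are assumptions; the patent only specifies that the connection is kept when a response packet arrives within the interval and dropped otherwise.

```python
import time

# Illustrative keepalive: the front-end records when the client last
# answered a heartbeat and keeps the connection only while the last
# response is within the heartbeat interval.

HEARTBEAT_INTERVAL = 5.0  # seconds, an illustrative value

class Connection:
    def __init__(self):
        self.last_response = time.monotonic()

    def on_heartbeat_response(self):
        # Client answered the heartbeat packet.
        self.last_response = time.monotonic()

    def is_alive(self, now=None):
        # Keep the connection only if a response arrived within the interval.
        now = time.monotonic() if now is None else now
        return (now - self.last_response) <= HEARTBEAT_INTERVAL

conn = Connection()
conn.on_heartbeat_response()
alive_now = conn.is_alive()
stale = conn.is_alive(now=time.monotonic() + 2 * HEARTBEAT_INTERVAL)
```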
On the basis of the above embodiment, the load balancing module includes: a connection number counting unit, configured to count the number of client connections of each front-end service process in the front-end service process group; and a connection selection unit, configured to select the front-end service process with the smallest number of client connections as the first front-end service process. Correspondingly, the distributed service response system further includes a connection number updating module, configured in the first front-end service process and configured to update the client connection count of the first front-end service process after the first front-end service process establishes the network connection with the first client according to the second connection request.
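The least-connections selection above, with the count update after the connection is established, reduces to picking the minimum of a per-process counter. A sketch with invented process names:

```python
# Illustrative least-connections load balancing: the second front-end
# process counts client connections per front-end process and picks the
# one with the fewest; the chosen process bumps its count afterwards.

connection_counts = {"fe1": 12, "fe2": 3, "fe3": 7}

def pick_frontend(counts):
    # The process with the minimum client connection count wins.
    return min(counts, key=counts.get)

chosen = pick_frontend(connection_counts)
connection_counts[chosen] += 1  # update after the connection is established
```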
On the basis of the above embodiment, the universal service request is a matching request, and the distributed service response system further includes: an address determination module, configured in the first front-end service process and configured to acquire an application identifier, a first user identifier, and a second communication address of the first client according to the matching request after the first front-end service process receives the universal service request sent by the first client; a queue searching module, configured in the first front-end service process and configured to search for a corresponding queue to be matched according to the application identifier; an identifier searching module, configured in the first front-end service process and configured to search for a second user identifier of a second client according to the queue to be matched, where the second client is a client matched with the first client; a first identifier feedback module, configured in the first front-end service process and configured to feed back the second user identifier and a third communication address of the second client to the first client, and to feed back the first user identifier and the second communication address to a third front-end service process, where the third front-end service process belongs to the front-end service process group and has a network connection with the second client; and a second identifier feedback module, configured in the third front-end service process and configured to feed back the first user identifier and the second communication address to the second client.
On the basis of the above embodiment, the distributed service response system further includes: a first matching communication module, configured in the first front-end service process and configured to receive matching communication data sent by the first client and send the matching communication data to the third front-end service process after the first front-end service process feeds back the second user identifier and the third communication address of the second client to the first client; and a second matching communication module, configured in the third front-end service process and configured to send the matching communication data to the second client.
On the basis of the above embodiment, the identifier searching module includes: an identifier writing unit, configured to write the first user identifier into the queue to be matched; an identifier reading unit, configured to read adjacent user identifiers in the queue to be matched according to a first-in-first-out rule; and an identifier acquisition unit, configured to acquire the first user identifier and the second user identifier from the adjacent user identifiers.
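The write/read/acquire units above describe a FIFO pairing queue: each incoming user identifier is appended, and as soon as two adjacent identifiers exist they are paired. A minimal sketch with invented identifiers:

```python
from collections import deque

# Illustrative matching queue: write the incoming user identifier into
# the per-application queue, then pair adjacent identifiers first-in
# first-out once an opponent is available.

def match(queue, user_id):
    queue.append(user_id)          # write the incoming identifier
    if len(queue) >= 2:
        first = queue.popleft()    # read adjacent identifiers FIFO
        second = queue.popleft()
        return (first, second)
    return None                    # wait until an opponent arrives

q = deque()
no_pair = match(q, "u1")  # first user waits in the queue
pair = match(q, "u2")     # second user completes the pair
```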
On the basis of the above embodiment, the first back-end service process is an idle back-end service process.
On the basis of the above embodiment, the system further includes a storage module, configured in a storage service process and configured to store service data generated by the back-end service process group and the front-end service process group.
On the basis of the above embodiment, the system further includes a management module, configured in a main service process and configured to manage the front-end service process group and the back-end service process group, where the main service process is generated by election among the front-end service process group and the back-end service process group.
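The patent does not specify the election algorithm for the main service process, so as an assumption here is the simplest deterministic rule: every process applies the same function and agrees that the live process with the smallest identifier becomes the main service process.

```python
# Hypothetical election sketch (the patent leaves the algorithm open):
# all front-end and back-end processes evaluate the same deterministic
# rule, so each reaches the same main-process decision independently.

def elect_main(process_ids, alive):
    # Only live processes are candidates; the smallest identifier wins.
    candidates = [p for p in process_ids if alive[p]]
    return min(candidates)

procs = ["fe1", "fe2", "be1", "be2"]
alive = {"fe1": False, "fe2": True, "be1": True, "be2": True}
main = elect_main(procs, alive)
```

Real deployments would typically use an established protocol (e.g. a Raft- or ZooKeeper-style leader election) rather than this toy rule.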
On the basis of the above embodiment, the universal service request is obtained by the first client performing dictionary compression and format conversion on the original universal service request.
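The client-side pre-processing above (format conversion plus dictionary compression) can be sketched with the standard library. The choice of JSON for format conversion and the preset dictionary contents are assumptions; the patent only names the two steps.

```python
import json
import zlib

# Illustrative shared dictionary of strings common to every request;
# both ends must use the identical preset dictionary.
PRESET_DICT = b'{"app_id":"type":"payload":'

def encode_request(request_obj):
    # Format conversion (JSON here, as an assumption), then dictionary
    # compression with zlib's preset-dictionary support.
    raw = json.dumps(request_obj, sort_keys=True).encode()
    comp = zlib.compressobj(zdict=PRESET_DICT)
    return comp.compress(raw) + comp.flush()

def decode_request(blob):
    decomp = zlib.decompressobj(zdict=PRESET_DICT)
    raw = decomp.decompress(blob) + decomp.flush()
    return json.loads(raw)

original = {"app_id": "app_a", "type": "login", "payload": "x" * 50}
blob = encode_request(original)
roundtrip = decode_request(blob)
```

A preset dictionary helps because individual requests are short, so ordinary deflate has little history to exploit; seeding both ends with the protocol's common strings recovers that loss.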
It should be noted that, in the above embodiment of the distributed service response apparatus, the included units and modules are divided only according to functional logic, and the division is not limited to the above as long as the corresponding functions can be implemented. In addition, the specific names of the functional units are only used to distinguish them from one another and do not limit the protection scope of the present application.
The distributed service response device provided by this embodiment is used to execute the distributed service response method provided by any of the above embodiments, and has corresponding functions and beneficial effects.
The embodiments of the present application also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform any one of the distributed service response methods provided by the embodiments of the present application, and have corresponding functions and advantages.
From the above description of the embodiments, it will be clear to those skilled in the art that the present application can be implemented by software plus the necessary general-purpose hardware, and certainly can also be implemented by hardware alone, but the former is the preferred implementation in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and which includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) to execute the distributed service response method according to the embodiments of the present application.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (17)
1. A distributed service response method, comprising:
a first front-end service process receives a universal service request sent by a first client, wherein the first front-end service process belongs to a front-end service process group, and the front-end service process group comprises a plurality of front-end service processes;
the first front-end service process sends the universal service request to a first back-end service process, wherein the first back-end service process belongs to a back-end service process group, the back-end service process group comprises a plurality of back-end service processes, and general service processing programs for a plurality of applications are deployed in the back-end service processes;
the first back-end service process runs a corresponding general service processing program according to the universal service request to obtain a general service response;
the first back-end service process sends the general service response to the first front-end service process;
and the first front-end service process receives the general service response and feeds the general service response back to the first client.
2. The distributed service response method according to claim 1, wherein the running of the corresponding generic service handler by the first backend service process according to the generic service request includes:
the first back-end service process determines an application identifier and a service request type according to the general service request;
the first back-end service process selects a corresponding general service processing program according to the application identifier and the service request type;
and the first back-end service process runs the selected general service processing program.
3. The distributed service response method according to claim 1, wherein before the first front-end service process receives the generic service request sent by the first client, the method comprises:
a second front-end service process receives a first connection request sent by a first client, wherein the second front-end service process belongs to the front-end service process group;
the second front-end service process selects a first front-end service process from the front-end service process group by utilizing load balancing;
the second front-end service process feeds back a first communication address of the first front-end service process to the first client, so that the first client sends a second connection request to the first front-end service process according to the first communication address;
and the first front-end service process receives a second connection request sent by the first client and establishes network connection with the first client according to the second connection request.
4. The distributed service response method according to claim 3, wherein after the first front-end service process establishes the network connection with the first client according to the second connection request, the method comprises:
the first front-end service process sends a heartbeat packet to the first client at intervals;
the first front-end service process confirms, at each interval, whether a heartbeat response packet fed back by the first client is received;
if the heartbeat response packet is received within the interval, the first front-end service process determines to keep the network connection with the first client;
and if the heartbeat response packet is not received within the interval, the first front-end service process determines not to keep the network connection with the first client.
5. The distributed service response method according to claim 3, wherein the selecting, by the second front-end service process, the first front-end service process from the front-end service process group using load balancing comprises:
the second front-end service process counts the number of client connections of each front-end service process in the front-end service process group;
the second front-end service process selects the front-end service process with the minimum client connection number as the first front-end service process;
after the first front-end service process establishes network connection with the first client according to the second connection request, the method includes:
and the first front-end service process updates the client connection number of the first front-end service process.
6. The distributed service response method of claim 1, wherein the universal service request is a matching request, and
after the first front-end service process receives the universal service request sent by the first client, the method further comprises the following steps:
the first front-end service process acquires the application identifier, the first user identifier and the second communication address of the first client according to the matching request;
the first front-end service process searches a corresponding queue to be matched according to the application identifier;
the first front-end service process searches a second user identifier of a second client according to the queue to be matched, wherein the second client is a client matched with the first client;
the first front-end service process feeds back the second user identifier and a third communication address of the second client to the first client, and feeds back the first user identifier and the second communication address to a third front-end service process, wherein the third front-end service process belongs to the front-end service process group, and the third front-end service process is in network connection with the second client;
and the third front-end service process feeds back the first user identification and the second communication address to the second client.
7. The distributed service response method according to claim 6, wherein after the first front-end service process feeds back the second user identifier and the third communication address of the second client to the first client, the method comprises:
the first front-end service process receives the matched communication data sent by the first client and sends the matched communication data to the third front-end service process;
and the third front-end service process sends the matched communication data to the second client.
8. The distributed service response method according to claim 6, wherein the searching, by the first front-end service process, for the second user identifier of the second client according to the queue to be matched comprises:
the first front-end service process writes the first user identification into the queue to be matched;
the first front-end service process reads the adjacent user identification in the queue to be matched according to a first-in first-out rule;
and the first front-end service process acquires the first user identification and the second user identification in adjacent user identifications.
9. The distributed service response method according to claim 1, wherein the first back-end service process is an idle back-end service process.
10. The distributed service response method according to claim 1, further comprising:
and the storage service process stores the service data generated by the back-end service process group and the front-end service process group.
11. The distributed service response method according to claim 1, further comprising:
and the main service process manages the front-end service process group and the back-end service process group, and the main service process is generated by election of the front-end service process group and the back-end service process group.
12. The distributed service response method according to claim 1, wherein the generic service request is obtained by the first client performing dictionary compression and format conversion on an original generic service request.
13. A distributed service response system, comprising: a front-end service process group and a back-end service process group, wherein the front-end service process group comprises a plurality of front-end service processes, the back-end service process group comprises a plurality of back-end service processes, and general service processing programs for a plurality of applications are deployed in the back-end service processes, wherein:
a first front-end service process receives a general service request sent by a first client, wherein the first front-end service process belongs to a front-end service process group;
the first front-end service process sends the universal service request to a first back-end service process, and the first back-end service process belongs to a back-end service process group;
the first back-end service process operates a corresponding general service processing program according to the general service request to obtain a general service response;
the first back-end service process sends the general service response to the first front-end service process;
and the first front-end service process receives the general service response and feeds the general service response back to the first client.
14. The distributed service response system of claim 13, further comprising: a storage service process;
and the storage service process stores the service data generated by the back-end service process group and the front-end service process group.
15. The distributed service response system of claim 13, further comprising: a main service process;
and the main service process manages the front-end service process group and the back-end service process group, and the main service process is generated by election of the front-end service process group and the back-end service process group.
16. A distributed service response apparatus, comprising:
the first request receiving module is configured in a first front-end service process and used for receiving a universal service request sent by a first client, wherein the first front-end service process belongs to a front-end service process group, and the front-end service process group comprises a plurality of front-end service processes;
a request sending module, configured in the first front-end service process and configured to send the universal service request to a first back-end service process, wherein the first back-end service process belongs to a back-end service process group, the back-end service process group includes multiple back-end service processes, and general service processing programs for multiple applications are deployed in the back-end service processes;
the request response module is configured in the first back-end service process and used for operating a corresponding general service processing program according to the general service request so as to obtain a general service response;
a response sending module, configured in the first back-end service process and configured to send the general service response to the first front-end service process;
and the response receiving module is configured in the first front-end service process and used for receiving the general service response and feeding the general service response back to the first client.
17. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the distributed service response method according to any one of claims 1 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011571078.0A CN112769776B (en) | 2020-12-27 | 2020-12-27 | Distributed service response method, system, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112769776A true CN112769776A (en) | 2021-05-07 |
CN112769776B CN112769776B (en) | 2023-04-18 |
Family
ID=75695847
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011571078.0A Active CN112769776B (en) | 2020-12-27 | 2020-12-27 | Distributed service response method, system, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112769776B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103188245A (en) * | 2011-12-31 | 2013-07-03 | 上海火瀑云计算机终端科技有限公司 | Fight game server system |
US20160132309A1 (en) * | 2014-11-06 | 2016-05-12 | IGATE Global Solutions Ltd. | Efficient Framework for Deploying Middleware Services |
WO2017140216A1 (en) * | 2016-02-16 | 2017-08-24 | 阿里巴巴集团控股有限公司 | Method and device for network load balancing, control, and network interaction |
CN108121820A (en) * | 2017-12-29 | 2018-06-05 | 北京奇虎科技有限公司 | A kind of searching method and device based on mobile terminal |
US20180316778A1 (en) * | 2017-04-26 | 2018-11-01 | Servicenow, Inc. | Batching asynchronous web requests |
CN109683888A (en) * | 2018-12-19 | 2019-04-26 | 睿驰达新能源汽车科技(北京)有限公司 | A kind of multiplexing method and reusable business module of business module |
CN111343236A (en) * | 2020-02-07 | 2020-06-26 | 广州极晟网络技术有限公司 | Method, device and communication system for communication between server and client |
CN111367658A (en) * | 2020-02-24 | 2020-07-03 | 广州市百果园信息技术有限公司 | Live broadcast service system and process management method |
CN112068812A (en) * | 2020-09-02 | 2020-12-11 | 数字广东网络建设有限公司 | Micro-service generation method and device, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112769776B (en) | 2023-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115004673B (en) | Message pushing method, device, electronic equipment and computer readable medium | |
CN112769837B (en) | Communication transmission method, device, equipment, system and storage medium based on WebSocket | |
US20070121490A1 (en) | Cluster system, load balancer, node reassigning method and recording medium storing node reassigning program | |
US20050138517A1 (en) | Processing device management system | |
US10630531B2 (en) | Propagating state information to network nodes | |
US7453865B2 (en) | Communication channels in a storage network | |
US20130007253A1 (en) | Method, system and corresponding device for load balancing | |
US10069941B2 (en) | Scalable event-based notifications | |
CN111353161A (en) | Vulnerability scanning method and device | |
CN114024972B (en) | Long connection communication method, system, device, equipment and storage medium | |
CN112988377B (en) | Resource allocation method, system and medium for cloud service | |
CN112698838B (en) | Multi-cloud container deployment system and container deployment method thereof | |
CN107370809A (en) | Method of data synchronization and data search system | |
CN113055461B (en) | ZooKeeper-based unmanned cluster distributed cooperative command control method | |
CN105681379A (en) | Cluster management system and method | |
CN112055048A (en) | P2P network communication method and system for high-throughput distributed account book | |
CN111193778A (en) | Method, device, equipment and medium for WEB service load balancing | |
US8407291B1 (en) | System and method for dispensing e-Care | |
US10025859B2 (en) | Method and system for second-degree friend query | |
WO2022134830A1 (en) | Method and apparatus for processing block node data, computer device, and storage medium | |
CN111274022A (en) | Server resource allocation method and system | |
CN112769776B (en) | Distributed service response method, system, device and storage medium | |
JP2022525205A (en) | Abnormal host monitoring | |
CN102577249B (en) | The example set of the connection of dynamic addressing main frame | |
CN106790354A (en) | A kind of communication means and its device of anti-data congestion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||