CN112698965A - System and method for realizing message queue and message scheduling system - Google Patents

System and method for realizing message queue and message scheduling system

Info

Publication number
CN112698965A
Authority
CN
China
Prior art keywords
server node
message
server
message queue
node group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011559212.5A
Other languages
Chinese (zh)
Other versions
CN112698965B (en)
Inventor
张连升
冯健
晏原
吴昭
黎江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011559212.5A
Publication of CN112698965A
Application granted
Publication of CN112698965B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/546 Message passing systems or structures, e.g. queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14 Error detection or correction of the data by redundancy in operation
    • G06F 11/1402 Saving, restoring, recovering or retrying
    • G06F 11/1446 Point-in-time backing up or restoration of persistent data
    • G06F 11/1448 Management of the data involved in backup or backup restore
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/54 Indexing scheme relating to G06F9/54
    • G06F 2209/548 Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present disclosure provides a system, a method and a message scheduling system for implementing a message queue, which relate to the technical field of computers, and in particular to the field of data processing and message recommendation. The implementation scheme is as follows: a system for implementing a message queue, comprising: a first server node group configured to generate routing information for access of a message queue in response to receiving request information associated with access of the message queue; and a second server node group configured to perform access to the message queue of the request information based on the routing information, wherein each of the first server node group and the second server node group is divided into at least one server node set based on a raft protocol, each server node set including a plurality of server nodes divided into a master server node and a slave server node.

Description

System and method for realizing message queue and message scheduling system
Technical Field
The present disclosure relates to the field of computer technologies, in particular to the fields of data processing and message recommendation, and more particularly to a system and a method for implementing a message queue, a message scheduling system, and a message recommendation system and method.
Background
Message queues play a crucial role in the real-time scheduling of data. In order to meet different service requirements, a piece of data often needs to pass through dozens or even hundreds of different processing modules, and the different processing modules need to be efficiently and reliably linked through a message queue.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
The present disclosure provides a system, a method, a message scheduling system, and a message recommendation system and method for implementing a message queue.
According to an aspect of the present disclosure, there is provided a system for implementing a message queue, including: a first server node group configured to generate routing information for access of a message queue in response to receiving request information associated with access of the message queue; a second server node group configured to perform access to the message queue for the request information based on the routing information, wherein each of the first server node group and the second server node group is divided into at least one server node set based on a raft protocol, each of the server node sets including a plurality of server nodes divided into a master server node and a slave server node.
According to another aspect of the present disclosure, there is provided a message scheduling system, including: a message production unit and/or a message consumption unit; and a system for implementing a message queue as described above, wherein the system for implementing a message queue interacts with the message production unit and/or the message consumption unit to implement message scheduling.
According to another aspect of the present disclosure, there is provided a method for implementing a message queue, including: constructing a first server node group and a second server node group, wherein each of the first server node group and the second server node group is divided into at least one server node set based on a raft protocol, and each server node set comprises a plurality of server nodes divided into a master server node and a slave server node; configuring the first server node group to generate routing information for access to a message queue in response to receiving request information associated with access to the message queue; and configuring the second server node group to perform access to the message queue of the request information based on the routing information.
According to another aspect of the present disclosure, there is provided a message recommendation system including: a message production unit and a message consumption unit; and the system for realizing the message queue, wherein the message produced by the message producing unit is recommended to the message consuming unit through the system for realizing the message queue.
According to another aspect of the present disclosure, there is provided a message recommendation method including: the messages produced by the message producing unit are recommended to the message consuming unit by the method for implementing a message queue as described above.
According to one or more embodiments of the present disclosure, the traditional zookeeper centralized management mode can be abandoned, and the stability and performance bottlenecks caused by excessive dependence on zookeeper can be eliminated. At the same time, the availability and stability of the server nodes can be ensured.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which systems and methods according to embodiments of the present disclosure may be implemented;
FIG. 2 illustrates a block diagram of a system for implementing a message queue according to an embodiment of the present disclosure;
FIG. 3 illustrates a block diagram of a storage engine according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a weight queue according to an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of a message scheduling system according to an embodiment of the present disclosure;
FIG. 6 shows a flow diagram of a method for implementing a message queue according to an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
FIG. 1 illustrates a schematic diagram of an exemplary system in which systems and methods according to embodiments of the present disclosure may be implemented. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120, which may also be referred to as a server node in the following, may operate such that the system and method for implementing a message queue according to embodiments of the present disclosure can be realized.
In some embodiments, the server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In certain embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating a client device 101, 102, 103, 104, 105, and/or 106 may, in turn, utilize one or more client applications to interact with the server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the systems and methods described herein and is not intended to be limiting.
A user may use client devices 101, 102, 103, 104, 105, and/or 106 to interact with server 120. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that any number of client devices may be supported by the present disclosure.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and so forth. These computer devices may run various types and versions of software applications and operating systems, such as Microsoft Windows, Apple iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., Google Chrome OS); or include various Mobile operating systems, such as Microsoft Windows Mobile OS, iOS, Windows Phone, Android. Portable handheld devices may include cellular telephones, smart phones, tablets, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays and other devices. The gaming system may include a variety of handheld gaming devices, internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architectures involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, the server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of the client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
In some embodiments, the server 120 may be a server of a distributed system, or a server incorporating a blockchain. The server 120 may also be a cloud server, or a smart cloud computing server or smart cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system, intended to overcome the defects of high management difficulty and weak service expansibility in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The data store 130 may reside in various locations. For example, the data store used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The data store 130 may be of different types. In certain embodiments, the data store used by the server 120 may be a database, such as a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
In the related art, a system for implementing a message queue generally implements centralized management of the system through a zookeeper server node. This makes the system overly dependent on the stability and performance of the zookeeper server node itself. Once the zookeeper fails, the entire system is exposed to the risk of failure.
In order to solve the above technical problem, the present disclosure provides a system for implementing a message queue. FIG. 2 shows a block diagram of a system for implementing a message queue according to an embodiment of the present disclosure. As shown in FIG. 2, the system 200 for implementing a message queue may include: a first server node group 210 configured to generate routing information for access of a message queue in response to receiving request information associated with access of the message queue; and a second server node group 220 configured to perform access to the message queue for the request information based on the routing information, wherein each of the first server node group and the second server node group is divided into at least one server node set based on a raft protocol, each of the server node sets including a plurality of server nodes divided into a master server node and a slave server node.
According to the system for implementing a message queue of the embodiments of the present disclosure, the overall architecture of the server nodes is divided into two layers, a first server node group and a second server node group, which cooperate to implement the message queue. The centralized management mode of zookeeper is thereby abandoned, and the stability and performance bottlenecks caused by excessive dependence on zookeeper are eliminated. Meanwhile, each server node can be managed through the server node set to which it belongs, and health management and data consistency management of the server nodes can be realized inside the server node set, thereby ensuring the availability and stability of the server nodes.
As shown in FIG. 2, the first server node group 210 may include a server node set 210₁ based on the raft protocol. Server node set 210₁ may include three server nodes 210₁-1, 210₁-2, and 210₁-3. The raft protocol is a distributed protocol that provides strong consistency, decentralization, and high availability. Since server node set 210₁ is constructed based on the raft protocol, the three server nodes 210₁-1, 210₁-2, and 210₁-3 that it includes form a 3-copy set conforming to the raft protocol. According to the raft protocol, among the three server nodes 210₁-1, 210₁-2, and 210₁-3, there may be one master server node and two slave server nodes. For example, server node 210₁-1 may be the master server node, and server nodes 210₁-2 and 210₁-3 may be slave server nodes.
The second server node group 220 may include a plurality of server node sets 220₁ to 220ₙ (n is a natural number greater than 1) based on the raft protocol. Each of server node sets 220₁ to 220ₙ may include three server nodes. For example, server node set 220₁ may include three server nodes 220₁-1, 220₁-2, and 220₁-3. Similarly, server node set 220ₙ may include three server nodes 220ₙ-1, 220ₙ-2, and 220ₙ-3. Since each of server node sets 220₁ to 220ₙ is also built based on the raft protocol, the three server nodes that each comprises likewise form a 3-copy set conforming to the raft protocol. According to the raft protocol, among the three server nodes, there may be one master server node and two slave server nodes. For example, in server node sets 220₁ and 220ₙ, server nodes 220₁-1 and 220ₙ-1 may be master server nodes, and server nodes 220₁-2, 220₁-3 and 220ₙ-2, 220ₙ-3 may be slave server nodes.
Those skilled in the art will appreciate that FIG. 2 merely illustrates, by way of example, that the first server node group 210 includes one server node set 210₁; the number of server node sets may vary depending on the actual situation. Furthermore, FIG. 2 only exemplarily shows an embodiment in which the server node sets each include three server nodes; the number of server nodes may also vary according to actual situations, and the number of server nodes within each server node set may differ. In addition, the selection of the master server node and the slave server nodes can also change according to the actual situation.
The first server node group 210 may receive request information associated with access to the message queue from clients (e.g., the various client devices shown in FIG. 1) and generate routing information for the access to the message queue. The routing information may be sent to the second server node group 220. The second server node group 220 may perform the access to the message queue for the request information based on the routing information. In one example, receiving the request information and generating the routing information may be performed by the master server node 210₁-1 in server node set 210₁ of the first server node group 210, and the access to the message queue may be performed by the master server nodes 220₁-1 to 220ₙ-1 in server node sets 220₁ to 220ₙ of the second server node group 220. That is, the master server node may be the server node in a server node set that specifically performs the service processing.
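To make the two-layer flow concrete, below is a minimal Python sketch of the interaction just described: the first group's master turns a request into routing information, and a master in the second group executes the access. Every name here (RaftNodeSet, FirstServerNodeGroup, route, access, and so on) is a hypothetical illustration, not an identifier from the disclosure, and the raft election and log-replication machinery is deliberately elided.

    from dataclasses import dataclass, field

    @dataclass
    class RaftNodeSet:
        """A raft-style 3-copy set: one master node plus slave hot standbys."""
        name: str
        nodes: list            # e.g. ["220_1-1", "220_1-2", "220_1-3"]
        master_index: int = 0  # raft would elect this; fixed here for brevity

        @property
        def master(self) -> str:
            return self.nodes[self.master_index]

    @dataclass
    class FirstServerNodeGroup:
        """Layer 1: generates routing information for queue-access requests."""
        node_set: RaftNodeSet

        def route(self, request: dict, broker_sets: list) -> dict:
            # Pick a target set in the second group; a real system would
            # choose an idle set, here we simply hash the queue name.
            target = broker_sets[hash(request["queue"]) % len(broker_sets)]
            return {"queue": request["queue"], "target_master": target.master}

    @dataclass
    class SecondServerNodeGroup:
        """Layer 2: the master named by the route performs the actual access."""
        node_sets: list = field(default_factory=list)
        queues: dict = field(default_factory=dict)

        def access(self, request: dict, routing: dict):
            q = self.queues.setdefault(routing["queue"], [])
            if request["op"] == "write":
                q.append(request["message"])
            elif request["op"] == "read":
                return q.pop(0) if q else None

    # Client request -> first group routes -> second group's master executes.
    first = FirstServerNodeGroup(RaftNodeSet("210_1", ["210_1-1", "210_1-2", "210_1-3"]))
    second = SecondServerNodeGroup([RaftNodeSet("220_1", ["220_1-1", "220_1-2", "220_1-3"])])
    req = {"op": "write", "queue": "events", "message": b"hello"}
    second.access(req, first.route(req, second.node_sets))

Even in this toy, the point of the split is visible: the routing decision and the data path live in separate replica groups, so neither depends on a central zookeeper.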
According to the embodiments of the present disclosure, the overall architecture of the server nodes is divided into two layers, a first server node group and a second server node group, which cooperate to implement the message queue; the centralized management mode of zookeeper is thereby abandoned, and the stability and performance bottlenecks caused by excessive dependence on zookeeper are eliminated.
In addition, according to the embodiment of the disclosure, the server node can be managed by the server node set to which the server node belongs, and health management and data consistency management of the server node can be realized inside the server node set, so that the availability and stability of the server node are ensured.
Optionally, for server node set 210₁ of the first server node group 210, the master server node 210₁-1 may be configured to manage the division of the server node sets 220₁ to 220ₙ of the second server node group 220, so as to increase or decrease the number of server node sets 220₁ to 220ₙ in the second server node group 220. This facilitates the first server node group 210 dynamically managing the use of the second server node group 220 according to actual needs. For example, the first server node group 210 may appropriately allocate the server resources of the second server node group 220, i.e., allocate an appropriate number of server node sets, according to the actual traffic volume.
The number n of server node sets included in the second server node group 220 may be determined by the first server node group 210, for example by the master server node 210₁-1 in server node set 210₁. In other words, the first server node group 210 may be responsible for capacity expansion and capacity reduction of the server node sets in the second server node group 220. Expansion means increasing the number of server node sets in the second server node group 220; conversely, reduction means decreasing that number. For example, when the traffic related to the access of the message queue is large, the first server node group 210 may divide more server node sets for the second server node group 220 in order to perform the access of the message queue.
Optionally, the first server node group 210 may be further configured to store meta-information of the message queue, the meta-information including information about the division of the server node sets 220₁ to 220ₙ of the second server node group 220. This facilitates the first server node group 210 dynamically managing the use of the server node sets 220₁ to 220ₙ of the second server node group 220.
In one example, the first server node group 210 may include a database for storing the meta-information, such as a mysql or redis database (e.g., the databases shown in FIG. 1).
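As a sketch of how this meta-information might be kept and used for expansion and reduction, the following Python toy stores the division of the second group's node sets in an in-memory dict standing in for the mysql/redis database. The class and method names (MetaStore, CapacityManager, expand, shrink) and the scaling policy are assumptions for illustration, not the disclosure's API.

    from dataclasses import dataclass, field

    @dataclass
    class MetaStore:
        """Stand-in for the mysql/redis meta-information database."""
        records: dict = field(default_factory=dict)

        def save_division(self, broker_sets: list):
            # Meta-information: how the second group's node sets are divided.
            self.records["broker_sets"] = list(broker_sets)

    @dataclass
    class CapacityManager:
        """Logic the first group's master could run to expand or shrink the
        second server node group according to traffic (a sketch only)."""
        meta: MetaStore
        broker_sets: list = field(default_factory=list)

        def expand(self, new_set_name: str):
            # Capacity expansion: add a raft 3-copy set to the second group.
            self.broker_sets.append(
                {"name": new_set_name,
                 "nodes": [f"{new_set_name}-{i}" for i in (1, 2, 3)]}
            )
            self.meta.save_division(self.broker_sets)

        def shrink(self):
            # Capacity reduction: retire the most recently added set.
            if self.broker_sets:
                self.broker_sets.pop()
                self.meta.save_division(self.broker_sets)

    manager = CapacityManager(MetaStore())
    manager.expand("220_1")   # heavy traffic: allocate another node set
    manager.shrink()          # traffic subsides: release it again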
Optionally, the system 200 for implementing a message queue according to an embodiment of the present disclosure may further include a storage engine for storing the message queue, wherein the storage engine is based on a key-value data structure.
The key-value based data structure facilitates providing an intervention function when data (i.e., messages) is stored, thereby solving the problem that messages in a message queue cannot be cancelled. This is because, unlike the conventional disk append-write mode, a key-value based data structure is no longer limited to reading data according to file offsets.
FIG. 3 shows a block diagram of a storage engine according to an embodiment of the present disclosure. In one example, the storage engine 300 may be implemented using the distributed storage system Big Table. As shown in FIG. 3, the storage engine 300 may include a plurality of storage tables 300₁ to 300ₙ (n is a natural number greater than 1). Each of storage tables 300₁ to 300ₙ may comprise a plurality of storage shards. For example, storage table 300₁ may include storage shards 300₁-1 to 300₁-n, and storage table 300ₙ may include storage shards 300ₙ-1 to 300ₙ-n. To ensure storage balance, data (i.e., messages) may be stored to a storage shard in an idle state.
Those skilled in the art will appreciate that the distributed storage system Big Table shown in FIG. 3 is only an exemplary embodiment of the storage engine 300; it may be replaced by another distributed storage system, or different distributed storage systems may be used at the same time according to the actual application, as long as they are storage engines based on a key-value data structure. In one example, the system 200 for implementing a message queue may use a unified storage engine interface to facilitate extension of the storage engine 300.
Optionally, the first server node group 210 may be further configured to store meta-information of the message queue, the meta-information including a storage address of the message queue, the storage address indicating a particular storage engine storing the message queue. Thus, in the case where different distributed storage systems are used simultaneously as described above, the first server node group 210 can integrally manage the specific storage engine that stores the message queue.
Optionally, the request information may include a request for writing a message to the storage engine 300 and/or reading a message from the storage engine 300, the messages constituting the message queue in the storage engine 300. Thus, the cancellation of messages in the message queue can be achieved using the intervention function of the key-value based storage engine.
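The cancellation property follows from keying each message rather than appending it at a file offset. Below is a minimal Python sketch of a key-value storage engine under that assumption; the interface (write/read/cancel) and the idle-shard selection rule are illustrative guesses, since the disclosure does not pin down this API.

    import uuid
    from typing import Optional

    class KeyValueStorageEngine:
        """Sketch of a key-value storage engine: messages are addressable
        by key, so an unconsumed message can still be cancelled (deleted),
        unlike an append-only log read by file offset."""

        def __init__(self, num_shards: int = 4):
            # Shards stand in for the storage shards of a table in Fig. 3.
            self.shards = [dict() for _ in range(num_shards)]

        def _pick_shard(self) -> dict:
            # Balance storage by choosing the least-loaded ("idle") shard.
            return min(self.shards, key=len)

        def write(self, message: bytes) -> str:
            key = uuid.uuid4().hex
            self._pick_shard()[key] = message
            return key

        def read(self, key: str) -> Optional[bytes]:
            for shard in self.shards:
                if key in shard:
                    return shard[key]
            return None

        def cancel(self, key: str) -> bool:
            # The intervention that offset-based append logs cannot offer.
            for shard in self.shards:
                if shard.pop(key, None) is not None:
                    return True
            return False

    engine = KeyValueStorageEngine()
    k = engine.write(b"breaking news")
    engine.cancel(k)              # message withdrawn before any consumer reads it
    assert engine.read(k) is None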
Optionally, the routing information may indicate that a master server node in a particular server node set among the second server node group 220 performs the writing and/or reading of the message. Since the second server node group 220 is responsible for the specific access processing, indicating this through the routing information facilitates issuing independent processing instructions to the multiple server node sets 220₁ to 220ₙ in the second server node group 220.
As described above, referring again to FIG. 2, the second server node group 220 may include a plurality of server node sets 220₁ to 220ₙ. The first server node group 210 may determine which server node set needs to be used to perform the writing and/or reading of the message. Accordingly, after the first server node group 210 receives request information on the writing and/or reading of a message from a client and generates routing information, the routing information can indicate which server node set is used to perform the writing and/or reading of the message. For example, the routing information may indicate the use of a server node set 220₁ that is idle at the time to perform the writing and/or reading of messages.
In addition, since the server node set is constructed based on the raft protocol, as described above, the master server node may be a server node specifically performing service processing in the server node set. Therefore, when it is indicated by the routing information which server node set is used to perform writing and/or reading of a message, it is actually specifically indicated that writing and/or reading of a message is performed by the master server node in the server node set.
For example, assume that the routing information indicates the use of server node set 220₁, which is idle at the time, to perform the writing and/or reading of the message; this in fact specifically indicates that the master server node 220₁-1 in server node set 220₁ performs the writing and/or reading of the message.
When performing the writing and/or reading of messages, optionally, the master server node 220₁-1 can assign messages to, and/or retrieve messages from, the respective weight queues according to their weights. This allows a message with a high weight to be transmitted preferentially when processing capacity is limited, which satisfies the timeliness requirement of the message.
Fig. 4 shows a schematic diagram of a weight queue according to an embodiment of the present disclosure. As shown in FIG. 4, each of the weight queues 400-1, 400-2 … 400-n may represent a collection of messages having the same weight attribute. Thus, the messages assigned to the respective weight queues 400-1, 400-2 … 400-n may each have different weight attributes. For example, a message having the highest weight, i.e., the highest priority, may be assigned to the weight queue 400-1 and the weights are decremented from the weight queues 400-1, 400-2 through 400-n, so that a message having the lowest priority may be assigned to the weight queue 400-n.
It should be noted that the weight attribute of the message may be an attribute of the message itself. When the message is written into the message queue, the weight attribute of the message can be acquired, and then the message is distributed to the corresponding weight queue. Accordingly, when reading a message from the message queue, the presence of the weight queue can allow a message with a high weight to be preferentially transmitted in a case where the processing capability is limited.
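A small Python sketch of the weight-queue behaviour described above: each message carries a weight attribute, is filed into the matching weight queue on write, and higher-weight queues are drained first on read. All names here are hypothetical illustrations, not identifiers from the disclosure.

    from collections import deque

    class WeightQueues:
        """n queues, one per weight level; queue 1 holds the highest weight
        (highest priority) and queue n the lowest, as in Fig. 4."""

        def __init__(self, levels: int = 3):
            self.queues = {level: deque() for level in range(1, levels + 1)}

        def assign(self, message: bytes, weight: int):
            # The weight is an attribute of the message itself, read on write.
            self.queues[weight].append(message)

        def retrieve(self):
            # Under limited processing capacity, high-weight messages go first.
            for level in sorted(self.queues):
                if self.queues[level]:
                    return self.queues[level].popleft()
            return None

    wq = WeightQueues()
    wq.assign(b"routine metric", weight=3)
    wq.assign(b"urgent alert", weight=1)
    assert wq.retrieve() == b"urgent alert"   # preferentially transmitted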
Referring again to FIG. 2, optionally, the slave server nodes in server node set 210₁ of the first server node group 210 and in server node sets 220₁ to 220ₙ of the second server node group 220 may be configured to act as hot standbys, replacing the master server node in the event of a failure of the master server node. This ensures the health of the server nodes as a whole and prevents the failure of a particular server node from affecting the overall function.
As described above, server nodes 210₁-1, 220₁-1, and 220ₙ-1 may be master server nodes, and server nodes 210₁-2, 210₁-3, 220₁-2, 220₁-3, 220ₙ-2, and 220ₙ-3 may be slave server nodes. Thus, these slave server nodes 210₁-2, 210₁-3, 220₁-2, 220₁-3, 220ₙ-2, and 220ₙ-3 can be configured as hot standbys to replace the respective master server node 210₁-1, 220₁-1, or 220ₙ-1 upon its failure.
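The following toy shows the hot-standby idea inside one replica set. In a real raft deployment the surviving nodes elect a new master by majority vote; here the election is reduced to promoting the first healthy standby, and every name is an assumption for illustration.

    class ReplicaSet:
        """One raft-style server node set: a master and slave hot standbys."""

        def __init__(self, nodes: list):
            self.nodes = nodes
            self.healthy = {n: True for n in nodes}
            self.master = nodes[0]

        def report_failure(self, node: str):
            self.healthy[node] = False
            if node == self.master:
                self._failover()

        def _failover(self):
            # Promote a healthy slave; raft would do this via leader election.
            for candidate in self.nodes:
                if self.healthy[candidate]:
                    self.master = candidate
                    return
            raise RuntimeError("no healthy node left in the replica set")

    rs = ReplicaSet(["220_1-1", "220_1-2", "220_1-3"])
    rs.report_failure("220_1-1")      # master fails ...
    assert rs.master == "220_1-2"     # ... a hot standby takes over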
As described above, according to the system for implementing a message queue of the embodiments of the present disclosure, by dividing the overall architecture of the server nodes into two layers, the first server node group and the second server node group, which cooperate to implement the message queue, the centralized management mode of zookeeper is abandoned, and the stability and performance bottlenecks caused by excessive dependence on zookeeper are eliminated. Meanwhile, each server node can be managed through the server node set to which it belongs, and health management and data consistency management of the server nodes can be realized inside the server node set, thereby ensuring the availability and stability of the server nodes.
According to an embodiment of the present disclosure, a message scheduling system is also provided. FIG. 5 shows a schematic diagram of a message scheduling system according to an embodiment of the present disclosure. As shown in FIG. 5, the message scheduling system 500 may include: a message production unit 510 and/or a message consumption unit 530; and a system 520 for implementing a message queue, wherein the system 520 interacts with the message production unit 510 and/or the message consumption unit 530 to implement message scheduling.
It will be appreciated that the system 520 for implementing a message queue may serve as an intermediate module in the message scheduling system 500, located between the upstream message production unit 510 and the downstream message consumption unit 530 to implement the scheduling of messages.
The embodiment of the system 520 for implementing a message queue may be similar to the system 200 described above with reference to fig. 2-4, and thus is not described herein again.
According to an embodiment of the present disclosure, there is also provided a method for implementing a message queue. FIG. 6 shows a flow diagram of a method for implementing a message queue according to an embodiment of the disclosure. As shown in fig. 6, the method for implementing a message queue may include: step S601, constructing a first server node group and a second server node group, wherein each of the first server node group and the second server node group is divided into at least one server node set based on a raft protocol, and each server node set comprises a plurality of server nodes divided into a master server node and a slave server node; step S602, configuring the first server node group to generate routing information for access to a message queue in response to receiving request information associated with access to the message queue; step S603, configuring the second server node group to perform access to the message queue of the request information based on the routing information.
Optionally, for the set of server nodes of the first server node group, the master server node may be configured to manage the partitioning of the set of server nodes of the second server node group to increase or decrease the number of the set of server nodes of the second server node group.
The first server node group may be further configured to store meta-information of the message queue, the meta-information comprising information about the partitioning of the set of server nodes of the second server node group.
The method may further comprise: a storage engine is provided for storing a queue of messages, wherein the storage engine is based on a key-value data structure.
The first server node group may be further configured to store meta-information of the message queue, the meta-information comprising a storage address of the message queue, the storage address indicating a particular storage engine storing the message queue.
The request information may include requests for writing messages to and/or reading messages from the storage engine, the messages constituting a message queue in the storage engine.
The writing and/or reading of the message may be performed by the routing information indicating a primary server node in a particular set of server nodes among the second group of server nodes.
The master server node may assign and/or retrieve messages to and/or from the respective weight queues according to their weights.
The slave server nodes in the set of server nodes of the first and second server node groups may be configured to be hot-standby to replace the master server node in the event of a failure of the master server node.
According to an embodiment of the present disclosure, there is provided a message recommendation system including: a message production unit and a message consumption unit; and the system for realizing the message queue, wherein the message produced by the message producing unit is recommended to the message consuming unit through the system for realizing the message queue.
According to an embodiment of the present disclosure, there is provided a message recommendation method including: the messages produced by the message producing unit are recommended to the message consuming unit by the method for implementing a message queue as described above.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and examples are merely exemplary embodiments or examples, and that the scope of the invention is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Furthermore, various elements in the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (21)

1. A system for implementing a message queue, comprising:
a first server node group configured to generate routing information for access of a message queue in response to receiving request information associated with access of the message queue;
a second server node group configured to perform access to the message queue for the request information based on the routing information,
wherein each of the first server node group and the second server node group is divided into at least one server node set based on a raft protocol, each of the server node sets including a plurality of server nodes divided into a master server node and a slave server node.
2. The system of claim 1, wherein, for the set of server nodes of the first server node group, the master server node is configured to manage partitioning of the set of server nodes of the second server node group to increase or decrease the number of the set of server nodes of the second server node group.
3. The system of claim 2, wherein the first server node group is further configured to store meta information of the message queue, the meta information including information related to a partitioning of the set of server nodes of the second server node group.
4. The system of claim 1, further comprising a storage engine to store the message queue, wherein the storage engine is based on a key-value data structure.
5. The system of claim 4, wherein the first server node group is further configured to store meta information of the message queue, the meta information including a storage address of the message queue, the storage address indicating a particular storage engine storing the message queue.
6. The system of claim 4, wherein the request information comprises a request to write a message to and/or read the message from the storage engine, the message constituting the message queue in the storage engine.
7. The system of claim 6, wherein the writing and/or reading of the message is performed by the routing information indicating a primary server node in a particular set of server nodes among the second set of server nodes.
8. The system of claim 7, wherein the master server node assigns and/or retrieves the messages to and from respective weight queues according to their weights.
9. The system of claim 1, wherein the slave server nodes in the set of server nodes of the first and second server node groups are configured to be hot-backed up to replace the master server node in the event of a failure of the master server node.
10. A message scheduling system, comprising:
a message production unit and/or a message consumption unit; and
the system for implementing a message queue of any of claims 1-9, wherein the system for implementing a message queue interacts with the message production unit and/or message consumption unit to implement message scheduling.
11. A method for implementing a message queue, comprising:
constructing a first server node group and a second server node group, wherein each of the first server node group and the second server node group is divided into at least one server node set based on a raft protocol, and each server node set comprises a plurality of server nodes divided into a master server node and a slave server node;
configuring the first server node group to generate routing information for access to a message queue in response to receiving request information associated with access to the message queue; and
configuring the second server node group to perform access to the message queue of the request information based on the routing information.
12. The method of claim 11, wherein the master server node is configured to manage a partitioning of the set of server nodes of the second server node group to increase or decrease the number of the set of server nodes of the second server node group for the set of server nodes of the first server node group.
13. The method of claim 12, wherein the first server node group is further configured to store meta-information of the message queue, the meta-information comprising information related to partitioning of the set of server nodes of the second server node group.
14. The method of claim 11, further comprising: providing a storage engine for storing the message queue, wherein the storage engine is based on a key-value data structure.
15. The method of claim 14, wherein the first server node group is further configured to store meta-information of the message queue, the meta-information comprising a storage address of the message queue, the storage address indicating a particular storage engine storing the message queue.
16. The method of claim 14, wherein the request information comprises a request to write a message to and/or read the message from the storage engine, the message constituting the message queue in the storage engine.
17. The method of claim 16, wherein the writing and/or reading of the message is performed by the routing information indicating a primary server node in a particular set of server nodes among the second set of server nodes.
18. The method of claim 17, wherein the master server node assigns and/or retrieves the messages to and from respective weight queues according to their weights.
19. The method of claim 11, wherein the slave server nodes in the set of server nodes of the first and second server node groups are configured to be hot-backed to replace the master server node in the event of a failure of the master server node.
20. A message recommendation system comprising:
a message production unit and a message consumption unit; and
the system for implementing a message queue of any of claims 1-9, wherein messages produced by the message producing unit are recommended to the message consuming unit by the system for implementing a message queue.
21. A message recommendation method, comprising: recommending messages produced by a message producing unit to a message consuming unit by a method for implementing a message queue according to any of claims 11-19.
CN202011559212.5A 2020-12-25 2020-12-25 System and method for realizing message queue and message scheduling system Active CN112698965B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011559212.5A CN112698965B (en) 2020-12-25 2020-12-25 System and method for realizing message queue and message scheduling system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011559212.5A CN112698965B (en) 2020-12-25 2020-12-25 System and method for realizing message queue and message scheduling system

Publications (2)

Publication Number Publication Date
CN112698965A true CN112698965A (en) 2021-04-23
CN112698965B CN112698965B (en) 2021-09-21

Family

ID=75510306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011559212.5A Active CN112698965B (en) 2020-12-25 2020-12-25 System and method for realizing message queue and message scheduling system

Country Status (1)

Country Link
CN (1) CN112698965B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130159525A1 (en) * 2011-12-20 2013-06-20 Fujitsu Limited Information processing apparatus and data control method
CN103312624A (en) * 2012-03-09 2013-09-18 腾讯科技(深圳)有限公司 Message queue service system and method
CN103761141A (en) * 2013-12-13 2014-04-30 北京奇虎科技有限公司 Method and device for realizing message queue
CN107341051A (en) * 2016-05-03 2017-11-10 北京京东尚科信息技术有限公司 Cluster task coordination approach, system and device
CN106953901A (en) * 2017-03-10 2017-07-14 重庆邮电大学 A kind of trunked communication system and its method for improving message transmission performance
CN106878473A (en) * 2017-04-20 2017-06-20 腾讯科技(深圳)有限公司 A kind of message treatment method, server cluster and system
CN108762953A (en) * 2018-05-25 2018-11-06 连云港杰瑞电子有限公司 A kind of message queue implementation method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114979179A (en) * 2022-05-24 2022-08-30 中国工商银行股份有限公司 Message processing method and related device
CN114979179B (en) * 2022-05-24 2024-01-30 中国工商银行股份有限公司 Message processing method and related device

Also Published As

Publication number Publication date
CN112698965B (en) 2021-09-21

Similar Documents

Publication Publication Date Title
CA2978889C (en) Opportunistic resource migration to optimize resource placement
CN110198231A (en) Capacitor network management method and system and middleware for multi-tenant
US9438665B1 (en) Scheduling and tracking control plane operations for distributed storage systems
US10833935B2 (en) Synchronizing network configuration in a multi-tenant network
US9852220B1 (en) Distributed workflow management system
US11303509B2 (en) Resource allocation to reduce correlated failures
US8751711B2 (en) Storage topology manager
Agneeswaran Big-data–theoretical, engineering and analytics perspective
US11416294B1 (en) Task processing for management of data center resources
CN111274002A (en) Construction method and device for supporting PAAS platform, computer equipment and storage medium
US11256719B1 (en) Ingestion partition auto-scaling in a time-series database
WO2020019313A1 (en) Graph data updating method, system, computer readable storage medium, and device
US11461053B2 (en) Data storage system with separate interfaces for bulk data ingestion and data access
US20200127959A1 (en) Architecture for large data management in communication applications through multiple mailboxes
CN112698965B (en) System and method for realizing message queue and message scheduling system
US11636139B2 (en) Centralized database system with geographically partitioned data
CN111414356A (en) Data storage method and device, non-relational database system and storage medium
US9875373B2 (en) Prioritization of users during disaster recovery
Nurain et al. An in-depth study of map reduce in cloud environment
US20240176762A1 (en) Geographically dispersed hybrid cloud cluster
Zhang et al. A cloud queuing service with strong consistency and high availability
US11522799B1 (en) Dynamically managed data traffic workflows
US11429453B1 (en) Replicating and managing aggregated descriptive data for cloud services
Ganguli Convergence of Big Data and Cloud Computing Environment
US20230418681A1 (en) Intelligent layer derived deployment of containers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant