CN108322358B - Method and device for sending, processing, and consuming remote multi-active distributed messages - Google Patents


Info

Publication number: CN108322358B
Authority: CN (China)
Prior art keywords: service cluster, consumption, main service, module, zookeeper
Legal status (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed): Active
Application number: CN201711349628.2A
Other languages: Chinese (zh)
Other versions: CN108322358A (en)
Inventor: Feng Hao (冯浩)
Current assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list): Beijing QIYI Century Science and Technology Co., Ltd.
Original assignee: Beijing QIYI Century Science and Technology Co., Ltd.
Application filed by Beijing QIYI Century Science and Technology Co., Ltd. (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Priority application: CN201711349628.2A
Publication of application: CN108322358A
Application granted; publication of grant: CN108322358B
Legal status: Active


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/08: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0805: Monitoring or testing by checking availability
    • H04L 43/0817: Monitoring or testing by checking functioning
    • H04L 43/10: Active monitoring, e.g. heartbeat, ping or trace-route
    • H04L 43/14: Monitoring or testing using software, i.e. software packages
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06: Management of faults, events, alarms or notifications
    • H04L 41/0654: Management of faults using network fault recovery
    • H04L 41/0668: Network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/104: Peer-to-peer [P2P] networks
    • H04L 67/1044: Group management mechanisms

Abstract

The embodiments of the present invention provide a method for sending, processing, and consuming remote multi-active distributed messages, together with corresponding devices. The message sending method includes the following steps: the production end monitors whether at least two service clusters of the server end are available; if the production end monitors that the at least two service clusters are available, it sends the messages it publishes to each of the at least two service clusters; the production end selects one service cluster from the at least two service clusters as the main service cluster; and the production end updates the main service cluster information of the main service cluster to the remaining available service clusters, the central zookeeper, and the health check center (HCC). Because the published message is sent to every available service cluster and the main service cluster is selected for the consumption end in advance, the failure of any single service cluster does not make the service as a whole unavailable, which improves the high availability of the service clusters.

Description

Method and device for sending, processing, and consuming remote multi-active distributed messages
Technical Field
The invention relates to the field of computer technology, and in particular to a method and device for sending, processing, and consuming remote multi-active distributed messages.
Background
Kafka is a distributed, partitionable, replicable messaging system that operates as a cluster, which may be composed of one or more services. As an important component of distributed systems, a distributed messaging system mainly solves problems such as application decoupling, asynchronous messaging, and traffic peak shaving, providing a high-performance, highly available, scalable, eventually consistent architecture; it is an indispensable middleware for large-scale distributed systems, and the Kafka cluster is a typical representative. Existing Kafka clusters are deployed and maintained separately. Because the number of Kafka clusters is large, they serve consumers through data centers (DC). Once a problem occurs in a certain data center, none of the Kafka clusters connected to that data center can serve consumers; the consumers must switch Kafka clusters, and after switching to a new Kafka cluster they must re-consume, which generates a large number of duplicate messages and therefore reduces consumer satisfaction.
Therefore, how to achieve high availability of services across data centers while reducing the duplicate messages generated when consumers switch clusters is a technical problem that urgently needs to be solved.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide methods for sending, processing, and consuming remote multi-active distributed messages, so as to solve the prior-art problem that high availability of a service across data centers cannot be achieved, which causes a large number of duplicate messages to be generated when a consumer switches clusters and reduces satisfaction.
Correspondingly, the embodiments of the present invention also provide a remote multi-active distributed message sending device, processing device, and consuming device, to ensure the implementation and application of the above methods.
In order to solve the problems, the invention is realized by the following technical scheme:
a first aspect provides a method for sending a heterogeneous multi-active distributed message, the method comprising:
the production end monitors whether at least two service clusters of the service end are available;
if the production end monitors that the at least two service clusters are available, the production end sends the messages issued by the production end to the at least two service clusters respectively;
the production end selects one service cluster from the at least two service clusters as a main service cluster;
and the production end updates the main service cluster information of the main service cluster to the available other service clusters, the center zookeeper and the health check center HCC.
Optionally, the selecting, by the production end, one service cluster from the at least two service clusters as a main service cluster includes:
the production end acquires a distributed lock from a central zookeeper of the server end;
and the production end selects one service cluster from the at least two service clusters as a main service cluster through the distributed lock.
A second aspect provides a method for processing remote multi-active distributed messages, the method comprising:
the server receives main service cluster information sent by the production end;
the server side stores the main service cluster information;
the server receives a connection request sent by a consumer;
the server side determines the main service cluster information of the consumption side according to the connection request;
and the server side sends the main service cluster information to the consumption side so that the consumption side is connected to the main service cluster corresponding to the main service cluster information for consumption.
Optionally, the method further includes:
the server calculates a consumption starting offset value consumed by each consumption end on the corresponding service cluster;
and the server synchronizes the consumption starting point offset value to zookeeper and health check center HCC respectively designated by each of the rest service clusters.
Optionally, the server end calculates the consumption starting offset value consumed by each consumption end on the corresponding service cluster according to the following formula:
offsetA = offsetB - Lag - adjustment factor; if offsetA <= 0, offsetA is set to 0
where offsetA is the consumption starting offset value of a topic's messages on service cluster A;
offsetB is the current consumption offset value of the topic on service cluster B;
Lag is the consumption lag value of the topic's messages on service cluster B;
and the adjustment factor is an adjustment constant.
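As an illustrative sketch (not part of the patent text), the starting-offset computation above can be expressed as a small function; the parameter names offset_b, lag, and adjustment are assumptions:

```python
def consumption_start_offset(offset_b: int, lag: int, adjustment: int) -> int:
    """Compute offsetA = offsetB - Lag - adjustment factor, clamped at zero.

    Rewinding by the lag plus a safety margin means the consumer may
    re-read a few messages after failover, but never skips any.
    """
    offset_a = offset_b - lag - adjustment
    return max(offset_a, 0)  # if offsetA <= 0, start from the beginning

# Example: consumed up to offset 10_000 on cluster B, 250 messages behind,
# safety margin of 50 messages -> resume on cluster A at offset 9_700.
print(consumption_start_offset(10_000, 250, 50))  # -> 9700
```

Clamping at zero handles the case where the lag plus margin exceeds everything consumed so far, in which case consumption restarts from the beginning of the partition.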
A third aspect provides a method for consuming remote multi-active distributed messages, the method comprising:
the consumption end acquires current main service cluster information from a center zookeeper of the server end;
the consumption end is connected with the corresponding main service cluster according to the main service cluster information to consume;
the consumption end monitors whether the main service cluster is available;
if the consumption end monitors that the main service cluster is unavailable, acquiring new service cluster information from the central zookeeper;
the consumption end acquires a consumption starting offset value of consumption from a Health Checking Center (HCC);
the consumption end resets the starting point of its consumption progress according to the consumption starting offset value;
and the consumption end is connected to a new service cluster corresponding to the new service cluster information, and starts to consume according to the set consumption starting offset value.
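The failover sequence in the third aspect can be sketched as follows; `CentralZookeeper` and `HealthCheckCenter` here are hypothetical in-memory stand-ins for the real services, not the patent's implementation:

```python
class CentralZookeeper:
    """Stand-in for the central zookeeper's stored cluster availability."""
    def __init__(self, clusters):
        self.clusters = clusters  # cluster name -> available flag

    def main_cluster(self):
        # Return the first available cluster (stand-in for stored main info).
        return next(name for name, up in self.clusters.items() if up)

class HealthCheckCenter:
    """Stand-in for the HCC's stored consumption starting offsets."""
    def __init__(self, start_offsets):
        self.start_offsets = start_offsets  # cluster name -> start offset

def failover(zk, hcc, current_cluster):
    """On main-cluster failure: pick a new cluster, fetch the reset offset."""
    zk.clusters[current_cluster] = False      # main cluster observed down
    new_cluster = zk.main_cluster()           # new cluster info from zookeeper
    start = hcc.start_offsets[new_cluster]    # starting offset from the HCC
    return new_cluster, start

zk = CentralZookeeper({"A": True, "B": True})
hcc = HealthCheckCenter({"A": 9700, "B": 12000})
print(failover(zk, hcc, "A"))  # -> ('B', 12000)
```

The consumption end then connects to the returned cluster and begins consuming from the returned offset, mirroring the last two steps of the third aspect.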
A fourth aspect provides a remote multi-active distributed message sending device, the device comprising:
the monitoring module is used for monitoring whether at least two service clusters in the server side are available;
the first sending module is used for sending the messages issued by the production end to the at least two service clusters respectively when the monitoring module monitors that the at least two service clusters are available;
the first selection module is used for selecting one service cluster from the at least two service clusters which are monitored by the monitoring module to be available as a main service cluster;
and the first updating module is used for updating the main service cluster information of the main service cluster to the available rest service clusters, the center zookeeper and the health checking center HCC.
Optionally, the first selection module includes:
the acquisition module is used for acquiring the distributed lock from the central zookeeper of the server;
and the selection submodule is used for selecting one service cluster from the at least two service clusters as a main service cluster through the distributed lock.
A fifth aspect provides a remote multi-active distributed message processing device, the device comprising:
the first receiving module is used for receiving the main service cluster information sent by the production end;
the storage module is used for storing the main service cluster information;
the second receiving module is used for receiving the connection request sent by the consumption end;
a determining module, configured to determine, according to the connection request and the main service cluster information stored in the storage module, main service cluster information of the consuming end;
and the first sending module is used for sending the main service cluster information to the corresponding consuming end so that the consuming end is connected to the main service cluster corresponding to the main service cluster information for consumption.
Optionally, the apparatus further comprises:
the calculating module is used for calculating the consumption starting offset value consumed by each consumption end on the corresponding service cluster;
and the synchronization module is used for synchronizing the consumption starting point offset value to zookeeper and health check center HCC respectively designated by each of the rest service clusters.
Optionally, the calculating module calculates the consumption starting offset value consumed by each consumption end on the corresponding service cluster according to the following formula:
offsetA = offsetB - Lag - adjustment factor; if offsetA <= 0, offsetA is set to 0
where offsetA is the consumption starting offset value of a topic's messages on service cluster A;
offsetB is the current consumption offset value of the topic on service cluster B;
Lag is the consumption lag value of the topic's messages on service cluster B;
and the adjustment factor is an adjustment constant.
A sixth aspect provides a remote multi-active distributed message consumption device, the device comprising:
the first acquisition module is used for acquiring the current main service cluster information from a center zookeeper of the server;
the connection module is used for connecting the corresponding main service cluster to consume according to the main service cluster information;
the monitoring module is used for monitoring whether the main service cluster is available;
the first acquiring module is further configured to acquire new service cluster information from the central zookeeper again when the monitoring module monitors that the main service cluster is unavailable;
a second obtaining module, configured to obtain a consumption starting offset value for consumption from the health checking center HCC;
the resetting module is used for resetting the starting point of the consumption progress according to the consumption starting offset value;
and the connection module is further configured to start consumption according to the consumption starting offset value set on the new service cluster corresponding to the new service cluster information.
Compared with the prior art, the embodiment of the invention has the following advantages:
In the embodiments of the present invention, the production end sends the published message to every available service cluster, so that even if a certain service cluster later has a problem, the service as a whole does not become unavailable, which improves the high availability of the service clusters. The server end allocates the corresponding main service cluster to the consumption end in advance, calculates the consumption starting offset value consumed by each consumption end on the corresponding service cluster, and synchronizes that value to the zookeeper designated by each of the remaining service clusters and to the health check center (HCC). In other words, the different service clusters all record the consumption starting offset values of the consumption ends, which reduces repeated consumption at the consumption end and improves consumer satisfaction.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
Fig. 1 is a flowchart of a remote multi-active distributed message sending method according to an embodiment of the present invention;
Fig. 2 is another flowchart of a remote multi-active distributed message sending method according to an embodiment of the present invention;
Fig. 3 is a flowchart of a remote multi-active distributed message processing method according to an embodiment of the present invention;
Fig. 4 is another flowchart of a remote multi-active distributed message processing method according to an embodiment of the present invention;
Fig. 5 is a flowchart of a remote multi-active distributed message consumption method according to an embodiment of the present invention;
Fig. 6 is a flowchart of a remote multi-active distributed message management method according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a remote multi-active distributed message sending apparatus according to an embodiment of the present invention;
Fig. 8 is another schematic structural diagram of a remote multi-active distributed message sending apparatus according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a remote multi-active distributed message processing apparatus according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a remote multi-active distributed message consumption apparatus according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a remote multi-active distributed message consumption system according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, which is a flowchart of a remote multi-active distributed message sending method according to an embodiment of the present invention, the method includes the following steps:
step 101: the production end monitors whether at least two service clusters of the service end are available;
in this step, each production end in the production end cluster (or each message sending end in the message sending end cluster) first monitors whether all the service clusters of the service end are available (i.e., whether the service clusters are alive), and then selects and marks the main service cluster according to the monitoring result. The consuming end can only connect one main service cluster in each consuming process, and the specific information of the main service cluster is selected for the consuming end by the generating end in advance.
In this embodiment, the generation end may monitor whether the service cluster is available through a Software Development Kit (SDK) package, and a specific monitoring process thereof is well known to those skilled in the art and is not described herein again.
The service cluster in this embodiment may be a Kafka cluster.
Step 102: if the production end monitors that the at least two service clusters are available, the production end sends the messages issued by the production end to the at least two service clusters respectively;
In this step, if the production end monitors that some or all of the service clusters of the server end are available, it sends the message it publishes to every service cluster that is monitored to be available.
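A minimal sketch of this fan-out publish step follows; the cluster names and the availability/send stubs are assumptions for illustration, not the patent's implementation:

```python
def publish_to_available(message, clusters, is_available):
    """Send one published message to every service cluster observed available."""
    delivered = []
    for name in clusters:
        if is_available(name):      # availability check, e.g. via an SDK probe
            delivered.append(name)  # stand-in for an actual send to the cluster
    return delivered

clusters = ["A", "B"]
up = {"A": True, "B": True}
print(publish_to_available("msg-1", clusters, lambda n: up[n]))  # -> ['A', 'B']
```

When only one cluster reports available, the same loop naturally degrades to the single-cluster case of the fig. 2 embodiment; when none is available, it delivers to no cluster, matching the stop-sending behavior described later.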
Step 103: the production end selects one service cluster from the at least two service clusters to be marked as a main service cluster;
In this step, the production end selects one service cluster from the available service clusters and marks it as the main service cluster. One way of selecting it may be:
the production end acquires a distributed lock from the central zookeeper of the server end, and then selects one service cluster from the at least two service clusters as the main service cluster through the distributed lock, i.e., marks the selected service cluster as the main service cluster.
A distributed lock is a way to control synchronous access to shared resources between distributed systems. In a distributed system, the participants often need to coordinate their actions. If one resource or a group of resources is shared between different systems, or between different hosts of the same system, then access to these resources often requires mutual exclusion to prevent the participants from interfering with each other and to ensure consistency; in such cases a distributed lock is used.
That is, a distributed lock provides a mutual-exclusion mechanism among a set of processes, so that only one process can hold the lock at any time. Distributed locks can be used to implement leader election in large distributed systems (for example, selecting a main service cluster among multiple service clusters); at any point in time, the process holding the lock is the leader of the system. The specific implementation process is well known to those skilled in the art and is not described here.
For example, after monitoring that service cluster A and service cluster B of the server end are available, the production end first acquires a distributed lock from the central zookeeper of the server end and then sends election requests, which carry the distributed lock, to service cluster A and service cluster B respectively. On receiving the election request, service cluster A and service cluster B each feed back a response to the production end. If the production end receives the response of service cluster A first, service cluster A holds the distributed lock, and the production end marks service cluster A as the main service cluster. Conversely, if the production end receives the response of service cluster B first, service cluster B holds the distributed lock, and the production end marks service cluster B as the main service cluster.
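The first-response election described in this example can be simulated in a few lines; the response delays are invented values standing in for real network round-trips, not measurements from the patent:

```python
def elect_main_cluster(response_delays):
    """Pick as main the cluster whose election response arrives first.

    response_delays: cluster name -> simulated round-trip time in ms.
    The real system races actual responses; min() over delays models that.
    """
    return min(response_delays, key=response_delays.get)

print(elect_main_cluster({"A": 12.5, "B": 30.1}))  # -> 'A'
```

Because every production end races the same lock-protected election, the cluster answering first is consistently marked as the main service cluster.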
Step 104: and the production end updates the main service cluster information of the main service cluster to the available other service clusters, the center zookeeper and the health check center HCC.
The production end sends the selected main service cluster information to the zookeeper designated by each of the other available service clusters for storage, and registers the main service cluster information with the central zookeeper, so that both the zookeepers designated by the available service clusters and the central zookeeper store the main service cluster information and can provide it to the consumption end.
In this embodiment, the production end may further send the selected main service cluster information to the health check center (HCC), so that the HCC stores the main service cluster information and checks the availability of the main service cluster corresponding to that information, as described in detail below.
It should be noted that the service cluster in this embodiment is a Kafka cluster (the same applies to the following embodiments); each Kafka cluster is composed of multiple Kafka instances (servers), and each instance is a broker.
In the embodiment of the present invention, the production end monitors the availability of all service clusters of the server end, sends the published message to each of the available service clusters, selects the main service cluster for the consumption end (or message receiving end) in advance, and updates the selected main service cluster information to the remaining service clusters and the central zookeeper. In other words, because the production end sends the message it publishes to every available service cluster and selects the main service cluster for the consumption end in advance, a problem in any single service cluster does not make the whole service unavailable, which improves the high availability of the service clusters.
Referring to fig. 2, which is another flowchart of a remote multi-active distributed message sending method according to an embodiment of the present invention. This embodiment differs from the previous one in that the production end monitors that only one service cluster is available. The method includes:
step 201: the production end monitors whether at least two service clusters of the service end are available;
the step is the same as step 101, which is described in detail above and will not be described herein again.
Step 202: if the production end monitors that only one of the at least two service clusters is available, the production end sends the message issued by the production end to the available service cluster;
in this step, the production end monitors that only one service cluster is available at the service end, and sends the message issued by the production end to the available service cluster.
Step 203: the production end takes the available service cluster as a main service cluster;
The production end marks the available service cluster as the main service cluster; that is, the main service cluster is selected for the consumption end in advance.
Step 204: and the production end updates the information of the main service cluster to the available service cluster, the center zookeeper and the health check center HCC.
The production end updates the selected main service cluster information to the zookeeper designated by the available service cluster and to the central zookeeper, so that the designated zookeeper, the central zookeeper, and the health check center (HCC) all store the main service cluster information and can provide it to the consumption end.
Optionally, in another embodiment, on the basis of the above embodiment, the method may further include:
and if the production end monitors that none of the at least two service clusters is available, stopping sending the message issued by the production end to any one of the at least two service clusters.
That is, if the production end monitors that all service clusters of the server end are unavailable, the production end stops sending the message it publishes to any service cluster of the server end.
In the embodiment of the invention, the production end sends the issued message to the available service cluster, and selects the main service cluster for the consumption end in advance, thereby improving the high availability of the service cluster.
Referring to fig. 3, a flowchart of a distributed message processing method for multiple different locations is provided, where the method includes:
step 301: the server receives main service cluster information sent by the production end;
In this step, the central zookeeper of the server end receives the main service cluster information sent by one or more production ends.
The central zookeeper is a service cluster composed of multiple servers that monitors the service state of each machine in the current service cluster; once a machine cannot provide service, the rest of the cluster must learn of it, so that the service strategy can be adjusted and redistributed.
Step 302: the server side stores the main service cluster information;
step 303: the server receives a connection request sent by a consumer;
the consuming end may be one consuming end or a plurality of consuming ends, and this embodiment is not limited.
Wherein, the connection request is the connection request initiated by the consumption end.
Step 304: the server side determines the main service cluster information of the consumption side according to the connection request;
and the central zookeeper of the server selects corresponding main service cluster information for the consumer by utilizing a load balancing distribution principle according to the connection request and the stored main service cluster information.
Step 305: and the server side sends the main service cluster information to the corresponding consumption side so that the consumption side is connected to the main service cluster corresponding to the main service cluster information for consumption.
The central zookeeper of the server end sends the selected main service cluster information to the consumption end, so that the consumption end receives the information and establishes a connection to the corresponding main service cluster for consumption.
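One simple reading of the load-balancing allocation in steps 304 and 305 is round-robin assignment of connecting consumers to the stored main clusters; this is an illustrative assumption, not the patent's prescribed policy:

```python
from itertools import cycle

def assign_main_clusters(consumers, main_clusters):
    """Round-robin each connecting consumer onto one of the stored main clusters."""
    rotation = cycle(main_clusters)
    return {consumer: next(rotation) for consumer in consumers}

print(assign_main_clusters(["c1", "c2", "c3"], ["A", "B"]))
# -> {'c1': 'A', 'c2': 'B', 'c3': 'A'}
```

Any other balancing rule (least-loaded, hash-based) would slot into the same place: the central zookeeper answers each connection request with exactly one main service cluster.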
In the embodiment of the present invention, the server end first receives and stores the main service cluster information sent by the different production ends, and then allocates the corresponding main service cluster to the consumption end according to the connection request initiated by the consumption end, which improves the high availability of the service clusters.
Referring to fig. 4, which is another flowchart of a remote multi-active distributed message processing method; it differs from the embodiment of fig. 3 in that, after the consumption end consumes, the consumption starting offset value of the consumption end is calculated and synchronized to the other service clusters. The method includes the following steps:
step 401: the server receives main service cluster information sent by the production end;
step 402: the server side stores the main service cluster information;
step 403: the server receives a connection request sent by a consumer;
step 404: the server side determines the main service cluster information of the consumption side according to the connection request;
step 405: the server side sends the main service cluster information to the consumption side so that the consumption side can be connected to the main service cluster corresponding to the main service cluster information for consumption;
it should be noted that steps 401 to 405 in this embodiment are the same as steps 301 to 305 in the above embodiment, and are described in detail above, and are not described again here.
Step 406: the server calculates a consumption starting offset value consumed by each consumption end on the corresponding service cluster;
in this embodiment, the center zookeeper of the server calculates the consumption start offset value consumed by each consuming end on the corresponding service cluster by using an offset synchronization (Offset Syncer) application. The specific calculation formula is as follows:
offset A = offset B - Lag - adjustment factor; if the computed offset A is less than or equal to 0, offset A is taken as 0,
wherein, offset a is a consumption starting offset value of a certain topic (topic) information on the service cluster a;
the offset B is a current consumption offset value of the topic information on the service cluster B;
the Lag is the lag value of the topic's consumption on the service cluster B, calculated as: Lag = LogSize - Offset, where the Offset value can be obtained from Zookeeper; Offset is the offset value marked as already consumed for a single partition of a certain topic in a service cluster (such as a Kafka cluster); LogSize is the total number of messages for that topic partition in the service cluster (such as a Kafka cluster). A Partition is a partition contained in a Topic; a Topic typically has 9 partitions, numbered 0 to 8.
The adjustment factor is an adjustment value, i.e., an adjustment constant, set to further improve accuracy. It can be set flexibly according to the consumption speed and the switching time, and is calculated as: adjustment factor = QPS x time interval, where QPS is the monitored query-per-second (Query Per Second) rate of the consumer.
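The offset formula above can be sketched in code. This is a minimal illustration; the function and parameter names are assumptions for the sketch and do not appear in the original:

```python
def consumption_start_offset(offset_b, log_size, qps, switch_time_s):
    """Consumption start offset on cluster A, derived from cluster B's state.

    offset_b      -- current consumed offset of the topic partition on cluster B
    log_size      -- total number of messages in that partition on cluster B
    qps           -- monitored query-per-second rate of the consumer
    switch_time_s -- estimated cluster-switching time in seconds
    """
    lag = log_size - offset_b               # Lag = LogSize - Offset
    adjustment = qps * switch_time_s        # adjustment factor = QPS x time interval
    offset_a = offset_b - lag - adjustment  # offset A = offset B - Lag - adjustment
    return max(0, int(offset_a))            # non-positive results are taken as 0
```

For example, with offset B = 1000, LogSize = 1100, QPS = 10 and a 2-second switching window, the start offset on cluster A would be 1000 - 100 - 20 = 880.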
Step 407: the server synchronizes the consumption start offset value to the zookeeper designated by each of the remaining service clusters and to the health check center HCC.
The center zookeeper of the server synchronizes the consumption start offset value calculated by the Offset Syncer application to the zookeeper designated by each of the other service clusters, so that the other service clusters learn the consumption status of the consuming ends in time.
The synchronization may be performed in real time, or periodically within a set interval, for example every second, or every 2 to 5 seconds, or at another interval according to actual needs.
In the embodiment of the invention, the server receives and stores the main service cluster information sent by different production ends, allocates the corresponding main service cluster to the consuming end according to the connection request initiated by the consuming end, calculates the consumption start offset value consumed by each consuming end on the corresponding service cluster, and synchronizes the consumption start offset value to the zookeeper designated by each of the other service clusters and to the health check center HCC. That is, the consumption start offset values of the consuming ends recorded by different service clusters are kept synchronized, which reduces repeated consumption by the consuming end as much as possible while improving the high availability of the service clusters.
Referring to fig. 5, a flowchart of a distributed message consumption method for multiple different locations is provided, where the method includes:
step 501: the consumption end acquires current main service cluster information from a center zookeeper of the server end;
in this step, before consumption, each consumption end in the consumption end cluster firstly obtains current main service cluster information, namely master service cluster information, from zookeeper of the service end.
Wherein, the consuming side in this embodiment includes one or more consuming clusters.
Step 502: the consumption end is connected with the corresponding main service cluster according to the main service cluster information to consume;
and the consumption end establishes connection with the corresponding main service cluster according to the acquired main service cluster information and then starts to consume.
Step 503: the consumption end monitors whether the main service cluster is available;
in the process of consumption, the consuming end monitors, in real time or periodically, whether the main service cluster is available. The monitoring may be performed by a packaged software development kit (SDK); the specific monitoring process is well known to those skilled in the art and is not described here again.
Step 504: if the consumption end monitors that the main service cluster is unavailable, acquiring new service cluster information from the central zookeeper;
and if the consumption end monitors that the main service cluster is unavailable, new service cluster information is obtained from the central zookeeper again.
Step 505: the consumer side acquires a consumption starting offset value of the consumer side from the health checking center HCC;
after acquiring a new service cluster, the consuming end acquires a consumption starting offset value of the consumption from the health checking center HCC.
Step 506: the consumption end resets the consumption progress according to the consumption starting offset value;
The consuming end resets the consumption progress according to the consumption start offset value, so as to control the consumption progress and reduce repeated consumption of messages to the maximum extent.
Step 507: and the consumption end is connected with a new service cluster corresponding to the new service cluster information, and starts to consume according to the set consumption starting offset value.
In the embodiment of the invention, before consumption, each consumption cluster of the consuming end obtains the current main service cluster information from the center zookeeper of the server, establishes a connection with the corresponding main service cluster to start consumption, and sets the starting point of its consumption progress according to the consumption start offset value registered on the center zookeeper. When the main service cluster is monitored to be unavailable, a new service cluster and the corresponding consumption start offset value are obtained, the starting point of the consumption progress is reset according to the obtained value, and consumption then starts from the reset starting point after switching to the new service cluster, so that repeated consumption of messages is reduced to the maximum extent.
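The switching flow of steps 501 to 507 can be sketched as follows. The `zk` and `hcc` clients and their method names are hypothetical stand-ins for the central zookeeper and HCC lookups described above, not a real API:

```python
class FailoverConsumer:
    """Sketch of the consumer-side failover flow (steps 501-507)."""

    def __init__(self, zk, hcc):
        self.zk = zk
        self.hcc = hcc
        # Steps 501-502: fetch the current master cluster and connect.
        self.cluster = zk.get_master_cluster()
        self.position = None

    def on_cluster_unavailable(self, topic):
        # Step 504: fetch the newly designated cluster from the central zookeeper.
        self.cluster = self.zk.get_master_cluster()
        # Step 505: fetch the synchronized consumption start offset from the HCC.
        start = self.hcc.get_start_offset(topic, self.cluster)
        # Steps 506-507: reset the consumption progress and resume from it.
        self.seek(topic, start)
        return start

    def seek(self, topic, offset):
        self.position = (topic, offset)
```

Any real Zookeeper or HCC client would replace the two fake lookups; the structure of the switch (re-resolve cluster, fetch offset, seek, resume) is what the steps above prescribe.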
Referring to fig. 6, a flowchart of a method for managing multiple remote active distributed messages according to an embodiment of the present invention is shown, where the method includes:
step 601: the health checking center checks whether each service cluster of the server is available;
in this step, a Health Check Center (HCC) of the server is connected to all service clusters respectively and is used to check whether each service cluster is available (i.e., its survival). Specifically, the HCC usually uses a heartbeat to check whether each cluster is available, and when a cluster is unavailable, deletes that service cluster's information from its designated zookeeper.
In this embodiment, each service cluster has a specific zookeeper, and information of the service cluster is stored in the zookeeper.
Currently, the service cluster information stored in each designated Zookeeper is stored as a temporary (ephemeral) node. When checking whether each service cluster is available, the HCC mainly checks whether a heartbeat exists within a timeout period. If the heartbeat still arrives within the timeout period, the service cluster is available and the temporary node remains; if the heartbeat is lost, the service cluster is unavailable, and the temporary node needs to be deleted from the zookeeper.
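The heartbeat-and-timeout check can be sketched as below. The `registry` dict stands in for the per-cluster temporary nodes, and all names are illustrative assumptions rather than the patent's implementation:

```python
import time

class HealthCheckCenter:
    """Sketch of the HCC availability check: a cluster stays registered
    as long as its heartbeat arrived within `timeout` seconds."""

    def __init__(self, timeout=10.0):
        self.timeout = timeout
        self.registry = {}  # cluster name -> last heartbeat timestamp

    def heartbeat(self, cluster, now=None):
        # Record a heartbeat (creates or refreshes the 'temporary node').
        self.registry[cluster] = time.time() if now is None else now

    def check(self, now=None):
        now = time.time() if now is None else now
        # Delete clusters whose heartbeat has been lost beyond the timeout.
        for cluster, last in list(self.registry.items()):
            if now - last > self.timeout:
                del self.registry[cluster]
        return set(self.registry)  # clusters still considered available
```

In a real deployment the deletion would happen automatically when the ephemeral node's Zookeeper session expires; the explicit dict here only mirrors that behavior for illustration.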
Step 602: if the health check center checks that a service cluster is available, reserving the service cluster information in a zookeeper appointed by the available service cluster;
the HCC, upon checking that one or more service clusters are available, retains the service cluster information in the zookeeper specified by each service cluster that is available.
Step 603: and if the health check center checks that a service cluster is unavailable, deleting the service cluster information in the zookeeper appointed by the unavailable service cluster.
In this step, when the HCC detects that at least one service cluster is unavailable, the HCC deletes the information of the unavailable service cluster from that cluster's designated zookeeper, that is, deletes the corresponding temporary node; subsequent consumers no longer access that service cluster and are switched to a new service cluster.
Optionally, in another embodiment, on the basis of the above embodiment, the method may further include: the HCC obtains the consumption start offset value recorded on each service cluster for each consuming end; sets the consumption progress of each consuming end according to the consumption start offset value; and then synchronizes the consumption progress to the zookeeper designated by each of the remaining service clusters.
In the embodiment of the invention, the HCC checks the availability of the service clusters at the server side and deletes unavailable service clusters; it then obtains the consumption start offset value recorded for each consuming end on each service cluster and synchronizes it to the zookeeper designated by the other service clusters. That is, it synchronizes the consumption progress of the consuming ends recorded by the different service clusters, which reduces repeated consumption by the consuming end as much as possible while improving the high availability of the service clusters.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 7, a schematic structural diagram of a remote multi-active distributed message sending apparatus provided in an embodiment of the present invention is shown, where the apparatus includes: a monitoring module 71, a first sending module 72, a first selecting module 73 and a first updating module 74, wherein,
a monitoring module 71, configured to monitor whether at least two service clusters in the server are available;
a first sending module 72, configured to send a message issued by a production end to the at least two service clusters respectively when the monitoring module 71 monitors that the at least two service clusters are available;
a first selecting module 73, configured to select one service cluster from the at least two service clusters monitored by the monitoring module 71 as a main service cluster;
in this embodiment, the first selecting module includes: the system comprises an acquisition module and a selection submodule (not shown in the figure), wherein the acquisition module is used for acquiring a distributed lock from a central zookeeper of a server; and the selection submodule is used for selecting one service cluster from the at least two service clusters as a main service cluster through the distributed lock.
A first updating module 74, configured to update the main service cluster information of the main service cluster to the available remaining service clusters, the center zookeeper and the health checking center HCC.
Optionally, in another embodiment, the apparatus may further include: a second sending module 81, a second selecting module 82 and a second updating module 83, whose schematic structural diagram is shown in fig. 8, wherein,
a second sending module 81, configured to send a message issued by a production end to an available service cluster when the monitoring module 71 monitors that only one service cluster of the at least two service clusters is available;
a second selecting module 82, configured to use one service cluster monitored by the monitoring module 71 as a master service cluster;
the second updating module 83 is configured to update the information of the main service cluster to an available service cluster, a central zookeeper and a health checking center HCC.
Optionally, in another embodiment, on the basis of the above embodiment, the apparatus may further include: a sending stopping module (not shown in the figure), wherein the sending stopping module is configured to stop sending the message issued by the production end to any one of the at least two service clusters when the monitoring module monitors that none of the at least two service clusters is available.
Optionally, the apparatus may be integrated on each production cluster in the production end, or may be deployed independently, which is not limited in this embodiment.
The implementation process of the functions and actions of each module in the device is detailed in the implementation process of the corresponding step in the method, and is not described herein again.
Referring to fig. 9, a schematic structural diagram of a distributed message processing apparatus with multiple different locations is provided in an embodiment of the present invention, where the apparatus includes: a first receiving module 91, a storing module 92, a second receiving module 93, a determining module 94 and a first sending module 95, wherein,
a first receiving module 91, configured to receive main service cluster information sent by a production end;
a storage module 92, configured to store the main service cluster information, where the storage module may be integrated on a central zookeeper or may be deployed independently;
a second receiving module 93, configured to receive a connection request sent by a consuming end;
a determining module 94, configured to determine, according to the connection request and the master service cluster information stored in the storage module, master service cluster information of the consuming end;
a first sending module 95, configured to send the main service cluster information to the corresponding consuming end, so that the consuming end is connected to the main service cluster corresponding to the main service cluster information for consumption.
Optionally, in another embodiment, on the basis of the above embodiment, the apparatus may further include: a calculation module and a synchronization module (not shown), wherein,
the calculating module is used for calculating the consumption starting offset value consumed by each consumption end on the corresponding service cluster; wherein, the calculating module calculates the consumption starting offset value according to the following formula:
offset A = offset B - Lag - adjustment factor; if the computed offset A is less than or equal to 0, offset A is taken as 0,
wherein, offset a is a consumption starting offset value of a certain topic (topic) information on the service cluster a;
the offset B is a current consumption offset value of certain topic information on the service cluster B;
the Lag is the lag value of a certain topic's consumption on the service cluster B, calculated as: Lag = LogSize - Offset, where the Offset value can be obtained from Zookeeper; Offset is the offset value marked as already consumed for a single partition of a certain topic in a service cluster (such as a Kafka cluster); LogSize is the total number of messages for that topic partition in the service cluster (such as a Kafka cluster). A Partition is a partition contained in a Topic; a Topic typically has 9 partitions, numbered 0 to 8.
The adjustment factor is an adjustment value, i.e., an adjustment constant, set to further improve accuracy. It can be set flexibly according to the consumption speed and the switching time, and is calculated as: adjustment factor = QPS x time interval, where QPS is the monitored query-per-second (Query Per Second) rate of the consumer.
And the synchronization module is used for synchronizing the consumption starting point offset value to zookeeper and health check center HCC respectively designated by each of the rest service clusters.
Optionally, the apparatus may be integrated on a central zookeeper in a server, or may be deployed independently, which is not limited in this embodiment.
The implementation process of the functions and actions of each module in the device is detailed in the implementation process of the corresponding step in the method, and is not described herein again.
Referring to fig. 10, a schematic structural diagram of a distributed message consumption device for multiple different locations according to an embodiment of the present invention is shown, where the device includes: a first acquisition module 11, a connection module 12, a monitoring module 13, a second acquisition module 14 and a reset module 15, wherein,
the first obtaining module 11 is configured to obtain current main service cluster information from a center zookeeper of a server;
the connection module 12 is configured to connect the corresponding main service cluster for consumption according to the main service cluster information;
a monitoring module 13, configured to monitor whether the main service cluster is available;
the first obtaining module 11 is further configured to obtain new service cluster information from the central zookeeper again when the monitoring module 13 monitors that the main service cluster is unavailable;
the connection module 12 is further configured to connect to a new service cluster corresponding to the new service cluster information according to the new service cluster information for consumption;
a second obtaining module 14, configured to obtain a consumption starting offset value for consumption from the health checking center HCC;
a resetting module 15, configured to reset the consumption progress according to the consumption start offset value;
the connection module 12 is further configured to start consumption according to the consumption starting offset value set when the connection module is connected to a new service cluster.
Optionally, the device may be integrated on each consumption cluster of the consumption end, or may be deployed independently, which is not limited in this embodiment.
The implementation process of the functions and actions of each module in the device is detailed in the implementation process of the corresponding step in the method, and is not described herein again.
Referring to fig. 11, a schematic structural diagram of a distributed message consumption system for multiple remote active locations is provided, where the system includes: an AQM 20, a production end 21, a server end 22 and a consumption end 23. For convenience of illustration, in this embodiment, the production end 21 includes Producer1 and Producer2; the server end 22 includes a Center Zookeeper 221, a Health Check Center 222 (HCC), a service cluster (Kafka) A and a service cluster (Kafka) B, together with a ZookeeperA corresponding to KafkaA and a ZookeeperB corresponding to KafkaB; and the consumption end 23 includes Consumer1 and Consumer2.
In this embodiment, suppose the AQM 20 sends a Beijing Telecom message to Producer1 and a Beijing Unicom message to Producer2. The following description takes Producer1 as an example; Producer2 behaves similarly to Producer1.
After receiving the Beijing Telecom message, Producer1 monitors whether KafkaA and KafkaB of the server end 22 are available. If KafkaA and KafkaB are both available, Producer1 sends the message it issues (namely the received Beijing Telecom message) to KafkaA and KafkaB respectively; KafkaA and KafkaB each store the received message. Meanwhile, ZookeeperA mainly serves to maintain and monitor the state changes of the data stored by KafkaA, and by monitoring these state changes, data-based cluster management can be achieved. Similarly, ZookeeperB mainly serves to maintain and monitor the state changes of the data stored by KafkaB, and by monitoring these state changes, data-based cluster management can be achieved.
Producer1 selects one Kafka from KafkaA and KafkaB as the primary Kafka for the consumer side to connect to. The selection process comprises: Producer1 acquires the distributed lock from the central zookeeper 221 of the server; then, through the distributed lock, one Kafka is selected from KafkaA and KafkaB as the primary Kafka, for example KafkaA is marked as the primary. Producer1 then updates the information of the primary KafkaA to the available KafkaB, the central zookeeper and the health check center HCC 222. KafkaB, the central zookeeper and the health check center HCC of the server end 22 store the received information of the primary KafkaA.
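The lock-guarded master selection can be sketched as follows. A real deployment would take the lock on the central zookeeper (for example via a Zookeeper lock recipe); here a process-local threading.Lock stands in, and the pick-the-first-sorted-cluster rule is purely illustrative:

```python
import threading

class MasterElector:
    """Sketch of producer-side master-cluster selection via a distributed lock.

    Only the first producer to acquire the lock performs the selection;
    later producers observe the choice already made.
    """

    def __init__(self):
        self._lock = threading.Lock()  # stand-in for the zookeeper lock
        self.master = None

    def elect(self, available_clusters):
        with self._lock:
            if self.master is None and available_clusters:
                # Illustrative rule: pick the first cluster in sorted order.
                self.master = sorted(available_clusters)[0]
        return self.master
```

The lock guarantees that concurrent producers cannot mark two different clusters as primary, which is the point of routing the selection through the central zookeeper.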
When receiving a connection request sent by a Consumer (such as a Consumer1), a center zookeeper221 of a server determines main service cluster information of the Consumer (such as a Consumer1) according to the connection request; the determination process comprises the following steps: the center zookeeper of the server 22 selects, according to the connection request and the stored main service cluster information, corresponding main service cluster information for the Consumer1 by using a load balancing distribution principle, for example, selects KafkaA information for the Consumer 1.
Finally, the center zookeeper221 of the server 22 sends the selected main service cluster information (such as KafkaA information) to the Consumer1, so that the Consumer1 is connected to the KafkaA corresponding to the KafkaA information for consumption.
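One possible reading of the load-balancing assignment above is simple round-robin over the stored cluster information. The sketch below is an assumption, since the text does not fix the balancing rule, and the class name is illustrative:

```python
from itertools import cycle

class ClusterAssigner:
    """Round-robin assignment of master-cluster info to connecting consumers."""

    def __init__(self, clusters):
        self._ring = cycle(clusters)  # stored main service cluster information

    def assign(self, consumer_id):
        # Each connection request receives the next cluster in the ring.
        return next(self._ring)
```

Under this rule, successive connection requests are spread evenly across the available clusters, matching the intent of a load-balancing distribution principle.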
A consuming end (such as Consumer1) acquires the current main service cluster information (such as KafkaA's) from the center zookeeper 221 of the server end 22 and connects to the corresponding main service cluster for consumption according to that information. Thereafter, the consuming end 23 monitors whether the main service cluster is available; when the main service cluster is monitored to be unavailable, new service cluster information is obtained from the central zookeeper; then the consumption start offset value is obtained from the health check center HCC 222; the starting point of the consumption progress is reset according to the consumption start offset value; and finally, the consuming end connects to the new service cluster corresponding to the new service cluster information and starts consuming from the set consumption start offset value.
The health check center HCC checks whether each service cluster of the server is available, if yes, the service cluster information is reserved in a specified zookeeper of the available service cluster; and if the service cluster is detected to be unavailable, deleting the service cluster information in the appointed zookeeper of the unavailable service cluster.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method and apparatus provided by the present invention are described in detail, and the principle and the implementation of the present invention are described herein by using specific examples, which are only used to help understand the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (12)

1. A method for sending a multi-active distributed message in different places is characterized by comprising the following steps:
the production end monitors whether at least two service clusters of the service end are available;
if the production end monitors that the at least two service clusters are available, the production end sends the messages issued by the production end to the at least two service clusters respectively;
the production end selects one service cluster from the at least two service clusters as a main service cluster;
the production end updates the main service cluster information of the main service cluster to other available service clusters, a central zookeeper and a health check center HCC, so that when the consumption end monitors that the main service cluster is unavailable, new service cluster information is obtained from the central zookeeper; wherein the health check center HCC is adapted to monitor whether each service cluster is available.
2. The method of claim 1, wherein the selecting, by the production end, one service cluster from the at least two service clusters as a master service cluster comprises:
the production end acquires a distributed lock from a central zookeeper of the server end;
and the production end selects one service cluster from the at least two service clusters as a main service cluster through the distributed lock.
3. A distributed message processing method for multiple activities at different places is characterized by comprising the following steps:
the server receives main service cluster information sent by the production end;
the server side stores the main service cluster information;
the server receives a connection request sent by a consumer;
the server side determines the main service cluster information of the consumption side according to the connection request;
the server side sends the main service cluster information to the consumption side so that the consumption side can be connected to the main service cluster corresponding to the main service cluster information for consumption;
and the health checking center HCC of the server side monitors whether the main service cluster is available, and when the main service cluster is unavailable, the information of the main service cluster is deleted from the zookeeper appointed by the main service cluster, so that the consumption side acquires new service cluster information from the center zookeeper.
4. The method of claim 3, further comprising:
the server calculates a consumption starting offset value consumed by each consumption end on the corresponding service cluster;
and the server synchronizes the consumption starting point offset value to zookeeper and health check center HCC respectively designated by each of the rest service clusters.
5. The method of claim 4, wherein the server calculates the consumption starting offset value consumed by each consuming end on the corresponding service cluster according to the following formula:
offset A = offset B - Lag - adjustment factor; if offset A is less than or equal to 0, offset A is taken as 0,
Wherein, the offset a is a consumption starting offset value of a topic message on the service cluster a;
the offset B is a current consumption offset value of the topic on the service cluster B;
the Lag is a lag value consumed by the topic message on the service cluster B;
the adjustment factor is an adjustment constant.
6. A remote multi-live distributed message consumption method, comprising:
the consuming end acquires current main service cluster information from the central zookeeper of the server end;
the consuming end connects to the corresponding main service cluster according to the main service cluster information and consumes; wherein the main service cluster is selected for the consuming end in advance by the production end;
the consuming end monitors whether the main service cluster is available;
if the consuming end monitors that the main service cluster is unavailable, it acquires new service cluster information from the central zookeeper;
the consuming end acquires a consumption starting offset value from the health checking center (HCC);
the consuming end resets the starting point of the consumption progress according to the consumption starting offset value;
and the consuming end connects to the new service cluster corresponding to the new service cluster information and starts consuming from the set consumption starting offset value.
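The consumer-side steps above can be walked through as a minimal sketch. All five callbacks (`get_main_cluster`, `is_available`, `get_new_cluster`, `get_start_offset`, `consume_from`) are hypothetical stand-ins for the zookeeper and HCC clients; this illustrates the failover control flow only, not the claimed implementation.

```python
def failover_consume(get_main_cluster, is_available,
                     get_new_cluster, get_start_offset, consume_from):
    """Consumer-side failover: try the main cluster first; if it is
    unavailable, refetch cluster info, fetch a start offset from the
    HCC, and resume consumption from that offset."""
    cluster = get_main_cluster()              # from the central zookeeper
    if is_available(cluster):
        return consume_from(cluster, None)    # normal consumption
    new_cluster = get_new_cluster()           # new info after failure
    start_offset = get_start_offset()         # offset from the HCC
    return consume_from(new_cluster, start_offset)  # reset and resume
```

Passing `None` as the offset in the normal path expresses that the consumer simply continues from its committed position; only the failover path resets the progress starting point.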
7. A remote multi-live distributed message sending apparatus, comprising:
the monitoring module is used for monitoring whether at least two service clusters in the server side are available;
the first sending module is used for sending the messages issued by the production end to the at least two service clusters respectively when the monitoring module monitors that the at least two service clusters are available;
the first selection module is used for selecting one service cluster from the at least two service clusters which are monitored by the monitoring module to be available as a main service cluster;
a first updating module, configured to update the main service cluster information of the main service cluster to the other available service clusters, the central zookeeper and the health checking center (HCC), so that the consuming end acquires new service cluster information from the central zookeeper when monitoring that the main service cluster is unavailable; wherein the health checking center HCC is configured to monitor whether the main service cluster is available.
8. The apparatus of claim 7, wherein the first selection module comprises:
the acquisition module is used for acquiring the distributed lock from the central zookeeper of the server;
and the selection submodule is used for selecting one service cluster from the at least two service clusters as a main service cluster through the distributed lock.
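The election described in claim 8 can be sketched as below. As an assumption for illustration, `threading.Lock` stands in for a zookeeper distributed lock and a shared `state` dict stands in for the znode holding the election result; a real implementation would use a zookeeper recipe such as an ephemeral-sequential lock node.

```python
import threading

def elect_main_cluster(available_clusters, lock, state):
    """Acquire the (distributed) lock, then pick one available cluster
    as the main cluster if none has been elected yet. Holding the lock
    ensures only one producer performs the election at a time."""
    with lock:
        if state.get("main") is None and available_clusters:
            state["main"] = available_clusters[0]
        return state.get("main")
```

The lock makes the election idempotent: a second caller, even with the clusters listed in a different order, observes the already-elected main cluster instead of electing a new one.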
9. A remote multi-live distributed message processing apparatus, comprising:
the first receiving module is used for receiving the main service cluster information sent by the production end;
the storage module is used for storing the main service cluster information;
the second receiving module is used for receiving the connection request sent by the consumption end;
a determining module, configured to determine, according to the connection request and the main service cluster information stored in the storage module, main service cluster information of the consuming end;
a first sending module, configured to send the main service cluster information to the corresponding consuming end, so that the consuming end is connected to the main service cluster corresponding to the main service cluster information for consuming;
and the health checking center (HCC) of the server end monitors whether the main service cluster is available; when the main service cluster is unavailable, the HCC deletes the main service cluster information from the zookeeper designated for the main service cluster, so that the consuming end acquires new service cluster information from the central zookeeper.
10. The apparatus of claim 9, further comprising:
the calculating module is used for calculating the consumption starting offset value for each consuming end on the corresponding service cluster;
and the synchronization module is used for synchronizing the consumption starting offset value to the zookeeper and the health checking center (HCC) designated for each of the remaining service clusters.
11. The apparatus of claim 10, wherein the calculating module calculates the consumption starting offset value for each consuming end on the corresponding service cluster according to the following formula:
offsetA = offsetB - Lag - adjustment factor, and offsetA is less than or equal to 0
wherein offsetA is the consumption starting offset value of a topic message on service cluster A;
offsetB is the current consumption offset value of the topic on service cluster B;
Lag is the consumption lag value of the topic message on service cluster B;
and the adjustment factor is an adjustment constant.
12. A remote multi-live distributed message consumption apparatus, comprising:
the first acquisition module is used for acquiring the current main service cluster information from the central zookeeper of the server end;
the connection module is used for connecting to the corresponding main service cluster according to the main service cluster information for consumption; wherein the main service cluster is selected for the consuming end in advance by the production end;
the monitoring module is used for monitoring whether the main service cluster is available;
the first acquisition module is further configured to acquire new service cluster information from the central zookeeper when the monitoring module monitors that the main service cluster is unavailable;
a second acquisition module, configured to acquire a consumption starting offset value from the health checking center (HCC);
the resetting module is used for resetting the starting point of the consumption progress according to the consumption starting offset value;
and the connection module is further configured to connect to the new service cluster corresponding to the new service cluster information and start consumption from the set consumption starting offset value.
CN201711349628.2A 2017-12-15 2017-12-15 Method and device for sending, processing and consuming multi-live distributed messages in different places Active CN108322358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711349628.2A CN108322358B (en) 2017-12-15 2017-12-15 Method and device for sending, processing and consuming multi-live distributed messages in different places


Publications (2)

Publication Number Publication Date
CN108322358A CN108322358A (en) 2018-07-24
CN108322358B true CN108322358B (en) 2020-09-01

Family

ID=62892280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711349628.2A Active CN108322358B (en) 2017-12-15 2017-12-15 Method and device for sending, processing and consuming multi-live distributed messages in different places

Country Status (1)

Country Link
CN (1) CN108322358B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111182011B (en) * 2018-11-09 2022-06-10 中移(杭州)信息技术有限公司 Service set distribution method and device
CN109743366B (en) * 2018-12-21 2022-04-05 苏宁易购集团股份有限公司 Resource locking method, device and system for multi-living scene
CN109451065B (en) * 2018-12-26 2021-06-01 中电福富信息科技有限公司 Soft load balancing and shunting automation system and operation method thereof
CN111163172B (en) * 2019-12-31 2022-04-22 北京奇艺世纪科技有限公司 Message processing system, method, electronic device and storage medium
CN111953760B (en) * 2020-08-04 2023-08-11 深圳市欢太科技有限公司 Data synchronization method, device, multi-activity system and storage medium
CN112671590B (en) * 2020-12-31 2023-01-20 北京奇艺世纪科技有限公司 Data transmission method and device, electronic equipment and computer storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN105338086A (en) * 2015-11-04 2016-02-17 浪潮软件股份有限公司 Distributed message forwarding method
CN106789741A (en) * 2016-12-26 2017-05-31 北京奇虎科技有限公司 The consuming method and device of message queue
CN106817295A (en) * 2016-12-08 2017-06-09 努比亚技术有限公司 A kind of message processing apparatus and method
CN106953901A (en) * 2017-03-10 2017-07-14 重庆邮电大学 A kind of trunked communication system and its method for improving message transmission performance
CN107451147A (en) * 2016-05-31 2017-12-08 北京京东尚科信息技术有限公司 A kind of method and apparatus of kafka clusters switching at runtime

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US9542414B1 (en) * 2013-01-11 2017-01-10 Netapp, Inc. Lock state reconstruction for non-disruptive persistent operation
CN104516966A (en) * 2014-12-24 2015-04-15 北京奇虎科技有限公司 High-availability solving method and device of database cluster
JP6405255B2 (en) * 2015-02-05 2018-10-17 株式会社日立製作所 COMMUNICATION SYSTEM, QUEUE MANAGEMENT SERVER, AND COMMUNICATION METHOD
CN105183591A (en) * 2015-09-07 2015-12-23 浪潮(北京)电子信息产业有限公司 High-availability cluster implementation method and system
CN107463468A (en) * 2016-06-02 2017-12-12 北京京东尚科信息技术有限公司 Buffer memory management method and its equipment



Similar Documents

Publication Publication Date Title
CN108322358B (en) Method and device for sending, processing and consuming multi-live distributed messages in different places
WO2020147331A1 (en) Micro-service monitoring method and system
CN103581276B (en) Cluster management device, system, service customer end and correlation method
US11068499B2 (en) Method, device, and system for peer-to-peer data replication and method, device, and system for master node switching
CN112333249B (en) Business service system and method
CN105915405A (en) Large-scale cluster node performance monitoring system
CN105471960A (en) Information interaction system and method between private clouds and public cloud
CN109547512A (en) A kind of method and device of the distributed Session management based on NoSQL
CN102984501A (en) Network video-recording cluster system
CN109173270B (en) Game service system and implementation method
CN108199912B (en) Method and device for managing and consuming distributed messages of multiple activities in different places
CN103842964B (en) The system and method for accurate load balance is supported in transaction middleware machine environment
US10802896B2 (en) Rest gateway for messaging
CN108063832B (en) Cloud storage system and storage method thereof
CN107203429A (en) A kind of method and device that distributed task scheduling is loaded based on distributed lock
CN113704354A (en) Data synchronization method and device, computer equipment and storage medium
CN106230622B (en) Cluster implementation method and device
CN112631764A (en) Task scheduling method and device, computer equipment and computer readable medium
CN112486707A (en) Redis-based message asynchronous consumption method and device
CN111262892B (en) Multi-ROS service discovery system
CN111726388A (en) Cross-cluster high-availability implementation method, device, system and equipment
CN108170527B (en) Remote multi-activity distributed message consumption method and device
CN109344202B (en) Data synchronization method and management node
CN109005246B (en) Data synchronization method, device and system
CN114268799B (en) Streaming media transmission method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant