CN116010501A - Distributed middle-stage data processing method, system, device, storage medium and program product - Google Patents

Distributed middle-stage data processing method, system, device, storage medium and program product

Info

Publication number
CN116010501A
CN116010501A (application CN202211733542.0A)
Authority
CN
China
Prior art keywords
data
service unit
service
cache
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211733542.0A
Other languages
Chinese (zh)
Inventor
张大林 (Zhang Dalin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bigo Technology Singapore Pte Ltd
Original Assignee
Bigo Technology Singapore Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bigo Technology Singapore Pte Ltd filed Critical Bigo Technology Singapore Pte Ltd
Priority to CN202211733542.0A
Publication of CN116010501A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application discloses a distributed middle-stage data processing method, system, device, storage medium and program product. The method includes the following steps: when a pre-service detects a data write request, the pre-service forwards the data write request to a first service unit based on acquired packet information; when the first service unit is the main service unit, the first service unit adds a queuing lock to the data write request, obtains the logic data associated with the data write request from a first cache, and processes the logic data based on the data write request to obtain updated data; the first service unit then writes the updated data into the first cache to overwrite the logic data and notifies a broadcast service to broadcast the updated data, where the pre-service, the first service unit, and the broadcast service are deployed in the same machine room. The scheme improves the disaster recovery capability of data processing and provides higher availability.

Description

Distributed middle-stage data processing method, system, device, storage medium and program product
Technical Field
The embodiments of the present application relate to the field of data processing technologies, and in particular, to a distributed middle-stage data processing method, system, device, storage medium, and program product.
Background
With the development of networks and hardware devices, more and more application products are based on networks. For these applications, it is often necessary to implement data transmission and data processing between the client and the server to support the implementation of various functions of the application.
In the related art, a server that receives and processes client data generally adopts either single-point processing or multi-point processing. Single-point processing means that read-write requests sent by clients are received and processed by a single designated instance; when that single point fails, the related services cannot be provided and the failure rate of client read-write operations increases sharply, and if the cache goes down, data is lost, so stability is poor. Multi-point processing means that data is processed and maintained by multiple instances, but this approach makes serialization operations hard to implement, and data consistency is difficult to guarantee.
Disclosure of Invention
The embodiments of the present application provide a distributed middle-stage data processing method, system, device, storage medium and program product, which address the problems in the related art that the client read-write failure rate is high, stability is poor, and data consistency is difficult to guarantee during efficient data processing; the disaster recovery capability of data processing can be improved, and availability is higher.
In a first aspect, an embodiment of the present application provides a distributed middle-stage data processing method, where the method includes:
in case that the pre-service detects a data write request, the pre-service forwards the data write request to a first service unit based on the acquired packet information;
under the condition that the first service unit is a main service unit, adding a queuing lock to the data writing request by the first service unit, obtaining logic data associated with the data writing request in a first cache, and processing the logic data based on the data writing request to obtain updated data;
the first service unit writes the update data into the first cache to cover the logic data, and notifies a broadcast service to perform broadcast notification of the update data, where the pre-service, the first service unit, and the broadcast service are disposed in the same machine room.
In a second aspect, an embodiment of the present application further provides a distributed middle-stage data processing method, which is applied to a server, and includes:
in the case that the pre-service detects a data read request, forwarding the data read request to a first service unit based on the acquired packet information by the pre-service, wherein the first service unit comprises a master service unit or a slave service unit;
the first service unit acquires data information associated with the first cache and the data reading request, and sends the data information to a corresponding client after serialization processing, so that the client can update local storage data based on the data information.
In a third aspect, embodiments of the present application further provide a distributed middle-stage data processing system, including:
a pre-service configured to forward a data write request to a first service unit based on the acquired packet information in case the data write request is detected;
the first service unit is configured to add a queuing lock to the data write request under the condition that the first service unit is a main service unit, obtain logic data associated with the data write request in a first cache, and process the logic data based on the data write request to obtain updated data; and writing the update data into the first cache to cover the logic data, and notifying a broadcast service to perform broadcast notification of the update data, wherein the pre-service, the first service unit and the broadcast service are arranged in the same machine room.
In a fourth aspect, embodiments of the present application further provide a distributed middle-stage data processing system, including:
a pre-service configured to forward a data read request to a first service unit based on the acquired packet information, the first service unit comprising a master service unit or a slave service unit, in case the data read request is detected;
the first service unit is configured to acquire data information associated with the first cache and the data reading request, and send the data information to a corresponding client after serialization processing, so that the client can update local storage data based on the data information.
In a fifth aspect, embodiments of the present application further provide a distributed middle-stage data processing apparatus, including:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the distributed middle-stage data processing method described in the embodiments of the present application.
In a sixth aspect, embodiments of the present application also provide a non-volatile storage medium storing computer-executable instructions that, when executed by a computer processor, are configured to perform the distributed middle-stage data processing method described in the embodiments of the present application.
In a seventh aspect, the embodiments of the present application further provide a computer program product, which includes a computer program stored in a computer-readable storage medium; at least one processor of the apparatus reads and executes the computer program from the storage medium, so that the apparatus performs the distributed middle-stage data processing method described in the embodiments of the present application.
In the embodiment of the present application, when the pre-service detects a data write request, it forwards the data write request to a first service unit based on the acquired packet information; when the first service unit is the main service unit, it adds a queuing lock to the data write request, obtains the logic data associated with the data write request from the first cache, processes the logic data based on the data write request to obtain updated data, writes the updated data into the first cache to cover the logic data, and notifies the broadcast service to broadcast the updated data, where the pre-service, the first service unit and the broadcast service are deployed in the same machine room. In this scheme, the front-end service detects data write requests and forwards them based on the packet information, and multiple service units process and maintain the data, improving disaster recovery capability; when the unit the request is forwarded to is the main service unit, a queuing lock is added to serialize the data write requests, so that data consistency is guaranteed during efficient data processing, and availability is higher.
Drawings
FIG. 1 is a schematic view of a scenario of an exemplary distributed middle-stage data processing method according to an embodiment of the present application;
FIG. 2 is a flowchart of a distributed middle-stage data processing method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a service architecture of a large-area deployment according to an embodiment of the present application;
FIG. 4 is a flowchart of another method for processing distributed middle-stage data according to an embodiment of the present application;
FIG. 5 is a timing diagram illustrating a process for processing a data write request according to an embodiment of the present application;
FIG. 6 is a flowchart of another method for processing distributed middle-stage data according to an embodiment of the present application;
FIG. 7 is a block diagram of a distributed middle-stage data processing system according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a distributed middle-stage data processing apparatus according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the embodiments of the application and are not limiting of the embodiments of the application. It should be further noted that, for convenience of description, only some, but not all of the structures related to the embodiments of the present application are shown in the drawings.
The terms "first", "second" and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that embodiments of the present application may be implemented in sequences other than those illustrated or described herein. The objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The distributed middle-stage data processing method provided by the embodiments of the application can be applied to a scenario in which a client and a server perform data interaction and the server needs to perform corresponding data processing. Referring to fig. 1, fig. 1 is a schematic view of a scenario of an exemplary distributed middle-stage data processing method according to an embodiment of the present application, where the interaction between a client and a server is illustrated as an example. The client 10 may send a data read request to the server 20; after receiving the data read request, the server 20 performs the corresponding data reading and feeds the data back to the client 10. The client 10 may also send a data write request to the server 20; after receiving the data write request, the server 20 performs the corresponding data processing to update the cached data.
Taking as an example a client that sends data read-write requests to the server through installed application software, the application software may be multiplayer online game software, such as applications implementing social deduction games like Werewolf or scripted murder-mystery games. When multiple users enter a created game room using their respective clients, a client may update specific game rules by sending data write requests to the server, and the corresponding clients may read the game rules by sending data read requests. The server is therefore required to process data read requests and data write requests efficiently to ensure stable implementation of the application functions.
Fig. 2 is a flowchart of a distributed middle-stage data processing method, provided in an embodiment of the present application, applied to a server, for processing a data write request, and specifically includes the following steps:
step S101, in the case that the pre-service detects a data write request, the pre-service forwards the data write request to the first service unit based on the acquired packet information.
The front-end service is used to receive data write requests and forward them accordingly, and can be independently deployed on a server in the machine room. Optionally, a data write request sent by a client is routed to the nearest front-end service by the proximity allocation principle. Before forwarding a data write request, the front-end service also checks it, for example, checking whether the client that sent the data write request has data processing authority, and forwarding the request only after verifying that the client has that authority.
In one embodiment, when the front-end service forwards the data write request, it is forwarded to the service unit that processes the data write request based on the packet information. Optionally, the service units may be deployed in a server in a globally configured machine room, where the packet information records a packet condition of each service unit, and each packet corresponds to a data write request that needs to be processed. Exemplary packet information has recorded therein a plurality of packets, each packet including a master service unit and one or more slave service units, the service units within each packet being for handling a number of data write requests. Taking the example of implementing a multiplayer game function by an application program, the game can be created into a plurality of game rooms simultaneously, each game room comprises a plurality of users for game operations, assuming that 5 grouped service units are used for processing data writing requests, all the created rooms can be divided into 5 game groups, and each game group corresponds to one grouping of the service units for data processing. Alternatively, the room dividing may be performed according to the tail number of the room ID, for example, tail number 0 and tail number 1 are a group, tail number 2 and tail number 3 are a group, and so on.
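The tail-number grouping described above can be sketched as follows. This is an illustrative mapping, not the patent's exact scheme: the text only gives the example "tail 0 and tail 1 are a group, tail 2 and tail 3 are a group, and so on" for 5 groups, and any stable digit-to-group table would serve.

```python
def group_for_room(room_id: int, num_groups: int = 5) -> int:
    """Map a game room to a service-unit group by the tail digit of its ID.

    With 5 groups and 10 possible tail digits, each pair of consecutive
    digits shares one group: tail 0/1 -> group 0, tail 2/3 -> group 1, ...
    This keeps all writes for one room routed to one group (and thus to
    one master service unit).
    """
    tail = room_id % 10
    return tail // (10 // num_groups)
```

A front-end service could call this on every data write request to pick the target group from the packet information.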
Step S102, when the first service unit is a main service unit, adding a queuing lock to the data writing request by the first service unit, obtaining logic data associated with the data writing request in a first cache, and processing the logic data based on the data writing request to obtain updated data.
In one embodiment, the pre-service forwards a data write request to a first service unit, which may be a master service unit or a slave service unit, and adds a queuing lock to the data write request when it is the master service unit for use in implementing serialization processing of the data write request. The queuing lock adding mode can be based on a standard library set by a corresponding programming language, or the queuing lock adding mode can be carried out by using a set processing function and a set processing thread.
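As a minimal sketch of the queuing-lock idea using a standard-library lock (one of the options the text mentions), the class below is illustrative; the names `MasterServiceUnit` and `handle_write` are not from the patent.

```python
import threading

class MasterServiceUnit:
    """Serializes data write requests for one group with a queuing lock."""

    def __init__(self):
        self._queue_lock = threading.Lock()  # waiting writers queue up here
        self._cache = {}                     # stands in for the first cache

    def handle_write(self, key, update_fn):
        # The lock guarantees that concurrent writes touching the same
        # group's data are applied one at a time (serialization), so each
        # update_fn sees the result of the previous write.
        with self._queue_lock:
            logic_data = self._cache.get(key)
            updated = update_fn(logic_data)
            self._cache[key] = updated       # full overwrite of the old data
            return updated
```

Even if many threads call `handle_write` at once, each read-modify-write cycle completes atomically with respect to the others.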
In one embodiment, the serialization processing of the data write request is performed after adding the queuing lock, during the processing, logic data associated with the data write request in the first cache is acquired, and the logic data is processed based on the data write request to obtain updated data. For example, the data writing request may be creation or update of a certain game playing method, for example, in a certain created game room, the waiting time of a certain game link is changed from 20 seconds to 10 seconds, when the data writing request is processed by the first service unit, the corresponding first service unit obtains the associated logic data stored in the first cache, and the waiting time of the link is changed from 20 seconds to 10 seconds. The first cache stores game rules, namely logic data, corresponding to game rooms, and the game rules can be changed based on data writing requests sent by clients with data writing request authorities in the same room.
Step 103, the first service unit writes the update data into the first cache to cover the logic data, and notifies a broadcast service to perform broadcast notification of the update data, where the pre-service, the first service unit, and the broadcast service are disposed in the same machine room.
In one embodiment, after the first service unit obtains the logic data from the first cache and updates it to obtain the updated data, it overwrites the logic data by fully covering the original logic data, replacing the originally stored data. At the same time, it notifies the broadcast service to broadcast the update data so that other clients update their local data. The pre-service, the first service unit and the broadcast service are deployed in the same machine room: after a data write request reaches a front-end service under the proximity allocation principle, the front-end service forwards it to the first service unit in the same machine room; when the first service unit is the main service unit, it performs the serialized processing of the data write request and notifies the broadcast service in the same machine room of the processing result for the corresponding broadcast notification, improving the overall data processing efficiency.
From the above, when the pre-service detects a data write request, the request is forwarded to the first service unit based on the obtained packet information; when the first service unit is the main service unit, a queuing lock is added to the data write request, the logic data associated with the request is obtained from the first cache and processed to obtain updated data, and the updated data is written into the first cache to cover the logic data, after which the broadcast service is notified to broadcast the updated data, the pre-service, the first service unit and the broadcast service being deployed in the same machine room. In this scheme, the front-end service detects data write requests and forwards them based on the packet information, multiple service units process and maintain the data to improve disaster recovery capability, and a queuing lock serializes the data write requests when the forwarded-to unit is the main service unit, ensuring data consistency during efficient data processing with higher availability.
In one embodiment, when the aforementioned first service unit is a slave service unit, the first service unit forwards the data write request to the master service unit based on the configured routing information, so that the master service unit processes the data write request; here the master service unit and the slave service unit belong to the same packet, and a packet includes one or more slave service units. The service deployment architecture is shown in fig. 3, which is a schematic diagram of a service architecture deployed in one large area according to an embodiment of the present application. In this example, the deployment of one large area is illustrated; a global deployment may include multiple large areas. For the xx large area, multiple machine rooms are arranged, each of which may be located in a different region. Taking the example that the xx large area comprises two machine rooms, machine room 1 and machine room 2: machine room 1 is provided with a front-end service 11, the master service unit 12 of the first packet, and a slave service unit 13 of the second packet; machine room 2 is provided with a front-end service 21, a slave service unit 22 of the first packet, and the master service unit 23 of the second packet. Each packet comprises one master service unit and one or more slave service units, i.e. a master-slave deployment is used for each packet. When a data write request is processed, it is handled only by the master service unit of the corresponding packet, ensuring consistent data processing. When the service unit first reached via the pre-service's forwarding is a slave service unit, the slave service unit routes the data write request to the master service unit within the same packet for processing.
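The slave-to-master forwarding step can be sketched as below; the types and the `masters` routing table (group id to master unit) are illustrative stand-ins for the patent's routing information.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceUnit:
    group_id: int
    is_master: bool
    handled: list = field(default_factory=list)

    def handle_write(self, request):
        # Only masters actually process writes in this sketch.
        self.handled.append(request)
        return f"group {self.group_id} master processed {request}"

def route_write(request, first_unit, masters):
    """If the unit the pre-service reached is a slave, forward the write
    to the master of the same group via the routing table `masters`."""
    target = first_unit if first_unit.is_master else masters[first_unit.group_id]
    return target.handle_write(request)
```

Whichever unit the front-end service happens to reach, the write always lands on the one master of that group, which is what makes the serialization guarantee possible.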
The deployment adopts a globalized, large-area, multi-group architecture with one master and multiple slaves per group; the groups are isolated from each other, which effectively improves disaster recovery capability and gives the system high availability.
Fig. 4 is a flowchart of another distributed middle-stage data processing method according to an embodiment of the present application, where, as shown in fig. 4, the method specifically includes:
step S201, in the case that the pre-service detects a data write request, the pre-service forwards the data write request to the first service unit based on the acquired packet information.
Step 202, adding a queuing lock to the data writing request by the first service unit under the condition that the first service unit is a main service unit, obtaining logic data associated with the data writing request in a first cache, and processing the logic data based on the data writing request to obtain updated data.
In step S203, the first service unit writes the update data into the first cache to cover the logic data, and notifies a broadcast service to perform a broadcast notification of the update data, where the pre-service, the first service unit, and the broadcast service are disposed in the same machine room.
Step S204, the broadcast service sends the broadcast notification of the updated data to each client, where the broadcast notification records version number information.
In one embodiment, after the first cache has been updated in response to the data write request, a broadcast notification of the update data is sent to each client through the broadcast service, where the broadcast notification records version number information. For example, the version number information is updated according to:

version_i = max(version_{i-1} + 1, time), for i >= 1

where time is a microsecond timestamp and version_i denotes the version number after the i-th data update.
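The version update rule above translates directly into code; this is a minimal sketch using the standard library for the microsecond timestamp.

```python
import time

def next_version(prev_version: int) -> int:
    """version_i = max(version_{i-1} + 1, time), time in microseconds.

    Using the current time as a floor keeps version numbers monotonically
    increasing and roughly time-ordered, even if the counter is reset:
    a fresh counter immediately jumps to the current timestamp.
    """
    now_us = int(time.time() * 1_000_000)
    return max(prev_version + 1, now_us)
```

If the previous version is already ahead of the clock (e.g. after a burst of updates within one microsecond), the `+ 1` branch still guarantees strict growth.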
Step S205, after receiving the broadcast notification, each client performs verification based on the version number information, and updates the local storage data if the verification passes.
In one embodiment, the broadcast notification is broadcast to the relevant clients, such as the clients in the same game room. After receiving the broadcast notification, each client performs verification based on the version number information: if the version number recorded in the notification is larger than the locally stored version number, the verification passes; otherwise it does not. The locally stored data is updated only after the verification passes.
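The client-side check can be sketched as a small store that only accepts strictly newer versions; the class and method names are illustrative, not from the patent.

```python
class ClientStore:
    """Local storage that applies a broadcast only if its version number
    is strictly greater than the stored one, so a stale or re-delivered
    broadcast can never overwrite newer data."""

    def __init__(self):
        self.version = 0
        self.data = None

    def apply_broadcast(self, version: int, data) -> bool:
        if version > self.version:   # verification passes
            self.version = version
            self.data = data
            return True
        return False                 # stale notification, ignored
```

Because versions are time-ordered and strictly increasing on the server side, this comparison is enough to reject out-of-order deliveries.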
According to the method, when a data update notification is issued, each client performs verification by comparing version numbers and updates its local data only when the verification passes, which avoids the data inconsistency caused by a historical version overwriting newer data; meanwhile, the special version-number generation scheme ensures that version numbers are time-ordered, further ensuring the success rate and soundness of data updates.
On the basis of the above technical solution, after the first service unit writes the update data into the first cache to cover the logic data, the method further includes: synchronously updating the data in the first cache to a second cache and a third cache, where the second cache and the first cache are deployed in different machine rooms within the same large area, and the third cache and the first cache are deployed in different large areas. When the data in the first cache is updated, a global cache synchronization mechanism is automatically triggered; for example, a configured cache-update component performs global synchronization of the cache, synchronizing the data of the first cache to the second cache and the third cache in different machine rooms and different large areas.
Fig. 5 is a timing chart of a processing procedure of a data write request provided in this embodiment of the present application, as shown in fig. 5, after a client 1 sends a data write request, a front service receives the data write request, performs permission checking and obtains packet information, forwards the data write request to a first service unit, and at this time, assuming that the first service unit is a main service unit, the first service unit obtains logic data associated with the data write request in a first cache, processes the logic data based on the data write request to obtain updated data, and notifies a broadcasting service to perform broadcasting notification of updating data after updating is completed, the broadcasting service sends the broadcasting notification to a client 2 correspondingly, and the client 2 that receives the broadcasting notification performs data checking and updating.
In one embodiment, the distributed middle-stage data processing method further includes: the master service unit loads, in batches, the timers governed by the same group from the first cache, determines whether to execute the corresponding business logic according to the type and content of each timer, and, when the judgment is to execute the business logic, updates the corresponding data in the first cache based on that business logic. For example, if the timer is of a type that fires a callback once the set time is reached, the content recorded in the timer is the specific callback time; that is, when the timer is determined to be of a type requiring callback processing, the processing logic configured for the timer is executed once the corresponding callback time arrives. Taking the room-play configuration of a game application as an example, when a client initiates a data write request, a cache timer is registered according to the business logic of the play type, and one room may register several types of cache timers. The cache timers are implemented with Redis sorted sets: rooms that belong to the same group and register the same timer type are recorded in the same sorted set and written to the cache. The master service unit periodically loads the cache timers managed by its group and executes callbacks according to the timer type, implementing active switching of the room-play state machine. For example, assuming there are currently 5 game rooms, each set with a different play mode, each room registers its own cache timer in the cache.
For game room 1, suppose there are 10 game phases, each lasting 20 seconds and each with its own processing logic: for example, in the first phase user 1 speaks, in the second phase user 2 speaks, the third phase is a voting phase, and so on. The switching of each phase can be triggered by the timing of the configured cache timer, so that the state machine can advance actively rather than only through data write operations sent by the client; this can satisfy many different data processing settings and improves data processing efficiency.
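The register-then-batch-load timer pattern can be sketched with an in-memory priority queue standing in for the Redis sorted set (member = room, score = callback time); in the real system these operations would be ZADD and a range query by score against the cache, and the class below is only an illustrative model.

```python
import heapq

class TimerSet:
    """In-memory stand-in for one Redis sorted set of cache timers.

    Each entry pairs a callback time (the sorted-set score) with a room id
    and a payload describing the play-phase transition to run.
    """

    def __init__(self):
        self._heap = []  # (callback_time, room_id, payload), min-heap by time

    def register(self, callback_time, room_id, payload):
        heapq.heappush(self._heap, (callback_time, room_id, payload))

    def due(self, now):
        """Pop every timer whose callback time has arrived, earliest first.

        This models the master service unit's periodic batch load of the
        timers its group governs.
        """
        fired = []
        while self._heap and self._heap[0][0] <= now:
            fired.append(heapq.heappop(self._heap))
        return fired
```

A master service unit would call `due(now)` on each tick and run the phase-switch logic for every fired entry.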
Fig. 6 is a flowchart of another distributed middle-stage data processing method according to an embodiment of the present application, configured to process a data read request, as shown in fig. 6, specifically including:
in step S301, in case that the pre-service detects a data read request, the pre-service forwards the data read request to a first service unit based on the acquired packet information, where the first service unit includes a master service unit or a slave service unit.
For the explanation of the pre-service and the grouping information, refer to the data write request part; it is not repeated here. In one embodiment, the data read request is received by the nearest pre-service, which forwards it to the first service unit, again based on the proximity principle. Here, the first service unit may be a master service unit or a slave service unit.
In step S302, the first service unit obtains the data information in the first cache associated with the data read request, serializes it, and sends it to the corresponding client, so that the client updates its locally stored data based on the data information.
In one embodiment, the first service unit obtains the data information in the first cache associated with the data read request, serializes it, and sends it to the corresponding client. The execution body may be a master service unit or a slave service unit, depending on which first service unit the request was forwarded to; that is, data read requests need not be processed by a service unit of a fixed type. A specific deployment architecture is illustrated in fig. 3.
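The read path described in steps S301-S302 can be sketched as follows. JSON is assumed as the serialization format (the patent does not specify one), and the cache contents and request shape are illustrative:

```python
import json

# Illustrative first cache, as maintained by the group's service units.
first_cache = {"room1": {"state": "voting", "version": 7}}

def handle_read(cache, request):
    """Either a master or a slave unit may serve reads: fetch the data
    associated with the request key, serialize it, and return the payload
    that is sent back to the client."""
    data = cache.get(request["key"])
    return json.dumps(data)  # serialization before sending to the client

payload = handle_read(first_cache, {"key": "room1"})
client_local = json.loads(payload)  # the client updates its local storage
```

Because reads never mutate the cache, any unit in the group can serve them, which is what allows the concurrent, type-agnostic read handling described above.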
In this scheme, data read requests are processed, without type distinction, by one or more of the configured slave service units or by the master service unit, so that the configured plurality of service units process data read requests concurrently and the processing efficiency of data read requests is improved. Meanwhile, the grouping deployment adopts a globalized, large-region, multi-group architecture in which the groups are isolated from one another, which effectively improves the disaster tolerance capability and gives the system high availability.
Fig. 7 is a block diagram of a distributed middle-platform data processing system according to an embodiment of the present application. The system is configured to execute the distributed middle-platform data processing method of the foregoing embodiments, and has the corresponding functional modules and beneficial effects. As shown in fig. 7, the system specifically includes a pre-service 101 and a first service unit 102, where, when a data write request is processed:
the pre-service 101 is configured to, in the case that a data write request is detected, forward the data write request to the first service unit 102 based on the acquired grouping information;
the first service unit 102 is configured to, in the case that the first service unit is a master service unit, add a queuing lock to the data write request, obtain the logic data associated with the data write request in a first cache, and process the logic data based on the data write request to obtain update data; and to write the update data into the first cache to overwrite the logic data and notify a broadcast service to perform a broadcast notification of the update data, where the pre-service, the first service unit, and the broadcast service are deployed in the same machine room.
In this scheme, when the pre-service detects a data write request, it forwards the request to the first service unit based on the acquired grouping information; in the case that the first service unit is the master service unit, a queuing lock is added to the data write request, the logic data associated with the request in the first cache is obtained and processed based on the request to obtain update data, the update data is written into the first cache to overwrite the logic data, and the broadcast service is notified to broadcast the update data, where the pre-service, the first service unit, and the broadcast service are deployed in the same machine room. By having the pre-service detect data write requests and forward them based on grouping information, and having multiple service units process and maintain the data, disaster tolerance is improved; and by adding a queuing lock when the forwarding target is the master service unit, data write requests are serialized, so that data consistency is guaranteed during efficient data processing and availability remains high.
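A minimal sketch of this write path follows, with a `threading.Lock` standing in for the queuing lock and a plain dictionary standing in for the first cache; the class name, request shape, and version field are hypothetical illustrations, not the patent's actual implementation:

```python
import threading

class MasterServiceUnit:
    """Sketch of the master unit's write path: serialize writes with a
    queuing lock, apply the request to the cached logic data, overwrite
    the cache entry, then hand the update to the broadcast service."""

    def __init__(self, first_cache, broadcast_queue):
        self.cache = first_cache
        self.broadcast = broadcast_queue
        self.lock = threading.Lock()  # the queuing lock for write requests

    def handle_write(self, request):
        with self.lock:  # concurrent writes queue here, preserving order
            logic = dict(self.cache.get(request["key"], {"version": 0}))
            logic.update(request["fields"])          # process the logic data
            logic["version"] = logic["version"] + 1  # bump the data version
            self.cache[request["key"]] = logic       # overwrite the first cache
        # Notify the broadcast service (modeled here as a shared list).
        self.broadcast.append({"key": request["key"], "data": logic})
        return logic

sent = []
unit = MasterServiceUnit({"room1": {"state": "idle", "version": 3}}, sent)
unit.handle_write({"key": "room1", "fields": {"state": "voting"}})
```

The lock makes the read-modify-write on the cache entry atomic with respect to other writes, which is the serialization property the queuing lock provides.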
In a possible embodiment, in the case that the first service unit is a slave service unit, the first service unit forwards the data write request to a master service unit based on the configured routing information, for the master service unit to process the data write request, the master service unit and the slave service unit belonging to the same group, where the group includes one or more slave service units.
In one possible embodiment, the system further comprises a broadcast service 103 configured to:
after the broadcast service is notified to perform the broadcast notification of the update data, the broadcast service 103 sends the broadcast notification of the update data to each client, where version number information is recorded in the broadcast notification, so that each client, after receiving the broadcast notification, performs verification based on the version number information and updates its locally stored data in the case that the verification passes.
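The client-side verification can be sketched as follows, assuming a monotonically increasing integer version number; the patent does not specify the verification rule, so rejecting broadcasts whose version is not newer than the local copy is one plausible reading:

```python
def apply_broadcast(local_store, notice):
    """Client side: accept a broadcast only if its version number is newer
    than the locally stored one, guarding against stale or out-of-order
    notifications."""
    key, data = notice["key"], notice["data"]
    local = local_store.get(key)
    if local is not None and data["version"] <= local["version"]:
        return False  # verification failed: stale broadcast, keep local data
    local_store[key] = data  # verification passed: update local storage
    return True

store = {"room1": {"state": "idle", "version": 3}}
ok = apply_broadcast(store, {"key": "room1",
                             "data": {"state": "voting", "version": 4}})
stale = apply_broadcast(store, {"key": "room1",
                                "data": {"state": "idle", "version": 2}})
```

The stale broadcast is dropped, so clients converge on the newest cache state even if notifications arrive out of order.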
In a possible embodiment, the first service unit 102 is further configured to:
the master service unit loads, in batches, the timers governed by the same group in the first cache, and determines whether to execute the corresponding business logic according to the type and content of each timer;
and, in response to a determination to execute the business logic, the master service unit updates the corresponding data in the first cache based on the business logic.
In one possible embodiment, the system further comprises a cache synchronization service 104 configured to:
after the first service unit writes the update data into the first cache to overwrite the logic data, synchronously update the data in the first cache to a second cache and a third cache, where the second cache and the first cache are deployed in different machine rooms within the same large region, and the third cache and the first cache are deployed in different large regions.
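A sketch of the synchronous cache update follows, with plain dictionaries standing in for the three caches (the machine-room and large-region placement is a deployment property and is not representable in this stand-in):

```python
def sync_caches(first_cache, second_cache, third_cache, key):
    """After the master unit overwrites an entry in the first cache, the
    sync service copies it to the second cache (another machine room in
    the same large region) and the third cache (a different large region)."""
    value = first_cache[key]
    second_cache[key] = dict(value)  # intra-region replica
    third_cache[key] = dict(value)   # cross-region replica
    return value

c1 = {"room1": {"state": "voting", "version": 4}}
c2, c3 = {}, {}
sync_caches(c1, c2, c3, "room1")
```

Replicating each write to an intra-region and a cross-region cache is what lets another group take over with current data if the first machine room or region fails.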
In one embodiment, when the distributed middle-platform data processing system processes a data read request, the pre-service 101 is configured to:
forward the data read request to a first service unit based on the acquired grouping information, where the first service unit is a master service unit or a slave service unit;
the first service unit 102 is configured to:
obtain the data information in the first cache associated with the data read request, serialize it, and send it to the corresponding client, so that the client updates its locally stored data based on the data information.
In this scheme, data read requests are processed, without type distinction, by one or more of the configured slave service units or by the master service unit, so that the configured plurality of service units process data read requests concurrently and the processing efficiency of data read requests is improved. Meanwhile, the grouping deployment adopts a globalized, large-region, multi-group architecture in which the groups are isolated from one another, which effectively improves the disaster tolerance capability and gives the system high availability.
Fig. 8 is a schematic structural diagram of a distributed middle-platform data processing device according to an embodiment of the present application. As shown in fig. 8, the device includes a processor 201, a memory 202, an input device 203, and an output device 204; the number of processors 201 in the device may be one or more, one processor 201 being taken as an example in fig. 8. The processor 201, the memory 202, the input device 203, and the output device 204 in the device may be connected by a bus or by other means, a bus connection being taken as an example in fig. 8. The memory 202, as a computer-readable storage medium, is used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the distributed middle-platform data processing method in the embodiments of the present application. The processor 201 runs the software programs, instructions, and modules stored in the memory 202, thereby executing the various functional applications and data processing of the device, that is, implementing the distributed middle-platform data processing method described above. The input device 203 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the device. The output device 204 may include a display device such as a display screen.
The present application also provides a non-volatile storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the distributed middle-platform data processing method described in the above embodiments, where the method includes:
in the case that the pre-service detects a data write request, the pre-service forwards the data write request to a first service unit based on the acquired grouping information;
in the case that the first service unit is a master service unit, the first service unit adds a queuing lock to the data write request, obtains the logic data associated with the data write request in a first cache, and processes the logic data based on the data write request to obtain update data;
the first service unit writes the update data into the first cache to overwrite the logic data, and notifies a broadcast service to perform a broadcast notification of the update data, where the pre-service, the first service unit, and the broadcast service are deployed in the same machine room; and/or,
in the case that the pre-service detects a data read request, the pre-service forwards the data read request to a first service unit based on the acquired grouping information, where the first service unit is a master service unit or a slave service unit;
the first service unit obtains the data information in the first cache associated with the data read request, serializes it, and sends it to the corresponding client, so that the client updates its locally stored data based on the data information.
It should be noted that, in the above embodiment of the distributed middle-platform data processing system, the units and modules included are only divided according to functional logic, and the division is not limited thereto as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for the convenience of distinguishing them from one another and are not used to limit the protection scope of the embodiments of the present application.
In some possible implementations, various aspects of the methods provided herein may also be implemented in the form of a program product comprising program code which, when the program product is run on a computer device, causes the computer device to carry out the steps of the methods described herein according to the various exemplary embodiments of the application; for example, the computer device may carry out the distributed middle-platform data processing method described in the examples of the application. The program product may be implemented using any combination of one or more readable media.

Claims (11)

1. A distributed middle-platform data processing method, applied to a server, characterized by comprising:
in the case that a pre-service detects a data write request, forwarding, by the pre-service, the data write request to a first service unit based on acquired grouping information;
in the case that the first service unit is a master service unit, adding, by the first service unit, a queuing lock to the data write request, obtaining logic data associated with the data write request in a first cache, and processing the logic data based on the data write request to obtain update data;
writing, by the first service unit, the update data into the first cache to overwrite the logic data, and notifying a broadcast service to perform a broadcast notification of the update data, wherein the pre-service, the first service unit, and the broadcast service are deployed in the same machine room.
2. The distributed middle-platform data processing method according to claim 1, wherein, in the case that the first service unit is a slave service unit, the first service unit forwards the data write request to a master service unit based on configured routing information, for the master service unit to process the data write request, the master service unit and the slave service unit belonging to the same group, wherein the group comprises one or more slave service units.
3. The distributed middle-platform data processing method according to claim 1, further comprising, after the broadcast service is notified to perform the broadcast notification of the update data:
sending, by the broadcast service, a broadcast notification of the update data to each client, wherein version number information is recorded in the broadcast notification;
performing, by each client after receiving the broadcast notification, verification based on the version number information, and updating locally stored data in the case that the verification passes.
4. The distributed middle-platform data processing method according to any one of claims 1 to 3, further comprising:
loading, by the master service unit in batches, the timers governed by the same group in the first cache, and determining whether to execute corresponding business logic according to the type and content of each timer;
in response to a determination to execute the business logic, updating, by the master service unit, corresponding data in the first cache based on the business logic.
5. The distributed middle-platform data processing method according to any one of claims 1 to 3, further comprising, after the first service unit writes the update data into the first cache to overwrite the logic data:
synchronously updating the data in the first cache to a second cache and a third cache, wherein the second cache and the first cache are deployed in different machine rooms within the same large region, and the third cache and the first cache are deployed in different large regions.
6. A distributed middle-platform data processing method, applied to a server, characterized by comprising:
in the case that a pre-service detects a data read request, forwarding, by the pre-service, the data read request to a first service unit based on acquired grouping information, wherein the first service unit is a master service unit or a slave service unit;
obtaining, by the first service unit, data information in a first cache associated with the data read request, serializing the data information, and sending it to a corresponding client, so that the client updates locally stored data based on the data information.
7. A distributed middle-platform data processing system, characterized by comprising:
a pre-service configured to, in the case that a data write request is detected, forward the data write request to a first service unit based on acquired grouping information;
the first service unit, configured to, in the case that the first service unit is a master service unit, add a queuing lock to the data write request, obtain logic data associated with the data write request in a first cache, and process the logic data based on the data write request to obtain update data; and to write the update data into the first cache to overwrite the logic data and notify a broadcast service to perform a broadcast notification of the update data, wherein the pre-service, the first service unit, and the broadcast service are deployed in the same machine room.
8. A distributed middle-platform data processing system, characterized by comprising:
a pre-service configured to, in the case that a data read request is detected, forward the data read request to a first service unit based on acquired grouping information, the first service unit being a master service unit or a slave service unit;
the first service unit, configured to obtain data information in a first cache associated with the data read request, serialize the data information, and send it to a corresponding client, so that the client updates locally stored data based on the data information.
9. A distributed middle-platform data processing device, characterized in that the device comprises: one or more processors; and storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the distributed middle-platform data processing method of any one of claims 1-6.
10. A non-volatile storage medium storing computer-executable instructions which, when executed by a computer processor, are used to perform the distributed middle-platform data processing method of any one of claims 1-6.
11. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the distributed middle-platform data processing method of any one of claims 1-6.
CN202211733542.0A 2022-12-30 2022-12-30 Distributed medium-level data processing method, system, device, storage medium and program product Pending CN116010501A (en)

Publications (1)

Publication Number Publication Date
CN116010501A true CN116010501A (en) 2023-04-25



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination