CN111435938A - Data request processing method, device and equipment

Data request processing method, device and equipment

Info

Publication number
CN111435938A
Authority
CN
China
Prior art keywords
service node
determining
adjacent
service
data request
Prior art date
Legal status
Granted
Application number
CN201910030276.7A
Other languages
Chinese (zh)
Other versions
CN111435938B (en)
Inventor
彭兵庭
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201910030276.7A
Publication of CN111435938A
Application granted
Publication of CN111435938B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Abstract

The application provides a method, a device and equipment for processing a data request. The method includes: for a service node in a service node set, acquiring an execution sequence of the service node; determining path information corresponding to the service node set according to the execution sequence; and determining a data processing mode corresponding to the service node according to the path information, the service node being configured to process the data request by using the data processing mode. With this technical solution, the processing efficiency of the service node is improved.

Description

Data request processing method, device and equipment
Technical Field
The present application relates to the field of Internet technologies, and in particular, to a method, an apparatus, and a device for processing a data request.
Background
In a service system based on a distributed architecture, a data request (such as a log request, an HTTP (Hypertext Transfer Protocol) request, etc.) is generally processed by a plurality of services, and these services need to process the data request in a predetermined order. For example, data request A is first processed by service 1; then, based on the processing result of service 1, data request A is processed in parallel by service 2 and service 3; and then, based on the processing results of service 2 and service 3, data request A is processed by service 4.
To realize the above functions, in the related art, the processing manner of the data request has to be specified in the code of each service. For example, the code of service 2 specifies that service 2 processes the data request based on the processing result of service 1 and sends its own processing result to service 4; the codes of the other services are written similarly.
Obviously, because the processing manner of the data request has to be specified in code, this approach leads to a large code development workload, high development and learning costs, high development difficulty, and low processing efficiency of the services.
Disclosure of Invention
The application provides a data request processing method, which comprises the following steps:
for a service node in a service node set, acquiring an execution sequence of the service node;
determining path information corresponding to the service node set according to the execution sequence;
determining a data processing mode corresponding to the service node according to the path information; the service node is used for processing the data request by using the data processing mode.
The application provides a data request processing method, which comprises the following steps:
for a service node in a service node set, acquiring an execution sequence of the service node;
determining a data processing mode corresponding to the service node according to the execution sequence; the service node is used for processing the data request by using the data processing mode.
The application provides a data request processing device, the device includes:
the acquisition module is used for acquiring the execution sequence of the service nodes in the service node set;
a determining module, configured to determine path information corresponding to the service node set according to the execution sequence, and determine a data processing manner corresponding to the service node according to the path information; the service node is used for processing the data request by using the data processing mode.
The present application provides a server, the server comprising:
a processor and a machine-readable storage medium storing a plurality of computer instructions, wherein the processor, when executing the computer instructions, performs the following:
for a service node in a service node set, acquiring an execution sequence of the service node;
determining path information corresponding to the service node set according to the execution sequence;
determining a data processing mode corresponding to the service node according to the path information; the service node is used for processing the data request by using the data processing mode.
Based on the above technical solution, in the embodiments of the present application, the path information may be determined according to the execution sequence of each service node, the data processing manner corresponding to each service node may be determined according to the path information, and the data processing manner may be sent to the service node, so that the service node processes the data request by using the data processing manner. In the above manner, the data processing manner does not need to be specified in code; instead, the path information is generated automatically and the data processing manner corresponding to each service node is determined according to the path information, so that problems such as a large code development workload, high development and learning costs, and high development difficulty are solved, the processing efficiency of the service nodes is improved, the development workload of users is reduced, and the use experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments of the present application or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art according to the drawings of the embodiments of the present application.
FIG. 1 is a flow diagram of a method for processing a data request in one embodiment of the present application;
FIG. 2 is a flow chart of a method of processing a data request in another embodiment of the present application;
FIGS. 3A-3D are schematic diagrams of path information in one embodiment of the present application;
FIG. 4 is a block diagram of a data request processing device in one embodiment of the present application;
FIG. 5 is a hardware configuration diagram of a server according to an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Moreover, depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
The embodiment of the present application provides a method for processing a data request, where the method may be applied to any device (such as a server). Referring to fig. 1, which is a flowchart of the method, the method may include:
step 101, aiming at a service node in a service node set, obtaining an execution sequence of the service node.
Specifically, metadata corresponding to the service node set is obtained, and an execution sequence of the service nodes is obtained according to the metadata, that is, an execution sequence of each service node in the service node set is obtained according to the metadata. Wherein the metadata may include an execution order for each service node in the set of service nodes.
In one example, obtaining metadata corresponding to the service node set may include: acquiring a service node set corresponding to a service type, wherein the service node set may include a plurality of service nodes, and each service node in the service node set is used for processing a data request of the service type; acquiring metadata corresponding to the service node set according to the service type; wherein the metadata further includes the service type.
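As an illustration of how such metadata might be organized, the following minimal Python sketch maps a service type to the execution order of each service node in its set; the field names ("service_type", "execution_order") and the dictionary layout are assumptions made for the example, not the patent's own format.

```python
# Minimal sketch of per-service-type metadata, assuming a simple mapping from
# service type to the execution order of each service node in its set.
# Field names and layout are illustrative assumptions.
METADATA = {
    "A": {"service_type": "A",
          "execution_order": {"A1": 1, "A2": 2, "A3": 3, "A4": 4}},
    "B": {"service_type": "B",
          "execution_order": {"B1": 1, "B2": 2, "B3": 2, "B4": 3}},
}

def get_execution_order(service_type):
    """Return the execution order of every service node in the service node set
    that handles data requests of the given service type."""
    return METADATA[service_type]["execution_order"]

print(get_execution_order("A"))  # {'A1': 1, 'A2': 2, 'A3': 3, 'A4': 4}
```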
Step 102, determining path information corresponding to the service node set according to the execution sequence.
Step 103, determining a data processing manner corresponding to the service node according to the path information, that is, determining a data processing manner corresponding to each service node in the service node set according to the path information.
Specifically, determining a data processing manner corresponding to the service node according to the path information may include:
In case one, if the data processing mode includes a trigger condition, a first adjacent service node in front of the service node may be determined according to the path information, and the trigger condition corresponding to the service node is determined according to the first adjacent service node; the trigger condition is used for indicating that, after the processing result of the first adjacent service node is received, the data request is processed based on the processing result of the first adjacent service node.
In case two, if the data processing mode includes a receiving condition, a second adjacent service node behind the service node may be determined according to the path information, and the receiving condition corresponding to the service node is determined according to the second adjacent service node; the receiving condition is used for indicating that the processing result of the service node is sent to the second adjacent service node.
In case three, if the data processing mode includes both the trigger condition and the receiving condition, a first adjacent service node in front of the service node is determined according to the path information, and a second adjacent service node behind the service node is determined according to the path information. Further, the trigger condition corresponding to the service node may be determined according to the first adjacent service node, and the receiving condition corresponding to the service node may be determined according to the second adjacent service node. The trigger condition is used for indicating that, after the processing result of the first adjacent service node is received, the data request is processed based on the processing result of the first adjacent service node. Further, the receiving condition is used for indicating that the processing result of the service node is sent to the second adjacent service node.
In one example, after determining the data processing manner corresponding to the service node according to the path information, the service node may be configured to process the data request by using the data processing manner. For example, the data processing method may be sent to the service node, so that the service node processes the data request by using the data processing method after acquiring the data request.
Specifically, the triggering condition is sent to the service node, so that the service node, based on the triggering condition, after acquiring the data request and receiving the processing result of the first adjacent service node, processes the data request based on the processing result of the first adjacent service node; or sending the receiving condition to the service node, so that the service node processes the data request after acquiring the data request based on the receiving condition, and sends the processing result of the service node to a second adjacent service node; or sending the triggering condition and the receiving condition to the service node, so that the service node obtains the data request based on the triggering condition and the receiving condition, receives the processing result of the first adjacent service node, processes the data request based on the processing result of the first adjacent service node, and sends the processing result of the service node to the second adjacent service node.
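The three cases above can be illustrated with a small sketch that derives, for each service node, its trigger condition (from the adjacent nodes in front of it) and its receiving condition (from the adjacent nodes behind it), assuming the path information is represented as a list of directed edges; the function name and the returned structure are illustrative assumptions, not the patent's data format.

```python
from collections import defaultdict

# Sketch: derive each service node's trigger condition (from its preceding
# adjacent nodes) and receiving condition (from its following adjacent nodes),
# given path information as a list of directed edges (predecessor, successor).
def derive_conditions(edges):
    preds, succs, nodes = defaultdict(list), defaultdict(list), set()
    for src, dst in edges:
        nodes.update((src, dst))
        preds[dst].append(src)   # first adjacent service nodes (in front)
        succs[src].append(dst)   # second adjacent service nodes (behind)
    conditions = {}
    for node in nodes:
        mode = {}
        if preds[node]:
            # Trigger condition: process the data request only after the
            # processing results of all preceding adjacent nodes are received.
            mode["trigger"] = {"wait_for": preds[node]}
        if succs[node]:
            # Receiving condition: send this node's processing result to the
            # following adjacent nodes.
            mode["receive"] = {"send_to": succs[node]}
        conditions[node] = mode
    return conditions

edges = [("A1", "A2"), ("A2", "A3"), ("A3", "A4")]
print(derive_conditions(edges)["A2"])
# {'trigger': {'wait_for': ['A1']}, 'receive': {'send_to': ['A3']}}
```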
In the above embodiment, the path information may specifically include: a pipeline-based directed acyclic path; or an aggregation-based directed acyclic path; or a non-aggregation-based directed acyclic path; or a directed acyclic path based on multiple parallel processes. Of course, the above is merely an example, and no limitation is made thereto.
In an example, the sequence numbers of the above steps are given only for convenience of description, and in practical applications the execution order of the steps may be changed; the execution order is not limited. Moreover, in other embodiments, the steps of the respective methods do not have to be performed in the order shown and described herein, and a method may include more or fewer steps than those described herein. In addition, a single step described in this specification may be broken down into multiple steps for description in other embodiments, and multiple steps described in this specification may be combined into a single step in other embodiments.
Based on the above technical solution, in the embodiments of the present application, the path information may be determined according to the execution sequence of each service node, the data processing manner corresponding to each service node may be determined according to the path information, and the data processing manner may be sent to the service node, so that the service node processes the data request by using the data processing manner. In the above manner, the data processing manner does not need to be specified in code; instead, the path information is generated automatically and the data processing manner corresponding to each service node is determined according to the path information, so that problems such as a large code development workload, high development and learning costs, and high development difficulty are solved, the processing efficiency of the service nodes is improved, the development workload of users is reduced, and the use experience is improved.
Based on the same application concept as the above method, another data request processing method is also provided in the embodiment of the present application. As shown in fig. 2, which is a schematic flowchart of the method, the method may include:
step 201, aiming at the service node in the service node set, obtaining the execution sequence of the service node.
The implementation process of step 201 may refer to step 101, and is not described herein again.
Step 202, determining the data processing mode corresponding to the service node according to the execution sequence.
Specifically, determining the data processing mode corresponding to the service node according to the execution sequence may include:
In case one, if the data processing mode includes a trigger condition, a first adjacent service node in front of the service node may be determined according to the execution sequence, and the trigger condition corresponding to the service node is determined according to the first adjacent service node; the trigger condition is used for indicating that, after the processing result of the first adjacent service node is received, the data request is processed based on the processing result of the first adjacent service node.
In case two, if the data processing mode includes a receiving condition, a second adjacent service node behind the service node may be determined according to the execution sequence, and the receiving condition corresponding to the service node is determined according to the second adjacent service node; the receiving condition is used for indicating that the processing result of the service node is sent to the second adjacent service node.
In case three, if the data processing mode includes both the trigger condition and the receiving condition, a first adjacent service node in front of the service node is determined according to the execution sequence, and a second adjacent service node behind the service node is determined according to the execution sequence. Further, the trigger condition corresponding to the service node may be determined according to the first adjacent service node, and the receiving condition corresponding to the service node may be determined according to the second adjacent service node. The trigger condition is used for indicating that, after the processing result of the first adjacent service node is received, the data request is processed based on the processing result of the first adjacent service node. Further, the receiving condition is used for indicating that the processing result of the service node is sent to the second adjacent service node.
After determining the data processing mode corresponding to the service node according to the execution sequence, the service node may be configured to process the data request by using the data processing mode. For example, the data processing method may be sent to the service node, so that the service node processes the data request by using the data processing method after acquiring the data request. For a specific process, refer to the above embodiments, and are not described herein again.
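For this embodiment, the adjacent service nodes can be read directly off the execution-order numbers. The sketch below assumes that nodes whose order is exactly one less are the first adjacent nodes and nodes whose order is exactly one greater are the second adjacent nodes; this rule is an assumption consistent with the application scenarios described later, not a statement of the patent's exact rule.

```python
# Sketch: determine a node's first/second adjacent service nodes directly from
# the execution-order numbers, without building explicit path information.
def adjacent_nodes(order_map, node):
    k = order_map[node]
    first_adjacent = [n for n, o in order_map.items() if o == k - 1]   # in front
    second_adjacent = [n for n, o in order_map.items() if o == k + 1]  # behind
    return first_adjacent, second_adjacent

order_map = {"B1": 1, "B2": 2, "B3": 2, "B4": 3}
print(adjacent_nodes(order_map, "B2"))  # (['B1'], ['B4'])
```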
In an example, the sequence numbers of the above steps are given only for convenience of description, and in practical applications the execution order of the steps may be changed; the execution order is not limited. Moreover, in other embodiments, the steps of the respective methods do not have to be performed in the order shown and described herein, and a method may include more or fewer steps than those described herein. In addition, a single step described in this specification may be broken down into multiple steps for description in other embodiments, and multiple steps described in this specification may be combined into a single step in other embodiments.
Based on the above technical solution, in the embodiments of the present application, the data processing manner corresponding to each service node may be determined directly according to the execution sequence of each service node, and the data processing manner may be sent to the service node, so that the service node processes the data request by using the data processing manner. In the above manner, the data processing manner does not need to be specified in code; instead, it is determined automatically according to the execution sequence of each service node, so that problems such as a large code development workload, high development and learning costs, and high development difficulty are solved, the processing efficiency of the service nodes is improved, the development workload of users is reduced, and the use experience is improved.
The following describes a method for processing the data request in conjunction with several specific application scenarios.
In application scenario 1, for data request A of service type A, data request A is first processed by service node A1; then, based on the processing result of service node A1, data request A is processed by service node A2; then, based on the processing result of service node A2, data request A is processed by service node A3; and finally, based on the processing result of service node A3, data request A is processed by service node A4.
Metadata A is configured for service type A, and may include service type A, execution order 1 of service node A1 (indicating that service node A1 executes first), execution order 2 of service node A2 (indicating that service node A2 executes second), execution order 3 of service node A3 (indicating that service node A3 executes third), and execution order 4 of service node A4 (indicating that service node A4 executes fourth). Based on the execution order of each service node, the final class topology sequence of execution may be determined as 1234.
Further, the service node set corresponding to service type A may include service node A1, service node A2, service node A3, and service node A4, that is, service node A1, service node A2, service node A3, and service node A4 are all used to process data request A corresponding to service type A.
In the application scenario, the method for processing the data request in application scenario 1 may include:
step a1, obtaining a service node set corresponding to the service type a, where the service node set includes a service node a1, a service node a2, a service node A3, and a service node a4, and the service node a1, the service node a2, the service node A3, and the service node a4 are used to process a data request a corresponding to the service type a.
Step a2, obtaining metadata corresponding to the service node set according to the service type a, where the metadata a includes the service type a, and thus the metadata corresponding to the service node set may be the metadata a.
Step a3, obtaining the execution sequence of each service node in the service node set according to the metadata A.
For example, since metadata a includes execution order 1 of service node a1, execution order 2 of service node a2, execution order 3 of service node A3, and execution order 4 of service node a4, based on metadata a, it may be determined that the execution order of service node a1 in the service node set is 1, the execution order of service node a2 is 2, the execution order of service node A3 is 3, and the execution order of service node a4 is 4.
Step a4, determining the path information corresponding to the service node set according to the execution sequence of each service node.
For example, since the execution order of service node A1 is 1, the execution order of service node A2 is 2, the execution order of service node A3 is 3, and the execution order of service node A4 is 4, that is, service node A1 is the first service node of the computation path, service node A2 is the second service node of the computation path, service node A3 is the third service node of the computation path, and service node A4 is the fourth service node of the computation path, the path information corresponding to the service node set may be: service node A1 - service node A2 - service node A3 - service node A4, as shown in fig. 3A, which is an example of the path information.
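One way to generate such path information mechanically from the execution orders, assumed here from the behaviour of the application scenarios in this description rather than stated by the patent, is to connect every service node of order k to every service node of order k + 1:

```python
# Sketch: generate path information (directed edges) from execution orders by
# linking every node of order k to every node of order k + 1. The linking rule
# is an assumption inferred from the application scenarios in this description.
def build_path(order_map):
    by_order = {}
    for node, order in sorted(order_map.items()):
        by_order.setdefault(order, []).append(node)
    orders = sorted(by_order)
    return [(src, dst)
            for k, k_next in zip(orders, orders[1:])
            for src in by_order[k]
            for dst in by_order[k_next]]

print(build_path({"A1": 1, "A2": 2, "A3": 3, "A4": 4}))
# [('A1', 'A2'), ('A2', 'A3'), ('A3', 'A4')]               -> pipeline (fig. 3A)
print(build_path({"B1": 1, "B2": 2, "B3": 2, "B4": 3}))
# [('B1', 'B2'), ('B1', 'B3'), ('B2', 'B4'), ('B3', 'B4')]  -> aggregation (fig. 3B)
```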
Step a5, determining the data processing mode corresponding to each service node according to the path information.
For example, referring to the path information shown in fig. 3A, for service node A1, since service node A1 is the first service node of the computation path, that is, service node A1 has no adjacent service node in front of it and has adjacent service node A2 behind it, it is determined that the data processing manner of service node A1 includes a receiving condition, and the receiving condition is used to indicate that the processing result of service node A1 is sent to service node A2.
For service node A2, since service node A2 is the second service node of the computation path, that is, service node A2 has adjacent service node A1 in front of it and adjacent service node A3 behind it, it is determined that the data processing manner of service node A2 includes a trigger condition and a receiving condition; the trigger condition is used to indicate that, after the processing result of service node A1 is received, the data request is processed based on the processing result of service node A1, and the receiving condition is used to indicate that the processing result of service node A2 is sent to service node A3.
For service node A3, since service node A3 is the third service node of the computation path, that is, service node A3 has adjacent service node A2 in front of it and adjacent service node A4 behind it, it is determined that the data processing manner of service node A3 includes a trigger condition and a receiving condition; the trigger condition is used to indicate that, after the processing result of service node A2 is received, the data request is processed based on the processing result of service node A2, and the receiving condition is used to indicate that the processing result of service node A3 is sent to service node A4.
For service node A4, since service node A4 is the last service node of the computation path, that is, service node A4 has adjacent service node A3 in front of it and no adjacent service node behind it, it is determined that the data processing manner of service node A4 includes a trigger condition, and the trigger condition is used to indicate that, after the processing result of service node A3 is received, the data request is processed based on the processing result of service node A3.
Step a6, sending the data processing mode corresponding to each service node to the service node.
For example, the receive condition of service node A1 is sent to service node A1, the trigger condition and receive condition of service node A2 are sent to service node A2, the trigger condition and receive condition of service node A3 are sent to service node A3, and the trigger condition of service node A4 is sent to service node A4.
Step a7, each service node processes the data request by using a data processing mode.
For example, after the data request A for service type A is received, the first service node corresponding to service type A, i.e., service node A1, is determined, and data request A is sent to service node A1.
After receiving data request A, service node A1 may process data request A to obtain the processing result of service node A1. Based on the receiving condition of service node A1, service node A1 may send data request A and the processing result of service node A1 to service node A2.
After service node A2 receives data request A and the processing result of service node A1, based on the trigger condition of service node A2, service node A2 processes data request A based on the processing result of service node A1 and obtains the processing result of service node A2. Based on the receiving condition of service node A2, service node A2 sends data request A and the processing result of service node A2 to service node A3.
After service node A3 receives data request A and the processing result of service node A2, based on the trigger condition of service node A3, service node A3 processes data request A based on the processing result of service node A2 and obtains the processing result of service node A3. Based on the receiving condition of service node A3, service node A3 sends data request A and the processing result of service node A3 to service node A4.
After service node A4 receives data request A and the processing result of service node A3, based on the trigger condition of service node A4, service node A4 processes data request A based on the processing result of service node A3 to obtain the processing result of service node A4; thus, the processing of data request A is completed.
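For illustration, the whole pipeline of application scenario 1 can be simulated in a single process with the sketch below; real service nodes would be separate distributed services, and all names and data shapes here are assumptions made for the example.

```python
# Simplified, single-process sketch of service nodes acting on the trigger and
# receiving conditions of application scenario 1. Real service nodes would be
# separate distributed services; the names and shapes here are assumptions.
def make_handler(name):
    def handler(request, upstream_results):
        # Process the data request based on the upstream processing results.
        return "result of {} for {}".format(name, request)
    return handler

def run(conditions, handlers, request):
    results = {}
    # Nodes without a trigger condition start as soon as the request arrives.
    pending = [n for n, mode in conditions.items() if "trigger" not in mode]
    while pending:
        node = pending.pop(0)
        mode = conditions[node]
        upstream = [results[p] for p in mode.get("trigger", {}).get("wait_for", [])]
        results[node] = handlers[node](request, upstream)
        # Receiving condition: forward this node's result to the next nodes.
        for nxt in mode.get("receive", {}).get("send_to", []):
            needed = conditions[nxt]["trigger"]["wait_for"]
            # The next node's trigger condition is satisfied only once the
            # results of all of its preceding adjacent nodes are available.
            if all(p in results for p in needed) and nxt not in results and nxt not in pending:
                pending.append(nxt)
    return results

conditions = {
    "A1": {"receive": {"send_to": ["A2"]}},
    "A2": {"trigger": {"wait_for": ["A1"]}, "receive": {"send_to": ["A3"]}},
    "A3": {"trigger": {"wait_for": ["A2"]}, "receive": {"send_to": ["A4"]}},
    "A4": {"trigger": {"wait_for": ["A3"]}},
}
handlers = {name: make_handler(name) for name in conditions}
print(run(conditions, handlers, "data request A")["A4"])  # result of A4 for data request A
```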
In application scenario 2, for data request B of service type B, data request B may first be processed by serving node B1; then, based on the processing result of serving node B1, data request B may be processed in parallel by serving node B2 and serving node B3; and then, based on the processing result of serving node B2 and the processing result of serving node B3, data request B may be processed by serving node B4. Metadata B is configured for service type B, and includes service type B, execution order 1 of serving node B1, execution order 2 of serving node B2, execution order 2 of serving node B3, and execution order 3 of serving node B4. Based on the execution order of each serving node, the final class topology sequence of execution may be determined as 1223.
In the application scenario, the method for processing the data request of application scenario 2 may include:
step B1, obtaining a service node set corresponding to service type B, where the service node set includes service node B1, service node B2, service node B3, and service node B4, and service node B1, service node B2, service node B3, and service node B4 are used to process data request B corresponding to service type B.
Step B2, obtaining the metadata corresponding to the service node set according to the service type B, where the metadata B includes the service type B, and therefore the metadata corresponding to the service node set may be the metadata B.
And B3, acquiring the execution sequence of each service node in the service node set according to the metadata B.
For example, it is determined that the execution order of the serving node B1 in the set of serving nodes is 1, the execution order of the serving node B2 is 2, the execution order of the serving node B3 is 2, and the execution order of the serving node B4 is 3.
Step b4, determining the path information corresponding to the service node set according to the execution sequence of each service node.
For example, since the execution order of serving node B1 is 1, the execution order of serving node B2 is 2, the execution order of serving node B3 is 2, and the execution order of serving node B4 is 3, the path information corresponding to the serving node set may be: serving node B1 - (serving node B2 and serving node B3 in parallel) - serving node B4.
Referring to fig. 3B, an example of path information corresponding to a service node set is shown.
Step b5, determining the data processing mode corresponding to each service node according to the path information.
For example, referring to the path information shown in fig. 3B, for serving node B1, since serving node B1 is the first serving node to calculate the path, that is, serving node B1 does not have an adjacent serving node in front of it, and has an adjacent serving node B2 and serving node B3 behind it, determining the data processing manner of serving node B1 may include a receiving condition, where the receiving condition is used to instruct to send the processing result of serving node B1 to serving node B2 and send the processing result of serving node B1 to serving node B3.
The serving node B2 has a neighboring serving node B1 in front of it and a neighboring serving node B4 behind it, so the data processing mode of the serving node B2 includes a trigger condition and a receiving condition, the trigger condition is used to indicate that after receiving the processing result of the serving node B1, the data request is processed based on the processing result of the serving node B1, and the receiving condition is used to indicate that the processing result of the serving node B2 is sent to the serving node B4.
The serving node B3 has a neighboring serving node B1 in front of it and a neighboring serving node B4 behind it, so the data processing mode of the serving node B3 includes a trigger condition and a receiving condition, the trigger condition is used to indicate that after receiving the processing result of the serving node B1, the data request is processed based on the processing result of the serving node B1, and the receiving condition is used to indicate that the processing result of the serving node B3 is sent to the serving node B4.
The serving node B4 has neighboring serving node B2 and serving node B3 in front of it and no neighboring serving node behind it; therefore, it is determined that the data processing manner of the serving node B4 includes a trigger condition, where the trigger condition is used to indicate that, after the processing result of the serving node B2 and the processing result of the serving node B3 are received, the data request is processed based on the processing result of the serving node B2 and the processing result of the serving node B3.
Step b6, sending the data processing mode corresponding to each service node to the service node.
Step b7, each service node processes the data request by using a data processing mode.
For example, the serving node B1, upon receiving the data request B, may process the data request B to obtain a processing result for the serving node B1. Based on the reception condition of the serving node B1, the serving node B1 transmits the data request B and the processing result of the serving node B1 to the serving node B2 and the serving node B3.
After serving node B2 receives the processing results of data request B and serving node B1, based on the trigger condition of serving node B2, serving node B2 processes data request B based on the processing result of serving node B1, and obtains the processing result of serving node B2. Based on the reception condition of the serving node B2, the serving node B2 transmits the data request B and the processing result of the serving node B2 to the serving node B4.
After serving node B3 receives the processing results of data request B and serving node B1, based on the trigger condition of serving node B3, serving node B3 processes data request B based on the processing result of serving node B1, and obtains the processing result of serving node B3. Based on the reception condition of the serving node B3, the serving node B3 transmits the data request B and the processing result of the serving node B3 to the serving node B4.
After the serving node B4 receives the data request B, the processing result of the serving node B2, and the processing result of the serving node B3, based on the trigger condition of the serving node B4, the serving node B4 processes the data request B based on the processing result of the serving node B2 and the processing result of the serving node B3 to obtain the processing result of the serving node B4, and thus, the processing procedure of the data request B is completed. It should be noted that the trigger condition is satisfied after the processing result of the serving node B2 and the processing result of the serving node B3 are received.
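The note above, that the trigger condition of serving node B4 is satisfied only after both upstream results have arrived, can be expressed as a small check such as the following sketch (names are illustrative assumptions):

```python
# Sketch: an aggregating node's trigger condition is satisfied only once the
# processing results of ALL of its preceding adjacent nodes have been received.
def trigger_satisfied(wait_for, received_results):
    return all(pred in received_results for pred in wait_for)

received = {"B2": "processing result of B2"}
print(trigger_satisfied(["B2", "B3"], received))  # False: still waiting for B3
received["B3"] = "processing result of B3"
print(trigger_satisfied(["B2", "B3"], received))  # True: B4 may now process
```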
In application scenario 3, for data request C of service type C, data request C may first be processed by serving node C1, and then, based on the processing result of serving node C1, data request C may be processed in parallel by serving node C2, serving node C3, and serving node C4. Metadata C is configured for service type C, and may include service type C, execution order 1 of service node C1, execution order 2 of service node C2, execution order 2 of service node C3, and execution order 2 of service node C4. Based on the execution order of each service node, the final class topology sequence of execution may be determined as 1222.
In the application scenario, the method for processing the data request of application scenario 3 may include:
Step d1, acquiring a service node set corresponding to the service type C, wherein the service node set comprises a service node C1, a service node C2, a service node C3 and a service node C4, and the service node C1, the service node C2, the service node C3 and the service node C4 are used for processing the data request C corresponding to the service type C.
Step d2, obtaining the metadata corresponding to the service node set according to the service type C, where the metadata C includes the service type C, and thus the metadata corresponding to the service node set may be the metadata C.
Step d3, acquiring the execution sequence of each service node in the service node set according to the metadata C.
For example, it is determined that the execution order of the service node C1 in the service node set is 1, the execution order of the service node C2 is 2, the execution order of the service node C3 is 2, and the execution order of the service node C4 is 2.
Step d4, determining the path information corresponding to the service node set according to the execution sequence of each service node.
For example, since the execution order of the service node C1 is 1, the execution order of the service node C2 is 2, the execution order of the service node C3 is 2, and the execution order of the service node C4 is 2, the path information corresponding to the service node set may be: service node C1 - (service node C2, service node C3, and service node C4 in parallel).
Referring to fig. 3C, an example of path information corresponding to a service node set is shown.
Step d5, determining the data processing mode corresponding to each service node according to the path information.
For example, referring to the path information shown in fig. 3C, since there is no neighboring service node in front of the service node C1 and there are neighboring service node C2, service node C3, and service node C4 behind the service node C1, determining the data processing manner of the service node C1 may include a reception condition indicating that the processing result of the service node C1 is sent to the service node C2, the service node C3, and the service node C4.
The service node C2 is preceded by a neighboring service node C1, and therefore, the data processing mode of the service node C2 includes a trigger condition, and the trigger condition is used for indicating that after the processing result of the service node C1 is received, the data request is processed based on the processing result of the service node C1. The service node C3 is preceded by a neighboring service node C1, and therefore, the data processing mode of the service node C3 includes a trigger condition, and the trigger condition is used for indicating that after the processing result of the service node C1 is received, the data request is processed based on the processing result of the service node C1. The service node C4 is preceded by a neighboring service node C1, and therefore, the data processing mode of the service node C4 includes a trigger condition, and the trigger condition is used for indicating that after the processing result of the service node C1 is received, the data request is processed based on the processing result of the service node C1.
Step d6, sending the data processing mode corresponding to each service node to the service node.
Step d7, each service node processes the data request by using a data processing mode.
After receiving the data request C, the service node C1 processes the data request C to obtain a processing result of the service node C1. Based on the reception condition of the serving node C1, the serving node C1 transmits the data request C and the processing result of the serving node C1 to the serving node C2, the serving node C3, and the serving node C4.
After the service node C2 receives the data request C and the processing result of the service node C1, based on the trigger condition of the service node C2, the service node C2 processes the data request C based on the processing result of the service node C1, and obtains the processing result of the service node C2. After the service node C3 receives the data request C and the processing result of the service node C1, based on the trigger condition of the service node C3, the service node C3 processes the data request C based on the processing result of the service node C1, and obtains the processing result of the service node C3. After the service node C4 receives the data request C and the processing result of the service node C1, based on the trigger condition of the service node C4, the service node C4 processes the data request C based on the processing result of the service node C1 to obtain the processing result of the service node C4, and thus, the processing procedure of the data request C is completed.
Of course, the application scenarios 1-3 are only examples of the present application, and are not limited thereto.
In one example, in a streaming computing scenario, a distributed task scheduling scenario, and an aggregation service processing scenario, computation paths similar to DAGs (directed acyclic graphs) are required, as shown in fig. 3A to 3C, and the computation paths can be obtained quickly in these scenarios by using the above method. In the above manner, the data processing manner does not need to be specified in code; instead, the path information is generated automatically and the data processing manner corresponding to each service node is determined according to the path information, so that problems such as a large code development workload, high development and learning costs, and high development difficulty can be solved, the processing efficiency of each service node is improved, the development workload of users is reduced, and the use experience is improved.
In one example, the path information in this embodiment is specifically: an aggregation-based directed acyclic path (see fig. 3B); or a non-aggregation-based directed acyclic path. Further, the non-aggregation-based directed acyclic path may include: a pipeline-based directed acyclic path (see fig. 3A); or a directed acyclic path based on multiple parallel processes (see fig. 3C).
The aggregation-based directed acyclic path may refer to a path in which, starting from the start vertex, all the vertices pointed to converge to another vertex; starting from that vertex, all the vertices pointed to converge to yet another vertex; and so on, until all vertices finally converge to the end vertex. Fig. 3B and fig. 3D are schematic diagrams of aggregation-based directed acyclic paths; in this embodiment, fig. 3B is taken as an example for description.
In one example, when the data request requires a pipeline-like execution path, as shown in fig. 3A, an execution order may be marked for each service node, and then path information may be generated according to the execution order, where the path information may be the class topology sequence 1234. After the service node A1 corresponding to the class topology sequence 1 executes the data request, the computing nodes of the whole pipeline are triggered, that is, the service node A1 corresponding to the class topology sequence 1, the service node A2 corresponding to the class topology sequence 2, the service node A3 corresponding to the class topology sequence 3, and the service node A4 corresponding to the class topology sequence 4 execute the data request respectively.
In an example, when the data request requires an aggregation-based execution path, as shown in fig. 3B, an execution order is marked for each service node, and then path information is generated according to the execution order, where the path information may be a topology-like sequence 1223. After the data request is executed by the serving node B1 corresponding to the class topology sequence 1, the aggregated execution path is triggered, that is, the data request is executed by the serving node B1 corresponding to the class topology sequence 1 first, then the data request is executed by the serving node B2 corresponding to the class topology sequence 2 and the data request is executed by the serving node B3 in parallel, and then the data request is executed by the serving node B4 corresponding to the class topology sequence 3.
In one example, when the data request requires an execution path processed in parallel by multiple consumers, as shown in fig. 3C, each service node may be labeled with an execution order, and then path information is generated according to the execution order, where the path information may be the class topology sequence 1222. After the service node C1 corresponding to the class topology sequence 1 executes the data request, the execution path in which the whole set of consumers process in parallel is triggered, that is, the service node C1 corresponding to the class topology sequence 1 executes the data request first, and then the service node C2, the service node C3, and the service node C4 corresponding to the class topology sequence 2 execute the data request in parallel.
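For illustration only, the three path shapes discussed above could be told apart from the multiset of execution orders with a rough heuristic such as the sketch below; this is an assumption for the example, not the patent's classification rule.

```python
from collections import Counter

# Rough heuristic sketch: distinguish the three execution-path shapes discussed
# above from the multiset of execution orders. Illustrative only.
def classify_path(order_map):
    counts = Counter(order_map.values())
    if all(c == 1 for c in counts.values()):
        return "pipeline-based directed acyclic path"        # e.g. sequence 1234
    if counts[max(counts)] == 1:
        return "aggregation-based directed acyclic path"     # e.g. sequence 1223
    return "multi-consumer parallel directed acyclic path"   # e.g. sequence 1222

print(classify_path({"A1": 1, "A2": 2, "A3": 3, "A4": 4}))
print(classify_path({"B1": 1, "B2": 2, "B3": 2, "B4": 3}))
print(classify_path({"C1": 1, "C2": 2, "C3": 2, "C4": 2}))
```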
Based on the same application concept as the method, an embodiment of the present application further provides a data request processing apparatus, as shown in fig. 4, which is a structural diagram of the data request processing apparatus, and the apparatus includes:
an obtaining module 41, configured to obtain an execution order of service nodes in a service node set;
a determining module 42, configured to determine path information corresponding to the service node set according to the execution sequence, and determine a data processing manner corresponding to the service node according to the path information; the service node is used for processing the data request by using the data processing mode.
The determining module 42 is specifically configured to, when determining the data processing mode corresponding to the service node according to the path information: if the data processing mode comprises a triggering condition, determining a first adjacent service node in front of the service node according to the path information; determining a trigger condition corresponding to a service node according to a first adjacent service node; if the data processing mode comprises a receiving condition, determining a second adjacent service node behind the service node according to the path information; determining a receiving condition corresponding to the service node according to a second adjacent service node; if the data processing mode comprises a triggering condition and a receiving condition, determining a first adjacent service node in front of the service node according to the path information; determining a second adjacent service node behind the service node according to the path information; determining a trigger condition corresponding to the service node according to the first adjacent service node; determining a receiving condition corresponding to the service node according to the second adjacent service node; the triggering condition is used for indicating that after the processing result of the first adjacent service node is received, the data request is processed based on the processing result of the first adjacent service node; the receiving condition is used for indicating that the processing result of the service node is sent to the second adjacent service node.
Based on the same application concept as the method, an embodiment of the present application further provides a server, where the server includes: a processor and a machine-readable storage medium; wherein the machine-readable storage medium has stored thereon a plurality of computer instructions, and the processor executes the computer instructions to perform the following:
for a service node in a service node set, acquiring an execution sequence of the service node;
determining path information corresponding to the service node set according to the execution sequence;
determining a data processing mode corresponding to the service node according to the path information; the service node is used for processing the data request by using the data processing mode.
An embodiment of the present application further provides a machine-readable storage medium, where a number of computer instructions are stored on the machine-readable storage medium, and when executed, the computer instructions perform the following processes:
for a service node in a service node set, acquiring an execution sequence of the service node;
determining path information corresponding to the service node set according to the execution sequence;
determining a data processing mode corresponding to the service node according to the path information; the service node is used for processing the data request by using the data processing mode.
Referring to fig. 5, which is a block diagram of a server proposed in the embodiment of the present application, the server 50 may include: a processor 51, a network interface 52, a bus 53, and a memory 54.
The memory 54 may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the memory 54 may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, or any type of storage disc (e.g., a compact disc, a DVD, etc.).
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (13)

1. A method for processing a data request, the method comprising:
for a service node in a service node set, acquiring an execution sequence of the service node;
determining path information corresponding to the service node set according to the execution sequence;
determining a data processing mode corresponding to the service node according to the path information; the service node is used for processing the data request by using the data processing mode.
2. The method of claim 1,
the obtaining of the execution sequence of the service node includes:
acquiring metadata corresponding to the service node set;
acquiring an execution sequence of the service nodes according to the metadata;
wherein the metadata includes an execution order of service nodes in the set of service nodes.
3. The method of claim 2,
the obtaining of the metadata corresponding to the service node set includes:
acquiring a service node set corresponding to a service type; the service node set comprises a plurality of service nodes, and the service nodes in the service node set are used for processing the data requests of the service types;
acquiring metadata corresponding to the service node set according to the service type;
wherein the metadata further comprises the service type.
4. The method of claim 1, wherein the data processing mode comprises a trigger condition, and determining the data processing mode corresponding to the service node according to the path information comprises:
determining a first adjacent service node preceding the service node according to the path information;
determining a trigger condition corresponding to the service node according to the first adjacent service node;
wherein the trigger condition is used for indicating that, after the processing result of the first adjacent service node is received, the data request is processed based on the processing result of the first adjacent service node.
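Purely as a sketch of claim 4 (the class and function names are invented for illustration), a trigger condition can be represented as the set of preceding, first adjacent nodes whose results must arrive before the node runs:

```python
from dataclasses import dataclass, field

@dataclass
class TriggerCondition:
    # Hypothetical representation: the first adjacent (preceding) service nodes
    # whose processing results must be received before this node processes the request.
    wait_for: list = field(default_factory=list)

def trigger_condition_for(node, predecessors):
    """Determine the trigger condition from the path information's
    predecessor map (node -> first adjacent service nodes preceding it)."""
    return TriggerCondition(wait_for=predecessors.get(node, []))
```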
5. The method of claim 1, wherein the data processing mode comprises a receiving condition, and determining the data processing mode corresponding to the service node according to the path information comprises:
determining a second adjacent service node following the service node according to the path information;
determining a receiving condition corresponding to the service node according to the second adjacent service node; wherein the receiving condition is used for indicating that the processing result of the service node is sent to the second adjacent service node.
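Likewise, a minimal sketch of claim 5 (names invented): the receiving condition lists the following, second adjacent nodes to which this node's processing result is sent.

```python
from dataclasses import dataclass, field

@dataclass
class ReceivingCondition:
    # Hypothetical representation: the second adjacent (following) service nodes
    # that should receive this node's processing result.
    send_to: list = field(default_factory=list)

def receiving_condition_for(node, successors):
    """Determine the receiving condition from the path information's
    successor map (node -> second adjacent service nodes following it)."""
    return ReceivingCondition(send_to=successors.get(node, []))
```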
6. The method of claim 1, wherein the data processing mode comprises a trigger condition and a receiving condition, and determining the data processing mode corresponding to the service node according to the path information comprises:
determining a first adjacent service node preceding the service node according to the path information;
determining a second adjacent service node following the service node according to the path information;
determining a trigger condition corresponding to the service node according to the first adjacent service node;
determining a receiving condition corresponding to the service node according to the second adjacent service node;
wherein the trigger condition is used for indicating that, after the processing result of the first adjacent service node is received, the data request is processed based on the processing result of the first adjacent service node; and the receiving condition is used for indicating that the processing result of the service node is sent to the second adjacent service node.
7. The method according to any one of claims 4-6, wherein after determining the data processing mode corresponding to the service node according to the path information, the method further comprises:
sending the trigger condition to the service node, so that the service node, based on the trigger condition, processes the data request based on the processing result of the first adjacent service node after acquiring the data request and receiving the processing result of the first adjacent service node; or,
sending the receiving condition to the service node, so that the service node, based on the receiving condition, processes the data request after acquiring the data request and sends the processing result of the service node to the second adjacent service node; or,
sending the trigger condition and the receiving condition to the service node, so that the service node, based on the trigger condition and the receiving condition, acquires the data request, receives the processing result of the first adjacent service node, processes the data request based on the processing result of the first adjacent service node, and sends the processing result of the service node to the second adjacent service node.
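A minimal sketch, assuming an in-memory queue per node and a hypothetical message convention (the data request arrives first, upstream results arrive as (sender, result) pairs), of how a service node might apply the trigger and receiving conditions it was sent under claim 7; nothing here is prescribed by the patent.

```python
def run_service_node(node, trigger_condition, receiving_condition, inbox, outboxes, process):
    """Hypothetical runtime loop of one service node.

    inbox     : Queue-like object; delivers the data request, then (sender, result) pairs
    outboxes  : dict mapping downstream node name -> Queue-like object
    process   : callable(request, upstream_results) -> this node's processing result
    """
    request = inbox.get()  # the data request itself
    upstream_results = {}
    # Trigger condition: wait until every first adjacent (preceding) node has reported.
    while set(upstream_results) < set(trigger_condition):
        sender, result = inbox.get()
        upstream_results[sender] = result
    result = process(request, upstream_results)
    # Receiving condition: forward this node's result to every second adjacent node.
    for downstream in receiving_condition:
        outboxes[downstream].put((node, result))
    return result
```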
8. The method according to claim 1, wherein the path information is specifically: a pipeline-based directed acyclic path; or a convergence-based directed acyclic path; or a non-convergence-based directed acyclic path; or a directed acyclic path based on multiple parallel processes.
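For orientation only, the four path shapes named in claim 8 could be written as adjacency lists such as the following; the node names and the exact interpretation of "non-convergence" and "multiple parallel processes" are assumptions.

```python
# Hypothetical adjacency lists (node -> downstream nodes) for the four path shapes.
PIPELINE       = {"a": ["b"], "b": ["c"], "c": []}           # a -> b -> c
CONVERGENT     = {"a": ["c"], "b": ["c"], "c": []}           # a and b converge into c
NON_CONVERGENT = {"a": ["b", "c"], "b": [], "c": []}         # a fans out, branches never merge
MULTI_PARALLEL = {"a": ["b"], "b": [], "c": ["d"], "d": []}  # two independent pipelines
```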
9. A method for processing a data request, the method comprising:
for a service node in a service node set, acquiring an execution sequence of the service node;
determining a data processing mode corresponding to the service node according to the execution sequence; the service node is used for processing the data request by using the data processing mode.
10. The method of claim 9,
wherein determining the data processing mode corresponding to the service node according to the execution sequence comprises:
if the data processing mode comprises a trigger condition, determining a first adjacent service node preceding the service node according to the execution sequence, and determining the trigger condition corresponding to the service node according to the first adjacent service node;
if the data processing mode comprises a receiving condition, determining a second adjacent service node following the service node according to the execution sequence, and determining the receiving condition corresponding to the service node according to the second adjacent service node;
if the data processing mode comprises a trigger condition and a receiving condition, determining a first adjacent service node preceding the service node according to the execution sequence, and determining a second adjacent service node following the service node according to the execution sequence; determining the trigger condition corresponding to the service node according to the first adjacent service node, and determining the receiving condition corresponding to the service node according to the second adjacent service node;
wherein the trigger condition is used for indicating that, after the processing result of the first adjacent service node is received, the data request is processed based on the processing result of the first adjacent service node; and the receiving condition is used for indicating that the processing result of the service node is sent to the second adjacent service node.
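An illustrative sketch of the case analysis in claim 10, where the conditions contained in the data processing mode decide which adjacent nodes are looked up directly from the execution sequence; the function name and keyword arguments are assumptions introduced here.

```python
def data_processing_mode(node, execution_sequence, with_trigger=True, with_receiving=True):
    """Build the data processing mode of `node` straight from the execution
    sequence (node -> list of nodes that must run before it)."""
    mode = {}
    if with_trigger:
        # First adjacent service nodes preceding `node`.
        mode["trigger_condition"] = execution_sequence.get(node, [])
    if with_receiving:
        # Second adjacent service nodes following `node`.
        mode["receiving_condition"] = [n for n, prevs in execution_sequence.items()
                                       if node in prevs]
    return mode
```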
11. An apparatus for processing a data request, the apparatus comprising:
an acquisition module, configured to acquire an execution sequence of a service node in a service node set;
a determining module, configured to determine path information corresponding to the service node set according to the execution sequence, and determine a data processing manner corresponding to the service node according to the path information; the service node is used for processing the data request by using the data processing mode.
12. The apparatus according to claim 11, wherein the determining module, when determining the data processing mode corresponding to the service node according to the path information, is specifically configured to:
if the data processing mode comprises a trigger condition, determine a first adjacent service node preceding the service node according to the path information, and determine the trigger condition corresponding to the service node according to the first adjacent service node;
if the data processing mode comprises a receiving condition, determine a second adjacent service node following the service node according to the path information, and determine the receiving condition corresponding to the service node according to the second adjacent service node;
if the data processing mode comprises a trigger condition and a receiving condition, determine a first adjacent service node preceding the service node according to the path information, determine a second adjacent service node following the service node according to the path information, determine the trigger condition corresponding to the service node according to the first adjacent service node, and determine the receiving condition corresponding to the service node according to the second adjacent service node;
wherein the trigger condition is used for indicating that, after the processing result of the first adjacent service node is received, the data request is processed based on the processing result of the first adjacent service node; and the receiving condition is used for indicating that the processing result of the service node is sent to the second adjacent service node.
13. A server, characterized in that the server comprises:
a processor and a machine-readable storage medium storing a plurality of computer instructions, wherein the processor, when executing the computer instructions, performs:
for a service node in a service node set, acquiring an execution sequence of the service node;
determining path information corresponding to the service node set according to the execution sequence;
determining a data processing mode corresponding to the service node according to the path information; the service node is used for processing the data request by using the data processing mode.
CN201910030276.7A 2019-01-14 2019-01-14 Data request processing method, device and equipment Active CN111435938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910030276.7A CN111435938B (en) 2019-01-14 2019-01-14 Data request processing method, device and equipment

Publications (2)

Publication Number Publication Date
CN111435938A true CN111435938A (en) 2020-07-21
CN111435938B CN111435938B (en) 2022-11-29

Family

ID=71580553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910030276.7A Active CN111435938B (en) 2019-01-14 2019-01-14 Data request processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN111435938B (en)

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030177162A1 (en) * 1998-05-20 2003-09-18 Wolfgang Staiger Method for assigning tasks, data processing system, client data processing nodes and machine-readable storage medium
CN1939007A (en) * 2004-03-30 2007-03-28 英国电讯有限公司 Treatment of data in networks
US20070052743A1 (en) * 2005-09-05 2007-03-08 Fuji Xerox Co., Ltd. Waveform data-processing device and waveform data-processing method
CN101150431A (en) * 2007-06-06 2008-03-26 中兴通讯股份有限公司 A method for alarm processing streamline and alarm processing
CN102222110A (en) * 2011-06-28 2011-10-19 用友软件股份有限公司 Data processing device and method
CN104508637A (en) * 2012-07-30 2015-04-08 华为技术有限公司 Method for peer to peer cache forwarding
US20140172897A1 (en) * 2012-09-18 2014-06-19 International Business Machines Corporation Device, method, and program for processing data with tree structure
CN103369042A (en) * 2013-07-10 2013-10-23 中国人民解放军国防科学技术大学 Data processing method and data processing device
CN105511956A (en) * 2014-09-24 2016-04-20 中国电信股份有限公司 Method and system for task scheduling based on share scheduling information
CN104599078A (en) * 2015-02-03 2015-05-06 浪潮(北京)电子信息产业有限公司 Data stream processing method and system
US20160294921A1 (en) * 2015-03-31 2016-10-06 International Business Machines Corporation Command processing in distributed computing systems
CN104954483A (en) * 2015-06-30 2015-09-30 深圳清华大学研究院 Method for deploying distributed services through bidding nodes in cloud computing platform
CN106485390A (en) * 2015-09-01 2017-03-08 北京奇虎科技有限公司 The generation method of examination & approval stream and device
CN107819693A (en) * 2016-09-12 2018-03-20 北京百度网讯科技有限公司 data flow processing method and device for data flow system
CN106603723A (en) * 2017-01-20 2017-04-26 腾讯科技(深圳)有限公司 Request message processing method and device
CN108696559A (en) * 2017-04-11 2018-10-23 华为技术有限公司 Method for stream processing and device
CN107370808A (en) * 2017-07-13 2017-11-21 盐城工学院 A kind of method for being used to carry out big data task distributed treatment
CN109067920A (en) * 2018-09-27 2018-12-21 电子科技大学 A kind of load balancing and method for routing for server content update
CN109144735A (en) * 2018-09-29 2019-01-04 百度在线网络技术(北京)有限公司 Method and apparatus for handling data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NARIMAN FARSAD: "Neural Network Detection of Data Sequences in Communication Systems", IEEE *
PAN SHANSHAN: "Research on topology representation learning methods for big data center network environments" (大数据中心网络环境的拓扑表示学习方法研究), 《软件导刊》 (Software Guide) *

Also Published As

Publication number Publication date
CN111435938B (en) 2022-11-29

Similar Documents

Publication Publication Date Title
TWI743458B (en) Method, device and system for parallel execution of blockchain transactions
CN108765159B (en) Block chain-based uplink and state processing method and device and interconnection system
EP3565219B1 (en) Service execution method and device
US11275568B2 (en) Generating a synchronous digital circuit from a source code construct defining a function call
CN110474820B (en) Flow playback method and device and electronic equipment
US20170185454A1 (en) Method and Electronic Device for Determining Resource Consumption of Task
CN111177433B (en) Method and apparatus for parallel processing of information
CN111163130A (en) Network service system and data transmission method thereof
CN112800466A (en) Data processing method and device based on privacy protection and server
CN106257507B (en) Risk assessment method and device for user behavior
CN111488529A (en) Information processing method, information processing apparatus, server, and storage medium
CN116933886B (en) Quantum computing execution method, quantum computing execution system, electronic equipment and storage medium
CN111402058B (en) Data processing method, device, equipment and medium
US10782933B2 (en) Computer data processing method and apparatus for large number operations
CN115952526B (en) Ciphertext ordering method, equipment and storage medium
CN111435938B (en) Data request processing method, device and equipment
CN110347973B (en) Method and device for generating information
CN110059097B (en) Data processing method and device
CN108958902B (en) Graph calculation method and system
CN109150643B (en) Service processing abnormity detection method and device
CN110545296A (en) Log data acquisition method, device and equipment
US9172729B2 (en) Managing message distribution in a networked environment
CN113553203A (en) Request processing method, device, server and storage medium
CN110391952B (en) Performance analysis method, device and equipment
CN107122303B (en) Test method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40034023

Country of ref document: HK

GR01 Patent grant