CN117675906A - Data processing method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN117675906A
Authority
CN
China
Prior art keywords
data
pushed
target
pushing
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311675622.XA
Other languages
Chinese (zh)
Inventor
唐良泽
李涛
杨毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongguancun Kejin Technology Co Ltd
Original Assignee
Beijing Zhongguancun Kejin Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongguancun Kejin Technology Co Ltd filed Critical Beijing Zhongguancun Kejin Technology Co Ltd
Priority to CN202311675622.XA priority Critical patent/CN117675906A/en
Publication of CN117675906A publication Critical patent/CN117675906A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure provides a data processing method and apparatus, an electronic device, and a computer readable storage medium. The method includes: acquiring task information of a target data pushing task to be processed, where the task information includes a tenant identifier and a service identifier, the tenant identifier represents a target tenant, the target tenant is any tenant that sends data to be pushed, and the task information is generated according to the data to be pushed that is sent by the target tenant and corresponds to the target service represented by the service identifier; acquiring first data to be pushed according to the tenant identifier and the service identifier, where a target data set includes the first data to be pushed, and the target data set includes all data to be pushed corresponding to the tenant identifier and the service identifier; and performing data pushing processing on the first data to be pushed. According to the embodiments of the present disclosure, data pushing efficiency can be improved.

Description

Data processing method and device, electronic equipment and computer readable storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a data processing method and device, an electronic device, and a computer readable storage medium.
Background
A webhook is a lightweight event processing mechanism that is increasingly used in business systems. A webhook is a way for one application to provide real-time information to other applications: when data pushing is performed based on a webhook, the webhook application typically pushes data to a preconfigured application interface, that is, an API interface, as soon as the data is generated.
In the related art, when webhook data pushing is performed, each tenant, that is, each service system, generally sends the service data to be pushed directly to the webhook application, and the webhook application then sends the received service data to the corresponding application interfaces in order of receipt. This data pushing method suffers from low pushing efficiency when the services are complex or the amount of data to be processed is large.
Disclosure of Invention
The disclosure provides a data processing method and device, electronic equipment and a computer readable storage medium.
In a first aspect, the present disclosure provides a data processing method, the data processing method comprising:
acquiring task information of a target data pushing task to be processed, wherein the task information comprises a tenant identifier and a service identifier, the tenant identifier is used for representing a target tenant, the target tenant is any tenant which sends the data to be pushed, and the task information is generated according to the data to be pushed, which is sent by the target tenant and corresponds to a target service represented by the service identifier;
Acquiring first data to be pushed according to the tenant identifier and the service identifier, wherein a target data set comprises the first data to be pushed, and the target data set comprises all data to be pushed corresponding to the tenant identifier and the service identifier;
and carrying out data pushing processing on the first data to be pushed.
In a second aspect, the present disclosure provides a data processing apparatus comprising:
the task acquisition unit is used for acquiring task information of a target data pushing task to be processed, wherein the task information comprises a tenant identifier and a service identifier, the tenant identifier is used for representing a target tenant, the target tenant is any tenant which sends the data to be pushed, and the task information is generated according to the data to be pushed, which is sent by the target tenant and corresponds to a target service represented by the service identifier;
the data acquisition unit is used for acquiring first data to be pushed according to the tenant identification and the service identification, wherein a target data set comprises the first data to be pushed, and the target data set comprises all data to be pushed corresponding to the tenant identification and the service identification;
And the data pushing unit is used for carrying out data pushing processing on the first data to be pushed.
In a third aspect, the present disclosure provides an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores one or more computer programs executable by the at least one processor, one or more of the computer programs being executable by the at least one processor to enable the at least one processor to perform the data processing method of the first aspect described above.
In a fourth aspect, the present disclosure provides a computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the data processing method of the first aspect.
The embodiments provided by the present disclosure differ from the related-art processing in which the service data sent by each service system, that is, by each tenant, is obtained directly and pushed in the order in which it is received. According to the embodiments of the present disclosure, task information of a target data pushing task to be processed is obtained; based on the tenant identifier and the service identifier in the task information, the first data to be pushed in the target data set corresponding to the current target data pushing task is obtained; and data pushing processing is performed on the first data to be pushed.
Specifically, in the embodiments of the present disclosure, data pushing is not performed on each piece of service data in the order of receipt. Instead, a data pushing task corresponding to each piece of data to be pushed is created, and each data pushing task is scheduled by acquiring its task information. Since each data pushing task is generated according to the tenant identifier and the service identifier in the data to be pushed, performing data pushing based on the task information of the data pushing tasks isolates the pushing processing by tenant and by service. This prevents a large amount of data to be pushed from a single tenant and/or a single service from delaying the pushing of data of other tenants and/or other services, so that data pushing efficiency can be improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure, without limitation to the disclosure. The above and other features and advantages will become more readily apparent to those skilled in the art by describing in detail exemplary embodiments with reference to the attached drawings, in which:
FIG. 1 is a diagram of a data pushing process in the related art;
FIG. 2 is a flow chart of a data processing method according to an embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating data processing provided by an embodiment of the present disclosure;
FIG. 4 is a block diagram of a data processing apparatus provided by an embodiment of the present disclosure;
fig. 5 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
For a better understanding of the technical solutions of the present disclosure, exemplary embodiments of the present disclosure will be described below with reference to the accompanying drawings, in which various details of the embodiments of the present disclosure are included to facilitate understanding, and they should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Embodiments of the disclosure and features of embodiments may be combined with each other without conflict.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The terms "connected" or "connected," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
It should be noted that, in the technical solutions of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of users' personal information all comply with the relevant laws and regulations and do not violate public order and good customs. The use of user data in the technical solutions complies with relevant national laws, regulations, and standards (for example, the Information Security Technology - Personal Information Security Specification). For example: access control over personal information follows the prescribed measures; the presentation of personal information is subject to regulatory restrictions; personal information is not used beyond its direct or reasonably associated scope; and explicit identity directivity is removed when personal information is used, to avoid precise positioning of specific individuals.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Referring to fig. 1, in the related art, when performing data pushing processing, for example, webhook data pushing processing, an application is generally divided into a data center and an http request area in which the webhook callback, that is, the data pushing processing, is performed. The data center contains the tenants, that is, the service systems; as shown in fig. 1, it may include, for example, a service system A, a service system B, and a service system C. Each tenant in the data center generates service data and sends it to the http request area for data pushing processing. The http request area contains a request driver, which executes the http requests, performs logging, and so on.
As shown in fig. 1, the data pushing processing manner in the related art can only handle situations with a single service and a small concurrent data volume. Because this pushing manner provides no isolation between tenants and services, when the service data generated by a single tenant and/or a single service, that is, the data to be pushed, is large, the pushing of the service data of other tenants and/or other services is affected, resulting in low data pushing efficiency.
In view of this, in the embodiments of the present disclosure, the electronic device performing the data pushing processing does not push data in the order in which the service data is received. Instead, after receiving the service data, that is, the data to be pushed, sent by each tenant, it creates a corresponding data pushing task according to the tenant identifier and the service identifier in the data to be pushed. Based on the tenant identifier and the service identifier in the task information, it obtains the first data to be pushed in the target data set corresponding to the current target data pushing task, and performs data pushing processing on the first data to be pushed. Because each data pushing task is generated according to the tenant identifier and the service identifier in the data to be pushed, performing data pushing processing based on the task information of the data pushing tasks isolates the pushing processing by tenant and by service, so that delayed pushing of the data of other tenants and/or other services, caused by a large amount of data to be pushed from a single tenant and/or a single service, can be avoided.
Referring to fig. 2, a flowchart of a data processing method according to an embodiment of the disclosure is shown. The method can be applied to an electronic device. The electronic device may be a server; more specifically, the method may be applied to any one of a plurality of servers, each of which may run a data pushing application, and the data pushing application may be a webhook callback application. Of course, the electronic device may also be a terminal device, for example, an edge terminal device in an edge computing scenario, which is not particularly limited herein.
As shown in fig. 2, the data processing method provided in the embodiment of the present disclosure includes the following steps S201 to S203, which are described in detail below.
Step S201, task information of a target data pushing task to be processed is obtained, where the task information includes a tenant identifier and a service identifier, the tenant identifier is used to represent a target tenant, the target tenant is any tenant that sends data to be pushed, and the task information is generated according to the data to be pushed sent by the target tenant and corresponding to the target service represented by the service identifier.
A multi-tenant model generally means that a single product instance can serve multiple users or systems; in the multi-tenant mode, each user or system that uses the product instance is referred to as a tenant.
In particular, in the embodiments of the present disclosure, the target tenant may be any one of a plurality of service systems using a data push application. For example, in a case where each of the service system a, the service system B, and the service system C uses a data pushing application in an electronic device that executes the method according to the embodiments of the present disclosure to perform data pushing processing on service data generated by each of the service systems, for example, perform a webhook callback, each of the service system a, the service system B, and the service system C may be regarded as a tenant of the data pushing application.
The target data pushing task may be any one of a plurality of data pushing tasks. Any data pushing task may be a task created by the electronic device executing the method, according to the tenant identifier and the service identifier in the data to be pushed, after it receives the service data, that is, the data to be pushed, sent by any tenant.
The task information of the target data pushing task may include the tenant identifier, that is, the tenant ID, and the service identifier corresponding to the data pushing task, where the service identifier may be, for example, a webhook service type code.
Of course, in actual implementation, other information may be included in the task information, for example, the execution time and the target push type may also be included.
The execution time of a data pushing task may represent the last time the corresponding data pushing task was executed. For example, if task 1 was last scheduled and executed at time T1 and data to be pushed still remains after execution, the execution time of task 1 may be updated to T1. If, when task 1 is scheduled, circuit breaking (fusing) is triggered at time T2 and recovered at time T3, the recovery time, that is, T3, may be used as the execution time of the task.
The target push type of the data push task may be used to indicate whether all data to be pushed under the data push task is processed in a synchronous (sync) manner or an asynchronous (async) manner, and in some embodiments, a first preset value, e.g., 0, may be used to indicate that all data to be pushed under the task is processed in a synchronous manner, and a second preset value, e.g., 1, may be used to indicate that all data to be pushed under the task is processed in an asynchronous manner.
In the embodiments of the present disclosure, each tenant, that is, each service system, may send its service data, that is, the data to be pushed, to the data pushing application, for example, a webhook application, via message middleware, for example, a kafka queue. Unlike the method in the related art, in which the data pushing application sequentially takes the data to be pushed from the kafka queue and pushes it directly, the embodiments of the present disclosure avoid the situation in which, without tenant isolation and/or service isolation, a large amount of data to be pushed from one tenant and/or service delays the pushing of data of other tenants and/or services. The data to be pushed is therefore not processed directly: task information corresponding to the data to be pushed is first extracted from the data to be pushed; if the corresponding data pushing task already exists, the data to be pushed is added to the data set under that data pushing task; if the data pushing task does not exist, the data pushing task is created first, and the data to be pushed is then added to the data set under the newly created task.
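A minimal in-memory sketch of this ingestion flow may look as follows; PushTask, handle_incoming, and the field names are hypothetical stand-ins, and a real deployment would persist these structures in a database rather than a dict:

```python
# In-memory sketch of the task registry described above; names are
# illustrative, not from the patent.
class PushTask:
    def __init__(self, tenant_id, service_id, sync=True):
        self.tenant_id = tenant_id
        self.service_id = service_id
        self.sync = sync          # True: synchronous push; False: asynchronous
        self.pending = []         # data set: all data to be pushed under this task

tasks = {}  # keyed by (tenant_id, service_id) so tenants and services stay isolated

def handle_incoming(record):
    """Create the push task if absent, then append the record to its data set."""
    key = (record["tenant_id"], record["service_id"])
    task = tasks.get(key)
    if task is None:
        task = PushTask(*key, sync=record.get("sync", True))
        tasks[key] = task
    task.pending.append(record["payload"])
    return task

handle_incoming({"tenant_id": "A", "service_id": "order", "payload": {"id": 1}})
handle_incoming({"tenant_id": "A", "service_id": "order", "payload": {"id": 2}})
handle_incoming({"tenant_id": "B", "service_id": "pay", "payload": {"id": 3}})
```

Because the registry is keyed by (tenant identifier, service identifier), data from different tenants or services never shares a task, which is the isolation property the embodiment relies on.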
In actual implementation, for the data set under a data pushing task, the data to be pushed can be written into a data table of a database with the task information of the data pushing task, for example, the tenant identifier and the service identifier, as the primary key, and the data to be pushed can be queried based on the tenant identifier and the service identifier when the data under the task needs to be acquired. Of course, in actual implementation, after a data pushing task is created, the task information of the data pushing task may also be written into a corresponding data table of the database, for example, a task table. The database may be, for example, a mysql database, or another database, which is not particularly limited herein.
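Assuming this storage layout, the two tables could be sketched as follows; sqlite3 stands in for the mysql database mentioned above, and all table and column names are illustrative:

```python
import sqlite3

# Task table keyed by (tenant_id, service_id), plus a data table holding
# every record to be pushed under a task. Schema names are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE push_task (
    tenant_id TEXT, service_id TEXT, push_type INTEGER, exec_time TEXT,
    PRIMARY KEY (tenant_id, service_id))""")
conn.execute("""CREATE TABLE push_data (
    tenant_id TEXT, service_id TEXT, payload TEXT)""")

# Register one task and two records under it, then query by (tenant, service),
# mirroring the primary-key lookup described in the text.
conn.execute("INSERT INTO push_task VALUES ('A', 'order', 0, NULL)")
conn.executemany("INSERT INTO push_data VALUES (?, ?, ?)",
                 [("A", "order", '{"id": 1}'), ("A", "order", '{"id": 2}')])
rows = conn.execute(
    "SELECT payload FROM push_data WHERE tenant_id=? AND service_id=?",
    ("A", "order")).fetchall()
```

The composite primary key on the task table enforces at most one task per (tenant, service) pair, matching the create-if-absent logic described above.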
In practical implementation, there may be multiple electronic devices for executing the method according to the embodiments of the present disclosure, for example, multiple servers. The multiple servers may acquire all currently pending data pushing tasks in response to the triggering of a timing task, and the target data pushing task may be any one of the pending data pushing tasks.
For example, a timing task may be set, and each server is triggered every 2 seconds to acquire all data pushing tasks to be processed, so as to perform data pushing processing on the data to be pushed under each task.
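The timed trigger described above can be sketched roughly as follows; fetch_pending_tasks is a hypothetical stand-in for querying the task table, and the interval is shortened from the text's 2-second example so the sketch runs quickly:

```python
import time

fetched = []  # record of what each scheduling round picked up

def fetch_pending_tasks():
    # Placeholder for querying the task table; returns task keys.
    return [("A", "order"), ("B", "pay")]

def scheduler_loop(interval_s, cycles):
    """Each cycle (every interval_s seconds) fetch all pending push tasks."""
    for _ in range(cycles):
        fetched.append(fetch_pending_tasks())
        time.sleep(interval_s)

scheduler_loop(0.01, 3)  # the patent's example triggers every 2 seconds
```

In production each of the N servers would run this loop independently; which server actually executes a given task is decided by the hash-based assignment described later in the text.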
Step S202, obtaining first data to be pushed according to tenant identification and service identification, wherein a target data set comprises the first data to be pushed, and the target data set comprises all data to be pushed corresponding to the tenant identification and the service identification.
Because multiple data pushing tasks may need to be processed at the same time, and the amount of data to be pushed under one or more of them may be relatively large, in the embodiments of the present disclosure each data pushing task may be processed in batches in order to improve data pushing efficiency and prevent a large amount of data under a single task from delaying the processing of data under other tasks. That is, in each scheduling round, a preset number of pieces of data to be pushed under one data pushing task, for example, 5 pieces, are acquired for data pushing processing.
For example, if the target data set corresponding to the target data pushing task includes 100 pieces of data to be pushed, after the server acquires the target data pushing task in the process of triggering and executing task scheduling by the timing task, 5 pieces of data to be pushed can be acquired from the target data set as data to be pushed currently, and other 95 pieces of data to be pushed can be processed in the process of subsequently scheduling the task.
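The batch selection in this example can be sketched as follows, with the preset batch size of 5 taken from the text:

```python
def take_batch(pending, batch_size=5):
    """Take at most batch_size records from a task's data set for this round."""
    return pending[:batch_size], pending[batch_size:]

# 100 records under one task: 5 are pushed in this scheduling round, and the
# remaining 95 wait for later rounds, so other tasks are not starved.
batch, remaining = take_batch(list(range(100)))
```

Capping the per-round batch keeps any single task from monopolizing a scheduling cycle, which is exactly the fairness property the paragraph above describes.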
Step S203, performing data pushing processing on the first data to be pushed.
After the first data to be pushed is obtained in step S202, the data pushing process may be performed on the first data to be pushed.
Specifically, when the data pushing processing is webhook pushing processing, the preconfigured push address information, that is, the request url, can be queried according to the tenant identifier and the service identifier, and the first data to be pushed can then be pushed to the interface of the application corresponding to the push address information.
That is, in some embodiments, the data push process includes a webhook data push process; in such an embodiment, the task information may also include push address information; in step S203, performing data pushing processing on the first data to be pushed may include: and under the condition that the distributed lock corresponding to the first data to be pushed is obtained, performing webhook data pushing processing on the first data to be pushed so as to push the first data to be pushed to an interface represented by push address information.
Of course, in actual implementation, in the case where the data pushing process is webhook pushing process, in order to improve security in the data transmission process, information such as signature information of the interface represented by the push address information, AES encryption information of the data, and successful identification of the interface may be queried based on the tenant identification and the service identification, which is not limited herein.
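A rough sketch of the lock-guarded push in these embodiments, under stated assumptions: the in-process dictionary stands in for a real distributed lock service (for example, Redis SETNX), and the HTTP POST itself is stubbed out, since the patent does not fix a transport library:

```python
import json
import threading

_locks = {}                 # stand-in for a distributed lock service
_guard = threading.Lock()

def try_acquire(lock_key):
    """Hypothetical distributed-lock acquire; real systems might use Redis or ZooKeeper."""
    with _guard:
        if _locks.get(lock_key):
            return False
        _locks[lock_key] = True
        return True

def webhook_push(record, push_url, send=None):
    """Push one record to the configured interface only if its lock is acquired."""
    if not try_acquire(("push", record["id"])):
        return None  # another server is already pushing this record
    body = json.dumps(record).encode("utf-8")
    if send is None:
        # In production this would be an HTTP POST to push_url; here the
        # request is only built, not sent, for illustration.
        return {"url": push_url, "body": body}
    return send(push_url, body)
```

The lock key is per record, so two servers scheduling the same task cannot both push the same piece of data, matching the "distributed lock corresponding to the first data to be pushed" condition above.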
It can thus be seen that the data processing method provided by the embodiments of the present disclosure creates a data pushing task corresponding to each piece of data to be pushed and performs data pushing by scheduling each data pushing task through its task information. Because each data pushing task is generated according to the tenant identifier and the service identifier in the data to be pushed, performing data pushing based on the task information of the data pushing tasks isolates the pushing processing by tenant and by service, avoiding delayed pushing of the data of other tenants and/or other services caused by a large amount of data to be pushed from a single tenant and/or a single service, thereby improving data pushing efficiency.
In some embodiments, the method may be applied to any one of N servers, N ≥ 1. In this embodiment, the task information may further include a target push type, which represents the processing manner of the data to be pushed under the corresponding data pushing task. Specifically, the target push type may take a first preset value or a second preset value: the first preset value indicates that the data to be pushed in the target data set is processed in a synchronous (sync) manner, and the second preset value indicates that the data to be pushed in the target data set is processed in an asynchronous (async) manner.
It can be understood that when the target push type is the first preset value, that is, when the data to be pushed under the target data pushing task must be processed synchronously, the target data pushing task can only be executed by one server at a time. When the task is processed asynchronously, it can be executed by multiple servers at the same time; that is, the data to be pushed under the target data pushing task can be processed in parallel on multiple servers, with different servers processing different data under the task.
In this embodiment, before performing step S202, that is, the step of acquiring the first data to be pushed according to the tenant identifier and the service identifier, the method may further include: acquiring a first hash value corresponding to the tenant identifier and the service identifier when the target push type is the first preset value; determining a first server identifier according to the first hash value and the number of servers; and performing the step of acquiring the first data to be pushed according to the tenant identifier and the service identifier when the first server identifier is consistent with the second server identifier of the current server, where the first server identifier is the identifier of the server determined to process the target data pushing task; or performing the step of acquiring the first data to be pushed according to the tenant identifier and the service identifier when the target push type is the second preset value.
Specifically, in the embodiments of the present disclosure, when the service data generated by a tenant, that is, a service system, has an ordering, the generation time of the data carries business meaning, so the data to be pushed needs to be pushed in a synchronous manner to prevent the data from reaching the receiving end out of order and affecting the business; in this case, it must be ensured that the data pushing task is executed on only one server. In this embodiment, to ensure that the data pushing tasks can be evenly distributed across the servers, after the timing task triggers each server to acquire all currently pending data pushing tasks, each server may compute a hash value from the tenant identifier and the service identifier of each data pushing task and, according to that hash value and the number of servers, determine the identifier of the server that should process the task, that is, the first server identifier. If the first server identifier matches the server's own second server identifier, the server processes the data pushing task; otherwise, it may discard the task.
In this embodiment, the first hash value may be, for example, a hash value of the tenant identifier and the service identifier, where the first hash value may be obtained by a hash (tenant identifier+service identifier);
In addition, in this embodiment, the servers may be identified by their index: for example, if there are 5 servers, their identifiers may be 0, 1, 2, 3, and 4, respectively.
The first server identifier is determined according to the first hash value and the number of servers by taking the first hash value modulo the number of servers.
Taking the first hash value corresponding to the data push task 1 as hash1, taking the number of servers as 5, and taking the modulo function as mod (), in the case where mod (hash 1, 5) is 1, it may be determined that the server for processing the data push task 1 is the server whose server identifier is 1, that is, may be the second server of the 5 servers.
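The modulo assignment in this example can be sketched as follows; the patent does not specify the hash function, so CRC32 is used here purely as a deterministic stand-in for hash(tenant identifier + service identifier):

```python
import zlib

def assign_server(tenant_id, service_id, n_servers):
    """Map a (tenant, service) pair to one server id in [0, n_servers).

    CRC32 is an assumed, deterministic stand-in; the patent only requires
    some hash of the concatenated identifiers taken modulo the server count.
    """
    h = zlib.crc32((tenant_id + service_id).encode("utf-8"))
    return h % n_servers

# Every server evaluates the same expression and gets the same answer, so
# exactly one of the 5 servers executes a given synchronous task.
server = assign_server("tenant1", "webhook-order", 5)
```

Because the mapping is a pure function of the task key, no coordination between servers is needed: each one independently keeps the tasks that hash to its own identifier and discards the rest.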
In the case that the target push type in the task information is the second preset value, that is, indicates that the data to be pushed is asynchronously processed, the step S202 may be directly executed.
It can be seen that, based on the method provided by the embodiment of the present disclosure, whether to process the target data pushing task in parallel may be determined according to different pushing types of the data to be pushed, so that data pushing processing may be performed efficiently under the condition of tenant and/or service isolation.
In some embodiments, in the case that the target push type is the second preset value, before step S203 is performed, that is, before the step of performing data pushing processing on the first data to be pushed is executed, the method may further include: acquiring a second hash value corresponding to the first data to be pushed; obtaining a third server identifier according to the second hash value and the number of the plurality of servers, wherein the third server identifier is the identifier of the server determined to process the first data to be pushed; and executing the step of performing data pushing processing on the first data to be pushed in the case that the third server identifier is consistent with the second server identifier of the current server.
That is, in the case that the target push type is the second preset value, there is no time-ordering requirement on the data to be pushed in the target data pushing task, so the task may be processed by multiple servers at the same time. Similar to the principle of determining the server based on the first hash value, in this embodiment the second hash value of the first data to be pushed, for example its hash value, determines which of the plurality of servers processes that piece of data, so that the servers may process different data to be pushed under the same data pushing task in parallel, thereby improving data pushing efficiency.
That is, as shown in fig. 3, in the embodiment of the present disclosure, after the timing task triggers each server to perform data pushing processing, the servers, for example server 0 and server 1, each acquire a task list 1 containing all the data pushing tasks currently to be processed, for example task 1, task 2 and task 3. These 3 tasks may be generated from the service data that tenants in the data center, for example tenant 1, tenant 2 and tenant 3, send as data to be pushed based on the kafka queue, and all data to be pushed under the 3 tasks may be stored in mysql. After the timing task triggers each server, server 0 and server 1 acquire task list 1 and read the target push type in each piece of task information to judge whether the data to be pushed is processed synchronously or asynchronously. As shown in fig. 3, if the push type in each piece of task information indicates that task 1, task 2 and task 3 are each processed synchronously, server 0 and server 1 each obtain the hash value of the tenant identifier id11 and the service identifier id12 of task 1, that is, obtain hash1 as hash(id11+id12); likewise, they obtain hash2 from the tenant identifier id21 and the service identifier id22 of task 2, and hash3 from the tenant identifier id31 and the service identifier id32 of task 3. hash1, hash2 and hash3 are then each taken modulo the server number 2; if the resulting values are 0, 0 and 1, respectively, server 0 processes task 1 and task 2 while server 1 processes task 3. Server 0 may then obtain a batch of data to be pushed in the current schedule under task 1 and task 2, for example 5 pieces of data, and perform data pushing processing on that batch, for example webhook callback processing; server 1 may likewise obtain a batch of data to be pushed in the current schedule of task 3 and perform data pushing processing on it.
With continued reference to fig. 3, if the push type in each piece of task information indicates that task 1, task 2 and task 3 are each processed asynchronously, each task may be processed by server 0 and server 1 in parallel. In this case, server 0 and server 1 may each obtain a batch of data to be pushed under tasks 1, 2 and 3, for example 20 pieces of data. To avoid the same data under the same task being processed repeatedly by multiple servers, for each piece of data of each task, for example the data to be pushed data101 under task 1, server 0 and server 1 each obtain its hash value, that is, hash4, and take hash4 modulo the server number 2; if the resulting value is 0, data101 is processed by server 0, and server 1 does not process data101.
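The per-record distribution in asynchronous mode can be sketched as follows. This is an illustrative sketch under assumptions: the function names are hypothetical, records are keyed by a string identifier, and CRC32 stands in for the unspecified hash so that every server computes the same owner.

```python
import zlib

def record_owner(record_key: str, server_count: int) -> int:
    # Second hash value of an individual record; CRC32 is a stable hash,
    # so every server agrees on which server owns the record.
    return zlib.crc32(record_key.encode("utf-8")) % server_count

def my_share(batch, server_count: int, my_server_id: int):
    # Each server filters the shared batch down to the records it owns,
    # so the same record is never pushed twice.
    return [r for r in batch if record_owner(r, server_count) == my_server_id]
```

With 2 servers, the shares of server 0 and server 1 partition the batch: together they cover every record, and no record appears in both shares.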
In some embodiments, the task information acquired in step S201 may further include a target push type. In this embodiment, in step S202, acquiring the first data to be pushed according to the tenant identifier and the service identifier may include: in the case that the target push type is a first preset value, obtaining a preset number of pieces of first current data to be pushed from the target data set in ascending order of the generation time of each piece of data to be pushed and putting them into a first target queue, and sequentially acquiring one piece of first current data to be pushed from the first target queue as the first data to be pushed, wherein the preset number of pieces of first current data to be pushed are arranged in the first target queue in ascending order of generation time; and in the case that the target push type is a second preset value, acquiring the task number of all the data pushing tasks to be processed, determining the batch data throughput according to the task number, obtaining second current data to be pushed of the batch data throughput from the target data set and putting it into a second target queue, and sequentially acquiring one piece of second current data to be pushed from the second target queue as the first data to be pushed. The first preset value represents that data pushing processing is performed on the data to be pushed in the target data set in a synchronous processing manner; the second preset value represents that data pushing processing is performed on the data to be pushed in the target data set in an asynchronous processing manner.
The first target queue and the second target queue may be Kafka message queues, or may be other types of queues, which are not particularly limited herein.
Specifically, in the synchronous processing mode, 1 thread pool may be initialized in each server, and the thread pool may include 10 core threads with a maximum of 10 threads; at the same time, a first target queue for storing the current batch of data to be pushed of all data pushing tasks may be set, and the capacity of the first target queue may be set to, for example, 100, or may be set as required.
Because in the synchronous processing mode the service data must be pushed in the order of its generation time, all the data to be pushed in the target data set may be arranged in ascending order of generation time; the preset number of pieces of data to be processed in the current scheduling, for example 5 pieces, that is, the first current data to be pushed, is then obtained and put into the first target queue, and the threads in the thread pool may in turn obtain one piece of first current data to be pushed from the first target queue as the first data to be pushed for data pushing processing.
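The synchronous-mode batching above can be sketched as follows. This is an illustrative sketch, not the disclosure's implementation: the function names and record shape (dicts with a `generated_at` field) are assumptions, and a single consumer loop stands in for the thread pool, since synchronous mode requires the records to leave the queue in generation-time order anyway.

```python
import queue
from operator import itemgetter

def schedule_sync_batch(dataset, batch_size=5, queue_capacity=100):
    # Arrange all pending records in ascending order of generation time,
    # then take only the records for the current scheduling round.
    ordered = sorted(dataset, key=itemgetter("generated_at"))
    batch = ordered[:batch_size]
    # First target queue: a bounded buffer holding the current batch.
    q = queue.Queue(maxsize=queue_capacity)
    for record in batch:
        q.put(record)
    return q

def drain_in_order(q):
    # Worker threads would take records one by one; consuming sequentially
    # preserves the generation-time order required by synchronous pushing.
    pushed = []
    while not q.empty():
        pushed.append(q.get())
    return pushed
```

Because the queue is filled in sorted order and FIFO, the push side sees the records exactly in ascending generation time, which is the ordering guarantee synchronous mode exists to provide.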
In the asynchronous processing mode, because multiple servers can process different data to be pushed under all data pushing tasks in parallel, a thread pool may be created in each server, and the thread pool may include 50 core threads with a maximum of 50 threads; at the same time, a second target queue for storing the current batch of data to be pushed of all data pushing tasks may be set, and the capacity of the second target queue may be set to, for example, 2000, or may be set as required.
In this embodiment, determining the batch data throughput according to the number of tasks may be: determining the batch data throughput according to the capacity of the second target queue and the task number of all the data pushing tasks currently to be processed, that is, the total number of tasks. For example, with 10 data pushing tasks to be processed and a second target queue capacity of 1000, in order to avoid queue overflow the batch data throughput may be obtained as 1000/10 = 100; if the task number is 19, the batch data throughput may be ⌊1000/19⌋ = 52. That is, the batch data throughput may be determined from the integer quotient of the capacity and the number of tasks.
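The capacity-based sizing above reduces to an integer division, sketched here with a hypothetical function name:

```python
def batch_throughput(queue_capacity: int, task_count: int) -> int:
    # Batch data throughput is the integer quotient of the second target
    # queue's capacity and the number of pending push tasks, so that
    # taking one batch per task can never overflow the queue.
    return queue_capacity // task_count
```

Flooring the quotient is the conservative choice: with 19 tasks and capacity 1000, each task contributes at most 52 records, for 988 total, which stays within the queue bound.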
After the batch data throughput is determined, a corresponding amount of second current data to be pushed can be obtained from the target data set and put into the second target queue, and the first data to be pushed can then be obtained from the second target queue in sequence.
Therefore, according to the method provided by the embodiment of the present disclosure, whether the data to be pushed is processed synchronously or asynchronously, the data to be pushed under one data pushing task is obtained in batches, all the data currently to be pushed is put into the first target queue or the second target queue of one server for caching, and data pushing processing is performed on it in a multithreaded manner, so that efficient data pushing can be achieved while tenant and/or service isolation is fully ensured.
In some embodiments, after the data pushing processing is performed on the first data to be pushed, the method further includes: if the data pushing processing on the first data to be pushed succeeds, setting the push retry count to a preset initial value; otherwise, incrementing both the push retry count and the target abnormal retry count by 1. The push retry count indicates the number of abnormal pushes corresponding to the first data to be pushed, and the target abnormal retry count is the sum of the numbers of abnormal pushes corresponding to all data to be pushed under the target data pushing task.
In some embodiments, after the push retry count and the target abnormal retry count are incremented by 1, the method further includes: in the case that the incremented target abnormal retry count is greater than a first preset threshold, performing fusing (that is, circuit-breaking) processing on the target data pushing task, and updating the execution time in the task information to the time of the fusing interrupt processing; and in the case that the incremented push retry count is greater than a second preset threshold, removing the first data to be pushed from the target data set.
That is, in the embodiment of the present disclosure, if the data pushing fails, the push retry count corresponding to the data to be pushed is incremented by 1, and the abnormal retry count of the data pushing task to which the data belongs is checked at the same time, that is, whether the total abnormal retry count of all data to be pushed under one task is greater than, for example, 30; if so, the data pushing task is set to the fusing state so as to perform fusing processing on it, and the execution time in the task information of the task may be modified at the same time. The push retry count corresponding to the data to be pushed is also updated, and if the updated push retry count is greater than the second preset threshold, for example greater than 3, the data to be pushed is deleted; that is, if one piece of data repeatedly fails to be pushed, data pushing processing is no longer performed on it.
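The two-level retry accounting can be sketched as follows. This is an illustrative sketch under assumptions: the class and attribute names are hypothetical, records are keyed by hashable identifiers, and the thresholds 30 and 3 are taken from the examples in the text, not fixed by the disclosure.

```python
TASK_RETRY_LIMIT = 30        # first preset threshold (task-level fusing)
PER_RECORD_RETRY_LIMIT = 3   # second preset threshold (record removal)

class PushTask:
    def __init__(self, records):
        self.records = list(records)
        self.retries = {r: 0 for r in self.records}  # per-record push retry count
        self.total_retries = 0                       # target abnormal retry count
        self.fused = False                           # fusing (circuit-breaker) state

    def report(self, record, success: bool):
        if success:
            self.retries[record] = 0        # reset to the preset initial value
            return
        self.retries[record] += 1
        self.total_retries += 1
        if self.total_retries > TASK_RETRY_LIMIT:
            self.fused = True               # fuse the whole task
        if self.retries[record] > PER_RECORD_RETRY_LIMIT:
            # A repeatedly failing record is removed from the target data set.
            self.records.remove(record)
            del self.retries[record]
```

The per-record counter protects the batch from one poison record, while the task-level counter trips when failures are widespread, stopping the whole task until it is rescheduled.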
It can be understood that if the data is pushed successfully, the corresponding push retry count may be updated to the preset initial value, for example 0; of course, the successfully pushed data may also be deleted from the target data set to avoid repeated pushing.
In some embodiments, after performing the data push processing on the first data to be pushed, the method further includes: determining whether the target data set contains data to be pushed which is not subjected to data pushing processing; removing the target data pushing task from all the data pushing tasks to be processed under the condition that the target data set does not contain the data to be pushed which is not subjected to the data pushing processing; and under the condition that the target data set contains data to be pushed which is not subjected to data pushing processing, updating the execution time in the task information to the current time so as to acquire new first data to be pushed from the target data set and execute the data pushing processing of the new first data to be pushed under the condition that the preset condition is met.
That is, after all the data to be pushed in the current batch has been pushed, it is judged whether data to be pushed still exists in the target data set of the target data pushing task; if not, the target data pushing task is removed, otherwise the execution time of the target data pushing task is modified so that it continues to be processed in the next task scheduling.
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principle logic, which are not described in detail herein for brevity. It will be appreciated by those skilled in the art that in the above methods of the embodiments, the specific order of execution of the steps should be determined by their function and possible inherent logic.
In addition, the present disclosure further provides a data processing apparatus, an electronic device, and a computer-readable storage medium, all of which may be used to implement any one of the data processing methods provided in the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding description of the method parts, which is not repeated here.
Fig. 4 is a block diagram of a data processing apparatus according to an embodiment of the present disclosure.
Referring to fig. 4, an embodiment of the present disclosure provides a data processing apparatus 400 including: a task acquisition unit 401, a data acquisition unit 402, and a data push unit 403.
The task obtaining unit 401 is configured to obtain task information of a target data pushing task to be processed, where the task information includes a tenant identifier and a service identifier, the tenant identifier is used to represent a target tenant, the target tenant is any tenant that sends data to be pushed, and the task information is generated according to the data to be pushed sent by the target tenant and corresponding to a target service represented by the service identifier;
The data obtaining unit 402 is configured to obtain first data to be pushed according to a tenant identifier and a service identifier, where a target data set includes the first data to be pushed, and the target data set includes all data to be pushed corresponding to the tenant identifier and the service identifier;
the data pushing unit 403 is configured to perform data pushing processing on the first data to be pushed.
In some embodiments, the apparatus may be applied to any one of a plurality of (N) servers, where N ≥ 1; the task information further includes a target push type. The apparatus 400 further comprises a first judging unit, which may be configured to: before the step of acquiring the first data to be pushed according to the tenant identifier and the service identifier is executed, acquire, in the case that the target push type is a first preset value, a first hash value corresponding to the tenant identifier and the service identifier; determine a first server identifier according to the first hash value and the number of the plurality of servers; and execute the step of acquiring the first data to be pushed according to the tenant identifier and the service identifier in the case that the first server identifier is consistent with a second server identifier of the current server, wherein the first server identifier is the identifier of the server for processing the target data pushing task; or, in the case that the target push type is a second preset value, execute the step of acquiring the first data to be pushed according to the tenant identifier and the service identifier. The first preset value represents that data pushing processing is performed on the data to be pushed in the target data set in a synchronous processing manner; the second preset value represents that data pushing processing is performed on the data to be pushed in the target data set in an asynchronous processing manner.
In some embodiments, the apparatus 400 further includes a second judging unit, which may be configured to: in the case that the target push type is the second preset value, before the step of performing data pushing processing on the first data to be pushed is executed, acquire a second hash value corresponding to the first data to be pushed; obtain a third server identifier according to the second hash value and the number of the plurality of servers, wherein the third server identifier is the identifier of the server determined to process the first data to be pushed; and execute the step of performing data pushing processing on the first data to be pushed in the case that the third server identifier is consistent with the second server identifier of the current server.
In some embodiments, the data push process includes a webhook data push process; the task information also comprises push address information; the data pushing unit 403 may be configured to, when performing data pushing processing on the first data to be pushed: and under the condition that the distributed lock corresponding to the first data to be pushed is obtained, performing webhook data pushing processing on the first data to be pushed so as to push the first data to be pushed to an interface represented by push address information.
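The lock-then-push flow of the data pushing unit can be sketched as follows. This is an illustrative sketch under loud assumptions: the class and function names are hypothetical, an in-memory per-key mutex table stands in for a real distributed lock (which would typically be backed by a service such as Redis or ZooKeeper), and the webhook callback is passed in as a plain callable instead of an actual HTTP POST to the push address.

```python
import threading

class InMemoryLock:
    # Stand-in for a distributed lock; a per-key table guarded by a mutex
    # is enough to show the acquire/push/release flow.
    def __init__(self):
        self._guard = threading.Lock()
        self._held = set()

    def try_acquire(self, key: str) -> bool:
        with self._guard:
            if key in self._held:
                return False
            self._held.add(key)
            return True

    def release(self, key: str) -> None:
        with self._guard:
            self._held.discard(key)

def push_with_lock(lock, record_id: str, payload, webhook_call) -> bool:
    # Only the caller that wins the lock performs the webhook callback,
    # so a record is never pushed concurrently by two workers.
    if not lock.try_acquire(record_id):
        return False
    try:
        webhook_call(payload)   # e.g. an HTTP POST to the push address
        return True
    finally:
        lock.release(record_id)
```

The `try`/`finally` releases the lock even when the callback raises, mirroring the expiry that a real distributed lock would rely on to avoid deadlock after a worker crash.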
In some embodiments, the apparatus 400 further comprises a failure processing unit configured to: after the data pushing processing is performed on the first data to be pushed, if the data pushing processing on the first data to be pushed succeeds, set the push retry count to a preset initial value; otherwise, increment both the push retry count and the target abnormal retry count by 1. The push retry count indicates the number of abnormal pushes corresponding to the first data to be pushed, and the target abnormal retry count is the sum of the numbers of abnormal pushes corresponding to all data to be pushed under the target data pushing task.
In some embodiments, the failure processing unit may be further configured to, after the push retry count and the target abnormal retry count are incremented by 1: in the case that the incremented target abnormal retry count is greater than a first preset threshold, perform fusing processing on the target data pushing task and update the execution time in the task information to the time of the fusing interrupt processing; and in the case that the incremented push retry count is greater than a second preset threshold, remove the first data to be pushed from the target data set.
In some embodiments, the task information further includes a target push type. The data obtaining unit 402, when obtaining the first data to be pushed according to the tenant identifier and the service identifier, may be configured to: in the case that the target push type is a first preset value, obtain a preset number of pieces of first current data to be pushed from the target data set in ascending order of the generation time of each piece of data to be pushed and put them into a first target queue, and sequentially acquire one piece of first current data to be pushed from the first target queue as the first data to be pushed, wherein the preset number of pieces of first current data to be pushed are arranged in the first target queue in ascending order of generation time; and in the case that the target push type is a second preset value, acquire the task number of all the data pushing tasks to be processed, determine the batch data throughput according to the task number, obtain second current data to be pushed of the batch data throughput from the target data set and put it into a second target queue, and sequentially acquire one piece of second current data to be pushed from the second target queue as the first data to be pushed. The first preset value represents that data pushing processing is performed on the data to be pushed in the target data set in a synchronous processing manner; the second preset value represents that data pushing processing is performed on the data to be pushed in the target data set in an asynchronous processing manner.
In some embodiments, the apparatus 400 further includes a third determining unit, which may be configured to: after data pushing processing is carried out on the first data to be pushed, determining whether the target data set contains data to be pushed which is not subjected to the data pushing processing; removing the target data pushing task from all the data pushing tasks to be processed under the condition that the target data set does not contain the data to be pushed which is not subjected to the data pushing processing; and under the condition that the target data set contains data to be pushed which is not subjected to data pushing processing, updating the execution time in the task information to the current time so as to acquire new first data to be pushed from the target data set and execute the data pushing processing of the new first data to be pushed under the condition that the preset condition is met.
It can be seen that, with the data processing apparatus provided in the embodiments of the present disclosure, by acquiring the task information of the target data pushing task to be processed, acquiring the first data to be pushed in the target data set corresponding to the current target data pushing task based on the tenant identifier and the service identifier in the task information, and performing data pushing processing on the first data to be pushed, delayed pushing of the data of other tenants and/or other services caused by a large amount of data to be pushed by a single tenant and/or a single service can be avoided, so that data pushing efficiency can be improved.
Each of the modules in the above-described apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
Fig. 5 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Referring to fig. 5, an embodiment of the present disclosure provides an electronic device 500 including: at least one processor 501; at least one memory 502, and one or more I/O interfaces 503, coupled between the processor 501 and the memory 502; wherein the memory 502 stores one or more computer programs executable by the at least one processor 501, the one or more computer programs being executable by the at least one processor 501 to enable the at least one processor 501 to perform the data processing methods described above.
The various modules in the electronic device described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
The disclosed embodiments also provide a computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the data processing method described above. The computer readable storage medium may be a volatile or nonvolatile computer readable storage medium.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when executed in a processor of an electronic device, performs the above-described data processing method.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer-readable storage media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable program instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM), Static Random Access Memory (SRAM), flash memory or other memory technology, portable Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer-readable program instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure can be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer-readable program instructions, which electronic circuitry can execute the computer-readable program instructions.
The computer program product described herein may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, it will be apparent to one skilled in the art that features, characteristics, and/or elements described in connection with a particular embodiment may be used alone or in combination with other embodiments unless explicitly stated otherwise. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the disclosure as set forth in the appended claims.

Claims (11)

1. A method of data processing, comprising:
acquiring task information of a target data pushing task to be processed, wherein the task information comprises a tenant identifier and a service identifier, the tenant identifier represents a target tenant, the target tenant is any tenant that sends data to be pushed, and the task information is generated according to the data to be pushed that is sent by the target tenant and corresponds to a target service represented by the service identifier;
acquiring first data to be pushed according to the tenant identifier and the service identifier, wherein a target data set comprises the first data to be pushed, the target data set comprising all data to be pushed corresponding to the tenant identifier and the service identifier;
and carrying out data pushing processing on the first data to be pushed.
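The flow recited in claim 1 — look up the pending data for a (tenant, service) pair and take the first item for pushing — can be sketched minimally as follows. The `TaskInfo` structure, the in-memory `PENDING` map, and the payload values are illustrative assumptions, not part of the claimed implementation (the patent leaves the storage of the target data set open).

```python
from dataclasses import dataclass

@dataclass
class TaskInfo:
    tenant_id: str   # tenant identifier from the task information
    service_id: str  # service identifier from the task information

# In-memory stand-in for the target data set: all data to be pushed,
# keyed by (tenant identifier, service identifier).
PENDING = {
    ("tenant-a", "svc-1"): ["payload-1", "payload-2"],
    ("tenant-b", "svc-1"): ["payload-3"],
}

def fetch_first_pending(task: TaskInfo):
    """Return the first piece of to-be-pushed data for the task, or None."""
    items = PENDING.get((task.tenant_id, task.service_id), [])
    return items[0] if items else None
```

In a real deployment the map would be a database or queue shared by the servers; the lookup key, however, is exactly the (tenant identifier, service identifier) pair the claim describes.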
2. The method according to claim 1, wherein the method is applied to any one of n servers, n being greater than or equal to 1; the task information further comprises a target push type;
before executing the step of obtaining the first data to be pushed according to the tenant identifier and the service identifier, the method further includes:
acquiring a first hash value corresponding to the tenant identifier and the service identifier under the condition that the target push type is a first preset value; determining a first server identifier according to the first hash value and the number of the plurality of servers; and executing the step of acquiring the first data to be pushed according to the tenant identifier and the service identifier under the condition that the first server identifier is consistent with a second server identifier of the current server, wherein the first server identifier is the identifier of the server determined to process the target data pushing task; or
executing the step of acquiring the first data to be pushed according to the tenant identifier and the service identifier under the condition that the target push type is a second preset value;
the first preset value represents that data pushing processing is carried out on data to be pushed in the target data set in a synchronous processing mode; and the second preset value represents that data pushing processing is carried out on the data to be pushed in the target data set in an asynchronous processing mode.
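The server-selection step of claim 2 — hash the tenant and service identifiers, take the result modulo the number of servers, and process the task only on the server whose own index matches — might look like the sketch below. The use of MD5 and the function names are assumptions; the claim does not fix a hash algorithm.

```python
import hashlib

def server_index(tenant_id: str, service_id: str, num_servers: int) -> int:
    """Deterministically map a (tenant, service) pair to a server index.

    Every server computes the same value, so exactly one of the n servers
    claims the task: the one whose own index equals the result.
    """
    key = f"{tenant_id}:{service_id}".encode("utf-8")
    first_hash = int(hashlib.md5(key).hexdigest(), 16)
    return first_hash % num_servers

def should_process(tenant_id: str, service_id: str,
                   num_servers: int, own_index: int) -> bool:
    """A server proceeds with the acquisition step only if the hash selects it."""
    return server_index(tenant_id, service_id, num_servers) == own_index
```

Because the mapping depends only on the identifiers and the server count, no coordination is needed for the synchronous mode: each server independently reaches the same assignment.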
3. The method according to claim 2, wherein in case the target push type is a second preset value, before performing the step of performing the data push processing on the first data to be pushed, the method further comprises:
acquiring a second hash value corresponding to the first data to be pushed;
obtaining a third server identifier according to the second hash value and the number of the plurality of servers, wherein the third server identifier is the identifier of the server determined to process the first data to be pushed;
and executing the step of performing data pushing processing on the first data to be pushed under the condition that the third server identifier is consistent with the second server identifier.
4. The method of claim 1, wherein the data pushing processing comprises webhook data pushing processing, and the task information further comprises push address information;
the data pushing processing for the first data to be pushed includes:
and under the condition that a distributed lock corresponding to the first data to be pushed is acquired, performing the webhook data pushing processing on the first data to be pushed, so as to push the first data to be pushed to an interface represented by the push address information.
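A rough sketch of the lock-guarded webhook push of claim 4, with an in-process dictionary standing in for a real distributed lock service (a production system would typically use something like Redis `SET key value NX` with an expiry). The `post` callable stands in for the HTTP request to the push address; all names here are illustrative.

```python
import threading

_locks = {}                    # stand-in for a distributed lock service
_lock_guard = threading.Lock() # protects the stand-in itself

def try_acquire(key: str) -> bool:
    """Acquire the lock for key if it is free (cf. Redis SET key val NX)."""
    with _lock_guard:
        if key in _locks:
            return False
        _locks[key] = True
        return True

def release(key: str) -> None:
    with _lock_guard:
        _locks.pop(key, None)

def push_webhook(data_id: str, payload: dict, post) -> bool:
    """Push payload to the webhook endpoint only while holding the lock.

    Returns False without pushing if another worker holds the lock,
    so the same piece of data is never pushed twice concurrently.
    """
    if not try_acquire(data_id):
        return False
    try:
        post(payload)  # e.g. an HTTP POST to the claimed push address
        return True
    finally:
        release(data_id)
```

The lock is keyed per data item, which matches the claim's "distributed lock corresponding to the first data to be pushed": two servers may push different items concurrently, but never the same one.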
5. The method of claim 1, wherein after performing the data push processing on the first data to be pushed, the method further comprises:
setting a push retry count to a preset initial value if the data pushing processing on the first data to be pushed succeeds; otherwise, incrementing each of the push retry count and a target abnormal retry count by 1;
wherein the push retry count indicates the number of abnormal pushes corresponding to the first data to be pushed, and the target abnormal retry count indicates the sum of the numbers of abnormal pushes corresponding to the data to be pushed under the target data pushing task.
6. The method of claim 5, wherein after the push retry count and the target abnormal retry count are incremented by 1, the method further comprises:
under the condition that the incremented target abnormal retry count is greater than a first preset threshold, performing circuit-break (fuse) processing on the target data pushing task, and updating the execution time in the task information to the time of the circuit-break interruption;
and under the condition that the incremented push retry count is greater than a second preset threshold, removing the first data to be pushed from the target data set.
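The retry bookkeeping of claims 5 and 6 — reset the per-item counter on success, increment both counters on failure, circuit-break the whole task past one threshold and drop the single item past another — can be modeled as below. The concrete threshold values are arbitrary placeholders for the claimed "preset thresholds".

```python
class PushTask:
    PER_ITEM_LIMIT = 3   # second preset threshold (assumed value)
    TASK_LIMIT = 10      # first preset threshold (assumed value)

    def __init__(self):
        self.retry_count = 0       # push retries for the current item
        self.task_retry_total = 0  # sum of retries across the whole task
        self.circuit_open = False  # task-level circuit breaker ("fuse")
        self.dropped = False       # item removed from the target data set

    def record_result(self, success: bool) -> None:
        if success:
            self.retry_count = 0   # reset to the preset initial value
            return
        self.retry_count += 1
        self.task_retry_total += 1
        if self.task_retry_total > self.TASK_LIMIT:
            self.circuit_open = True  # interrupt the entire push task
        if self.retry_count > self.PER_ITEM_LIMIT:
            self.dropped = True       # give up on this one item only
```

The two thresholds operate at different granularities: the per-item limit sheds a single persistently failing payload, while the task-level limit stops hammering an endpoint that is failing across the board.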
7. The method of claim 1, wherein the task information further comprises a target push type; the obtaining the first data to be pushed according to the tenant identifier and the service identifier includes:
under the condition that the target push type is a first preset value, obtaining a preset number of pieces of first current data to be pushed from the target data set in ascending order of the generation time of each piece of data to be pushed, and placing them into a first target queue; and sequentially obtaining one piece of first current data to be pushed from the first target queue as the first data to be pushed, wherein the preset number of pieces of first current data to be pushed are arranged in the first target queue in ascending order of generation time;
under the condition that the target push type is a second preset value, obtaining the number of all data pushing tasks to be processed, and determining a batch data processing amount according to that number; obtaining second current data to be pushed, in the batch data processing amount, from the target data set and placing it into a second target queue; and sequentially obtaining one piece of second current data to be pushed from the second target queue as the first data to be pushed;
the first preset value represents that data pushing processing is carried out on data to be pushed in the target data set in a synchronous processing mode; and the second preset value represents that data pushing processing is carried out on the data to be pushed in the target data set in an asynchronous processing mode.
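The queueing behavior of claim 7 might be sketched as follows, with a plain `deque` standing in for the target queues. The `capacity` parameter and the rule of dividing the batch budget by the number of pending tasks are assumptions about how the "batch data processing amount" could be derived; the claim only says it depends on the task count.

```python
from collections import deque

def build_queue(dataset, push_type, pending_task_count=1, capacity=100):
    """Build the processing queue for one task according to the push type."""
    # Ascending order of generation time, as the claim requires.
    items = sorted(dataset, key=lambda d: d["created_at"])
    if push_type == "sync":        # the "first preset value": synchronous mode
        batch = items[:capacity]   # a preset number of the earliest items
    else:                          # the "second preset value": asynchronous mode
        # Share the batch budget across all pending push tasks.
        batch = items[: max(1, capacity // pending_task_count)]
    return deque(batch)            # items are then popped one at a time
```

Either way the consumer simply pops one element at a time as "the first data to be pushed", preserving generation order.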
8. The method of claim 1, wherein after performing the data push processing on the first data to be pushed, the method further comprises:
determining whether the target data set contains data to be pushed which is not subjected to data pushing processing;
removing the target data pushing task from all the data pushing tasks to be processed under the condition that the target data set does not contain the data to be pushed which is not subjected to the data pushing processing;
and under the condition that the target data set contains data to be pushed which has not been subjected to the data pushing processing, updating the execution time in the task information to the current time, so as to acquire new first data to be pushed from the target data set and perform the data pushing processing on the new first data to be pushed under the condition that a preset condition is met.
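Claim 8's post-push bookkeeping — retire the task when nothing remains un-pushed, otherwise stamp it with the current time so it is picked up again — reduces to a small function like this; the dictionary-based task representation and field names are illustrative only.

```python
import time

def finish_or_reschedule(task, remaining, task_list):
    """Retire the task if no data remains un-pushed; otherwise reschedule it."""
    if not remaining:                  # every item has been push-processed
        task_list.remove(task)         # drop from the pending-task list
        return "removed"
    task["execute_at"] = time.time()   # pick the task up again when allowed
    return "rescheduled"
```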
9. A data processing apparatus, comprising:
the task acquisition unit is used for acquiring task information of a target data pushing task to be processed, wherein the task information comprises a tenant identifier and a service identifier, the tenant identifier represents a target tenant, the target tenant is any tenant that sends data to be pushed, and the task information is generated according to the data to be pushed that is sent by the target tenant and corresponds to a target service represented by the service identifier;
the data acquisition unit is used for acquiring first data to be pushed according to the tenant identification and the service identification, wherein a target data set comprises the first data to be pushed, and the target data set comprises all data to be pushed corresponding to the tenant identification and the service identification;
and the data pushing unit is used for carrying out data pushing processing on the first data to be pushed.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores one or more computer programs executable by the at least one processor to enable the at least one processor to perform the data processing method of any one of claims 1-8.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the data processing method according to any of claims 1-8.
CN202311675622.XA 2023-12-07 2023-12-07 Data processing method and device, electronic equipment and computer readable storage medium Pending CN117675906A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311675622.XA CN117675906A (en) 2023-12-07 2023-12-07 Data processing method and device, electronic equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN117675906A 2024-03-08

Family

ID=90074793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311675622.XA Pending CN117675906A (en) 2023-12-07 2023-12-07 Data processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117675906A (en)

Similar Documents

Publication Publication Date Title
US9183072B1 (en) Error troubleshooting using a correlated knowledge base
EP2989543B1 (en) Method and device for updating client
CN111538980B (en) Account binding method, device and system for application program
CN109873863B (en) Asynchronous calling method and device of service
US10997569B2 (en) Method and device for processing virtual cards
WO2023151439A1 (en) Account login processing
CN114338793A (en) Message pushing method and device, electronic equipment and readable storage medium
CN111722995B (en) Data processing method and device
CN108616361A (en) A kind of method and device of identification equipment uniqueness
US8914815B2 (en) Automated framework for tracking and maintaining kernel symbol list types
CN110602163A (en) File uploading method and device
CN113935737A (en) Random number generation method and device based on block chain
US10838845B2 (en) Processing failed events on an application server
CN113645260A (en) Service retry method, device, storage medium and electronic equipment
CN110471896B (en) Data processing method, system and server
US10642907B2 (en) Processing service data
US10530940B2 (en) Information processing method, information processing apparatus, and non-transitory recording medium storing instructions for executing an information processing method
CN117675906A (en) Data processing method and device, electronic equipment and computer readable storage medium
CN107070770B (en) Resource transmission method and device
CN115063123A (en) Intelligent manufacturing method and system and electronic equipment
CN109842498A (en) A kind of client terminal configuring method, server, client and electronic equipment
CN111049883B (en) Data reading method, device and system of distributed table system
CN113283891A (en) Information processing method and device and electronic equipment
US11347884B2 (en) Data security tool
CN113868479A (en) Method and device for processing service data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination