CN113127064A - Method and related device for concurrently scheduling and executing time sequence data

Info

Publication number: CN113127064A
Application number: CN201911426140.4A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: data, unit, instance, time sequence, processing
Inventor: 钟斌
Current/Original assignee: Shenzhen Intellifusion Technologies Co., Ltd.
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38 Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/448 Execution paradigms, e.g. implementations of programming paradigms
    • G06F9/4482 Procedural

Abstract

The application discloses a method and a related device for concurrently scheduling and executing time sequence data, applied to a time sequence data scheduling and execution system. The method includes the following steps: acquiring and caching input time sequence data, where the time sequence data comprises a first number of different instance data; acquiring and assembling the first number of different instance data to obtain a second number of instance data and corresponding processing requests; concurrently processing the second number of instance data and the corresponding processing requests through a second number of threads to obtain a second number of data packets; and performing time sequence reconstruction and rectification on the second number of data packets and outputting the first number of concurrent instance data. By implementing the embodiments of the invention, the data of one instance can be served by multiple threads at the same time, and the data of multiple instances can be jointly scheduled by one thread, which improves the processing performance for a single instance and achieves optimal performance for the concurrent scheduling and execution of time sequence data.

Description

Method and related device for concurrently scheduling and executing time sequence data
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to a method and a related apparatus for concurrently scheduling and executing time series data.
Background
Time sequence data refers to data that arrives in time order and whose processed output must preserve the time order of the original input, for example alarm data, index data and performance data in telecommunication systems. This is especially true in AI video processing, where the detection and feature-extraction stages for every video stream can be regarded as time sequence data processing. In a practical system, the processing of time sequence data is critical and, in terms of traffic throughput, is the largest constraint on the system. In the prior art, time sequence data is processed with multi-thread concurrency, but one thread is started for each time sequence processing instance, or each thread is bound to one time sequence processing instance and each instance can only be bound to that thread. As a result, either the processing capacity required by a single time sequence data instance exceeds the processing capacity of a single thread, or the number of threads exceeds its reasonable upper limit and system performance deteriorates. In summary, a technique for efficient, highly concurrent scheduling of time sequence data is urgently needed.
Disclosure of Invention
The embodiments of the invention provide a method and a related device for concurrently scheduling and executing time sequence data, which aim to let multiple threads serve the data of one instance at the same time and let one thread jointly schedule the data of multiple instances, thereby improving the processing performance for a single instance and achieving optimal performance for the concurrent scheduling and execution of time sequence data.
In a first aspect, an embodiment of the present application provides a method for concurrently scheduling and executing time sequence data, applied to a time sequence data scheduling and execution system, the method comprising:
acquiring and caching input time sequence data, where the time sequence data comprises a first number of different instance data;
acquiring and assembling the first number of different instance data to obtain a second number of instance data and corresponding processing requests;
concurrently processing the second number of instance data and the corresponding processing requests through a second number of threads to obtain a second number of data packets;
and performing time sequence reconstruction and rectification on the second number of data packets, and outputting a first number of concurrent instance data.
In a second aspect, an embodiment of the present application provides an apparatus for scheduling and executing time series data concurrently, which includes a processing unit and a communication unit, wherein,
the processing unit is configured to acquire and cache input time sequence data through the communication unit, where the time sequence data comprises a first number of different instance data; acquire and assemble the first number of different instance data to obtain a second number of instance data and corresponding processing requests; concurrently process the second number of instance data and the corresponding processing requests through a second number of threads to obtain a second number of data packets; and perform time sequence reconstruction and rectification on the second number of data packets and output a first number of concurrent instance data.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing steps in any method of the first aspect of the embodiment of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps described in any one of the methods of the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiments of the present application, input time sequence data is acquired and cached, the time sequence data comprising a first number of different instance data; the first number of different instance data is acquired and assembled to obtain a second number of instance data and corresponding processing requests; the second number of instance data and the corresponding processing requests are concurrently processed through a second number of threads to obtain a second number of data packets; and time sequence reconstruction and rectification are performed on the second number of data packets, outputting the first number of concurrent instance data. Because the second number of instance data and the corresponding processing requests are concurrently processed by a second number of threads, multiple threads can serve one time sequence data instance at the same time, which improves the processing performance for a single instance; moreover, data from different input instances can be combined into the same data packet for execution, i.e., one thread can jointly schedule multiple instances, so that optimal performance for the concurrent scheduling and execution of time sequence data is achieved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort. Wherein:
FIG. 1 is a diagram illustrating a system for concurrently scheduling and executing time series data according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for concurrently scheduling and executing time series data according to another embodiment of the present invention;
FIG. 3a is a schematic diagram of a time sequence data input and instance buffer unit according to an embodiment of the present invention;
fig. 3b is a schematic diagram of a multiplexing configuration information structure according to an embodiment of the present invention;
FIG. 3c is a schematic structural diagram of an out-of-order packing unit according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a time sequence reconstruction unit according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating another method for concurrently scheduling and executing time series data according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present invention;
fig. 7 is a block diagram illustrating functional units of an apparatus for concurrently scheduling and executing time series data according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The following are detailed below.
The terms "first" and "second" in the description and claims of the present invention and the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
As shown in fig. 1, fig. 1 is a schematic diagram of a system 100 for concurrently scheduling and executing time sequence data. The system 100 includes a time sequence data input and instance buffer unit 110, an out-of-order packing unit 120, an out-of-order concurrent execution unit 130 and a time sequence reconstruction unit 140, which are connected in sequence. The time sequence data input and instance buffer unit 110 buffers the time sequence data at the input end and sends it to the out-of-order packing unit 120, which acquires and assembles packet data from different instance data; the out-of-order concurrent execution unit 130 then processes all requests concurrently through multiple threads; finally, the time sequence reconstruction unit 140 completes the time sequence reconstruction of the time sequence data and the rectification of the data, so that the output data is emitted in time order independently for each instance. The system 100 for concurrently scheduling and executing time sequence data may be an integrated single device or multiple devices, and for convenience of description is generally referred to herein as an electronic device. The electronic device may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem with wireless communication capability, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal), and the like.
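To make the data flow between the four units concrete, the sketch below composes them in Python. It is only a minimal illustration: the patent prescribes neither a language nor unit interfaces, so the class name SchedulingSystem and the methods put, assemble, process and reorder are assumptions made for this example.

```python
# Illustrative sketch only; unit interfaces (put, assemble, process, reorder)
# are assumed, not taken from the patent.
class SchedulingSystem:
    """Chains the four units of Fig. 1 in order."""

    def __init__(self, cache_unit, packing_unit, execution_unit, reorder_unit):
        self.cache_unit = cache_unit          # unit 110: per-instance input buffering
        self.packing_unit = packing_unit      # unit 120: assembles packets across instances
        self.execution_unit = execution_unit  # unit 130: multi-threaded concurrent processing
        self.reorder_unit = reorder_unit      # unit 140: restores per-instance time order

    def run(self, input_stream):
        # input_stream yields (instance_id, item) pairs in arrival order.
        for instance_id, item in input_stream:
            self.cache_unit.put(instance_id, item)
        packets = self.packing_unit.assemble(self.cache_unit)
        results = self.execution_unit.process(packets)
        return self.reorder_unit.reorder(results)
```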
At present, the processing capacity required by a single time sequence data instance may exceed the processing capacity of a single thread, while the number of threads generally has a reasonable upper limit beyond which system performance deteriorates. In addition, due to the characteristics of the computing system, the underlying computation benefits from combining data over a larger scope for concurrent processing, which the prior art does not support; as a result, the most efficient scheduling and execution of time sequence data cannot be achieved.
Based on this, the embodiments of the present application provide a method for concurrently scheduling and executing time series data to solve the above problems, and the embodiments of the present application are described in detail below.
First, referring to fig. 2, fig. 2 is a flowchart illustrating a method for scheduling and executing time series data concurrently according to an embodiment of the present invention, and the method is applied to the time series data scheduling and executing system shown in fig. 1, where as shown in fig. 2, the method for scheduling and executing time series data concurrently according to an embodiment of the present invention may include:
S201, acquiring and caching input time sequence data, where the time sequence data comprises a first number of different instance data.
The time sequence data can be input as different time sequence data queues. The data is buffered with the time sequence data instance as the granularity, so that First-In First-Out (FIFO) operation is achieved for each instance.
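A minimal sketch of this per-instance FIFO buffering, assuming Python; the class name InstanceCache and its methods are illustrative and not taken from the patent.

```python
from collections import defaultdict, deque


class InstanceCache:
    """Buffers incoming time sequence data with the data instance as the granularity."""

    def __init__(self):
        self.queues = defaultdict(deque)  # instance id -> per-instance FIFO queue

    def put(self, instance_id, item):
        # Items of one instance are appended in arrival order, so first-in
        # first-out order is preserved per instance.
        self.queues[instance_id].append(item)

    def pop(self, instance_id):
        q = self.queues[instance_id]
        return q.popleft() if q else None

    def depth(self, instance_id):
        return len(self.queues[instance_id])
```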
S202, acquiring and assembling the first quantity of different instance data to obtain a second quantity of instance data and corresponding processing requests.
In a specific implementation, the actual batch packing operations and the necessary processing delay management are performed on the cached time sequence data.
S203, concurrently processing the second number of instance data and the corresponding processing requests through a second number of threads to obtain a second number of data packets;
and S204, performing time sequence reconstruction and rectification on the second number of data packets, and outputting a first number of concurrent instance data.
The time sequence reconstruction unit stores the cache information used during the reconstruction of the time sequence data, for example a time sequence data buffer queue and a Reorder Timing clock, where the Reorder Timing clock refers to the latest time up to which all earlier data handled by the current reorder operation have been completely processed.
It can be seen that, in the embodiments of the present application, input time sequence data is acquired and cached, the time sequence data comprising a first number of different instance data; the first number of different instance data is acquired and assembled to obtain a second number of instance data and corresponding processing requests; the second number of instance data and the corresponding processing requests are concurrently processed through a second number of threads to obtain a second number of data packets; and time sequence reconstruction and rectification are performed on the second number of data packets, outputting the first number of concurrent instance data. Because the second number of instance data and the corresponding processing requests are concurrently processed by a second number of threads, multiple threads can serve one time sequence data instance at the same time, which improves the processing performance for a single instance; moreover, data from different input instances can be combined into the same data packet for execution, i.e., one thread can jointly schedule multiple instances, so that optimal performance for the concurrent scheduling and execution of time sequence data is achieved.
In one possible example, the time sequence data scheduling and execution system includes a time sequence data queue and a data multiplexing unit, the data multiplexing unit includes a multiplexing configuration unit and a data obtaining and packaging unit, and the acquiring and assembling the first number of different instance data to obtain a second number of instance data and corresponding processing requests includes: buffering the time sequence data with the time sequence data instance as the granularity through the time sequence data queue; storing and providing, through the multiplexing configuration unit, the multiplexing configuration information of the output time sequence data; and acquiring and packing the first number of different instance data through the data obtaining and packaging unit according to a preset rule to obtain a second number of instance data and corresponding processing requests.
As shown in fig. 3a, fig. 3a is a schematic structural diagram of the time sequence data input and instance buffer unit. Different time sequence data queues (e.g., time sequence data queue 1, time sequence data queue 2, ..., time sequence data queue N-1, time sequence data queue N) are input to the data multiplexing unit for data buffering, the configuration information related to multiplexing is stored and provided by the multiplexing configuration unit, and the data is then obtained and packed according to the preset rule.
As can be seen, in this example, the time series data is cached by taking the time series data instance as the granularity, and the cached time series data is output according to the preset rule, so that different input instances are combined into the same packet, and joint scheduling of multiple time series data instance data is realized, thereby realizing the optimal scheduling performance.
In one possible example, the obtaining and packaging, by the data obtaining and packaging unit, the first number of different instance data according to a preset rule to obtain a second number of instance data and a corresponding processing request includes: acquiring a packaging request through the data acquisition packaging unit; after acquiring a package request, acquiring the size of a level package and initializing a level package variable; and inputting the size of the stage packet and the stage packet variable into the multiplexing configuration unit for processing, and outputting corresponding time sequence data.
As shown in fig. 3b, fig. 3b is a schematic diagram of the multiplexing configuration information structure. Acquiring packet data starts with waiting for a packing request; when a packing request is acquired and triggered, the size of the level packet is obtained and the level packet variable is initialized, the packet size and the level packet variable are then input into the multiplexing configuration information structure shown in fig. 3b to poll the instance data, and the corresponding time sequence data is output.
In a specific implementation, when a level packet is initialized, a timer is also initialized with its period set to T1; when T1 expires, the quota margin of each instance is reset to its quota configuration and the instance data is then polled.
As can be seen, in this example, a packing request is obtained, the size of the level packet is acquired and the level packet variable is initialized, and the multiplexing configuration unit then processes them and outputs the corresponding time sequence data, so that different input instances are combined into the same packet and data processing efficiency is improved.
In one possible example, the multiplexing configuration unit includes multiple multiplexing configuration information structures, where each multiplexing configuration information structure includes a multiplexing configuration management cache unit, an instance number obtaining unit, a quota configuration unit and a quota margin unit, and the inputting the size of the time sequence data level packet and the level packet variable into the multiplexing configuration unit for processing and outputting a second number of instance data and corresponding processing requests includes: judging whether an instance element of the time sequence data is successfully acquired; if so, acquiring the instance number of the instance element through the instance number obtaining unit, and acquiring two variables, the quota configuration and the quota margin, through the quota configuration unit and the quota margin unit; acquiring the queue depth of the corresponding instance queue according to the instance number; and, according to the comparison of the queue depth with these two quota variables, either outputting a second number of instance data and corresponding processing requests or performing the same operation on the next multiplexing configuration information structure.
The multiplexing configuration information structure shown in fig. 3b includes a multiplexing configuration management cache unit, an instance number obtaining unit, a quota configuration unit, and a quota margin unit.
In a specific implementation, when a request is triggered, the size of the level packet is first acquired and set to N, and the level packet variable M is initialized to 0. The instance data is then polled: after an instance element is obtained, the instance ID(x) and the quota margin variable left(x) in the instance element are read, and the instance queue depth depth(x) is obtained from ID(x). The acquisition principle is as follows: when depth(x) >= left(x), left(x) items of data are obtained and M = M + left(x); when depth(x) < left(x), depth(x) items are obtained and M = M + depth(x). M is then compared with N: when M < N, the next instance element is acquired; when M equals N, the current data packet is output. The quota margins of all instance elements are then checked, and if every margin is 0, a quota reset signal is sent and the packing request is detected again.
In a specific implementation, when the quota margins of all instance elements are detected to be 0, a quota reset signal is sent, the quota margin of each instance element is reset to its quota configuration, and the instance data is then polled again.
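The polling rule above can be sketched roughly as follows (assuming Python). The quota bookkeeping and the cap at the remaining packet capacity N - M are simplifying assumptions added so that the packet never exceeds N; the function and variable names are illustrative, not the patent's.

```python
def assemble_level_packet(instance_queues, quota_config, quota_left, packet_size_n):
    """instance_queues: dict instance_id -> collections.deque of buffered items (FIFO);
    quota_config / quota_left: dict instance_id -> configured quota / remaining margin."""
    packet, m = [], 0
    for x, q in instance_queues.items():
        if m >= packet_size_n:
            break
        # Take left(x) items when depth(x) >= left(x), otherwise take depth(x) items;
        # capping by the remaining capacity (N - M) is an added assumption so the
        # packet size never exceeds N.
        take = min(quota_left[x], len(q), packet_size_n - m)
        packet.extend(q.popleft() for _ in range(take))
        quota_left[x] -= take
        m += take
    # When every instance has exhausted its quota margin, reset the margins to the
    # configured quotas (the "quota reset signal" described above).
    if quota_left and all(v == 0 for v in quota_left.values()):
        for x in quota_left:
            quota_left[x] = quota_config[x]
    return packet
```

For example, with two instances whose quotas are 2 and a level packet size N of 3, one call drains two items from the first instance and one from the second, so a single packet interleaves data from different instances.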
As can be seen, in this example, the size of the level packet is obtained and the level packet variable is initialized, the packet size and the level packet variable are then processed, and the corresponding operation is taken according to the processing result, so that the same input instance can be executed concurrently by different execution units and different input instances can be combined into the same packet, improving data processing efficiency.
In one possible example, the time sequence data scheduling and execution system includes an out-of-order group packing unit, where the out-of-order group packing unit includes a batch processing group packing unit, a batch processing configuration unit, a batch processing signal unit and a time delay signal unit, and the acquiring and assembling the first number of different instance data to obtain a second number of instance data and corresponding processing requests includes: acquiring a data packet pending signal, and querying the queue depth corresponding to the data packet pending signal; when the queue depth is greater than a preset depth, performing the batch packing operation through the batch processing group packing unit; acquiring the configuration information of the batch packing and storing it in the batch processing configuration unit; performing the signal management of batch packing through the batch processing signal unit to obtain signal information; processing the configuration information and the signal information through the batch processing group packing unit to obtain the second number of instance data and corresponding processing requests; and, when the queue depth is smaller than the preset depth, performing processing delay management through the time delay signal unit.
As shown in fig. 3c, fig. 3c is a schematic structural diagram of the out-of-order group packing unit. The batch processing configuration unit 303, the batch processing signal unit 304 and the time delay signal unit 302 are all connected to the batch processing group packing unit 301. After the data packet pending signal is acquired, the queue depth corresponding to the data packet pending signal is queried; when the queue depth is greater than the preset depth, the batch packing operation is performed through the batch processing group packing unit 301; the configuration information of the batch packing is acquired and stored in the batch processing configuration unit 303; the signal management of batch packing is performed through the batch processing signal unit 304 to obtain signal information; the configuration information and the signal information are processed through the batch processing group packing unit to obtain the second number of instance data and corresponding processing requests; and when the queue depth is smaller than the preset depth, processing delay management is performed through the time delay signal unit 302. The configuration information of the batch packing stored by the batch processing configuration unit includes the batch processing capability Capacity(y) (i.e. the maximum batch packet size) of each downstream processing unit.
When the batch processing group packing unit receives a data packet Pending signal, it queries the total queue depth TotalDepth of the time sequence data at that moment and performs signal polling, i.e., it acquires an instance element in the data instance and reads its state information; if the state is Ready, it reads the processing unit number ID(x) and obtains the batch processing capability Capacity(y) from the batch processing configuration information through ID(x). When TotalDepth < Capacity(y), it waits for the data packet Pending signal again; when TotalDepth >= Capacity(y), the batch packet data is acquired and sent to the downstream processing unit.
In a specific implementation, when the queue depth is smaller than the preset depth, the time delay signal unit performs processing delay management; for example, after receiving a data packet Pending signal, the time delay signal unit clears the WatchDog. After the time delay signal unit is triggered, it waits for a WatchDog timeout signal; when the timeout signal is received, it queries the total queue depth TotalDepth at that moment and performs signal polling, i.e., it acquires an instance element and reads its state; if the state is Ready, it reads the processing unit number ID(x) and obtains the batch processing capability Capacity(y) from the batch processing configuration information through ID(x). When TotalDepth < Capacity(y), it waits for the data packet Pending signal again; when TotalDepth >= Capacity(y), the batch packet data is acquired and sent to the downstream processing unit.
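As a rough sketch of this batching decision (the Pending signal, TotalDepth versus Capacity(y), and a WatchDog timeout that forces waiting data to be reconsidered), the Python fragment below shows one possible shape; the threading layout, class and method names are assumptions, since the patent does not specify the signal handling at this level.

```python
import threading


class BatchPacker:
    """Sketch of the out-of-order batch packing decision; names are illustrative."""

    def __init__(self, capacity_by_unit, watchdog_timeout_s, send_downstream):
        self.capacity_by_unit = capacity_by_unit      # processing unit ID(x) -> Capacity(y)
        self.watchdog_timeout_s = watchdog_timeout_s  # WatchDog period for delay management
        self.send_downstream = send_downstream        # callback taking a batch of items
        self.pending = threading.Event()              # set on every data packet Pending signal

    def on_pending(self):
        # A new packet arrived: raise the Pending signal and clear the WatchDog wait.
        self.pending.set()

    def try_batch(self, total_depth, ready_unit_id, fetch_batch):
        """fetch_batch(n) is assumed to pop up to n buffered items."""
        capacity = self.capacity_by_unit[ready_unit_id]
        if total_depth >= capacity:
            # Enough buffered data for a full batch: pack it and send it downstream.
            self.send_downstream(fetch_batch(capacity))
            return True
        # Otherwise wait for the next Pending signal or for the WatchDog to
        # expire, after which the (possibly partial) batch is reconsidered.
        self.pending.clear()
        self.pending.wait(timeout=self.watchdog_timeout_s)
        return False
```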
As can be seen, in this example, a data packet pending signal is obtained and the corresponding queue depth is queried; when the queue depth is greater than the preset depth, the batch packing operation is performed through the batch processing group packing unit; the configuration information of the batch packing is acquired and stored in the batch processing configuration unit; the signal management of batch packing is performed through the batch processing signal unit to obtain signal information; and the configuration information and the signal information are processed through the batch processing group packing unit to obtain the second number of instance data and the processing requests of the time sequence data. The upper limit of single-instance performance is thus raised by an order of magnitude, and joint scheduling of multiple time sequence data instances is achieved, so that optimal scheduling performance is realized.
In one possible example, the time sequence data scheduling and execution system comprises a reconstruction output packet scheduling unit and a reconstruction buffer queue; performing time sequence reconstruction and rectification on the time sequence data of the second number of data packets and outputting a first number of concurrent instance data includes: when the reconstruction output packet scheduling unit detects that a second number of instance data is input, adding the second number of instance data into the reconstruction buffer queue; performing reconstruction timing detection on the data packet after the time sequence increment; if the data packet after the time sequence increment is in the reconstruction buffer queue, outputting the first number of concurrent instance data after the time sequence increment; and if the data packet after the time sequence increment is not in the reconstruction buffer queue, waiting for a second number of instance data to be input.
As shown in fig. 4, fig. 4 is a schematic structural diagram of the time sequence reconstruction unit, which includes a reconstruction output packet scheduling unit and a reconstruction buffer queue of time sequence data. The reconstruction buffer queue of time sequence data includes a plurality of multiplexing configuration management cache units, and each multiplexing configuration management cache unit contains a time sequence data number, i.e. the time sequence data instance ID. The content stored by each instance element comprises a time sequence data buffer queue and a Reorder Timing clock.
In a specific implementation, the reconstruction output and scheduling process waits for a new data packet to be input; when a data packet arrives, it is added to the time sequence data buffer queue, and a reconstruction timing (ReorderTime) check is then performed. For example, with a time sequence increment of delta(t) each time, it is determined whether the data packet corresponding to ReorderTime + delta(t) is in the reorder buffer queue; if so, the data packet corresponding to the current ReorderTime + delta(t) is output, ReorderTime is updated to ReorderTime + delta(t), and the check is repeated for the next ReorderTime + delta(t); if not, the current processing ends and a new data packet input is awaited.
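A minimal sketch of this reconstruction loop, assuming Python and a fixed increment delta(t); the class and method names are illustrative, and a real implementation would keep one such buffer per time sequence data instance.

```python
class ReorderBuffer:
    """Releases processed packets of one instance strictly in time order."""

    def __init__(self, start_time, delta_t):
        self.reorder_time = start_time  # Reorder Timing: latest time already emitted
        self.delta_t = delta_t          # fixed time sequence increment delta(t)
        self.pending = {}               # timestamp -> processed data packet

    def on_packet(self, timestamp, packet):
        """Called whenever an out-of-order packet finishes processing."""
        self.pending[timestamp] = packet
        released = []
        # Emit packets as long as the next expected timestamp is already buffered.
        while self.reorder_time + self.delta_t in self.pending:
            self.reorder_time += self.delta_t
            released.append(self.pending.pop(self.reorder_time))
        return released  # packets restored to their original time order


# Example: rb = ReorderBuffer(start_time=0, delta_t=1)
# rb.on_packet(2, "b") returns [] (packet 1 is still being processed),
# rb.on_packet(1, "a") returns ["a", "b"].
```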
As can be seen, in this example, the reconstruction output packet scheduling unit and the reconstruction buffer queue perform time sequence reconstruction and rectification on the time sequence data of the plurality of packets, output target time sequence data, achieve order of magnitude improvement of the upper limit of performance of a single instance, and achieve joint scheduling of data of a plurality of time sequence data instances, thereby achieving optimal scheduling performance.
In a possible example, after the reconstruction output packet scheduling unit detects that a second number of instance data is input and the second number of instance data is added to the reconstruction buffer queue, the method further includes: identifying the instance number of the input time sequence data; and acquiring the corresponding reconstruction buffer queue according to the instance number, and storing the reconstruction buffer queue together with the latest time up to which the data preceding the reconstruction processing has been completed.
The content stored by each instance element in the reconstruction buffer queue consists of the following two parts: a time sequence data buffer queue and a Reorder Timing clock, where the Reorder Timing clock refers to the latest time up to which all earlier data handled by the current reorder operation have been completely processed.
As can be seen, in this example, the reconstruction output packet scheduling unit and the reconstruction buffer queue perform time sequence reconstruction and rectification on the time sequence data of the plurality of packets, output target time sequence data, achieve order of magnitude improvement of the upper limit of performance of a single instance, and achieve joint scheduling of data of a plurality of time sequence data instances, thereby achieving optimal scheduling performance.
Referring to fig. 5, fig. 5 is a flowchart illustrating a method for concurrently scheduling and executing time series data according to an embodiment of the present application, which is applied to the time series data scheduling and executing system shown in fig. 1, and as shown in the figure, the method for concurrently scheduling and executing time series data includes:
S501, buffering the time sequence data with the time sequence data instance as the granularity through the time sequence data queue;
S502, storing and providing, through the multiplexing configuration unit, the multiplexing configuration information for outputting the time sequence data;
S503, acquiring and packing the first number of different instance data through the data obtaining and packaging unit according to a preset rule to obtain a second number of instance data and corresponding processing requests;
S504, acquiring and assembling the first number of different instance data to obtain a second number of instance data and corresponding processing requests;
S505, concurrently processing the second number of instance data and the corresponding processing requests through a second number of threads to obtain a second number of data packets;
S506, performing time sequence reconstruction and rectification on the time sequence data of the second number of data packets, and outputting the first number of concurrent instance data.
It can be seen that, in the embodiments of the present application, input time sequence data is acquired and cached, the time sequence data comprising a first number of different instance data; the first number of different instance data is acquired and assembled to obtain a second number of instance data and corresponding processing requests; the second number of instance data and the corresponding processing requests are concurrently processed through a second number of threads to obtain a second number of data packets; and time sequence reconstruction and rectification are performed on the second number of data packets, outputting the first number of concurrent instance data. Because the second number of instance data and the corresponding processing requests are concurrently processed by a second number of threads, multiple threads can serve one time sequence data instance at the same time, which improves the processing performance for a single instance; moreover, data from different input instances can be combined into the same data packet for execution, i.e., one thread can jointly schedule multiple instances, so that optimal performance for the concurrent scheduling and execution of time sequence data is achieved.
In accordance with the embodiments shown in fig. 2 and fig. 5, please refer to fig. 6, fig. 6 is a schematic structural diagram of an electronic device 600 according to an embodiment of the present application, and as shown in the figure, the electronic device 600 includes an application processor 610, a memory 620, a communication interface 630, and one or more programs 621, where the one or more programs 621 are stored in the memory 620 and configured to be executed by the application processor 610, and the one or more programs 621 include instructions for performing the following steps;
acquiring and caching input time sequence data, wherein the time sequence data comprises a first amount of different instance data;
acquiring and assembling the first quantity of different instance data to obtain a second quantity of instance data and corresponding processing requests;
carrying out concurrent processing on the second quantity of instance data and the corresponding processing requests through a second quantity of threads to obtain a second quantity of data packets;
and performing time sequence reconstruction and rectification operation on the time sequence data of the second number of data packets, and outputting a first number of concurrent instance data.
It can be seen that, in the embodiments of the present application, input time sequence data is acquired and cached, the time sequence data comprising a first number of different instance data; the first number of different instance data is acquired and assembled to obtain a second number of instance data and corresponding processing requests; the second number of instance data and the corresponding processing requests are concurrently processed through a second number of threads to obtain a second number of data packets; and time sequence reconstruction and rectification are performed on the second number of data packets, outputting the first number of concurrent instance data. Because the second number of instance data and the corresponding processing requests are concurrently processed by a second number of threads, multiple threads can serve one time sequence data instance at the same time, which improves the processing performance for a single instance; moreover, data from different input instances can be combined into the same data packet for execution, i.e., one thread can jointly schedule multiple instances, so that optimal performance for the concurrent scheduling and execution of time sequence data is achieved.
In one possible example, the time-series data scheduling execution system includes a time-series data queue and a data multiplexing unit, the data multiplexing unit includes a multiplexing configuration unit and a data obtaining and packaging unit, and in terms of obtaining and packaging the first number of different instance data to obtain a second number of instance data and corresponding processing requests, the instructions in the program are specifically configured to perform the following operations: caching the time sequence data by taking a time sequence data example as granularity through the time sequence data queue; storing and providing, by the multiplexing configuration unit, configuration information of multiplexing of the output time-series data; and acquiring and packaging the first amount of different example data by the data acquisition and packaging unit according to a preset rule to obtain a second amount of example data and a corresponding processing request.
In one possible example, in terms of obtaining a second number of instance data and corresponding processing requests by the data obtaining and packaging unit obtaining and packaging the first number of different instance data according to a preset rule, the instructions in the program are specifically configured to perform the following operations: acquiring a packaging request through the data acquisition packaging unit; after acquiring a package request, acquiring the size of a level package and initializing a level package variable; and inputting the size of the stage packet and the stage packet variable into the multiplexing configuration unit for processing, and outputting corresponding time sequence data.
In one possible example, the multiplexing configuration unit includes multiple multiplexing configuration information structures, and the multiplexing configuration information structures include a multiplexing configuration management buffer unit, an instance number obtaining unit, a quota configuration unit, and a quota margin unit, and in terms of inputting the size of the time-series data level packet and a level group packet variable into the multiplexing configuration unit for processing, and outputting a second amount of instance data and corresponding processing requests, the instructions in the program are specifically configured to perform the following operations: judging whether an example element of the time sequence data is successfully acquired; if yes, acquiring an instance number of the instance element through the instance number acquisition unit, and acquiring two variables of a configuration margin variable through the quota configuration unit and the quota margin unit; acquiring the queue depth of the corresponding instance queue according to the instance number; and outputting a second amount of example data and a corresponding processing request according to the comparison result of the two variables of the queue depth and the configuration margin variable or performing the operation on the next multiplexing configuration information structure.
In one possible example, the time-series data scheduling execution system includes an out-of-order group packing unit, where the out-of-order group packing unit includes a batch processing group packing unit, a batch processing configuration unit, a batch processing signal unit, and a time delay signal unit, and in terms of obtaining and assembling the first number of different instance data to obtain a second number of instance data and corresponding processing requests, the instructions in the program are specifically configured to perform the following operations: acquiring a data packet pending signal, and inquiring the queue depth corresponding to the data packet pending signal; when the depth of the queue is greater than the preset depth, performing batch packaging operation through a batch packaging unit; acquiring configuration information of a batch processing group package and storing the configuration information in a batch processing configuration unit; performing signal management of batch package through a batch signal processing unit to obtain signal information; processing the configuration information and the signal information through a batch processing group packaging unit to obtain the second amount of instance data and corresponding processing requests; and when the depth of the queue is smaller than the preset depth, performing necessary processing delay management through the delay signal unit.
In one possible example, the time-series data scheduling execution system comprises a reconstruction output packet scheduling unit and a reconstruction buffer queue; in terms of performing time-series reconstruction and rectification operations on the time-series data of the second number of packets, and outputting the first number of concurrent instance data, the instructions in the program are specifically configured to perform the following operations: when the reconstruction output packet scheduling unit detects that a second amount of instance data is input, adding the second amount of instance data into a reconstruction buffer queue; carrying out reconstruction timing detection on the data packet subjected to time sequence increment; if the data packet after the time sequence increment is in the reconstruction buffer queue, outputting a first amount of concurrent instance data after the time sequence increment; and if the data packet after the time sequence increment is not in the reconstruction buffer queue, waiting for a second amount of example data to be input.
In one possible example, the instructions in the program are further specifically configured to: after the reconstruction output packet scheduling unit detects that a second amount of instance data is input, adding the second amount of instance data into a reconstruction buffer queue, and identifying the instance number of the input time sequence data; and acquiring a corresponding reconstruction buffer queue according to the instance number and storing the reconstruction buffer queue and the time for completing the data before reconstruction processing.
The above description has introduced the solution of the embodiments of the present application mainly from the perspective of the method-side implementation process. It is understood that, in order to realize the above functions, the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments provided herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 7 is a block diagram of functional units of an apparatus 700 for concurrently scheduling and executing time series data according to an embodiment of the present application. The apparatus 700 for concurrently scheduling and executing time series data is applied to an electronic device comprising a processing unit 701 and a communication unit 702, wherein,
the processing unit 701 is configured to obtain and buffer input time series data through the communication unit 702, where the time series data includes a first number of different instance data; acquiring and assembling the first quantity of different instance data to obtain a second quantity of instance data and corresponding processing requests; carrying out concurrent processing on the second quantity of instance data and the corresponding processing requests through a second quantity of threads to obtain a second quantity of data packets; and performing time sequence reconstruction and rectification operation on the time sequence data of the second number of data packets, and outputting a first number of concurrent instance data.
The apparatus 700 for concurrently scheduling and executing time series data may further include a storage unit 703 for storing program codes and data of the electronic device. The processing unit 701 may be a processor, the communication unit 702 may be an internal communication interface, and the storage unit 703 may be a memory.
It can be seen that, in the embodiments of the present application, input time sequence data is acquired and cached, the time sequence data comprising a first number of different instance data; the first number of different instance data is acquired and assembled to obtain a second number of instance data and corresponding processing requests; the second number of instance data and the corresponding processing requests are concurrently processed through a second number of threads to obtain a second number of data packets; and time sequence reconstruction and rectification are performed on the second number of data packets, outputting the first number of concurrent instance data. Because the second number of instance data and the corresponding processing requests are concurrently processed by a second number of threads, multiple threads can serve one time sequence data instance at the same time, which improves the processing performance for a single instance; moreover, data from different input instances can be combined into the same data packet for execution, i.e., one thread can jointly schedule multiple instances, so that optimal performance for the concurrent scheduling and execution of time sequence data is achieved.
In one possible example, the time-series data scheduling execution system includes a time-series data queue and a data multiplexing unit, where the data multiplexing unit includes a multiplexing configuration unit and a data obtaining and packaging unit, and in terms of obtaining and packaging the first number of different instance data to obtain a second number of instance data and corresponding processing requests, the processing unit 701 is specifically configured to: caching the time sequence data by taking a time sequence data example as granularity through the time sequence data queue; storing and providing, by the multiplexing configuration unit, configuration information of multiplexing of the output time-series data; and acquiring and packaging the first amount of different example data by the data acquisition and packaging unit according to a preset rule to obtain a second amount of example data and a corresponding processing request.
In a possible example, in the aspect that the obtaining and packaging of the first number of different instance data are performed by the data obtaining and packaging unit according to a preset rule to obtain a second number of instance data and corresponding processing requests, the processing unit 701 is specifically configured to: acquiring a packaging request through the data acquisition packaging unit; after acquiring a package request, acquiring the size of a level package and initializing a level package variable; and inputting the size of the stage packet and the stage packet variable into the multiplexing configuration unit for processing, and outputting corresponding time sequence data.
In one possible example, the multiplexing configuration unit includes a plurality of multiplexing configuration information structures, where each multiplexing configuration information structure includes a multiplexing configuration management cache unit, an instance number obtaining unit, a quota configuration unit, and a quota margin unit, and in terms of inputting the size of the time-series data level packet and a level packet variable into the multiplexing configuration unit for processing, and outputting a second amount of instance data and a corresponding processing request, the processing unit 701 is specifically configured to: judging whether an example element of the time sequence data is successfully acquired; if yes, acquiring an instance number of the instance element through the instance number acquisition unit, and acquiring two variables of a configuration margin variable through the quota configuration unit and the quota margin unit; acquiring the queue depth of the corresponding instance queue according to the instance number; and outputting a second amount of example data and a corresponding processing request according to the comparison result of the two variables of the queue depth and the configuration margin variable or performing the operation on the next multiplexing configuration information structure.
In a possible example, the time-series data scheduling execution system includes an out-of-order group packing unit, where the out-of-order group packing unit includes a batch processing group packing unit, a batch processing configuration unit, a batch processing signal unit, and a time delay signal unit, and in terms of obtaining and assembling the first number of different instance data to obtain a second number of instance data and corresponding processing requests, the processing unit 701 is specifically configured to: acquiring a data packet pending signal, and inquiring the queue depth corresponding to the data packet pending signal; when the depth of the queue is greater than the preset depth, performing batch packaging operation through a batch packaging unit; acquiring configuration information of a batch processing group package and storing the configuration information in a batch processing configuration unit; performing signal management of batch package through a batch signal processing unit to obtain signal information; processing the configuration information and the signal information through a batch processing group packaging unit to obtain the second amount of instance data and corresponding processing requests; and when the depth of the queue is smaller than the preset depth, performing necessary processing delay management through the delay signal unit.
In one possible example, the time-series data scheduling execution system comprises a reconstruction output packet scheduling unit and a reconstruction buffer queue; in terms of performing time sequence reconstruction and rectification operations on the time sequence data of the second number of data packets, and outputting the first number of concurrent instance data, the processing unit 701 is specifically configured to: when the reconstruction output packet scheduling unit detects that a second amount of instance data is input, adding the second amount of instance data into a reconstruction buffer queue; carrying out reconstruction timing detection on the data packet subjected to time sequence increment; if the data packet after the time sequence increment is in the reconstruction buffer queue, outputting a first amount of concurrent instance data after the time sequence increment; and if the data packet after the time sequence increment is not in the reconstruction buffer queue, waiting for a second amount of example data to be input.
In one possible example, the processing unit 701 is further configured to: after the reconstruction output packet scheduling unit detects that the second quantity of instance data is input and the second quantity of instance data is added into the reconstruction buffer queue, identify the instance number of the input time-series data; and acquire the corresponding reconstruction buffer queue according to the instance number, and store the reconstruction buffer queue together with the completion time of the data before reconstruction processing.
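Since each instance keeps its own reconstruction buffer, the bookkeeping can be sketched as below; the dictionary layout and the use of time.monotonic() for the pre-reconstruction completion time are illustrative assumptions.

```python
import time

def register_completed(reorder_buffers, completion_times, instance_no, packet):
    """Route a completed packet to the reconstruction buffer queue of its
    instance and record when it finished processing (before reconstruction)."""
    buffer = reorder_buffers.setdefault(instance_no, {})    # per-instance buffer queue
    buffer[packet["seq"]] = packet
    completion_times.setdefault(instance_no, {})[packet["seq"]] = time.monotonic()
    return buffer
```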
In one possible example, the time-series data scheduling execution system includes a time-series data queue and a data multiplexing unit, where the data multiplexing unit includes a multiplexing configuration unit and a data obtaining and packaging unit. In terms of acquiring and caching the input time-series data, the processing unit 701 is specifically configured to: cache the time-series data through the time-series data queue with a time-series data instance as the granularity; store and provide, through the multiplexing configuration unit, the configuration information for multiplexing the output time-series data; and acquire and package the time-series data according to a preset rule through the data obtaining and packaging unit.
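Caching at instance granularity can be as simple as keeping one queue per instance number; the class below is a minimal illustrative stand-in for the time-series data queue, not the patented structure itself.

```python
from collections import defaultdict, deque

class TimeSeriesDataQueue:
    """Caches input time-series data with one queue per instance (instance granularity)."""
    def __init__(self):
        self._queues = defaultdict(deque)

    def cache(self, instance_no, element):
        self._queues[instance_no].append(element)   # buffer the element for its instance

    def depth(self, instance_no):
        return len(self._queues[instance_no])       # queue depth used by later scheduling steps

    def pop(self, instance_no):
        return self._queues[instance_no].popleft()
```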
In one possible example, in terms of acquiring and packaging the time-series data according to a preset rule through the data obtaining and packaging unit, the processing unit 701 is specifically configured to: acquire a packaging request through the data obtaining and packaging unit; after acquiring the packaging request, acquire the size of the level packet and initialize the level packet variable; and input the size of the level packet and the level packet variable into the multiplexing configuration unit for processing, and output the corresponding time-series data.
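The packaging-request handling might be wired up as in the following sketch, where the request layout, the default level packet size, and the injected mux_scan callable (standing in for the multiplexing configuration unit) are assumptions for illustration.

```python
def on_packaging_request(request: dict, mux_scan):
    """Read the level packet size from the packaging request, initialise the
    level packet variable, and let the multiplexing configuration unit
    (mux_scan) fill it with the corresponding time-series data."""
    level_packet_size = request.get("level_packet_size", 8)   # size of the level packet
    level_packet = []                                          # initialised level packet variable
    level_packet.extend(mux_scan(level_packet_size))           # processing by the multiplexing configuration unit
    return level_packet
```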
In one possible example, the multiplexing configuration unit includes a plurality of multiplexing configuration information structures, where each multiplexing configuration information structure includes a multiplexing configuration management cache unit, an instance number obtaining unit, a quota configuration unit, and a quota margin unit. In terms of inputting the size of the level packet and the level packet variable into the multiplexing configuration unit for processing and outputting the corresponding time-series data, the processing unit 701 is specifically configured to: judge whether an instance element of the time-series data is successfully acquired; if yes, acquire the instance number of the instance element through the instance number obtaining unit, and acquire a quota configuration variable and a quota margin variable through the quota configuration unit and the quota margin unit respectively; acquire the queue depth of the corresponding instance queue according to the instance number; and, according to the queue depth and the two margin variables, either output the corresponding time-series data or repeat the operation on the next multiplexing configuration information structure.
In one possible example, the time-series data scheduling execution system includes an out-of-order group packing unit, where the out-of-order group packing unit includes a batch processing group packing unit, a batch processing configuration unit, a batch processing signal unit, and a time delay signal unit. In terms of acquiring and assembling the second quantity of instance data and the processing requests of the time-series data, the processing unit 701 is specifically configured to: acquire a data packet pending signal and query the queue depth corresponding to the data packet pending signal; when the queue depth is greater than a preset depth, perform the batch packing operation through the batch processing group packing unit; acquire the configuration information of the batch packing and store it in the batch processing configuration unit; perform signal management of the batch packing through the batch processing signal unit to obtain signal information; process the configuration information and the signal information through the batch processing group packing unit to obtain the second quantity of instance data and the processing requests of the time-series data; and, when the queue depth is less than the preset depth, perform the necessary processing delay management through the time delay signal unit.
In one possible example, the time-series data scheduling execution system includes a reconstruction output packet scheduling unit and a reconstruction buffer queue. In terms of performing the time sequence reconstruction and rectification operations on the plurality of sets of group packet time-series data and outputting the target time-series data, the processing unit 701 is specifically configured to: when the reconstruction output packet scheduling unit detects that the second quantity of instance data is input, add the second quantity of instance data into the reconstruction buffer queue; perform reconstruction timing detection on the data packet whose time sequence has been incremented; if that data packet is in the reconstruction buffer queue, output it; and if that data packet is not in the reconstruction buffer queue, wait for the second quantity of instance data to be input.
In one possible example, the processing unit 701 is further specifically configured to: after the reconstruction output packet scheduling unit detects that the second quantity of instance data is input and the second quantity of instance data is added into the reconstruction buffer queue, identify the instance number of the input time-series data; and acquire the corresponding reconstruction buffer queue according to the instance number, and store the reconstruction buffer queue together with the completion time of the data before reconstruction processing.
It can be understood that, since the method embodiments and the apparatus embodiments are different presentations of the same technical concept, the content of the method embodiments in the present application applies correspondingly to the apparatus embodiments and is not repeated here.
Embodiments of the present application further provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any one of the methods described in the above method embodiments. The computer includes an electronic device.
Embodiments of the present application further provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program. The computer program is operable to cause a computer to perform some or all of the steps of any one of the methods described in the above method embodiments. The computer program product may be a software installation package, and the computer includes an electronic device.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of combined actions. However, those skilled in the art will recognize that the present application is not limited by the described order of actions, because some steps may be performed in other orders or concurrently according to the present application. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a division of logical functions, and other divisions may be used in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the associated hardware. The program may be stored in a computer-readable memory, which may include a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiments of the present application have been described above in detail, and specific examples have been used to explain the principles and implementations of the present application. The above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, a person skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method for concurrently scheduling and executing time sequence data, applied to a time sequence data scheduling execution system, the method comprising the following steps:
acquiring and caching input time sequence data, wherein the time sequence data comprises a first quantity of different instance data;
acquiring and assembling the first quantity of different instance data to obtain a second quantity of instance data and corresponding processing requests;
concurrently processing the second quantity of instance data and the corresponding processing requests through a second quantity of threads to obtain a second quantity of data packets;
and performing time sequence reconstruction and rectification operations on the time sequence data of the second quantity of data packets, and outputting a first quantity of concurrent instance data.
2. The method of claim 1, wherein the time sequence data scheduling execution system comprises a time sequence data queue and a data multiplexing unit, the data multiplexing unit comprises a multiplexing configuration unit and a data obtaining and packaging unit, and the acquiring and assembling of the first quantity of different instance data to obtain a second quantity of instance data and corresponding processing requests comprises:
caching the time sequence data through the time sequence data queue with a time sequence data instance as the granularity;
storing and providing, through the multiplexing configuration unit, configuration information for multiplexing the output time sequence data;
and acquiring and packaging the first quantity of different instance data through the data obtaining and packaging unit according to a preset rule to obtain the second quantity of instance data and the corresponding processing requests.
3. The method according to claim 2, wherein the acquiring and packaging of the first quantity of different instance data through the data obtaining and packaging unit according to the preset rule to obtain the second quantity of instance data and the corresponding processing requests comprises:
acquiring a packaging request through the data obtaining and packaging unit;
after acquiring the packaging request, acquiring the size of a time sequence data level packet and initializing a level packet variable;
and inputting the size of the time sequence data level packet and the level packet variable into the multiplexing configuration unit for processing, and outputting the second quantity of instance data and the corresponding processing requests.
4. The method according to claim 3, wherein the multiplexing configuration unit comprises a plurality of multiplexing configuration information structures, each multiplexing configuration information structure comprises a multiplexing configuration management cache unit, an instance number obtaining unit, a quota configuration unit, and a quota margin unit, and the inputting of the size of the time sequence data level packet and the level packet variable into the multiplexing configuration unit for processing and outputting the second quantity of instance data and the corresponding processing requests comprises:
judging whether an instance element of the time sequence data is successfully acquired;
if yes, acquiring the instance number of the instance element through the instance number obtaining unit, and acquiring a quota configuration variable and a quota margin variable through the quota configuration unit and the quota margin unit respectively;
acquiring the queue depth of the corresponding instance queue according to the instance number;
and, according to the comparison between the queue depth and the two margin variables, outputting the second quantity of instance data and the corresponding processing requests, or performing the above operation on the next multiplexing configuration information structure.
5. The method of claim 4, wherein the time sequence data scheduling execution system comprises an out-of-order group packing unit, the out-of-order group packing unit comprises a batch processing group packing unit, a batch processing configuration unit, a batch processing signal unit, and a time delay signal unit, and the acquiring and assembling of the first quantity of different instance data to obtain the second quantity of instance data and the corresponding processing requests comprises:
acquiring a data packet pending signal, and querying the queue depth corresponding to the data packet pending signal;
when the queue depth is greater than a preset depth, performing the batch packing operation through the batch processing group packing unit;
acquiring configuration information of the batch packing and storing it in the batch processing configuration unit;
performing signal management of the batch packing through the batch processing signal unit to obtain signal information;
processing the configuration information and the signal information through the batch processing group packing unit to obtain the second quantity of instance data and the corresponding processing requests;
and, when the queue depth is less than the preset depth, performing processing delay management through the time delay signal unit.
6. The method according to any one of claims 1-5, wherein the time sequence data scheduling execution system comprises a reconstruction output packet scheduling unit and a reconstruction buffer queue; and the performing of the time sequence reconstruction and rectification operations on the time sequence data of the second quantity of data packets and outputting the first quantity of concurrent instance data comprises:
when the reconstruction output packet scheduling unit detects that the second quantity of instance data is input, adding the second quantity of instance data into the reconstruction buffer queue;
performing reconstruction timing detection on the data packet whose time sequence has been incremented;
if the data packet after the time sequence increment is in the reconstruction buffer queue, outputting the first quantity of concurrent instance data in the incremented time sequence;
and if the data packet after the time sequence increment is not in the reconstruction buffer queue, waiting for the second quantity of instance data to be input.
7. The method of claim 6, wherein, after the reconstruction output packet scheduling unit detects that the second quantity of instance data is input and the second quantity of instance data is added into the reconstruction buffer queue, the method further comprises:
identifying the instance number of the input time sequence data;
and acquiring the corresponding reconstruction buffer queue according to the instance number, and storing the reconstruction buffer queue together with the completion time of the concurrent data before reconstruction processing.
8. An apparatus for concurrently scheduling and executing time sequence data, comprising a processing unit and a communication unit, wherein
the processing unit is configured to acquire and cache input time sequence data through the communication unit, wherein the time sequence data comprises a first quantity of different instance data; acquire and assemble the first quantity of different instance data to obtain a second quantity of instance data and corresponding processing requests; concurrently process the second quantity of instance data and the corresponding processing requests through a second quantity of threads to obtain a second quantity of data packets; and perform time sequence reconstruction and rectification operations on the time sequence data of the second quantity of data packets, and output a first quantity of concurrent instance data.
9. An electronic device comprising a processor, a memory, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-7.
10. A computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-7.
CN201911426140.4A 2019-12-31 2019-12-31 Method and related device for concurrently scheduling and executing time sequence data Pending CN113127064A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911426140.4A CN113127064A (en) 2019-12-31 2019-12-31 Method and related device for concurrently scheduling and executing time sequence data

Publications (1)

Publication Number Publication Date
CN113127064A 2021-07-16

Family

ID=76770933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911426140.4A Pending CN113127064A (en) 2019-12-31 2019-12-31 Method and related device for concurrently scheduling and executing time sequence data

Country Status (1)

Country Link
CN (1) CN113127064A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination