CN112711471A - Big data task execution method and device, server and storage medium - Google Patents

Big data task execution method and device, server and storage medium

Info

Publication number
CN112711471A
Authority
CN
China
Prior art keywords
task
tasks
big data
small
small tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011628541.0A
Other languages
Chinese (zh)
Inventor
仇昌栋
廖长军
周英能
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Hangzhou Information Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Hangzhou Information Technology Co Ltd
Priority to CN202011628541.0A
Publication of CN112711471A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals

Abstract

Embodiments of the invention relate to the field of big data and disclose a big data task execution method and device, a server, and a storage medium. The method includes: acquiring at least two big data tasks; splitting each big data task into a plurality of small tasks, where a small task is a task containing only one task target; deduplicating the small tasks; regrouping the deduplicated small tasks to obtain a plurality of reorganized tasks; and executing the reorganized tasks. This reduces the amount of computation and the consumption of the central processing unit (CPU), and improves the execution efficiency of big data tasks.

Description

Big data task execution method and device, server and storage medium
Technical Field
Embodiments of the invention relate to the field of big data, and in particular to a big data task execution method and device, a server, and a storage medium.
Background
With the wide adoption of artificial intelligence across industries, users pay increasing attention to big data applications, and users in many fields retrieve large amounts of data at every moment. Users load data by creating big data tasks and mine the loaded data for information valuable to business decisions.
The inventors found at least the following problem in how big data tasks load data: because an established big data task must load a large amount of data for computation, the loading process consumes a large amount of input/output (IO) resources, and the computation occupies a large amount of central processing unit (CPU) and memory resources.
Disclosure of Invention
Embodiments of the present invention provide a big data task execution method and device, a server, and a storage medium, which reduce the amount of computation and the consumption of the central processing unit (CPU) and improve the execution efficiency of big data tasks.
To solve the above technical problem, an embodiment of the present invention provides a big data task execution method, including: acquiring at least two big data tasks; splitting each big data task into a plurality of small tasks, where a small task is a task containing only one task target; deduplicating the small tasks; regrouping the deduplicated small tasks to obtain a plurality of reorganized tasks; and executing the reorganized tasks.
An embodiment of the present invention also provides a big data task execution device, including an acquisition module, a splitting module, a deduplication module, a regrouping module, and an execution module. The acquisition module is configured to acquire at least two big data tasks; the splitting module is configured to split each big data task into a plurality of small tasks, where a small task is a task containing only one task target; the deduplication module is configured to deduplicate the small tasks; the regrouping module is configured to regroup the deduplicated small tasks into a plurality of reorganized tasks; and the execution module is configured to execute the reorganized tasks.
An embodiment of the present invention further provides a server, including: at least one processor; and a memory communicatively connected to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the above big data task execution method.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above big data task execution method.
Compared with the related art, embodiments of the present invention acquire at least two big data tasks to be executed, split each big data task into a plurality of small tasks, deduplicate the small tasks, and regroup the deduplicated small tasks into a plurality of reorganized tasks. Because every small task produced by the split contains only one task target, the task targets of the small tasks that remain after deduplication are all different, so the same task target is never calculated twice. This reduces the amount of computation and the CPU consumption and improves the execution efficiency of big data tasks.
In addition, deduplicating the small tasks includes: comparing the task targets of any two small tasks; if the task targets of the two small tasks are completely identical, deleting one of them; and repeating the comparison until every small task has been compared with every other small task. This makes it easy to determine whether the big data tasks contain common small tasks and to eliminate the duplication between them, thereby reducing the amount of computation.
In addition, comparing the task targets of any two small tasks includes comparing the MD5 codes corresponding to the task targets. Because different task targets correspond to different MD5 codes, whether two small tasks are identical can be determined by comparing their MD5 codes, which improves comparison efficiency.
In addition, regrouping the deduplicated small tasks includes: determining the task targets corresponding to the deduplicated small tasks, where each task target comprises at least constraint information and an index; and regrouping the deduplicated small tasks according to the constraint information and the indexes.
In addition, the constraint information comprises at least: region information, time information, and order information.
In addition, regrouping the deduplicated small tasks according to the constraint information and the indexes includes: combining small tasks that have the same constraint information but different indexes into one reorganized task, until all the deduplicated small tasks have been regrouped. Combining small tasks with the same constraint information but different indexes into one reorganized task reduces the number of times the relevant data lists are loaded during task execution, thereby reducing the consumption of input/output (IO) resources.
In addition, the reorganized tasks are executed through a big data task scheduling framework.
Drawings
One or more embodiments are illustrated by way of example in the figures of the accompanying drawings, in which like reference numerals refer to similar elements and which are not drawn to scale unless otherwise specified.
FIG. 1 is a flow chart of a method of execution of a big data task according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a big data task according to a first embodiment of the present invention;
FIG. 3 is a diagram illustrating a big data task broken down into small tasks according to a first embodiment of the present invention;
FIG. 4 is a flow chart of a method of execution of a big data task according to a second embodiment of the present invention;
FIG. 5 is a schematic diagram of regrouping small tasks into reorganized tasks according to a second embodiment of the present invention;
FIG. 6 is a schematic diagram of a big data task execution device according to a third embodiment of the present invention;
FIG. 7 is a schematic configuration diagram of a server according to a fourth embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the various embodiments to help the reader better understand the present application; however, the technical solutions claimed in the present application can still be implemented without these technical details and with various changes and modifications based on the following embodiments.
The division into the following embodiments is for convenience of description only and does not limit the specific implementations of the present invention; the embodiments may be combined with and refer to one another as long as no contradiction arises.
The first embodiment of the invention relates to a big data task execution method that includes: acquiring at least two big data tasks; splitting each big data task into a plurality of small tasks, where a small task is a task containing only one task target; deduplicating the small tasks; regrouping the deduplicated small tasks to obtain a plurality of reorganized tasks; and executing the reorganized tasks. This reduces the amount of computation and the consumption of the central processing unit (CPU) and improves the execution efficiency of big data tasks. Implementation details of the big data task execution method of this embodiment are described below; these details are provided only for ease of understanding and are not necessary for implementing this embodiment.
The big data task execution method of this embodiment is shown in FIG. 1 and includes the following steps.
Step 101: at least two big data tasks are acquired.
Specifically, the task targets contained in a big data task differ from scenario to scenario. For example, in an application scenario that analyzes passenger flow based on base-station signaling, the input big data includes the signaling exchanged between a large number of mobile phones and base stations (base-station signaling for short), the list of base stations covered by each region such as a transportation hub or scenic spot (the region dimension table for short), the home location of each mobile phone, and basic customer information such as the gender of each mobile phone customer. In this scenario, a big data task defines, according to the user configuration, the indexes to be calculated for a specified region over a specified time period, for example passenger flow volume, passenger flow gender distribution, and passenger flow home-location distribution. For another example, in an e-commerce management scenario, the input big data includes the users' customer information, order information, merchant information, and the like, and a big data task defines the indexes of a specified merchant over a specified time period to be calculated from the order information, for example the number of orders and the commodity types of the orders.
The content of a big data task may be as shown in FIG. 2, which shows big data task 1 and big data task 2 established in the passenger flow management scenario. Big data task 1 is created by a user at the tourism bureau, and big data task 2 is created by a user at the transportation department. From the constraints and task targets contained in big data task 1, the tourism-bureau user wants to know task targets such as the passenger flow volume, passenger flow gender distribution, and passenger flow home-location distribution of scenic spot S1 and transportation hub T1 on 2019-7-25; similarly, the task targets required by the transportation-department user can be read from big data task 2. Big data tasks are received from the user side, and their task targets vary with user requirements and application scenarios.
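As an illustration only (the patent does not prescribe any concrete data format), the two big data tasks of FIG. 2 can be sketched as records holding constraint information (regions and time) plus the indexes to calculate; the Python structure and field names below are hypothetical.

```python
# Hypothetical sketch of the two big data tasks of FIG. 2; the field names are
# illustrative only and are not defined by the patent.
big_data_task_1 = {
    "regions": ["scenic spot S1", "transportation hub T1"],
    "time": "2019-7-25",
    "indexes": [
        "passenger flow volume",
        "passenger flow gender distribution",
        "passenger flow home-location distribution",
    ],
}

big_data_task_2 = {
    "regions": ["transportation hub T1", "transportation hub T2"],
    "time": "2019-7-25",
    "indexes": [
        "passenger flow volume",
        "passenger flow gender distribution",
    ],
}
```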
Step 102: each big data task is split into a plurality of small tasks.
Specifically, a big data task is split according to the task targets it contains. A task target consists of constraint information and an index, and the big data task is split at fine granularity along these two dimensions to form reusable units. Each small task produced by the split contains one or more pieces of constraint information and exactly one index, and represents the value of that index under that constraint information; for example, if the constraint information is scenic spot S1 and the index is passenger flow volume, the resulting small task represents the passenger flow volume calculated for scenic spot S1. Small tasks formed in this way can later be regrouped into reorganized tasks that each cover several task targets. A concrete split is shown in FIG. 3: big data task 1 is split into the six small tasks on the left of the dashed line, each containing exactly one region, one time, and one index to calculate; likewise, big data task 2 is split into the four small tasks on the right of the dashed line. With this approach, different big data tasks can all be decomposed into small tasks, and because the resulting small tasks are fine-grained, it is easy to compare them and determine whether identical small tasks exist.
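A minimal sketch of such a fine-grained split, assuming the hypothetical task structure sketched above (split_into_small_tasks is an illustrative helper, not something defined by the patent):

```python
from itertools import product

def split_into_small_tasks(big_data_task):
    """Split a big data task into small tasks, each carrying exactly one task
    target: one region, one time, and one index (cf. FIG. 3)."""
    return [
        {"region": region, "time": big_data_task["time"], "index": index}
        for region, index in product(
            big_data_task["regions"], big_data_task["indexes"]
        )
    ]

small_tasks = split_into_small_tasks(big_data_task_1) + \
              split_into_small_tasks(big_data_task_2)
# 6 + 4 = 10 small tasks, matching the split shown in FIG. 3
```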
Step 103: the small tasks are deduplicated.
Specifically, when the split small tasks are deduplicated, their constraint information and indexes may be compared one by one. For example, the constraint information (region, time) and index of the first small task of the first big data task are compared item by item with the constraint information (region, time) and index of the first small task of the second big data task; if all the constraint information and the indexes of the two small tasks are completely identical, the two small tasks are duplicates, and if any piece of constraint information or the index differs, they are not duplicates. Alternatively, specific identification information can be generated from the constraint information and index of each small task and compared instead: if the identification information of two small tasks is the same, the two small tasks are duplicates. The generated identification information may be of the MD5 type.
The deduplication process is illustrated by the four small tasks inside the dashed box of FIG. 3, which form two pairs of duplicate small tasks with identical content. The two small tasks in each duplicate pair would produce the same result when calculated, so one of them is deleted, which reduces the amount of computation and the consumption of the cluster's central processing unit and bandwidth resources.
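Under the same illustrative assumptions, the deduplication step with MD5-type identification information could be sketched as follows (the helper names are hypothetical):

```python
import hashlib

def task_fingerprint(small_task):
    """MD5 code computed over the constraint information and index of a small task."""
    key = "|".join((small_task["region"], small_task["time"], small_task["index"]))
    return hashlib.md5(key.encode("utf-8")).hexdigest()

def deduplicate(small_tasks):
    """Keep the first small task seen for each fingerprint and drop later duplicates."""
    seen, unique = set(), []
    for task in small_tasks:
        fingerprint = task_fingerprint(task)
        if fingerprint not in seen:
            seen.add(fingerprint)
            unique.append(task)
    return unique

unique_tasks = deduplicate(small_tasks)  # 10 small tasks -> 8 after deduplication
```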
Step 104: the deduplicated small tasks are regrouped to obtain a plurality of reorganized tasks.
Specifically, taking the deduplicated small tasks of FIG. 3 as an example: after the duplicate small tasks are deleted, 8 mutually distinct small tasks remain out of the 10 small tasks produced by the split, and these 8 small tasks are freely combined into one or more reorganized tasks.
Step 105: the reorganized tasks are executed.
Specifically, the reorganized tasks may be submitted to a common big data task scheduling framework (such as Yarn) for scheduled execution.
Compared with the related art, this embodiment acquires at least two big data tasks to be executed, splits each big data task into a plurality of small tasks, deduplicates the small tasks, and regroups the deduplicated small tasks into a plurality of reorganized tasks. Because every small task produced by the split contains only one task target, the task targets of the small tasks that remain after deduplication are all different, so the same task target is never calculated twice. This reduces the amount of computation and the CPU consumption and improves the execution efficiency of big data tasks.
The second embodiment of the invention relates to a big data task execution method and specifically describes how the reorganized tasks are composed, which reduces the number of times the relevant data lists are loaded during task execution and thus reduces the consumption of input/output (IO) resources.
The big data task execution method of this embodiment is shown in FIG. 4 and includes the following steps.
Step 401: at least two big data tasks are acquired.
Step 402: each big data task is split into a plurality of small tasks.
Step 403: the small tasks are deduplicated.
Steps 401 to 403 correspond one-to-one to steps 101 to 103 of the first embodiment and are not described again here.
Step 404: the task targets corresponding to the deduplicated small tasks are determined, where each task target comprises constraint information and an index.
Step 405: the deduplicated small tasks are regrouped according to the constraint information and the indexes.
Specifically, among the remaining small tasks, those with the same constraint information but different indexes are combined into one reorganized task, and this continues until all remaining small tasks have been combined into reorganized tasks. In this way, every small task ends up in a reorganized task and none is omitted. Moreover, combining small tasks with the same constraint information into one reorganized task reduces the number of times the relevant data lists are loaded during execution, thereby reducing the consumption of input/output (IO) resources. For example, as shown in FIG. 5, the constraint information of the eight small tasks remaining after deduplication is analyzed: the three small tasks on the left share the constraint information scenic spot S1 and time 2019-7-25 and form reorganized task 1; the three small tasks in the middle share the constraint information transportation hub T1 and time 2019-7-25 and form reorganized task 2; similarly, the two small tasks on the right share the same constraint information and form reorganized task 3. This regrouping ensures that all small tasks are assembled into reorganized tasks and none is left out.
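A sketch of this regrouping under the illustrative assumptions used above, grouping the deduplicated small tasks by identical constraint information (regroup is a hypothetical helper mirroring FIG. 5):

```python
from collections import defaultdict

def regroup(small_tasks):
    """Combine small tasks that share the same constraint information
    (region, time) but carry different indexes into one reorganized task."""
    groups = defaultdict(list)
    for task in small_tasks:
        groups[(task["region"], task["time"])].append(task["index"])
    return [
        {"region": region, "time": time, "indexes": indexes}
        for (region, time), indexes in groups.items()
    ]

reorganized_tasks = regroup(unique_tasks)
# -> 3 reorganized tasks: (scenic spot S1, 2019-7-25), (transportation hub T1,
#    2019-7-25) and (transportation hub T2, 2019-7-25), as in FIG. 5
```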
In addition, the reorganized tasks can be composed in other ways. For example, after the duplicate small tasks of the at least two big data tasks are determined, one small task from each group of duplicates is combined into one reorganized task, and the small tasks that remain in each big data task after the duplicates are removed form further reorganized tasks. Taking the small tasks obtained from the split in FIG. 3 as an example, one copy of each of the two duplicated small tasks (the two groups inside the dashed box), namely the small task (transportation hub T1, 2019-7-25, passenger flow volume) and the small task (transportation hub T1, 2019-7-25, passenger flow gender distribution), forms one reorganized task. The remaining small tasks of big data task 1, namely (scenic spot S1, 2019-7-25, passenger flow volume), (scenic spot S1, 2019-7-25, passenger flow gender distribution), (scenic spot S1, 2019-7-25, passenger flow home-location distribution), and (transportation hub T1, 2019-7-25, passenger flow home-location distribution), form another reorganized task. The remaining small tasks of big data task 2, namely (transportation hub T2, 2019-7-25, passenger flow volume) and (transportation hub T2, 2019-7-25, passenger flow gender distribution), form a further reorganized task. This approach reduces the operations needed for regrouping: repeated computation is avoided simply by extracting the duplicate small tasks shared by the big data tasks, which improves regrouping efficiency.
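A sketch of this alternative regrouping, again under the illustrative assumptions above (small tasks are keyed by region, time, and index; small_tasks_by_big_task is a hypothetical mapping from each big data task to its split small tasks):

```python
from collections import defaultdict

def regroup_by_overlap(small_tasks_by_big_task):
    """One copy of every small task shared by at least two big data tasks forms
    one reorganized task; the non-shared remainder of each big data task forms
    a further reorganized task (three reorganized tasks in the FIG. 3 example)."""
    counts = defaultdict(int)
    for tasks in small_tasks_by_big_task.values():
        for key in {(t["region"], t["time"], t["index"]) for t in tasks}:
            counts[key] += 1
    shared_keys = {key for key, n in counts.items() if n >= 2}

    shared, seen, remainders = [], set(), []
    for tasks in small_tasks_by_big_task.values():
        remainder = []
        for task in tasks:
            key = (task["region"], task["time"], task["index"])
            if key in shared_keys:
                if key not in seen:  # keep exactly one copy of each duplicate
                    seen.add(key)
                    shared.append(task)
            else:
                remainder.append(task)
        remainders.append(remainder)
    return [shared] + remainders

alternative_reorganized_tasks = regroup_by_overlap({
    "big data task 1": split_into_small_tasks(big_data_task_1),
    "big data task 2": split_into_small_tasks(big_data_task_2),
})
```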
Step 406: the reorganized tasks are executed.
This embodiment describes a way of regrouping small tasks that ensures all small tasks are assembled into reorganized tasks with none omitted, and that reduces the number of times the relevant data lists are loaded during task execution, thereby reducing the consumption of input/output (IO) resources.
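How a reorganized task saves IO can be sketched as follows: because all of its small tasks share the same constraint information, the matching data list is loaded once and every index is computed from that single load. load_records and compute_index are hypothetical callbacks, not APIs defined by the patent or by Yarn.

```python
def execute_reorganized_task(reorganized_task, load_records, compute_index):
    """Load the data list matching the shared constraint information once, then
    evaluate every index of the reorganized task against that single data set."""
    records = load_records(reorganized_task["region"], reorganized_task["time"])
    return {
        index: compute_index(index, records)
        for index in reorganized_task["indexes"]
    }
```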
The steps of the above methods are divided only for clarity of description; in implementation they may be merged into one step, or a step may be split into several steps, and all such divisions fall within the protection scope of this patent as long as the same logical relationship is preserved. Adding insignificant modifications to the algorithm or process, or introducing insignificant design changes, without changing its core design also falls within the protection scope of this patent.
A third embodiment of the present invention relates to a big data task execution device, as shown in FIG. 6, including an acquisition module 61, a splitting module 62, a deduplication module 63, a regrouping module 64, and an execution module 65. The acquisition module 61 is configured to acquire at least two big data tasks; the splitting module 62 is configured to split each big data task into a plurality of small tasks, where a small task is a task containing only one task target; the deduplication module 63 is configured to deduplicate the small tasks; the regrouping module 64 is configured to regroup the deduplicated small tasks into a plurality of reorganized tasks; and the execution module 65 is configured to execute the reorganized tasks.
In addition, the deduplication module 63 is configured to compare the task targets of any two small tasks; if the task targets of the two small tasks are completely identical, delete one of them; and repeat the comparison until every small task has been compared with every other small task.
In addition, the regrouping module 64 is configured to determine the task targets corresponding to the deduplicated small tasks, where each task target comprises at least constraint information and an index, and to regroup the deduplicated small tasks according to the constraint information and the indexes.
In addition, the regrouping module 64 is configured to combine small tasks that have the same constraint information but different indexes into one reorganized task, until all the deduplicated small tasks have been regrouped.
In addition, the execution module 65 is configured to execute the reorganized tasks through a big data task scheduling framework.
A fourth embodiment of the present invention relates to a server, as shown in FIG. 7, including at least one processor 701 and a memory 702 communicatively connected to the at least one processor 701. The memory 702 stores instructions executable by the at least one processor 701, and the instructions are executed by the at least one processor 701 so that the at least one processor 701 can perform the big data task execution method of any of the above method embodiments.
The memory 702 and the processor 701 are connected by a bus, which may comprise any number of interconnected buses and bridges linking the circuits of one or more processors 701 and of the memory 702. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore not described further here. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, and provides a means for communicating with various other apparatuses over a transmission medium. Data processed by the processor 701 is transmitted over a wireless medium through an antenna, which also receives data and forwards it to the processor 701.
The processor 701 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfacing, voltage regulation, power management, and other control functions. The memory 702 may be used to store data used by the processor 701 when performing operations.
A fifth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the above method embodiments.
That is, as those skilled in the art will understand, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing the relevant hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific embodiments for practicing the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (10)

1. A big data task execution method is characterized by comprising the following steps:
acquiring at least two big data tasks;
splitting each big data task to obtain a plurality of small tasks, wherein each small task is a task containing only one task target;
performing deduplication processing on the plurality of small tasks;
regrouping the deduplicated small tasks to obtain a plurality of reorganized tasks; and
executing the reorganized tasks.
2. The big data task execution method according to claim 1, wherein performing deduplication processing on the plurality of small tasks comprises:
comparing the task targets in any two of the small tasks;
if the task targets in the two small tasks are completely identical, deleting one of the two small tasks; and
repeating the step of comparing the task targets in any two of the small tasks until every small task has been compared with all the other small tasks.
3. The big data task execution method according to claim 2, wherein comparing the task targets in any two of the small tasks comprises:
comparing the MD5 codes corresponding to the task targets, wherein the MD5 codes corresponding to different task targets are different.
4. The big data task execution method according to claim 1, wherein regrouping the deduplicated small tasks comprises:
determining the constraint information and the index corresponding to each of the deduplicated small tasks; and
regrouping the deduplicated small tasks according to the constraint information and the indexes.
5. The big data task execution method according to claim 4, wherein the constraint information comprises at least: region information, time information, and order information.
6. The big data task execution method according to claim 4, wherein regrouping the deduplicated small tasks according to the constraint information and the indexes comprises:
combining small tasks having the same constraint information and different indexes into one reorganized task, until all the deduplicated small tasks have been regrouped.
7. The big data task execution method according to claim 1, wherein the reorganized tasks are executed through a big data task scheduling framework.
8. A big data task execution device, comprising: an acquisition module, a splitting module, a deduplication module, a regrouping module, and an execution module; wherein
the acquisition module is configured to acquire at least two big data tasks;
the splitting module is configured to split each big data task to obtain a plurality of small tasks, wherein each small task is a task containing only one task target;
the deduplication module is configured to perform deduplication processing on the plurality of small tasks;
the regrouping module is configured to regroup the deduplicated small tasks to obtain a plurality of reorganized tasks; and
the execution module is configured to execute the reorganized tasks.
9. A server, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the big data task execution method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the big data task execution method according to any one of claims 1 to 7.
CN202011628541.0A 2020-12-31 2020-12-31 Big data task execution method and device, server and storage medium Pending CN112711471A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011628541.0A CN112711471A (en) 2020-12-31 2020-12-31 Big data task execution method and device, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011628541.0A CN112711471A (en) 2020-12-31 2020-12-31 Big data task execution method and device, server and storage medium

Publications (1)

Publication Number Publication Date
CN112711471A 2021-04-27

Family

ID=75547715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011628541.0A Pending CN112711471A (en) 2020-12-31 2020-12-31 Big data task execution method and device, server and storage medium

Country Status (1)

Country Link
CN (1) CN112711471A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103699441A (en) * 2013-12-05 2014-04-02 深圳先进技术研究院 MapReduce report task execution method based on task granularity
WO2020093208A1 (en) * 2018-11-05 2020-05-14 深圳市欢太科技有限公司 Application processing method and apparatus, computer device, and computer readable storage medium
CN111427681A (en) * 2020-02-19 2020-07-17 上海交通大学 Real-time task matching scheduling system and method based on resource monitoring in edge computing
CN111581932A (en) * 2020-03-16 2020-08-25 北京掌行通信息技术有限公司 Data-driven big data analysis method, system, device, storage medium and terminal
CN111950847A (en) * 2020-07-08 2020-11-17 泰康保险集团股份有限公司 Task allocation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination