CN110597615A - Method for processing coding instruction and node equipment - Google Patents


Info

Publication number
CN110597615A
CN110597615A
Authority
CN
China
Prior art keywords
spark
plan
physical plan
physical
application system
Prior art date
Legal status
Granted
Application number
CN201810603360.9A
Other languages
Chinese (zh)
Other versions
CN110597615B (en)
Inventor
邓长春
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201810603360.9A
Publication of CN110597615A
Application granted
Publication of CN110597615B
Current legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003 Arrangements for executing specific machine instructions
    • G06F9/30007 Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F9/30029 Logical and Boolean instructions, e.g. XOR, NOT
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F9/5083 Techniques for rebalancing the load in a distributed system

Abstract

The application provides a method for processing a coding instruction and a node device. The method is applied to a node device in a Spark application system and includes: after an executable physical plan in the Spark application system is obtained, judging whether to submit the physical plan to a Spark cluster arranged in the Spark application system; when it is determined that the physical plan is to be submitted to the Spark cluster, submitting the physical plan to the Spark cluster when the node device serves as the master device of the Spark application system, and sending the physical plan to the master device for submission when the node device serves as a slave device.

Description

Method for processing coding instruction and node equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method for processing an encoding instruction and a node device.
Background
In the prior art, when a Spark application system executes a batch job or a structured-query job, it processes the job serially. When the system has multiple jobs to process, its throughput is therefore limited by the performance of a single device: with the performance of that device fixed and a large number of jobs to process, the Spark application system spends a long time on the jobs. For example, when the Spark application system needs to process three jobs, it processes them one by one, starting the next job only after finishing the previous one, so processing all three takes a long time.
Disclosure of Invention
In view of this, the present application provides a method for processing a coding instruction and a node device, so as to improve the efficiency with which a Spark application system processes jobs.
Specifically, the method is realized through the following technical scheme:
in a first aspect, the present application provides a method for processing an encoded instruction, where the method is applied to a node device in a Spark application system, and includes:
after an executable physical plan in the Spark application system is obtained, whether the physical plan is submitted to a Spark cluster arranged in the Spark application system is judged;
when it is determined to submit the physical plan to the Spark cluster,
when the node device serves as a main device of the Spark application system, submitting the physical plan to the Spark cluster;
when the node device serves as a slave device of the Spark application system, the physical plan is sent to the master device, and the master device submits the physical plan to the Spark cluster.
Optionally, the determining whether to submit the physical plan to the Spark cluster includes:
determining a logical plan parameter value of a logical plan corresponding to the physical plan;
comparing the logic planning parameter value M1 with a preset logic planning standard value M2;
if M1 is greater than or equal to M2, determining to commit the physical plan to the Spark cluster;
if M1 is less than M2, it is determined not to commit the physical plan to the Spark cluster.
Optionally, the logic plan parameter value is a Spark operator complexity of the logic plan.
Optionally, when it is determined that the physical plan is not submitted to the Spark cluster, the method further includes:
and executing the physical plan in a multi-thread concurrent execution mode.
Optionally, the submitting the physical plan to the Spark cluster includes:
storing the physical plan;
and sending the physical plan to the Spark cluster according to a first-in first-out queue rule.
In a second aspect, the present application provides a node device, where the node device is applied in a Spark application system, and the node device includes:
the judging unit is used for judging whether to submit the physical plan to a Spark cluster arranged in the Spark application system after acquiring the executable physical plan in the Spark application system;
a sending unit, configured to, when it is determined that the physical plan is to be submitted to the Spark cluster: submit the physical plan to the Spark cluster when the node device serves as the master device of the Spark application system; or send the physical plan to the master device when the node device serves as a slave device of the Spark application system, the master device then submitting the physical plan to the Spark cluster.
Optionally, when the determining unit is configured to determine whether to submit the physical plan to a Spark cluster in the Spark application system, the determining unit is specifically configured to:
determining a logical plan parameter value of a logical plan corresponding to the physical plan;
comparing the logic planning parameter value M1 with a preset logic planning standard value M2;
if M1 is greater than or equal to M2, determining to commit the physical plan to the Spark cluster;
if M1 is less than M2, it is determined not to commit the physical plan to the Spark cluster.
Optionally, the logic plan parameter value is a Spark operator complexity of the logic plan.
Optionally, the node device further includes:
and the execution unit is used for executing the physical plan in a multi-thread concurrent execution mode when judging that the physical plan is not submitted to the Spark cluster.
Optionally, the sending unit is configured to, when submitting the physical plan to the Spark cluster, specifically:
storing the physical plan;
and sending the physical plan to the Spark cluster according to a first-in first-out queue rule.
Any one of the above technical solutions has the following beneficial effects:
In the embodiments of the present application, when it is determined that a physical plan is to be submitted to the Spark cluster, the node device submits the plan directly when it serves as the master device of the Spark application system; when it serves as a slave device, it sends the plan to the master device, which submits it to the Spark cluster. In other words, the node devices in the Spark application system comprise a master device and slave devices, and both can obtain physical plans; moreover, they can obtain their respective physical plans in parallel. With the performance of each node device fixed and a large number of jobs to process, the application can therefore, compared with the prior art, process multiple jobs in parallel to obtain their physical plans, which reduces the time the Spark application system spends processing jobs and improves its efficiency. Further, because every physical plan, whether obtained by the master device or by a slave device, is submitted to the Spark cluster by the master device, the master device manages the physical plans centrally. The plans are thus pushed to the Spark cluster in an orderly way, which helps prevent the cluster from receiving too many physical plans within a short period of time and reduces the load on the Spark cluster.
Drawings
FIG. 1 is a flow chart illustrating a method of processing an encoded instruction according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart illustrating another method of processing encoded instructions according to an exemplary embodiment of the present application;
FIG. 3 is a flow chart illustrating another method of processing encoded instructions according to an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a node device according to an exemplary embodiment of the present application;
fig. 5 is a schematic structural diagram of another node device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first and second may be used herein to describe node devices, these node devices should not be limited to these terms. These terms are only used to distinguish node devices of the same type from one another. For example, a master device may also be referred to as a slave device, and similarly, a slave device may also be referred to as a master device, without departing from the scope of the present application.
The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context.
Fig. 1 is a flowchart of a method for processing coding instructions. The method is applied to a node device in a Spark application system that includes at least two node devices, among which are a master device and at least one slave device. As shown in fig. 1, the method includes the following steps:
101. After acquiring an executable physical plan in the Spark application system, judge whether to submit the physical plan to the Spark cluster arranged in the Spark application system. When it is determined that the physical plan is to be submitted to the Spark cluster: if the node device is configured as the master device of the Spark application system, execute step 102; if the node device is configured as a slave device of the Spark application system, execute step 103.
Specifically, a node device can receive a coding instruction sent by a user device (e.g., a computer). The node devices that receive coding instructions may include the master device and/or slave devices; which node device or devices receive the instructions can be set according to actual needs. For example, a single coding instruction may be received by the master device or by one slave device; two coding instructions may be received by the master device and one slave device or, when there are at least two slave devices, by two slave devices. The node device that receives a coding instruction is not specifically limited herein.
It should be noted that the master device and the slave device may be assigned by a user, or may be determined after competition by the node device, and the manner of how to form the master device and the slave device is not specifically limited herein.
After a node device receives a coding instruction, it can obtain a corresponding logical plan from the instruction. The logical plan is an equivalent representation of the coding instruction inside the computer and is a tree structure in the program. The node device then generates a corresponding physical plan from the logical plan. The physical plan is an instruction executable by the computer, is likewise a tree structure in the program, and can be processed directly by the Spark application system. Further, when both the master device and a slave device have received coding instructions, they can process them in parallel: for example, when the master device receives one coding instruction and a slave device receives another, each device independently processes its own instruction to obtain the corresponding logical plan and then generates the corresponding physical plan. In the Spark application system, the number of coding instructions processed at a time is less than or equal to the number of node devices in the system.
It should be noted again that a coding instruction may consist of DataFrame code, Dataset code, SQL (Structured Query Language) code, or the like.
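The pipeline described above, from coding instruction to logical plan to executable physical plan, can be sketched as follows. This is a hypothetical illustration: the class and function names are invented for the sketch and do not reproduce Spark's actual Catalyst planner, and the "parser" here is a toy that builds one tree node per keyword.

```python
# Hypothetical sketch: coding instruction -> logical plan (tree) ->
# physical plan (executable tree). Names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicalPlan:
    operator: str                       # e.g. "Filter", "Scan"
    children: List["LogicalPlan"] = field(default_factory=list)

@dataclass
class PhysicalPlan:
    operator: str
    children: List["PhysicalPlan"] = field(default_factory=list)

def to_logical_plan(coding_instruction: str) -> LogicalPlan:
    """Toy 'parser': one tree node per whitespace-separated keyword,
    outermost keyword first."""
    tokens = coding_instruction.split()
    plan = LogicalPlan(tokens[-1])
    for op in reversed(tokens[:-1]):
        plan = LogicalPlan(op, [plan])
    return plan

def to_physical_plan(logical: LogicalPlan) -> PhysicalPlan:
    """Map each logical node to a corresponding physical node."""
    return PhysicalPlan(logical.operator,
                        [to_physical_plan(c) for c in logical.children])

logical = to_logical_plan("Filter Scan")
physical = to_physical_plan(logical)
```

Both plans are tree structures, mirroring the description above: the logical plan is an equivalent representation of the instruction, and the physical plan is the executable form derived from it.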
After step 101, it can be determined which physical plans should be submitted to the Spark cluster and which should not, which helps reduce the load on the Spark cluster. For example, the physical plans corresponding to a simple full-table scan, a simple pull of data from an external data source, or certain conditional queries need not be submitted to the Spark cluster; such plans can instead be pushed to a dedicated index system, which reduces the load on the Spark cluster and speeds up the processing of those plans.
102. The physical plan is submitted to the Spark cluster.
103. And sending the physical plan to the master device, and submitting the physical plan to the Spark cluster by the master device.
In a possible embodiment, the master device is provided with a scheduler, which may be arranged in a scheduling module. When a slave device sends its physical plan to the master device, it sends the plan to the master device's scheduling module; the master device likewise sends its own physical plans to that scheduling module. The scheduling module in the master device can thus manage all physical plans uniformly, which facilitates subsequent processing.
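A minimal sketch of such a scheduling module follows, assuming (as the description suggests) that it stores plans from both the master and the slaves and releases them to the Spark cluster one at a time in first-in first-out order. The class and method names are assumptions for illustration, not part of the patent or of Spark.

```python
# Sketch of the master's scheduling module: a FIFO store of physical
# plans, filled by the master and its slaves, drained one plan at a
# time toward the Spark cluster. Illustrative names only.
from collections import deque

class SchedulingModule:
    def __init__(self):
        self._queue = deque()           # pending physical plans, FIFO

    def accept(self, physical_plan):
        """Called by the master itself or by a slave device."""
        self._queue.append(physical_plan)

    def submit_next(self):
        """Release exactly one plan (FIFO order) to the Spark cluster,
        so the cluster never receives many plans at once."""
        if self._queue:
            return self._queue.popleft()
        return None

scheduler = SchedulingModule()
scheduler.accept("plan-from-master")
scheduler.accept("plan-from-slave-1")
first = scheduler.submit_next()
```

With this design, ordering is enforced in one place (the master), which is exactly what lets the plans reach the cluster in an orderly way.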
In the embodiment of the present application, as shown in fig. 1, when it is determined that a physical plan is to be submitted to the Spark cluster, the node device submits the plan directly when it serves as the master device of the Spark application system, and sends the plan to the master device when it serves as a slave device, the master device then submitting it to the Spark cluster. The node devices in the Spark application system thus comprise a master device and slave devices, and both can obtain physical plans; moreover, they can obtain their respective physical plans in parallel. With the performance of each node device fixed and a large number of jobs to process, the application can, compared with the prior art, process multiple jobs in parallel to obtain their physical plans, which reduces the time the Spark application system spends processing jobs and improves its efficiency.
In one possible embodiment, when it is determined that the physical plan is not to be submitted to the Spark cluster, the method further comprises: the physical plan is executed in a multi-threaded concurrent execution manner.
Specifically, when the physical plan is not submitted to the Spark cluster, it can be processed locally: the node device executes the physical plan in a multi-thread concurrent manner.
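The local fallback can be sketched with a thread pool. Here a physical plan is modelled as a list of independent sub-tasks, which is an assumption made for illustration; the patent does not specify how a plan decomposes into threads.

```python
# Sketch of local execution: when a plan is kept on the node device,
# run its sub-tasks concurrently on a thread pool. Illustrative only.
from concurrent.futures import ThreadPoolExecutor

def execute_locally(sub_tasks, max_workers=4):
    """Run the sub-tasks of one physical plan concurrently and
    collect their results in submission order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda task: task(), sub_tasks))

# Example: a trivial plan with three sub-tasks.
results = execute_locally([lambda: 1, lambda: 2, lambda: 3])
```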
In one possible embodiment, when the master device submits the physical plan to the Spark cluster, the physical plan may be stored first; and then sends the physical plan to the Spark cluster according to the first-in first-out queue rule.
Specifically, the physical plans of the slave devices are sent to the master device, so the physical plans stored in the master device may include its own plans as well as those of the slave devices. To push the physical plans to the Spark cluster in order, the master device can push them according to a first-in first-out queue rule. This prevents the Spark cluster from having to process many physical plans at once, reduces its load, and increases its processing speed.
It should be noted that, when the master device sends the physical plan to the Spark cluster, the master device may also send the physical plan according to other scheduling rules, for example, according to the complexity of the Spark operator, and the specific scheduling rule is not specifically limited herein.
In a possible implementation, fig. 2 is a flowchart of another method for processing coding instructions according to an exemplary embodiment of the present application. As shown in fig. 2, determining whether to submit a physical plan to the Spark cluster includes the following steps:
201. a logical plan parameter value for a logical plan corresponding to the physical plan is determined.
Specifically, the logical plan parameter value represents the logical difficulty of the logical plan: the higher the difficulty, the larger the parameter value; the lower the difficulty, the smaller the parameter value.
How to obtain the logical plan from the coding instruction has been described above and is not repeated herein.
In a possible implementation manner, the logic plan parameter value corresponding to the logic plan includes a Spark operator complexity corresponding to the logic plan, where the higher the logic difficulty of the logic plan is, the higher the corresponding Spark operator complexity is, and conversely, the lower the logic difficulty of the logic plan is, the lower the corresponding Spark operator complexity is.
202. Comparing the logic plan parameter value M1 with a preset logic plan standard value M2, if M1 is greater than or equal to M2, executing step 203, and if M1 is less than M2, executing step 204.
It should be noted that the logic planning standard value can be set according to actual needs, and is not specifically limited herein.
203. A determination is made to commit the physical plan to the Spark cluster.
204. It is determined not to commit the physical plan to the Spark cluster.
Specifically, as shown in fig. 2, in the Spark application system, only the physical plan corresponding to the logic plan having the logic plan parameter value higher than or equal to the preset logic plan standard value is submitted to the Spark cluster, and the physical plan corresponding to the logic plan having the logic plan parameter value lower than the preset logic plan standard value is not submitted to the Spark cluster.
It should be noted that the node device may also decide whether to submit the physical plan to the Spark cluster in other ways. For example, it may evaluate the logical plan with a specific algorithm and obtain a result: when the result belongs to a first result cluster, it determines to submit the corresponding physical plan to the Spark cluster; when the result does not belong to the first result cluster, or belongs to a second result cluster, it determines not to submit it. The specific algorithm is not limited herein.
In a possible embodiment, after the logical plan is obtained from the coding instruction, the logical plan parameter value is read directly from the logical plan and step 202 is then executed; the specific execution order is not limited herein.
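The decision in steps 201 to 204 reduces to a threshold test: compute the logical plan parameter value M1 and compare it with the preset standard value M2. The complexity measure below (a simple operator count over the plan tree) is a hypothetical stand-in, since the patent names "Spark operator complexity" but leaves the exact metric open.

```python
# Sketch of steps 201-204: derive M1 from the logical plan, submit to
# the cluster iff M1 >= M2. The complexity metric is an assumption.
def operator_complexity(plan) -> int:
    """Count operators in a plan tree given as (name, [children])."""
    name, children = plan
    return 1 + sum(operator_complexity(c) for c in children)

def should_submit_to_cluster(plan, m2: int) -> bool:
    """Step 203 when M1 >= M2, step 204 otherwise."""
    m1 = operator_complexity(plan)
    return m1 >= m2

complex_plan = ("Join", [("Scan", []), ("Filter", [("Scan", [])])])
simple_plan = ("Scan", [])
```

A plan with four operators clears a standard value of 2 and goes to the cluster; a bare scan does not and stays local.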
To further illustrate the technical idea and implementation of the present application, an embodiment is now described in detail with reference to a specific application scenario in which the Spark application system includes one master device and two slave devices. Fig. 3 is a flowchart of another method for processing coding instructions according to an exemplary embodiment of the present application; the method includes the following steps:
301. an encoding instruction is received.
Specifically, as shown in fig. 3, the master device and the two slave devices each receive an encoding instruction sent by the user device.
302. And acquiring a logic plan corresponding to the coding instruction.
Specifically, as shown in fig. 3, the master device and the two slave devices each obtain a logic plan corresponding to their respective encoding instructions.
303. And acquiring a physical plan corresponding to the logic plan and acquiring a logic plan parameter value corresponding to the logic plan.
Specifically, as shown in fig. 3, the master device and the two slave devices each obtain a physical plan corresponding to a respective logic plan, and obtain a logic plan parameter value corresponding to the respective logic plan.
304. Compare the logical plan parameter value with the preset logical plan standard value. If the parameter value is higher than or equal to the standard value and the node device is configured as the master device of the Spark application system, execute step 305; if the parameter value is higher than or equal to the standard value and the node device is configured as a slave device, execute step 306; if the parameter value is lower than the standard value, execute step 307, whether the node device is configured as the master device or as a slave device.
305. The physical plan is submitted to the Spark cluster.
306. And sending the physical plan to the master device, and submitting the physical plan to the Spark cluster by the master device.
In one possible embodiment, the master device stores its own physical plan and the physical plan transmitted by the slave device into a scheduling module in the master device, and then transmits the stored physical plan to the Spark cluster according to a first-in first-out queue rule through the scheduling module.
307. The physical plan is executed in a multi-threaded concurrent execution manner.
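The routing in steps 304 to 307 can be condensed into one dispatch function. This is a sketch under the same assumptions as above (M1 is a precomputed number, the master's queue is a FIFO); the role names and the plan's string form are invented for the illustration.

```python
# Sketch of steps 304-307: each node compares M1 with M2, then either
# forwards the plan toward the master's FIFO queue (steps 305/306) or
# keeps it for local execution (step 307). Illustrative names only.
from collections import deque

MASTER, SLAVE = "master", "slave"

def handle_instruction(role, m1, m2, master_queue, local_results):
    """Route one physical plan according to steps 304-307."""
    plan = f"physical-plan(M1={m1})"
    if m1 >= m2:
        # Steps 305/306: the plan reaches the master's queue, either
        # directly (master) or by forwarding (slave).
        master_queue.append((role, plan))
    else:
        # Step 307: execute locally (multi-threaded in practice).
        local_results.append((role, plan))

queue, local = deque(), []
handle_instruction(MASTER, m1=5, m2=3, master_queue=queue, local_results=local)
handle_instruction(SLAVE,  m1=4, m2=3, master_queue=queue, local_results=local)
handle_instruction(SLAVE,  m1=1, m2=3, master_queue=queue, local_results=local)
```

Two plans clear the threshold and wait in the master's FIFO queue for the Spark cluster; the third stays on its slave device for local execution.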
In the present application, as shown in fig. 3, the master device and the slave devices can obtain their respective physical plans in parallel. With the performance of each node device fixed and a large number of jobs to process, the application can, compared with the prior art, process multiple jobs in parallel to obtain their physical plans, which reduces the time the Spark application system spends processing jobs and improves its efficiency. Moreover, because every physical plan, whether obtained by the master device or by a slave device, is managed by the master device and sent by it to the Spark cluster, the plans are pushed to the Spark cluster in an orderly way, which helps prevent the cluster from receiving too many physical plans within a period of time and reduces the load on the Spark cluster.
Fig. 4 is a schematic structural diagram of a node device according to an exemplary embodiment of the present application. As shown in fig. 4, the node device is applied in a Spark application system and includes:
the determining unit 41 is configured to determine whether to submit the physical plan to a Spark cluster in the Spark application system after acquiring the executable physical plan in the Spark application system.
A sending unit 42, configured to submit the physical plan to the Spark cluster when it is determined that the physical plan is to be submitted to the Spark cluster and the node device serves as the master device of the Spark application system; or to send the physical plan to the master device when it is determined that the physical plan is to be submitted to the Spark cluster and the node device serves as a slave device of the Spark application system, the master device then submitting the physical plan to the Spark cluster.
In a possible embodiment, when the determining unit 41 is configured to determine whether to submit the physical plan to a Spark cluster provided in the Spark application system, specifically, to: determining a logical plan parameter value of a logical plan corresponding to the physical plan; comparing the logic planning parameter value M1 with a preset logic planning standard value M2; if M1 is greater than or equal to M2, determining to commit the physical plan to the Spark cluster; if M1 is less than M2, then a determination is made not to commit the physical plan to the Spark cluster.
In one possible embodiment, the logic plan parameter value is the Spark operator complexity of the logic plan.
In a possible implementation, fig. 5 is a schematic structural diagram of another node device shown in an exemplary embodiment of the present application, and as shown in fig. 5, the node device further includes:
and the execution unit 43 is configured to execute the physical plan in a multi-thread concurrent execution manner when it is determined that the physical plan is not submitted to the Spark cluster.
In a possible embodiment, when submitting the physical plan to the Spark cluster, the sending unit 42 is specifically configured to: store the physical plan; and send the stored physical plan to the Spark cluster according to a first-in first-out queue rule.
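The store-then-release behavior can be sketched with a plain FIFO queue. The class and method names are assumptions for illustration; the patent only requires first-in first-out ordering.

```python
from collections import deque

class PlanQueue:
    """Store physical plans and release them to the Spark cluster
    first-in first-out, so the cluster does not receive a burst of
    plans in a short period of time."""

    def __init__(self):
        self._queue = deque()

    def store(self, plan):
        """Hold a physical plan until the cluster is ready for it."""
        self._queue.append(plan)

    def send_next(self, submit):
        """Submit the oldest stored plan via the given callable.
        Returns True if a plan was sent, False if the queue was empty."""
        if self._queue:
            submit(self._queue.popleft())
            return True
        return False
```

The master device can drain this queue at whatever pace the cluster tolerates, which is what smooths the cluster's load.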
In the embodiments of the present application, when it is determined that a physical plan is to be submitted to the Spark cluster, the node device submits the plan directly if it serves as the master device of the Spark application system; if it serves as a slave device, it sends the plan to the master device, and the master device submits the plan to the Spark cluster. That is, the node devices of the Spark application system include a master device and slave devices, and both can obtain physical plans: the master device submits a plan it has determined should go to the cluster, and a slave device forwards such a plan to the master. Furthermore, the master device and the slave devices can obtain their respective physical plans in parallel. When the performance of each node device is fixed and the number of jobs processed by the Spark application system is large, the present application can therefore process multiple jobs in parallel to obtain their physical plans, which, compared with the prior art, helps reduce the time the Spark application system spends processing jobs and thus improves its job-processing efficiency. In addition, because every physical plan, whether obtained by the master device or by a slave device, is submitted to the Spark cluster by the master device, the master device manages the physical plans centrally. The plans are thereby pushed to the Spark cluster in an orderly manner, which helps prevent the cluster from receiving too many physical plans within a short period of time and thus reduces the load on the Spark cluster.
For the principles and details of the structures shown in fig. 4 and fig. 5, reference may be made to the related content of the method embodiments corresponding to the node device, which is not repeated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. In other instances, features described in connection with one embodiment may be implemented as discrete components or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Further, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (10)

1. A method for processing coding instructions, applied to a node device in a Spark application system, the method comprising:
determining, after acquiring an executable physical plan in the Spark application system, whether to submit the physical plan to a Spark cluster provided in the Spark application system;
when it is determined to submit the physical plan to the Spark cluster,
when the node device serves as a main device of the Spark application system, submitting the physical plan to the Spark cluster;
when the node device serves as a slave device of the Spark application system, the physical plan is sent to the master device, and the master device submits the physical plan to the Spark cluster.
2. The method according to claim 1, wherein the determining whether to submit the physical plan to the Spark cluster comprises:
determining a logical plan parameter value of a logical plan corresponding to the physical plan;
comparing the logical plan parameter value M1 with a preset logical plan standard value M2;
if M1 is greater than or equal to M2, determining to submit the physical plan to the Spark cluster;
if M1 is less than M2, determining not to submit the physical plan to the Spark cluster.
3. The method according to claim 2, wherein the logical plan parameter value is a Spark operator complexity of the logical plan.
4. The method according to claim 1, wherein, when it is determined that the physical plan is not to be submitted to the Spark cluster, the method further comprises:
executing the physical plan in a multi-thread concurrent manner.
5. The method of claim 1, wherein the submitting the physical plan to the Spark cluster comprises:
storing the physical plan;
and sending the physical plan to the Spark cluster according to a first-in first-out queue rule.
6. A node device, wherein the node device is used in a Spark application system, the node device comprising:
a determining unit, configured to, after acquiring an executable physical plan in the Spark application system, determine whether to submit the physical plan to a Spark cluster provided in the Spark application system; and
a sending unit, configured to submit the physical plan to the Spark cluster when it is determined that the physical plan is to be submitted to the Spark cluster and the node device serves as a master device of the Spark application system; or, to send the physical plan to the master device when it is determined that the physical plan is to be submitted to the Spark cluster and the node device serves as a slave device of the Spark application system, wherein the master device submits the physical plan to the Spark cluster.
7. The node device according to claim 6, wherein, when determining whether to submit the physical plan to the Spark cluster provided in the Spark application system, the determining unit is specifically configured to:
determining a logical plan parameter value of a logical plan corresponding to the physical plan;
comparing the logical plan parameter value M1 with a preset logical plan standard value M2;
if M1 is greater than or equal to M2, determining to submit the physical plan to the Spark cluster;
if M1 is less than M2, determining not to submit the physical plan to the Spark cluster.
8. The node device according to claim 7, wherein the logical plan parameter value is a Spark operator complexity of the logical plan.
9. The node device according to claim 6, further comprising:
an execution unit, configured to execute the physical plan in a multi-thread concurrent manner when it is determined that the physical plan is not to be submitted to the Spark cluster.
10. The node device of claim 6, wherein the sending unit, when submitting the physical plan to the Spark cluster, is specifically configured to:
storing the physical plan;
and sending the physical plan to the Spark cluster according to a first-in first-out queue rule.
CN201810603360.9A 2018-06-12 2018-06-12 Method for processing coding instruction and node equipment Active CN110597615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810603360.9A CN110597615B (en) 2018-06-12 2018-06-12 Method for processing coding instruction and node equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810603360.9A CN110597615B (en) 2018-06-12 2018-06-12 Method for processing coding instruction and node equipment

Publications (2)

Publication Number Publication Date
CN110597615A true CN110597615A (en) 2019-12-20
CN110597615B CN110597615B (en) 2022-07-01

Family

ID=68848953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810603360.9A Active CN110597615B (en) 2018-06-12 2018-06-12 Method for processing coding instruction and node equipment

Country Status (1)

Country Link
CN (1) CN110597615B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013160174A (en) * 2012-02-07 2013-08-19 Toyota Motor Corp Control device for internal combustion engine
CN105279286A (en) * 2015-11-27 2016-01-27 陕西艾特信息化工程咨询有限责任公司 Interactive large data analysis query processing method
CN106257960A (en) * 2015-06-18 2016-12-28 中兴通讯股份有限公司 The method and apparatus of many equipment collaborations operation
CN106547627A (en) * 2016-11-24 2017-03-29 郑州云海信息技术有限公司 The method and system that a kind of Spark MLlib data processings accelerate
CN107122443A (en) * 2017-04-24 2017-09-01 中国科学院软件研究所 A kind of distributed full-text search system and method based on Spark SQL

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013160174A (en) * 2012-02-07 2013-08-19 Toyota Motor Corp Control device for internal combustion engine
CN106257960A (en) * 2015-06-18 2016-12-28 中兴通讯股份有限公司 The method and apparatus of many equipment collaborations operation
CN105279286A (en) * 2015-11-27 2016-01-27 陕西艾特信息化工程咨询有限责任公司 Interactive large data analysis query processing method
CN106547627A (en) * 2016-11-24 2017-03-29 郑州云海信息技术有限公司 The method and system that a kind of Spark MLlib data processings accelerate
CN107122443A (en) * 2017-04-24 2017-09-01 中国科学院软件研究所 A kind of distributed full-text search system and method based on Spark SQL

Also Published As

Publication number Publication date
CN110597615B (en) 2022-07-01

Similar Documents

Publication Publication Date Title
CN109993299B (en) Data training method and device, storage medium and electronic device
US8595735B2 (en) Holistic task scheduling for distributed computing
Nayak et al. Improved round robin scheduling using dynamic time quantum
CN104765640B (en) A kind of intelligent Service dispatching method
CN109992407B (en) YARN cluster GPU resource scheduling method, device and medium
US9218210B2 (en) Distributed processing system
CN108139926B (en) Server system, method and storage medium for scheduling jobs for web applications
WO2014052942A1 (en) Random number generator in a parallel processing database
US9104491B2 (en) Batch scheduler management of speculative and non-speculative tasks based on conditions of tasks and compute resources
CN110389816A (en) Method, apparatus and computer program product for scheduling of resource
CN105912387A (en) Method and device for dispatching data processing operation
CN109726004B (en) Data processing method and device
CN109886859A (en) Data processing method, system, electronic equipment and computer readable storage medium
CN111651864B (en) Event centralized emission type multi-heterogeneous time queue optimization simulation execution method and system
CN107316124B (en) Extensive affairs type job scheduling and processing general-purpose system under big data environment
Şen et al. A strong preemptive relaxation for weighted tardiness and earliness/tardiness problems on unrelated parallel machines
US9038081B2 (en) Computing job management based on priority and quota
CN113010286A (en) Parallel task scheduling method and device, computer equipment and storage medium
CN112463334B (en) Training task queuing reason analysis method, system, equipment and medium
CN109871270B (en) Scheduling scheme generation method and device
Özpeynirci A heuristic approach based on time-indexed modelling for scheduling and tool loading in flexible manufacturing systems
CN110597615B (en) Method for processing coding instruction and node equipment
CN109800078A (en) A kind of task processing method, task distribution terminal and task execution terminal
CN106897199B (en) Batch job execution time prediction method based on big data processing framework
CN113222099A (en) Convolution operation method and chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant