CN110717992A - Method, device, computer system and readable storage medium for scheduling model

Method, device, computer system and readable storage medium for scheduling model

Info

Publication number: CN110717992A (application number CN201910947251.3A)
Authority: CN (China)
Prior art keywords: model, models, platform, scheduling, scheduled
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN110717992B
Inventors: 李培道, 吴勇义, 刘彬彬, 刘志宛
Assignees (current and original; the listed assignees may be inaccurate): Netshen Information Technology (Beijing) Co Ltd; Qianxin Technology Group Co Ltd
Application CN201910947251.3A filed 2019-09-30 by Netshen Information Technology (Beijing) Co Ltd and Qianxin Technology Group Co Ltd, with priority to CN201910947251.3A; published as CN110717992A, granted and published as CN110717992B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 — Manipulating 3D models or images for computer graphics

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Stored Programmes (AREA)

Abstract

The present disclosure provides a model scheduling method applied to a scheduling platform, comprising: acquiring a scheduling task, wherein the scheduling task comprises a plurality of models to be scheduled and the dependency relationships between them, the models to be scheduled comprise at least two models written in different languages or at least two models written in the same language, and the configuration information of models written in different languages is different; selecting, according to scheduling logic, a first model that meets the running condition from the models to be scheduled; determining, according to the configuration information of the first model, a first running platform for running the first model; and sending the execution file of the first model to the first running platform so that the first running platform executes the execution file of the first model. The present disclosure also provides a model scheduling apparatus applied to a scheduling platform, a computer system, and a computer-readable storage medium.

Description

Method, device, computer system and readable storage medium for scheduling model
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a model scheduling method applied to a scheduling platform, a model scheduling apparatus applied to a scheduling platform, a computer system, and a computer-readable storage medium.
Background
In the related art, a modeling platform can provide functions such as project management, data processing, and model management. Different customers can build models that meet their own business requirements on the modeling platform client, and the built models can be used to achieve business goals. For example, a customer builds a predictive model based on big data to predict data trends. However, as cross-business requirements arise among multiple customers, interaction between models built by different customers becomes inevitable in order to achieve certain business goals.
In implementing the disclosed concept, the inventors found at least the following problem in the related art: current modeling platforms can generally schedule only a single model and lack the capability to schedule multiple models cooperatively, so that such business cannot be carried out.
Disclosure of Invention
In view of the above, the present disclosure provides a model scheduling method applied to a scheduling platform, a model scheduling apparatus applied to a scheduling platform, a computer system, and a computer-readable storage medium.
One aspect of the present disclosure provides a model scheduling method applied to a scheduling platform, comprising: acquiring a scheduling task, wherein the scheduling task comprises a plurality of models to be scheduled and the dependency relationships between them, the models to be scheduled comprise at least two models written in different languages or at least two models written in the same language, and the configuration information of models written in different languages is different; selecting, according to scheduling logic, a first model that meets the running condition from the models to be scheduled; determining, according to the configuration information of the first model, a first running platform for running the first model; and sending the execution file of the first model to the first running platform so that the first running platform executes the execution file of the first model.
According to an embodiment of the present disclosure, the method further comprises: receiving, from the first running platform, state information about running the first model; when the state information indicates that the first model has finished running, selecting, according to the dependency relationships between the models to be scheduled, a second model that meets the running condition; determining, according to the configuration information of the second model, a second running platform for running the second model; and sending the execution file of the second model to the second running platform so that the second running platform executes the execution file of the second model.
According to an embodiment of the present disclosure, the method further comprises: receiving, from the first running platform, a first output result and a first log file produced by executing the execution file of the first model; receiving, from the second running platform, a second output result and a second log file produced by executing the execution file of the second model; and storing the first output result, the first log file, the second output result, and the second log file.
According to an embodiment of the present disclosure, the method further comprises: while the second running platform executes the execution file of the second model, providing the data stored by the scheduling platform to the second running platform, so as to achieve data sharing while different running platforms run the models to be scheduled.
According to an embodiment of the present disclosure, selecting a second model that meets the running condition from the models to be scheduled according to the dependency relationships between them comprises: determining one or more un-run models among the models to be scheduled; determining, according to the dependency relationships, whether the predecessor models on which the one or more un-run models depend have finished running; and determining an un-run model whose predecessor models have all finished running as a second model that meets the running condition.
According to an embodiment of the present disclosure, the method further comprises: acquiring, before acquiring the scheduling task, registration requests for registering the models to be scheduled; and storing, in response to the registration requests, the execution files corresponding to the models to be scheduled in a model library.
Another aspect of the present disclosure provides a model scheduling apparatus applied to a scheduling platform, comprising: a first obtaining module for obtaining a scheduling task, wherein the scheduling task comprises a plurality of models to be scheduled and the dependency relationships between them, the models to be scheduled comprise at least two models written in different languages or at least two models written in the same language, and the configuration information of models written in different languages is different; a first selection module for selecting, according to the scheduling logic, a first model that meets the running condition from the models to be scheduled; a first determining module for determining, according to the configuration information of the first model, a first running platform for running the first model; and a first sending module for sending the execution file of the first model to the first running platform so that the first running platform executes the execution file of the first model.
According to an embodiment of the present disclosure, the apparatus further comprises: a first receiving module for receiving, from the first running platform, state information about running the first model; a second selection module for selecting, when the state information indicates that the first model has finished running, a second model that meets the running condition from the models to be scheduled according to the dependency relationships between them; a second determining module for determining, according to the configuration information of the second model, a second running platform for running the second model; and a second sending module for sending the execution file of the second model to the second running platform so that the second running platform executes the execution file of the second model.
According to an embodiment of the present disclosure, the apparatus further comprises: a second receiving module for receiving a first output result and a first log file produced by the first running platform executing the execution file of the first model; a third receiving module for receiving a second output result and a second log file produced by the second running platform executing the execution file of the second model; and a first storage module for storing the first output result, the first log file, the second output result, and the second log file.
According to an embodiment of the present disclosure, the apparatus further comprises: a sharing module for providing the data stored by the scheduling platform to the second running platform while the second running platform executes the execution file of the second model, so as to achieve data sharing while different running platforms run the models to be scheduled.
According to an embodiment of the present disclosure, the second selection module is configured to: determine one or more un-run models among the models to be scheduled; determine, according to the dependency relationships between the models to be scheduled, whether the predecessor models on which the one or more un-run models depend have finished running; and determine an un-run model whose predecessor models have all finished running as a second model that meets the running condition.
According to an embodiment of the present disclosure, the apparatus further comprises: a second obtaining module for obtaining, before the scheduling task is obtained, a registration request for registering the models to be scheduled; and a second storage module for storing, in response to the registration request, the execution files corresponding to the models to be scheduled in a model library.
Another aspect of the present disclosure provides a computer system comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described above.
Another aspect of the present disclosure provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement the method described above.
Another aspect of the present disclosure provides a computer program comprising computer-executable instructions that, when executed, implement the method described above.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an exemplary system architecture to which the model scheduling method and apparatus applied to a scheduling platform may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flowchart of a model scheduling method applied to a scheduling platform according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flowchart of a model scheduling method applied to a scheduling platform according to another embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of a model scheduling method applied to a scheduling platform according to another embodiment of the present disclosure;
FIG. 5 schematically illustrates a flowchart of a method for selecting, according to the dependency relationships between a plurality of models to be scheduled, a second model that meets the running condition, according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a flowchart of a model scheduling method applied to a scheduling platform according to another embodiment of the present disclosure;
FIG. 7 schematically illustrates a block diagram of a model scheduling apparatus applied to a scheduling platform according to an embodiment of the present disclosure; and
FIG. 8 schematically illustrates a block diagram of a computer system suitable for implementing the above-described method, according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
Embodiments of the present disclosure provide a model scheduling method applied to a scheduling platform, a model scheduling apparatus applied to a scheduling platform, a computer system, and a computer-readable storage medium. The method comprises the following steps: acquiring a scheduling task, wherein the scheduling task comprises a plurality of models to be scheduled and the dependency relationships between them, the models to be scheduled comprise at least two models written in different languages or at least two models written in the same language, and the configuration information of models written in different languages is different; selecting, according to scheduling logic, a first model that meets the running condition from the models to be scheduled; determining, according to the configuration information of the first model, a first running platform for running the first model; and sending the execution file of the first model to the first running platform so that the first running platform executes the execution file of the first model.
FIG. 1 schematically illustrates an exemplary system architecture to which the model scheduling method and apparatus applied to a scheduling platform may be applied, according to an embodiment of the present disclosure. It should be noted that FIG. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied, intended to help those skilled in the art understand the technical content of the present disclosure; it does not mean that the embodiments of the present disclosure cannot be applied to other devices, systems, environments, or scenarios.
As shown in FIG. 1, the system architecture 100 according to this embodiment may include a terminal device 101, networks 102 and 104, a scheduling platform 103, and running platforms 105 and 106. The networks 102 and 104 are the media used to provide communication links between the terminal device 101, the scheduling platform 103, and the running platforms 105 and 106. The networks 102 and 104 may include various connection types, such as wired and/or wireless communication links.
The terminal device 101 may be any of various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like.
A user may use the terminal device 101 to interact with the scheduling platform 103 over the network 102, for example to receive or send messages. Various messaging client applications may be installed on the terminal device 101, such as a web browser, a search application, an instant messaging tool, a mailbox client, and/or social platform software (by way of example only). Through the browser application, the user can register the models that the user needs to run; during model registration, information such as the model's code and description file can be uploaded to the scheduling platform 103 through the network 102.
The scheduling platform 103 may consist of one or more servers, for example a background management server (for example only) that provides support for websites browsed by users of the terminal device 101. The background management server may analyze and otherwise process received data such as user requests, and feed back the processing results (e.g., webpages, information, or data obtained or generated according to the user requests) to the terminal device.
The scheduling platform 103 may send a model's execution file to the running platforms 105 and/or 106 through the network 104, so that the running platforms 105 and/or 106 can execute it; the execution file may include the model's code, thereby achieving the effect of running the model.
The running platforms 105 and/or 106 may consist of one or more servers capable of executing scripts in multiple languages.
It should be noted that the model scheduling method provided by the embodiments of the present disclosure may generally be executed by the scheduling platform 103. Accordingly, the model scheduling apparatus provided by the embodiments of the present disclosure may generally be disposed in the scheduling platform 103.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers, as required by the implementation.
FIG. 2 schematically shows a flowchart of a model scheduling method applied to a scheduling platform according to an embodiment of the present disclosure.
As shown in FIG. 2, the method includes operations S210 to S240.
In operation S210, a scheduling task is obtained, where the scheduling task includes a plurality of models to be scheduled and the dependency relationships between them; the models to be scheduled include at least two models written in different languages or at least two models written in the same language, and models written in different languages have different configuration information.
According to an embodiment of the present disclosure, the scheduling task may be generated based on the user's operations on the client, and the client may send the result obtained in response to those operations to the scheduling platform.
For example, on the client's visual interface, a user can connect the models to be scheduled that should execute together by dragging and wiring them, generate a scheduling task, and then send the scheduling task to the scheduling platform.
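For illustration only, the scheduling task assembled by the client might be represented by a structure like the following sketch; the field names (task_id, model_id, the per-model predecessor lists, and so on) are assumptions of this description and are not prescribed by the present disclosure.

    # Hypothetical sketch of a scheduling task as a client might submit it.
    # All field names here are illustrative assumptions.
    scheduling_task = {
        "task_id": "task-001",
        "models": [
            {"model_id": "m1", "script_type": "sql"},
            {"model_id": "m2", "script_type": "python"},
            {"model_id": "m3", "script_type": "javascript"},
        ],
        # Dependency relationships: each model lists the predecessor models
        # whose outputs it consumes as inputs.
        "dependencies": {
            "m1": [],            # no predecessors, so it can run first
            "m2": ["m1"],        # runs after m1 has finished
            "m3": ["m1", "m2"],  # runs after both m1 and m2 have finished
        },
    }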
According to an embodiment of the present disclosure, before the scheduling task is obtained, a registration request for registering the models to be scheduled may be obtained, and in response to the registration request, the execution files corresponding to those models are stored in a model library.
According to an embodiment of the present disclosure, the execution file corresponding to a model to be scheduled may include information such as the model's code file and description file, and the model may be managed in the model library.
According to an embodiment of the present disclosure, a user can write a model to be scheduled on his or her own modeling platform; the available writing languages differ between modeling platforms, and so do the types of the resulting model scripts. For example, an SQL script is written in the SQL language, a JavaScript script in the JavaScript language, and a Python script in the Python language.
According to an embodiment of the present disclosure, each model has corresponding configuration information, including but not limited to, for example, the model ID, name, description, parameters, and running environment information.
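As a minimal sketch under the same illustrative conventions, the configuration information of one model could look like the following; the exact fields and their names are assumptions, not a format specified by the disclosure.

    # Illustrative configuration information for a single model.
    model_config = {
        "model_id": "m2",
        "name": "trend-predictor",
        "description": "Predicts data trends from historical big data",
        "parameters": {"window_days": 30},
        # Running environment information, used later to choose a platform.
        "runtime": {"script_type": "python", "version": "3.7"},
    }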
In operation S220, a first model that meets the running condition is selected from the models to be scheduled according to the scheduling logic.
According to an embodiment of the present disclosure, the first model may be the first model to be executed after the scheduling task is submitted, or may be a model in the scheduling task that does not depend on the output of other models as its input.
According to an embodiment of the present disclosure, there may be multiple first models. In other words, the scheduling platform may schedule multiple models to run simultaneously on the same or different running platforms. When the models' scripts differ and require different running environments, the models can be scheduled to different running platforms at the same time and run in parallel. For example, a model with an SQL script is dispatched to a running platform providing a Spark SQL running environment, a model with a JavaScript script to a running platform providing a JavaScript engine, and a model with a Python script to a running platform providing a Python container.
In operation S230, a first running platform for running the first model is determined according to the configuration information of the first model.
According to an embodiment of the present disclosure, the configuration information of the first model may include the model's script type, and a running platform capable of running the model is determined according to that script type.
According to an embodiment of the present disclosure, models written in different languages may be run by different running platforms.
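A minimal sketch of this routing step follows, assuming the script type recorded in the configuration information (as in the sketch above) is the key used to choose a platform; the platform names are hypothetical.

    # Map script types to running platforms; the names are illustrative.
    PLATFORM_BY_SCRIPT_TYPE = {
        "sql": "sparksql-platform",             # provides a Spark SQL environment
        "javascript": "js-engine-platform",     # provides a JavaScript engine
        "python": "python-container-platform",  # provides a Python container
    }

    def pick_running_platform(model_config: dict) -> str:
        """Return the running platform for a model based on its script type."""
        script_type = model_config["runtime"]["script_type"]
        return PLATFORM_BY_SCRIPT_TYPE[script_type]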
In operation S240, the execution file of the first model is sent to the first running platform, so that the first running platform executes the execution file of the first model.
Through the embodiments of the present disclosure, a business goal can be accomplished by scheduling multiple models through the scheduling platform. The models may be written in different languages, and different models are scheduled to different running platforms to run, which enables cross-process, cross-platform model scheduling and cooperation, supports running models in different languages, and allows multiple models to cooperatively complete business data processing. This solves the problem in the related art that a modeling platform can generally schedule only a single model and, lacking the capability to schedule multiple models cooperatively, cannot support such business.
According to an embodiment of the present disclosure, the scheduling platform may receive, from the first running platform, state information about running the first model; when the state information indicates that the first model has finished running, select, according to the dependency relationships between the models to be scheduled, a second model that meets the running condition; determine, according to the configuration information of the second model, a second running platform for running the second model; and send the execution file of the second model to the second running platform so that the second running platform executes it.
According to an embodiment of the present disclosure, the scheduling platform can abstract model behaviors such as input, output, and status updates, provide a standardized interface for running platforms to call when running a model, and provide a description specification for the model dependency chain.
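For example, the standardized interface mentioned above could be abstracted along the following lines; the class and method names are assumptions of this description, not the interface actually specified by the disclosure.

    # Hypothetical abstraction of model behaviors (input, output, status
    # updates) that a running platform would call through a uniform interface.
    from abc import ABC, abstractmethod

    class ScheduledModel(ABC):
        @abstractmethod
        def read_input(self) -> dict:
            """Fetch input data, e.g. a predecessor model's output."""

        @abstractmethod
        def write_output(self, result: dict) -> None:
            """Persist this model's output for successor models."""

        @abstractmethod
        def update_status(self, status: str) -> None:
            """Report the running status back to the scheduling platform."""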
The method provided by the present disclosure is further described below with reference to FIGS. 3 to 5 in conjunction with specific embodiments.
FIG. 3 schematically shows a flowchart of a model scheduling method applied to a scheduling platform according to another embodiment of the present disclosure.
As shown in FIG. 3, the method includes operations S310 to S330.
In operation S310, a first output result and a first log file produced by executing the execution file of the first model are received from the first running platform.
In operation S320, a second output result and a second log file produced by executing the execution file of the second model are received from the second running platform.
In operation S330, the first output result, the first log file, the second output result, and the second log file are stored.
According to an embodiment of the present disclosure, while the second running platform executes the execution file of the second model, the data stored by the scheduling platform can be provided to the second running platform, so as to achieve data sharing while different running platforms run the models to be scheduled.
FIG. 4 schematically shows a schematic diagram of a model scheduling method applied to a scheduling platform according to another embodiment of the present disclosure.
As shown in FIG. 4, the scheduling platform 402 has a dependent database 401, from which data can be read through the platform's base capabilities when a model runs. The scheduling platform 402 may provide a variety of services, such as a base service, a data read-write service, a log service, a status service, and a task scheduling service. The different running platforms 403 may obtain information from the scheduling platform 402; for example, the second running platform may obtain the status information of the first running platform from the scheduling platform 402, obtain the output results of the first running platform, and so on.
According to an embodiment of the present disclosure, the base service may be a service that provides data sharing between the models to be scheduled.
According to an embodiment of the present disclosure, the first to third running platforms may feed back output results, log files, status information, and the like to the scheduling platform 402. The scheduling platform 402 may provide running platforms with information sharing capability, and may schedule different models to be executed by different running platforms.
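As an illustration, the services named above might be exposed to running platforms roughly as follows; the class and method names are assumptions, sketched only to make the division of responsibilities concrete.

    # Sketch of the scheduling platform's service surface as seen by a
    # running platform. All names are illustrative.
    class SchedulingPlatformServices:
        def report_status(self, model_id: str, status: str) -> None:
            """Status service: record a model's running state."""

        def append_log(self, model_id: str, line: str) -> None:
            """Log service: store a line of a model's running log."""

        def read_data(self, key: str):
            """Data read-write service: fetch shared or input data."""

        def write_data(self, key: str, value) -> None:
            """Data read-write service: persist output for other models."""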
FIG. 5 schematically shows a flowchart of a method for selecting, according to the dependency relationships between a plurality of models to be scheduled, a second model that meets the running condition, according to an embodiment of the present disclosure.
As shown in FIG. 5, the method includes operations S510 to S530.
In operation S510, one or more un-run models among the models to be scheduled are determined.
According to an embodiment of the present disclosure, the un-run models among the models to be scheduled can be determined anew once the first model has finished running.
In operation S520, it is determined, according to the dependency relationships between the models to be scheduled, whether the predecessor models on which the one or more un-run models depend have finished running.
According to an embodiment of the present disclosure, the models to be scheduled may logically form a directed acyclic graph. A dependency relationship between two models may be, for example, that the input of the successor model is the output of the predecessor model, so the successor model can start running only after the predecessor model has finished running.
In operation S530, an un-run model whose predecessor models have all finished running is determined as a second model that meets the running condition.
According to an embodiment of the present disclosure, there may be multiple second models, and they may be sent simultaneously to different running platforms to run.
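A minimal sketch of operations S510 to S530 follows, assuming the dependency relationships are stored as a mapping from each model to its predecessors (as in the earlier task sketch); the function and parameter names are illustrative.

    def select_ready_models(dependencies: dict, finished: set, running: set) -> list:
        """Return the un-run models whose predecessor models have all finished."""
        ready = []
        for model_id, predecessors in dependencies.items():
            # S510: skip models that have already run or are currently running.
            if model_id in finished or model_id in running:
                continue
            # S520/S530: a model is ready once every predecessor has finished.
            if all(p in finished for p in predecessors):
                ready.append(model_id)
        return ready  # may contain several second models to run in parallel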
FIG. 6 schematically shows a flowchart of an example model scheduling method applied to a scheduling platform according to an embodiment of the present disclosure.
As shown in FIG. 6, the method includes operations S610 to S670.
In operation S610, the scheduling platform may obtain a scheduling task from a client.
In operation S620, the scheduling platform selects models that meet the running condition through the scheduling logic and determines whether any un-run model remains in the current task. If not, operation S630 is executed: the current task is marked as having run successfully and the task ends. Otherwise, the next operation S640 is performed.
In operation S640, the scheduling platform determines whether an un-run model has no predecessor models, or whether its predecessor models have all run successfully; if so, operation S650 is performed to submit the current model.
In operation S650, after the model is submitted, the running platform may run it asynchronously, first obtaining the model parameters and input data from the scheduling platform.
In operation S660, the scheduling platform may package files such as its base capabilities and the model parameters into a task package that the running platform can recognize. The base capabilities of the scheduling platform are provided to running platforms through interfaces; the interfaces are encapsulated in an SDK, and a model calls the base capabilities of the scheduling platform through the interfaces provided by the SDK. According to an embodiment of the present disclosure, after the task is submitted to the running platform to run, the flow may jump back to S620 for subsequent scheduling.
In operation S670, the running platform executes the encapsulated task package. The running platform can feed back status and record logs in real time, and finally stores the data generated by the model. Specifically, through the SDK, the running platform can call the status service of the scheduling platform to report the running status, call the log service to store the running logs, and call the data service to obtain input data and store output data. During model running, information sharing and communication with other models can also be achieved through the base service of the scheduling platform.
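From the running platform's side, the sequence in operation S670 could look roughly like the following sketch; sdk stands for the SDK described above, and every call name here is an assumption rather than the actual interface.

    # Hypothetical execution of one task package on a running platform via the SDK.
    def execute_task_package(sdk, package):
        sdk.report_status(package.model_id, "running")         # status service
        inputs = sdk.read_data(package.input_key)              # data service
        try:
            result = package.run(inputs)                       # run the model code
            sdk.write_data(package.output_key, result)         # persist the output
            sdk.report_status(package.model_id, "finished")
        except Exception as exc:
            sdk.append_log(package.model_id, f"error: {exc}")  # log service
            sdk.report_status(package.model_id, "failed")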
According to an embodiment of the present disclosure, the scheduling platform can be responsible for model interaction, status collection, log collection, persistence of intermediate model results, and the like, packaging multiple models into tasks according to the specification and distributing them to running platforms for execution. Through the embodiments of the present disclosure, a business goal can be accomplished by scheduling multiple models through the scheduling platform. The models may be written in different languages, and different models are scheduled to different running platforms to run, which enables cross-process, cross-platform model scheduling and cooperation, supports running models in different languages, and allows multiple models to cooperatively complete business data processing. This solves the problem in the related art that a modeling platform can generally schedule only a single model and, lacking the capability to schedule multiple models cooperatively, cannot support such business.
FIG. 7 schematically illustrates a block diagram of a model scheduling apparatus applied to a scheduling platform according to an embodiment of the present disclosure.
As shown in FIG. 7, the model scheduling apparatus 700 applied to a scheduling platform includes a first obtaining module 710, a first selection module 720, a first determining module 730, and a first sending module 740.
The first obtaining module 710 is configured to obtain a scheduling task, where the scheduling task includes a plurality of models to be scheduled and the dependency relationships between them; the models to be scheduled include at least two models written in different languages or at least two models written in the same language, and models written in different languages have different configuration information.
The first selection module 720 is configured to select, according to the scheduling logic, a first model that meets the running condition from the models to be scheduled.
The first determining module 730 is configured to determine, according to the configuration information of the first model, a first running platform for running the first model.
The first sending module 740 is configured to send the execution file of the first model to the first running platform, so that the first running platform executes the execution file of the first model.
Through the embodiments of the present disclosure, a business goal can be accomplished by scheduling multiple models through the scheduling platform. The models may be written in different languages, and different models are scheduled to different running platforms to run, which enables cross-process, cross-platform model scheduling and cooperation, supports running models in different languages, and allows multiple models to cooperatively complete business data processing. This solves the problem in the related art that a modeling platform can generally schedule only a single model and, lacking the capability to schedule multiple models cooperatively, cannot support such business.
The model scheduling apparatus 700 further includes a first receiving module, a second selection module, a second determining module, and a second sending module.
The first receiving module is configured to receive, from the first running platform, state information about running the first model.
The second selection module is configured to select, when the state information indicates that the first model has finished running, a second model that meets the running condition from the models to be scheduled according to the dependency relationships between them.
The second determining module is configured to determine, according to the configuration information of the second model, a second running platform for running the second model.
The second sending module is configured to send the execution file of the second model to the second running platform, so that the second running platform executes the execution file of the second model.
According to an embodiment of the present disclosure, the apparatus 700 further includes a second receiving module, a third receiving module, and a first storage module.
The second receiving module is configured to receive a first output result and a first log file produced by the first running platform executing the execution file of the first model.
The third receiving module is configured to receive a second output result and a second log file produced by the second running platform executing the execution file of the second model.
The first storage module is configured to store the first output result, the first log file, the second output result, and the second log file.
According to an embodiment of the present disclosure, the apparatus 700 further includes a sharing module, configured to provide the data stored by the scheduling platform to the second running platform while the second running platform executes the execution file of the second model, so as to achieve data sharing while different running platforms run the models to be scheduled.
According to an embodiment of the present disclosure, the second selection module is configured to: determine one or more un-run models among the models to be scheduled; determine, according to the dependency relationships between the models to be scheduled, whether the predecessor models on which the one or more un-run models depend have finished running; and determine an un-run model whose predecessor models have all finished running as a second model that meets the running condition.
According to an embodiment of the present disclosure, the apparatus 700 further includes a second obtaining module and a second storage module.
The second obtaining module is configured to obtain, before the scheduling task is obtained, a registration request for registering the models to be scheduled.
The second storage module is configured to store, in response to the registration request, the execution files corresponding to the models to be scheduled in a model library.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any plurality of the first obtaining module 710, the first selecting module 720, the first determining module 730, and the first sending module 740 may be combined and implemented in one module/unit/sub-unit, or any one of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the first obtaining module 710, the first selecting module 720, the first determining module 730, and the first sending module 740 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware, and firmware, or an appropriate combination of any several of them. Alternatively, at least one of the first obtaining module 710, the first selecting module 720, the first determining module 730 and the first sending module 740 may be at least partially implemented as a computer program module, which when executed may perform a corresponding function.
It should be noted that the apparatus part of the embodiments of the present disclosure corresponds to the method part of the embodiments of the present disclosure; for the details of the apparatus part, refer to the description of the method part, which is not repeated here.
An embodiment of the present disclosure also provides a computer system, including: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described above.
An embodiment of the present disclosure also provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement the method described above.
FIG. 8 schematically illustrates a block diagram of a computer system suitable for implementing the above-described method, according to an embodiment of the present disclosure. The computer system illustrated in FIG. 8 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in FIG. 8, a computer system 800 according to an embodiment of the present disclosure includes a processor 801, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The processor 801 may include, for example, a general-purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special-purpose microprocessor (e.g., an application-specific integrated circuit (ASIC)), among others. The processor 801 may also include onboard memory for caching purposes. The processor 801 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 803, various programs and data necessary for the operation of the system 800 are stored. The processor 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. The processor 801 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 802 and/or RAM 803. Note that the programs may also be stored in one or more memories other than the ROM 802 and RAM 803. The processor 801 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the system 800 may also include an input/output (I/O) interface 805, which is also connected to the bus 804. The system 800 may also include one or more of the following components connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), a speaker, and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as needed, so that a computer program read from it is installed into the storage section 808 as needed.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program, when executed by the processor 801, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 802 and/or RAM 803 described above and/or one or more memories other than the ROM 802 and RAM 803.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.

Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure can be combined in various ways, even if such combinations are not expressly recited in the present disclosure. In particular, the features recited in the various embodiments and/or claims of the present disclosure may be combined in various ways without departing from the spirit and teaching of the present disclosure. All such combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. A model scheduling method applied to a scheduling platform, comprising:
acquiring a scheduling task, wherein the scheduling task comprises a plurality of models to be scheduled and the dependency relationships between the plurality of models to be scheduled, the plurality of models to be scheduled comprise at least two models written in different languages or at least two models written in the same language, and the configuration information of models written in different languages is different;
selecting, according to scheduling logic, a first model that meets the running condition from the plurality of models to be scheduled;
determining, according to the configuration information of the first model, a first running platform for running the first model; and
sending the execution file of the first model to the first running platform so that the first running platform executes the execution file of the first model.
2. The method of claim 1, further comprising:
receiving, from the first running platform, state information about running the first model;
when the state information indicates that the first model has finished running, selecting, according to the dependency relationships between the plurality of models to be scheduled, a second model that meets the running condition from the plurality of models to be scheduled;
determining, according to the configuration information of the second model, a second running platform for running the second model; and
sending the execution file of the second model to the second running platform so that the second running platform executes the execution file of the second model.
3. The method of claim 2, further comprising:
receiving, from the first running platform, a first output result and a first log file produced by executing the execution file of the first model;
receiving, from the second running platform, a second output result and a second log file produced by executing the execution file of the second model; and
storing the first output result, the first log file, the second output result, and the second log file.
4. The method of claim 3, further comprising:
providing, while the second running platform executes the execution file of the second model, the data stored by the scheduling platform to the second running platform, so as to achieve data sharing while different running platforms run the plurality of models to be scheduled.
5. The method of claim 2, wherein selecting a second model that meets the running condition from the plurality of models to be scheduled according to the dependency relationships between the plurality of models to be scheduled comprises:
determining one or more un-run models among the plurality of models to be scheduled;
determining, according to the dependency relationships between the plurality of models to be scheduled, whether the predecessor models on which the one or more un-run models depend have finished running; and
determining an un-run model whose predecessor models have all finished running as a second model that meets the running condition.
6. The method of claim 1, further comprising:
acquiring, before acquiring the scheduling task, registration requests for registering the plurality of models to be scheduled; and
storing, in response to the registration requests, the execution files corresponding to the plurality of models to be scheduled in a model library.
7. A model scheduling apparatus applied to a scheduling platform, comprising:
a first obtaining module for obtaining a scheduling task, wherein the scheduling task comprises a plurality of models to be scheduled and the dependency relationships between the plurality of models to be scheduled, the plurality of models to be scheduled comprise at least two models written in different languages or at least two models written in the same language, and the configuration information of models written in different languages is different;
a first selection module for selecting, according to scheduling logic, a first model that meets the running condition from the plurality of models to be scheduled;
a first determining module for determining, according to the configuration information of the first model, a first running platform for running the first model; and
a first sending module for sending the execution file of the first model to the first running platform, so that the first running platform executes the execution file of the first model.
8. The apparatus of claim 7, further comprising:
a first receiving module, configured to receive, from the first running platform, state information on the running of the first model;
a second selecting module, configured to select, when the state information indicates that the first model has finished running, a second model meeting the operating condition from the plurality of models to be scheduled according to the dependency relationship between the plurality of models to be scheduled;
a second determining module, configured to determine, according to the configuration information of the second model, a second running platform for running the second model; and
a second sending module, configured to send the execution file of the second model to the second running platform, so that the second running platform executes the execution file of the second model.
9. A computer system, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
10. A computer-readable storage medium having executable instructions stored thereon which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 6.
CN201910947251.3A 2019-09-30 2019-09-30 Method, apparatus, computer system and readable storage medium for scheduling model Active CN110717992B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910947251.3A CN110717992B (en) 2019-09-30 2019-09-30 Method, apparatus, computer system and readable storage medium for scheduling model

Publications (2)

Publication Number Publication Date
CN110717992A (en) 2020-01-21
CN110717992B (en) 2023-10-20

Family

ID=69212195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910947251.3A Active CN110717992B (en) 2019-09-30 2019-09-30 Method, apparatus, computer system and readable storage medium for scheduling model

Country Status (1)

Country Link
CN (1) CN110717992B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112685150A (en) * 2020-12-21 2021-04-20 联想(北京)有限公司 Multi-language program execution method, device and storage medium
TWI825317B (en) * 2020-05-13 2023-12-11 日商Spp科技股份有限公司 Manufacturing process determination device for substrate processing apparatus, substrate processing system, manufacturing process determination method for substrate processing apparatus, computer program, method and program for generating learning model group

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110321051A1 (en) * 2010-06-25 2011-12-29 Ebay Inc. Task scheduling based on dependencies and resources
CN109271238A (en) * 2017-07-12 2019-01-25 北京京东尚科信息技术有限公司 Support the task scheduling apparatus and method of a variety of programming languages

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Guo Hui; Chen Songqiao: "Design and Implementation of a Java Language Learning Platform Based on the J2EE Architecture" *

Also Published As

Publication number Publication date
CN110717992B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
US11210109B2 (en) Method and system for loading resources
US20210311859A1 (en) Orchestration for automated performance testing
US11138645B2 (en) Virtualized services discovery and recommendation engine
CN113778848A (en) Test code generation method, device, computer system and medium
US20130047148A1 (en) Automated service solution delivery
CN111913738A (en) Access request processing method, device, computing equipment and medium
CN111782988B (en) Method, apparatus, computer system and storage medium for determining source of application program
US11269756B1 (en) Self-healing web applications
CN110717992B (en) Method, apparatus, computer system and readable storage medium for scheduling model
CN111611086A (en) Information processing method, information processing apparatus, electronic device, and medium
CN112965916B (en) Page testing method, page testing device, electronic equipment and readable storage medium
CN113515271A (en) Service code generation method and device, electronic equipment and readable storage medium
CN113191889A (en) Wind control configuration method, configuration system, electronic device and readable storage medium
CN113176907A (en) Interface data calling method and device, computer system and readable storage medium
CN109960505B (en) User interface component plug-in method, system, equipment and storage medium
CN111930629A (en) Page testing method and device, electronic equipment and storage medium
CN112506781B (en) Test monitoring method, device, electronic equipment, storage medium and program product
CN113986258A (en) Service publishing method, device, equipment and storage medium
CN113535590A (en) Program testing method and device
CN115248680A (en) Software construction method, system, device, medium, and program product
CN114677114A (en) Approval process generation method and device based on graph dragging
CN114035864A (en) Interface processing method, interface processing device, electronic device, and storage medium
CN113448578A (en) Page data processing method, processing system, electronic device and readable storage medium
CN112860344A (en) Component processing method and device, electronic equipment and storage medium
CN111859403A (en) Method and device for determining dependency vulnerability, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 332, 3 / F, Building 102, 28 xinjiekouwei street, Xicheng District, Beijing 100088

Applicant after: QAX Technology Group Inc.

Applicant after: Qianxin Wangshen information technology (Beijing) Co.,Ltd.

Address before: Room 332, 3 / F, Building 102, 28 xinjiekouwei street, Xicheng District, Beijing 100088

Applicant before: QAX Technology Group Inc.

Applicant before: LEGENDSEC INFORMATION TECHNOLOGY (BEIJING) Inc.

GR01 Patent grant