CN115269159A - Scheduling system and method based on artificial intelligence and edge computing support - Google Patents

Scheduling system and method based on artificial intelligence and edge computing support

Info

Publication number
CN115269159A
Authority
CN
China
Prior art keywords
scheduling
format
artificial intelligence
scheduling instruction
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211179073.2A
Other languages
Chinese (zh)
Other versions
CN115269159B (en)
Inventor
李勇
韩懿彤
邹会宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Meiji Supply Chain Management Co ltd
Original Assignee
Suzhou Meiji Supply Chain Management Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Meiji Supply Chain Management Co ltd
Priority to CN202211179073.2A
Publication of CN115269159A
Application granted
Publication of CN115269159B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention provides a scheduling system and method based on artificial intelligence and edge computing support, and belongs to the technical field of intelligent scheduling. The method comprises the steps of: S1, receiving a plurality of scheduling instructions input by a plurality of user terminals in parallel; S2, predicting the amount of resources required to run the transportation optimization model; S3, acquiring the available edge resources at the current time point; S4, responding in sequence to the scheduling instructions in the scheduling queue; and S5, updating the available edge resources and returning to the step S1. The scheduling system comprises a transportation optimization model, a plurality of user terminals, an edge resource obtaining unit, a resource demand parameter prediction unit and a resource scheduling unit. The plurality of user terminals comprise a plurality of target edge computing terminals; or, the plurality of user terminals are all edge computing terminals. The scheduling scheme of the invention makes full use of currently available edge computing resources and, in combination with an artificial intelligence engine, achieves resource scheduling and optimization with maximum efficiency.

Description

Scheduling system and method based on artificial intelligence and edge computing support
Technical Field
The invention belongs to the technical field of intelligent scheduling, and particularly relates to a scheduling system and method based on artificial intelligence and edge computing support.
Background
Logistics transportation resources must be allocated reasonably between a warehousing enterprise and a plurality of transportation vehicles; the aim is to select suitable transportation nodes from a plurality of logistics service providers so as to meet the requesters' requirements at the lowest cost. The era of rapid development of electronic commerce and blockchain technology has opened a new development direction for the modern logistics industry and prompted it to consider distributed logistics resource allocation methods, so as to eliminate information asymmetry and improve resource utilization efficiency through information sharing between logistics service requesters and providers.
In an integrated logistics transportation scenario, a large number of scheduling demand parameters are received at the same time, each carrying different targets such as a target transportation weight, a target transportation period and a transportation destination. To avoid overloading the scheduling engine and server with a large instantaneous load, these scheduling demand parameters must be queued and processed in sequence; to improve the user experience and reduce queuing delay, distributed and parallelized scheduling processing engines are usually introduced. These engines are equipped with different transportation optimization models and, upon receiving the scheduling demand parameters input by the requesting personnel, can automatically produce optimized scheduling suggestions.
However, the amount of resources required by different distributed and parallelized scheduling processing engines is different, and the amount of available resources that can be used at each scheduling time varies, so that the operation of the scheduling model may not be optimal.
Disclosure of Invention
In order to solve the above technical problems, the present invention provides a scheduling system and method based on artificial intelligence and edge computing support.
In a first aspect of the present invention, a scheduling method based on artificial intelligence and edge computing support is provided, the method comprising the following steps:
S1: receiving a plurality of scheduling instructions input by a plurality of user terminals in parallel, wherein the scheduling instructions are input parameters of a transportation optimization model;
S2: predicting the amount of resources required to run the transportation optimization model based on the plurality of scheduling instructions;
S3: acquiring the available edge resources at the current time point;
S4: responding in sequence to the scheduling instructions in the scheduling queue based on the available edge resources at the current time point and the amount of resources required to run the transportation optimization model;
S5: updating the available edge resources, and returning to the step S1;
wherein after the step S1 and before the step S2, the method further comprises the steps of:
S11: analyzing, based on an artificial intelligence engine, whether the currently received scheduling instruction conforms to a preset format;
when the currently received scheduling instruction conforms to the preset format, inserting the currently received scheduling instruction into the tail position of the scheduling queue;
when the currently received scheduling instruction does not conform to the preset format, performing, by the artificial intelligence engine, format conversion on the currently received scheduling instruction to obtain a format scheduling instruction, and inserting the format scheduling instruction into the scheduling queue.
The edge resource comprises a plurality of edge computing terminals;
the plurality of user terminals comprises the plurality of edge computing terminals;
or, the plurality of user terminals are all edge computing terminals.
When the currently received scheduling instruction does not conform to a preset format, the artificial intelligence engine performs format conversion on the currently received scheduling instruction to obtain a format scheduling instruction, and inserts the format scheduling instruction into the scheduling queue, which specifically includes:
determining the priority of the format scheduling instruction based on the resource requirement parameter contained in the format scheduling instruction;
determining an insertion position of the format scheduling instruction in the scheduling queue based on the priority;
inserting the format scheduling instruction into the scheduling queue based on the insertion position.
The step S4 specifically includes: sending the scheduling instructions in the scheduling queue in batches to a plurality of target edge computing terminals.
The artificial intelligence engine comprises a semantic analysis engine and a format conversion engine;
the semantic analysis engine is used for extracting semantic keywords in the scheduling instruction;
the format conversion engine is used for converting the scheduling instruction which does not conform to the preset format into a format scheduling instruction based on the semantic keyword.
The resource demand parameters comprise resources required by the transportation optimization model operation;
the resources required by the transportation optimization model operation are determined by the input scheduling value currently received by the transportation optimization model;
and the input scheduling values are input in parallel by M user terminals, wherein M is an integer larger than 1.
The M user terminals comprise N edge computing terminals;
or, the M user terminals are all edge computing terminals; n is an integer greater than 1.
In a second aspect of the present invention, a scheduling system based on artificial intelligence and edge computing support is provided, where the scheduling system includes a transportation optimization model, multiple user terminals, an edge resource obtaining unit, a resource demand parameter prediction unit, and a resource scheduling unit;
the user terminals are used for inputting a plurality of scheduling instructions of the transportation optimization model in parallel;
the edge resource obtaining unit is used for obtaining the available edge resources at the current time point;
the resource demand parameter prediction unit predicts the amount of resources required for running the transportation optimization model based on the plurality of scheduling instructions;
the resource scheduling unit sequentially responds to scheduling instructions in the scheduling queue based on available edge resources at the current time point and the amount of resources required for operating the transportation optimization model;
the response comprises that the dispatching instructions are sent to a plurality of target edge computing terminals in batches;
the plurality of user terminals comprises the plurality of target edge computing terminals;
or, the plurality of user terminals are all edge computing terminals.
The system also comprises a scheduling instruction analysis unit;
the scheduling instruction analyzing unit analyzes whether the currently received scheduling instruction conforms to a preset format or not based on the artificial intelligence engine;
when the currently received scheduling instruction accords with a preset format, inserting the currently received scheduling instruction into the tail position of a scheduling queue;
when the currently received scheduling instruction does not conform to a preset format, the artificial intelligence engine carries out format conversion on the currently received scheduling instruction to obtain a format scheduling instruction, and the format scheduling instruction is inserted into a scheduling queue.
The artificial intelligence engine comprises a semantic analysis engine and a format conversion engine;
the semantic analysis engine is used for extracting semantic keywords in the scheduling instruction;
the format conversion engine is used for converting the scheduling instruction which does not conform to the preset format into a format scheduling instruction based on the semantic keyword.
The scheduling scheme of the invention makes full use of currently available edge computing resources and, in combination with the artificial intelligence engine, achieves resource scheduling and optimization with maximum efficiency.
Further embodiments and improvements of the present invention are described below with reference to the accompanying drawings and specific embodiments.
Drawings
FIG. 1 is a flowchart illustrating the steps of a scheduling method based on artificial intelligence and edge computing support, in accordance with an embodiment of the present invention;
FIG. 2 is a schematic flow chart of the method of FIG. 1 for scheduling instructions into a scheduling queue;
FIG. 3 is a flowchart illustrating the steps of a scheduling method based on artificial intelligence and edge computing support in accordance with yet another embodiment of the present invention;
FIG. 4 is a block diagram of a scheduling system based on artificial intelligence and edge computing support according to an embodiment of the present invention;
fig. 5 is a schematic diagram of the internal components of a portion of the modules of the scheduling system of fig. 4.
Detailed Description
The invention is further described with reference to the following drawings and detailed description.
FIG. 1 is a flowchart illustrating the steps of a scheduling method based on artificial intelligence and edge computing support in accordance with one embodiment of the present invention;
the embodiment of the method illustrated in fig. 1 comprises a loop iteration step of steps S1-S5, as follows:
s1, receiving a plurality of scheduling instructions input by a plurality of user terminals in parallel, wherein the scheduling instructions are input parameters of a transportation optimization model;
s2: predicting an amount of resources required to run the transport optimization model based on the plurality of scheduling instructions;
s3: acquiring available edge resources at the current time point;
s4: responding to the dispatching instructions in the dispatching queue in sequence based on the available edge resources at the current time point and the resource quantity required by the transportation optimization model;
s5: and updating the available edge resources and returning to the step S1.
With further reference to fig. 2 on the basis of fig. 1, fig. 2 is a schematic flow chart of the scheduling method of fig. 1 for scheduling instructions into a scheduling queue.
Specifically, before the step S1, the method further includes:
S0: establishing a scheduling queue;
and after the step S1 and before the step S2, the method further comprises the steps of:
S11: analyzing, based on an artificial intelligence engine, whether the currently received scheduling instruction conforms to a preset format;
when the currently received scheduling instruction conforms to the preset format, inserting the currently received scheduling instruction into the tail position of the scheduling queue;
when the currently received scheduling instruction does not conform to the preset format, performing, by the artificial intelligence engine, format conversion on the currently received scheduling instruction to obtain a format scheduling instruction, and inserting the format scheduling instruction into the scheduling queue.
The scheduling instructions may be scheduling demand parameters defining a target transportation weight, a target transportation period, and a transportation destination.
An example of a scheduling instruction is described below.
In the first case, the scheduling instruction is input in the predetermined, fixed format, for example:
target transportation weight: XX tons, with a volume of a (cm) × b (cm) × c (cm) …
target transportation period: 8:00-12:00 am, or 14:00-18:00 pm;
transportation destination: via route A and B, to C;
In this case, the currently received scheduling instruction conforms to the preset format and is inserted into the tail position of the scheduling queue.
In the second case, the scheduling instruction may be expressed in natural language, for example:
"XX tons of XX supplies, with a volume of a (cm) × b (cm) × c (cm), need to be transported during 8:00-12:00 am or 14:00-18:00 pm, via route A and B, to reach C."
In this case, the currently received scheduling instruction does not conform to the predetermined format, and the artificial intelligence engine is required to perform format conversion on it to obtain a format scheduling instruction, for example by converting it into the predetermined format of the first case; the format scheduling instruction is then inserted into the scheduling queue.
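To make the two cases concrete, the following is a minimal sketch of how a free-form instruction like the one above might be converted into the fixed format of the first case by keyword extraction; the regular expressions and field names are assumptions made for this sketch, not the patent's actual semantic analysis or format conversion engines.

```python
# Illustrative sketch only: converting a free-form instruction into the fixed
# format of the first case. The regular expressions and field names are
# assumptions made for this example, not the patent's actual engines.
import re

def to_format_instruction(text: str) -> dict:
    weight = re.search(r"(\d+(?:\.\d+)?)\s*tons?", text)
    period = re.search(r"(\d{1,2}:\d{2}\s*-\s*\d{1,2}:\d{2})", text)
    route = re.search(r"route\s+(.+?)\.?$", text)
    return {
        "target transportation weight": (weight.group(1) + " tons") if weight else None,
        "target transportation period": period.group(1) if period else None,
        "transportation route": route.group(1).strip() if route else None,
    }

print(to_format_instruction(
    "5 tons of supplies need to be transported during 8:00-12:00, route A, B, reach C."
))
```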
This difference exists because the user terminals that input scheduling instructions have different permission levels: a user terminal with lower permission can only input scheduling instructions in the predetermined format, whereas a user terminal with higher permission can input scheduling instructions freely (for example, by voice input).
Suppose, therefore, that the scheduling instruction input by a first user terminal conforms to the predetermined format, that the scheduling instruction input by a second user terminal does not, and that the permission level of the second user terminal is higher than that of the first user terminal.
In this case, the scheduling instruction input by the first user terminal conforms to the preset format and the permission of the first user terminal is lower, so this instruction is directly inserted into the tail position of the scheduling queue;
for the scheduling instruction input by the second user terminal, the priority must first be determined, and the insertion position is then determined from it.
Specifically, when the currently received scheduling instruction does not conform to a predetermined format, the artificial intelligence engine performs format conversion on the currently received scheduling instruction to obtain a format scheduling instruction, and inserts the format scheduling instruction into the scheduling queue, which specifically includes:
determining the priority of the format scheduling instruction based on the resource requirement parameter contained in the format scheduling instruction;
determining an insertion position of the format scheduling instruction in the scheduling queue based on the priority;
the higher the priority, the closer the insertion position is to the head (dequeue end) of the queue;
inserting the format scheduling instruction into the scheduling queue based on the insertion location.
Then, based on the available edge resources at the current time point and the amount of resources required by the transportation optimization model, the scheduling instructions in the scheduling queue are responded to in sequence.
Specifically, the distributed and parallelized scheduling processing engines may fetch elements from the scheduling queue one by one and respond to them.
As a further improvement, in the invention, the resources required by the distributed and parallelized scheduling processing engine come from each edge computing terminal, namely, the scheduling response is completed by using the edge resources.
The amount of resources required by each distributed, parallelized scheduling processing engine is different.
Specifically, the resources required by each distributed and parallelized scheduling processing engine are determined by the plurality of scheduling instructions input at the time.
As an example, run logs of scheduling instructions historically input and recorded in a database can be used: the amount of resources called by different scheduling instructions during execution on different scheduling processing engines in historical runs is counted, and a correspondence between the different scheduling instructions, the different scheduling processing engines and the corresponding amounts of called resources is established;
after the current scheduling instruction is received, the amount of resources required to run the transportation optimization model is predicted according to this correspondence.
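One possible way to build such a correspondence table from historical run logs and use it for prediction is sketched below; the (instruction type, engine) keying and the averaging rule are assumptions made for this sketch, not prescribed by the patent.

```python
# Illustrative sketch only: build a correspondence table from historical run
# logs and use it to predict required resources. The (instruction type, engine)
# keying and the averaging rule are assumptions, not prescribed by the patent.
from collections import defaultdict

history = [
    # (instruction type, scheduling processing engine, resources called in that run)
    ("heavy_cargo", "engine_a", 12.0),
    ("heavy_cargo", "engine_a", 14.0),
    ("light_cargo", "engine_b", 3.0),
]

samples = defaultdict(list)
for instr_type, engine, used in history:
    samples[(instr_type, engine)].append(used)
correspondence = {key: sum(v) / len(v) for key, v in samples.items()}

def predict_required(instr_type: str, engine: str, default: float = 10.0) -> float:
    """Predicted amount of resources for running this instruction on this engine."""
    return correspondence.get((instr_type, engine), default)

print(predict_required("heavy_cargo", "engine_a"))  # 13.0
```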
The distributed, parallelized scheduling processing engine can be a transportation optimization model, such as an AI transportation scheduling model commonly used in the art.
In this case, the resource demand parameters comprise the resources required to run the transportation optimization model; the resources required to run the transportation optimization model are determined by the input scheduling values currently received by the transportation optimization model;
and the input scheduling values are input in parallel by M user terminals, wherein M is an integer larger than 1.
Accordingly, the method further comprises:
predicting an amount of resources required to run the transport optimization model based on the plurality of scheduling instructions;
acquiring available edge resources at the current time point;
and sequentially responding to the dispatching instructions in the dispatching queue based on the available edge resources at the current time point and the resource quantity required by the transportation optimization model.
Specifically, the scheduling instructions in the scheduling queue may be sent to a plurality of target edge computing terminals in batches.
After each distributed and parallelized scheduling processing engine has performed its scheduling response, it occupies a certain amount of edge resources; the available edge resources therefore need to be updated before the above steps are repeated, i.e. before returning to the step S1.
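Purely as an illustration of the loop of steps S1-S5 described above, the following Python sketch strings the pieces together for a single pass: parsing and enqueuing instructions (S11), predicting the required resources (S2), and responding within the currently available edge resources (S3-S5). All class, function and field names here are hypothetical placeholders, not part of the patent.

```python
# Illustrative sketch only (hypothetical names): one pass of the S1-S5 loop.
from collections import deque
from dataclasses import dataclass

@dataclass
class Instruction:
    weight_tons: float
    period: str
    route: str
    conforms: bool        # does the input already match the preset format?
    priority: float = 0.0

def run_one_pass(incoming, available_edge_units):
    queue = deque()                                    # S0: the scheduling queue
    for instr in incoming:                             # S1 + S11
        if instr.conforms:
            queue.append(instr)                        # conforming: tail of the queue
        else:
            # Non-conforming: assume the AI engine has already converted it and
            # set a priority; higher priority sits closer to the head of the queue.
            pos = sum(1 for q in queue if q.priority >= instr.priority)
            queue.insert(pos, instr)
    required_units = len(queue)                        # S2: stand-in prediction of need
    served = []
    while queue and available_edge_units >= 1:         # S3 + S4: serve within resources
        served.append(queue.popleft())
        available_edge_units -= 1
    unserved = required_units - len(served)            # left over for the next pass
    return served, unserved, available_edge_units      # S5: caller updates and loops to S1

served, unserved, remaining = run_one_pass(
    [Instruction(3.0, "8:00-12:00", "A-B-C", conforms=True),
     Instruction(5.0, "14:00-18:00", "A-B-C", conforms=False, priority=7.0)],
    available_edge_units=1,
)
print([i.weight_tons for i in served], unserved, remaining)  # [5.0] 1 0
```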
In the above embodiment, the edge resource includes a plurality of edge computing terminals; the plurality of user terminals comprises the plurality of edge computing terminals; or, the plurality of user terminals are all edge computing terminals.
Obviously, the above configuration makes it possible to improve resource utilization efficiency through information sharing between the logistics service requester and the provider.
To better introduce the technical solution of the method of the present invention, fig. 3 is a flowchart illustrating steps of a scheduling method based on artificial intelligence and edge computing support according to still another embodiment of the present invention.
In fig. 3, the method includes a scheduling instruction parsing step, an edge resource obtaining step, a resource scheduling step, and an edge resource updating step, and each step is specifically implemented as follows:
a scheduling instruction parsing step: analyzing whether the currently received scheduling instruction conforms to a preset format;
an edge resource obtaining step: obtaining the available edge resources at the current time point;
a resource scheduling step: responding in sequence to the scheduling instructions in the scheduling queue based on the available edge resources at the current time point;
the scheduling instruction comprises at least one resource requirement parameter;
the scheduling instruction analyzing step is based on whether the currently received scheduling instruction is in accordance with a preset format or not analyzed by the artificial intelligence engine;
when the currently received scheduling instruction conforms to a preset format, inserting the currently received scheduling instruction into the tail position of the scheduling queue;
when the currently received scheduling instruction does not conform to the preset format, the artificial intelligence engine carries out format conversion on the currently received scheduling instruction to obtain a format scheduling instruction, and the format scheduling instruction is inserted into the scheduling queue.
The artificial intelligence engine comprises a semantic analysis engine and a format conversion engine;
the semantic analysis engine is used for extracting semantic keywords in the scheduling instruction;
the format conversion engine is used for converting the scheduling instruction which does not conform to the preset format into a format scheduling instruction based on the semantic keyword.
As previously described, the edge resource includes a plurality of edge computing terminals; the plurality of user terminals comprises the plurality of edge computing terminals; or, the plurality of user terminals are all edge computing terminals, so that the resource utilization efficiency can be improved through information sharing between the logistics service requester and the provider.
The above process can be implemented automatically in a computer programming language.
For this purpose, the following parameterization is used (the symbols below stand in for the formula images of the original text):
The edge resources comprise N edge computing terminals E_1, ..., E_N, with corresponding state parameters S_1, ..., S_N.
The state parameter of the i-th edge computing terminal E_i is S_i = (d_i, p_i, m_i, b_i), wherein:
d_i denotes the maximum working duration of the i-th edge computing terminal E_i, in seconds;
p_i denotes the number of compute nodes contained in the i-th edge computing terminal E_i;
m_i denotes the storage space of the i-th edge computing terminal E_i, in megabytes;
b_i denotes the maximum buffer size of the i-th edge computing terminal E_i, in megabytes.
In the resource scheduling step, the scheduling instructions in the scheduling queue are responded to in sequence based on the available edge resources at the current time point; specifically, the scheduling instructions in the scheduling queue are sent in batches to k target edge computing terminals, k > 1.
The k target edge computing terminals are selected from the set of available edge computing terminals according to a condition (given in the original as formula images) that relates their state parameters to the queuing waiting time of each element of the scheduling queue, in seconds, and to the data size corresponding to each scheduling instruction contained in the scheduling queue, in megabytes.
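For reference, the state parameters listed above can be represented as a simple record per terminal; the field names below are placeholders standing in for the original symbols, and the numeric values are arbitrary examples.

```python
# Illustrative sketch only: state parameters of an edge computing terminal.
# The field names are placeholders standing in for the original symbols,
# and the numeric values are arbitrary examples.
from dataclasses import dataclass

@dataclass
class EdgeTerminalState:
    max_work_seconds: int   # maximum working duration of the terminal, in seconds
    compute_nodes: int      # number of compute nodes the terminal contains
    storage_mb: int         # storage space, in megabytes
    max_buffer_mb: int      # maximum buffer size, in megabytes

# The available edge resources at the current time point are the states of the
# N terminals; the k target terminals of a batch are chosen from this list.
edge_resources = [
    EdgeTerminalState(3600, 4, 2048, 256),
    EdgeTerminalState(1800, 2, 1024, 128),
]
print(sum(t.compute_nodes for t in edge_resources))  # total compute nodes available
```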
When the currently received scheduling instruction does not conform to the preset format, the artificial intelligence engine performs format conversion on the currently received scheduling instruction to obtain a format scheduling instruction, and inserts the format scheduling instruction into the scheduling queue, which specifically includes:
determining the priority of the format scheduling instruction based on the resource requirement parameter contained in the format scheduling instruction;
determining an insertion position of the format scheduling instruction in the scheduling queue based on the priority;
the priority PR of the format scheduling instruction is determined from the permission level A of the user terminal that input the original scheduling instruction (i.e. the scheduling instruction not conforming to the predetermined format) and from the similarity B between the resource demand parameter and the currently available edge resources (the exact combination is given in the original as a formula image);
the permission level A is quantized as a number between 1 and 10, a larger value indicating a higher level;
the similarity B is a value in (0, 1): the amount of resources required to run the transportation optimization model can be predicted from the resource demand parameters and compared with the corresponding available edge resources to obtain B. The similarity can be computed in various ways, for example by cosine similarity; the embodiment of the present invention does not limit the specific method, and since the similarity calculation is not the main point of the invention it is not expanded upon here.
Inserting the format scheduling instruction into the scheduling queue based on the insertion position.
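A minimal sketch of this priority-based insertion follows. Because the formula combining the permission level A and the similarity B appears in the original only as an image, the sketch assumes, purely for illustration, PR = A × B; the dictionary layout of a scheduling instruction is likewise an assumption.

```python
# Illustrative sketch only: priority-based insertion of a format scheduling
# instruction. PR = A * B is an assumption made for this example; the patent
# gives the actual combination of A and B only as a formula image.
from collections import deque

def priority(permission_level: int, similarity: float) -> float:
    """Permission level A in 1..10, similarity B in (0, 1)."""
    return permission_level * similarity

def insert_by_priority(queue: deque, instruction: dict) -> None:
    # Higher priority -> closer to the head (dequeue end) of the queue.
    items = list(queue)
    pos = next((i for i, it in enumerate(items)
                if it["priority"] < instruction["priority"]), len(items))
    items.insert(pos, instruction)
    queue.clear()
    queue.extend(items)

q = deque([{"id": 1, "priority": 2.0}, {"id": 2, "priority": 1.0}])
insert_by_priority(q, {"id": 3, "priority": priority(8, 0.6)})  # PR = 4.8
print([it["id"] for it in q])  # [3, 1, 2]
```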
The method also comprises an edge resource updating step;
the edge resource updating step is used for updatingN edge computing terminals
Figure 421202DEST_PATH_IMAGE020
The state parameter of (a);
and after the scheduling instructions in the scheduling queue are sent to k target edge computing terminals in batches, executing the edge resource updating step.
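As a sketch of this edge resource updating step, the bookkeeping below simply subtracts the dispatched data size from each target terminal's recorded storage; this particular update rule and the dictionary layout are assumptions made for illustration, not prescribed by the patent.

```python
# Illustrative sketch only: the edge resource updating step after a batch
# dispatch. Subtracting the dispatched data size from each target terminal's
# recorded storage is an assumed bookkeeping rule, not prescribed by the patent.
def update_edge_resources(terminal_states, dispatched):
    """terminal_states: {terminal id: state}; dispatched: {terminal id: data size in MB}."""
    for tid, data_mb in dispatched.items():
        state = terminal_states[tid]
        state["storage_mb"] = max(0, state["storage_mb"] - data_mb)
    return terminal_states

states = {"E1": {"storage_mb": 2048, "compute_nodes": 4},
          "E2": {"storage_mb": 1024, "compute_nodes": 2}}
print(update_edge_resources(states, {"E1": 300})["E1"]["storage_mb"])  # 1748
```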
The resource demand parameters comprise resources required by the transportation optimization model operation;
the resources required by the transportation optimization model operation are determined by the input scheduling value currently received by the transportation optimization model;
the input scheduling values are input in parallel by the M first terminals. The M first terminals comprise the N edge computing terminals; or, the M first terminals are all edge computing terminals.
Based on the embodiments of fig. 1-3, fig. 4 is a schematic diagram illustrating a module architecture of a scheduling system based on artificial intelligence and edge computing support according to an embodiment of the present invention.
In fig. 4, the system includes a transportation optimization model, a plurality of user terminals, an edge resource obtaining unit, a resource demand parameter predicting unit, and a resource scheduling unit;
the user terminals are used for inputting a plurality of scheduling instructions of the transportation optimization model in parallel;
the edge resource obtaining unit is used for obtaining the available edge resources at the current time point;
the resource demand parameter prediction unit predicts the amount of resources required for running the transportation optimization model based on the plurality of scheduling instructions;
the resource scheduling unit sequentially responds to scheduling instructions in the scheduling queue based on available edge resources at the current time point and the amount of resources required for operating the transportation optimization model;
the response comprises that the dispatching instructions are sent to a plurality of target edge computing terminals in batches;
the plurality of user terminals comprise the plurality of target edge computing terminals;
or, the plurality of user terminals are all edge computing terminals.
The system also comprises a scheduling instruction parsing unit.
Fig. 5 shows the working principle of the dispatch instruction parsing unit, i.e. the artificial intelligence engine.
The artificial intelligence engine comprises a semantic analysis engine and a format conversion engine.
The scheduling instruction analysis unit analyzes whether the currently received scheduling instruction conforms to a preset format or not based on the artificial intelligence engine;
when the currently received scheduling instruction accords with a preset format, inserting the currently received scheduling instruction into the tail position of a scheduling queue;
when the currently received scheduling instruction does not conform to the preset format, the artificial intelligence engine carries out format conversion on the currently received scheduling instruction to obtain a format scheduling instruction, and the format scheduling instruction is inserted into a scheduling queue.
The semantic analysis engine is used for extracting semantic keywords in the scheduling instruction;
the format conversion engine is used for converting the scheduling instruction which does not conform to the preset format into a format scheduling instruction based on the semantic keyword.
The scheduling scheme of the invention can fully utilize the existing edge computing resources at present and realize the scheduling and optimization of resources with the maximum efficiency by combining the artificial intelligence engine.
Specifically, for a plurality of scheduling instructions input by user terminals with different permission levels, the system first analyzes, based on the artificial intelligence engine, whether each currently received scheduling instruction conforms to the preset format, so as to determine the position at which the instruction is inserted into the queue. It then predicts, based on the scheduling instructions, the amount of resources required to run the transportation optimization model. After obtaining the edge resources available at the current time point, it responds to the scheduling instructions in the scheduling queue in sequence, based on the available edge resources and the amount of resources required to run the transportation optimization model, and then updates the edge resources. The edge resources comprise a plurality of edge computing terminals; the plurality of user terminals comprise the plurality of edge computing terminals, or the plurality of user terminals are all edge computing terminals. In this way, currently available edge computing resources are fully utilized, resource scheduling and optimization with maximum efficiency is achieved in combination with the artificial intelligence engine, information sharing between the logistics service requester and provider is realized, and resource utilization efficiency is improved.
It should be noted that each of the embodiments of the present invention can solve some technical problems individually, and the combination thereof can solve all the technical problems, but each of the individual embodiments is not required to solve all the technical problems and achieve all the technical effects.
The present invention is not limited to the specific module configuration described in the related art. The prior art mentioned in the background section and the detailed description section can be used as part of the invention to understand the meaning of some technical features or parameters. The scope of the present invention is defined by the claims.

Claims (10)

1. A scheduling method based on artificial intelligence and edge computing support, characterized by comprising the following steps:
S1: receiving a plurality of scheduling instructions input by a plurality of user terminals in parallel, wherein the scheduling instructions are input parameters of a transportation optimization model;
S2: predicting an amount of resources required to run the transportation optimization model based on the plurality of scheduling instructions;
S3: acquiring available edge resources at the current time point;
S4: sequentially responding to the scheduling instructions in the scheduling queue based on the available edge resources at the current time point and the resource quantity required by the operation of the transportation optimization model;
S5: updating available edge resources, and returning to the step S1;
wherein after the step S1 and before the step S2, the method further comprises the steps of:
S11: analyzing whether the currently received scheduling instruction conforms to a preset format or not based on an artificial intelligence engine;
when the currently received scheduling instruction conforms to a preset format, inserting the currently received scheduling instruction into the tail position of the scheduling queue;
when the currently received scheduling instruction does not conform to a preset format, the artificial intelligence engine carries out format conversion on the currently received scheduling instruction to obtain a format scheduling instruction, and the format scheduling instruction is inserted into the scheduling queue.
2. The scheduling method based on artificial intelligence and edge computing support of claim 1,
when the currently received scheduling instruction does not conform to the preset format, the artificial intelligence engine performs format conversion on the currently received scheduling instruction to obtain a format scheduling instruction, and inserts the format scheduling instruction into the scheduling queue, which specifically includes:
determining the priority of the format scheduling instruction based on the resource requirement parameter contained in the format scheduling instruction;
determining an insertion position of the format scheduling instruction in the scheduling queue based on the priority;
inserting the format scheduling instruction into the scheduling queue based on the insertion position.
3. The scheduling method based on artificial intelligence and edge computing support of claim 1,
the edge resource comprises a plurality of edge computing terminals;
the plurality of user terminals comprises the plurality of edge computing terminals;
or, the plurality of user terminals are all edge computing terminals.
4. The scheduling method based on artificial intelligence and edge computing support of claim 1,
the step S4 specifically includes: and sending the scheduling instructions in the scheduling queue to a plurality of target edge computing terminals in batches.
5. The scheduling method based on artificial intelligence and edge computing support of claim 1,
the artificial intelligence engine comprises a semantic analysis engine and a format conversion engine;
the semantic analysis engine is used for extracting semantic keywords in the scheduling instruction;
the format conversion engine is used for converting the scheduling instruction which does not conform to the preset format into a format scheduling instruction based on the semantic keyword.
6. The scheduling method based on artificial intelligence and edge computing support of claim 2, wherein:
the resource demand parameters comprise resources required by the transportation optimization model operation;
the resources required by the transportation optimization model operation are determined by the input scheduling value currently received by the transportation optimization model;
and the input scheduling values are input in parallel by M user terminals, wherein M is an integer larger than 1.
7. The scheduling method based on artificial intelligence and edge computing support of claim 6, wherein:
the M user terminals comprise N edge computing terminals;
or, the M user terminals are all edge computing terminals; n is an integer greater than 1.
8. A scheduling system based on artificial intelligence and edge computing support, the scheduling system comprising a transportation optimization model;
characterized in that the system further comprises:
a plurality of user terminals for inputting in parallel a plurality of scheduling instructions of the transportation optimization model;
an edge resource obtaining unit for obtaining the available edge resources at the current time point;
a resource demand parameter prediction unit for predicting the amount of resources required to run the transportation optimization model based on the plurality of scheduling instructions;
a resource scheduling unit for responding in sequence to the scheduling instructions in the scheduling queue based on the available edge resources at the current time point and the amount of resources required to run the transportation optimization model;
the response comprises that the dispatching instructions are sent to a plurality of target edge computing terminals in batches;
the plurality of user terminals comprise the plurality of target edge computing terminals;
or, the plurality of user terminals are all edge computing terminals.
9. The artificial intelligence and edge computing support based scheduling system of claim 8, wherein:
the system also comprises a scheduling instruction analysis unit;
the scheduling instruction analyzing unit analyzes whether the currently received scheduling instruction conforms to a preset format or not based on the artificial intelligence engine;
when the currently received scheduling instruction accords with a preset format, inserting the currently received scheduling instruction into the tail position of a scheduling queue;
when the currently received scheduling instruction does not conform to the preset format, the artificial intelligence engine carries out format conversion on the currently received scheduling instruction to obtain a format scheduling instruction, and the format scheduling instruction is inserted into a scheduling queue.
10. The artificial intelligence and edge computing support based scheduling system of claim 9, wherein:
the artificial intelligence engine comprises a semantic analysis engine and a format conversion engine;
the semantic analysis engine is used for extracting semantic keywords in the scheduling instruction;
the format conversion engine is used for converting the scheduling instruction which does not conform to the preset format into a format scheduling instruction based on the semantic keyword.
CN202211179073.2A 2022-09-27 2022-09-27 Scheduling system and method based on artificial intelligence and edge computing support Active CN115269159B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211179073.2A CN115269159B (en) 2022-09-27 2022-09-27 Scheduling system and method based on artificial intelligence and edge computing support

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211179073.2A CN115269159B (en) 2022-09-27 2022-09-27 Scheduling system and method based on artificial intelligence and edge computing support

Publications (2)

Publication Number Publication Date
CN115269159A true CN115269159A (en) 2022-11-01
CN115269159B CN115269159B (en) 2023-05-30

Family

ID=83756618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211179073.2A Active CN115269159B (en) 2022-09-27 2022-09-27 Scheduling system and method based on artificial intelligence and edge computing support

Country Status (1)

Country Link
CN (1) CN115269159B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407196A (en) * 2015-07-29 2017-02-15 成都诺铱科技有限公司 Semantic analysis intelligent instruction robot applied to logistics management software
CN111427681A (en) * 2020-02-19 2020-07-17 上海交通大学 Real-time task matching scheduling system and method based on resource monitoring in edge computing
US11132224B2 (en) * 2020-08-17 2021-09-28 Essence Information Technology Co., Ltd Fine granularity real-time supervision system based on edge computing
CN112738225A (en) * 2020-12-29 2021-04-30 浙江经贸职业技术学院 Edge calculation method based on artificial intelligence
CN113360265A (en) * 2021-06-18 2021-09-07 特斯联科技集团有限公司 Big data operation task scheduling and monitoring system and method
CN113791906A (en) * 2021-08-09 2021-12-14 戴西(上海)软件有限公司 Scheduling system and optimization algorithm based on GPU resources in artificial intelligence and engineering fields

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439028A (en) * 2022-11-08 2022-12-06 苏州美集供应链管理股份有限公司 Transportation resource optimization system and method following dynamic change of data
CN116560838A (en) * 2023-05-05 2023-08-08 上海玫克生储能科技有限公司 Edge computing terminal equipment, comprehensive energy station, management platform and control method thereof
CN116560838B (en) * 2023-05-05 2024-03-29 上海玫克生储能科技有限公司 Edge computing terminal equipment, comprehensive energy station, management platform and control method thereof

Also Published As

Publication number Publication date
CN115269159B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
CN115269159B (en) Scheduling system and method based on artificial intelligence and edge computing support
US4495570A (en) Processing request allocator for assignment of loads in a distributed processing system
CN105487930A (en) Task optimization scheduling method based on Hadoop
CN110134738B (en) Distributed storage system resource estimation method and device
CN111291054B (en) Data processing method, device, computer equipment and storage medium
CN116467076A (en) Multi-cluster scheduling method and system based on cluster available resources
CN113434303A (en) Batch-processed remote sensing image intelligent processing model prediction performance optimization system and method
CN116974994A (en) High-efficiency file collaboration system based on clusters
US7756951B2 (en) Adaptively changing application server processing power based on data volume
CN106844024A (en) The GPU/CPU dispatching methods and system of a kind of self study run time forecast model
CN115658263A (en) Task scheduling method and system for cloud computing platform
CN114356712A (en) Data processing method, device, equipment, readable storage medium and program product
CN112637288A (en) Streaming data distribution method and system
Du et al. OctopusKing: A TCT-aware task scheduling on spark platform
CN114116150A (en) Task scheduling method and device and related equipment
CN116578406B (en) Task platform continuous operation scheduling method for distributed machine learning system
CN116938934B (en) Task switching control method and system based on message
CN116974771B (en) Resource scheduling method, related device, electronic equipment and medium
US20230144238A1 (en) System and method for scheduling machine learning jobs
CN117851049A (en) Multi-computing-framework calculation fusion scheduling method and device, electronic equipment and storage medium
CN117669852A (en) Network defect intelligent dispatch method and system based on service response time constraint
CN115858921A (en) Model processing method, device, equipment and storage medium
CN116700960A (en) Big data distributed computation daily scheduling task optimization method and system
CN116501466A (en) Task processing method and device, electronic equipment and storage medium
CN116646930A (en) Method and system for optimal scheduling of power distribution network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant