WO2023159576A1 - Method and apparatus of scheduling applications - Google Patents

Method and apparatus of scheduling applications

Info

Publication number
WO2023159576A1
Authority
WO
WIPO (PCT)
Prior art keywords
apps
das
scheduling
edge devices
sas
Application number
PCT/CN2022/078314
Other languages
French (fr)
Inventor
Xu Zhao
Wenfeng Liu
Zijian Wang
Shunjie Fan
Original Assignee
Siemens Aktiengesellschaft
Siemens Ltd., China
Application filed by Siemens Aktiengesellschaft, Siemens Ltd., China
Priority to PCT/CN2022/078314
Publication of WO2023159576A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • G06F 9/5088 Techniques for rebalancing the load in a distributed system involving task migration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45575 Starting, stopping, suspending or resuming virtual machine instances
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects

Abstract

A method of scheduling Apps, which is used for scheduling the Apps among a plurality of edge devices, includes: classifying the Apps into static Apps (SAs) and dynamic Apps (DAs) (110), where the SAs are associated with the edge devices, and the DAs are not associated with the edge devices; deploying a plurality of Apps to the plurality of edge devices (120); and calculating a load of each of the edge devices when the Apps thereon are running, comparing the load with a threshold, and in a case that the load is greater than the threshold, scheduling DAs deployed on the edge device to run on another edge device, until the loads of all the edge devices are less than the threshold (130).

Description

METHOD AND APPARATUS OF SCHEDULING APPLICATIONS
TECHNICAL FIELD
The present invention generally relates to the field of industrial digitization, and in particular, to a method and apparatus of scheduling applications (Apps) .
BACKGROUND
The convergence of information technology (IT) and operational technology (OT) is a major trend in the field of industrial digitalization, and edge devices play an important role in the convergence of IT and OT. There are a plurality of applications (Apps) running on the edge devices to achieve different functions. An App orchestration system is used for deploying a plurality of Apps to run on different edge devices. Existing orchestration technologies (such as Kubernetes) usually use predefined rules to specify the packaging and configuration of Apps, resulting in excessive loads on some edge devices and affecting the performance and real-time performance of the edge devices.
SUMMARY
To resolve the above technical problems, the present invention provides a method and apparatus of scheduling applications (Apps) to balance loads of edge devices, avoid overloading of some edge devices, and improve the performance and real-time performance of the edge devices.
To achieve the above purpose, the present invention provides a method of scheduling Apps, which is used for scheduling the Apps among a plurality of edge devices. The method of scheduling the Apps includes: classifying the Apps into static Apps (SAs) and dynamic Apps (DAs) , where the SAs are associated with the edge devices, and the DAs are not associated with the edge devices; deploying a plurality of Apps to the plurality of edge devices; and calculating a load of each of the edge devices when the Apps thereon are running, comparing the load with a threshold, and in a case that the load is greater than the threshold, scheduling DAs deployed on the edge device to run on another edge device, until the loads of all the edge devices are less than the threshold. Therefore, Apps are classified into SAs and DAs, loads of edge devices when the Apps thereon are running are calculated, and DAs on edge devices with excessively large loads are scheduled to run on edge devices with free loads, so that the loads of the edge devices are balanced, overloading of some edge  devices is avoided, and the performance and real-time performance of the edge devices are improved.
Optionally, the classifying the Apps into SAs and DAs includes: obtaining classification labels of the Apps during development, and classifying the Apps into the SAs and the DAs according to the classification labels. Therefore, the developer manually adds a classification label to the App. According to the classification label, it can be determined whether the type of the App is an SA or a DA. Therefore, the flexibility of App classification is improved.
Optionally, the classifying the Apps into SAs and DAs includes: obtaining an amount of data exchange of each App, comparing the amount of data exchange with a specified value, classifying the App as the SA in a case that the amount of data exchange of the App is greater than the specified value, and classifying the App as the DA in a case that the amount of data exchange of the App is less than the specified value. Therefore, whether the type of an App is an SA or a DA can be automatically determined.
Optionally, the calculating a load of each of the edge devices when the Apps thereon are running includes: summing up memory usage of the SAs, memory usage of the DAs, and bandwidth usage of the DAs on the edge devices. Therefore, loads of edge devices when the Apps thereon are running are calculated.
Optionally, the scheduling DAs deployed on the edge device to run on another edge device includes: calculating load consumption of the DAs, and scheduling the DAs to run on the another edge device according to the load consumption. Therefore, edge devices to be scheduled are determined by load consumption, which implements the scheduling of DAs.
Optionally, after the calculating load consumption of the DAs, the method further includes: sorting the DAs in descending order according to the load consumption, and preferentially scheduling top-ranked DAs to run on the another edge device. Therefore, the scheduling of DAs consuming more loads is prioritized, which can reduce the overall scheduling complexity and improve the processing efficiency of scheduling.
The present invention further provides an apparatus of scheduling Apps, which is used for scheduling the Apps among a plurality of edge devices. The apparatus of scheduling the Apps includes: a classification module, configured to classify the Apps into SAs and DAs, where the SAs are associated with the edge devices, and the DAs are not associated with the edge devices; a deployment module, configured to deploy a plurality of Apps to the plurality of edge devices; and a scheduling module, configured to calculate a load of each of the edge devices when the Apps thereon are running, compare the load with a threshold, and in a case that the load is greater than the threshold, schedule DAs deployed on the edge device to run  on another edge device, until the loads of all the edge devices are less than the threshold.
Optionally, the classifying, by the classification module, the Apps into SAs and DAs includes: obtaining classification labels of the Apps during development, and classifying the Apps into the SAs and the DAs according to the classification labels.
Optionally, the classifying, by the classification module, the Apps into SAs and DAs includes: obtaining an amount of data exchange of each App, comparing the amount of data exchange with a specified value, classifying the App as the SA in a case that the amount of data exchange of the App is greater than the specified value, and classifying the App as the DA in a case that the amount of data exchange of the App is less than the specified value.
Optionally, the calculating, by the scheduling module, a load of each of the edge devices when the Apps thereon are running includes: summing up memory usage of the SAs, memory usage of the DAs, and bandwidth usage of the DAs on the edge devices.
Optionally, the scheduling, by the scheduling module, DAs deployed on the edge device to run on another edge device includes: calculating load consumption of the DAs, and scheduling the DAs to run on the another edge device according to the load consumption.
Optionally, after the calculating, by the scheduling module, load consumption of the DAs, the method further includes: sorting the DAs in descending order according to the load consumption, and preferentially scheduling top-ranked DAs to run on the another edge device.
The present invention further provides an electronic device, including a processor, a memory and instructions stored in the memory, where the instructions, when executed by the processor, implement the method as described above.
The present invention further provides a computer-readable storage medium, storing computer instructions thereon, where the computer instructions, when executed, implement the method as described above.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings below are merely intended to provide exemplary descriptions and explanations for the present invention, but are not intended to limit the scope of the present invention. In the figures:
FIG. 1 is a flowchart of a method of scheduling according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an industrial automation system according to an  embodiment of the present invention;
FIG. 3 to FIG. 5 are schematic diagrams of scheduling applications (Apps) in an industrial automation system according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an apparatus of scheduling according to an embodiment of the present invention; and
FIG. 7 is a schematic diagram of an electronic device according to an embodiment of the present invention.
DESCRIPTION OF REFERENCE NUMERALS
100 Method of scheduling
110-130 Steps
200 Industrial automation system
210 Edge management device
220-1, 220-2, 220-3, and 220-N Edge devices
230-1, 230-2, 230-3, and 230-N Field devices
300 Industrial automation system
310 Edge management device
321, 322, and 323 Edge devices
331, 332, and 333 Field devices
600 Apparatus of scheduling
610 Classification module
620 Deployment module
630 Scheduling module
700 Electronic device
710 Processor
720 Memory
DETAILED DESCRIPTION
To provide a clearer understanding of the technical features, objectives, and effects of the present invention, specific implementations of the present invention are described with reference to the accompanying drawings.
In the following description, many specific details are provided to give a full understanding of the present invention. However, the present invention may also be implemented in other manners different from those described herein. Therefore, the present  invention is not limited to the specific embodiments disclosed below.
As shown in the present application and the claims, terms such as "a/an" , "one" , "one kind" , and/or "the" do not refer specifically to singular forms and may also include plural forms, unless the context expressly indicates an exception. In general, terms "comprise" and "include" merely indicate including clearly identified steps and elements. The steps and elements do not constitute an exclusive list. A method or a device may also include other steps or elements.
The present invention provides a method of scheduling applications (Apps) , which is used for scheduling the Apps among a plurality of edge devices. FIG. 1 is a flowchart of a method 100 of scheduling according to an embodiment of the present invention. As shown in FIG. 1, the method 100 of scheduling Apps includes:
Step 110: Classify the Apps into static Apps (SAs) and dynamic Apps (DAs) , where the SAs are associated with the edge devices, and the DAs are not associated with the edge devices.
Apps can be classified into SAs and DAs during development. The SAs are associated with the edge devices and are bound to field devices controlled by the edge devices. The SAs exchange a large amount of data with the field devices, and only run on the deployed edge devices. Once being deployed, the SAs are not scheduled to run on other edge devices. For example, the SAs may be data monitoring apps. The DAs are not associated with the edge devices, and are not bound to the field devices controlled by the edge devices. The DAs exchange a small amount of data with the field devices, and can be scheduled to run on different edge devices. For example, the DAs may be alarm response apps.
In some embodiments, the classifying the Apps into SAs and DAs may include: obtaining classification labels of the Apps during development, and classifying the Apps into the SAs and the DAs according to the classification labels. Specifically, when developing an App, the developer manually adds a classification label to the App. According to the classification label, it can be determined whether the type of the App is an SA or a DA. Therefore, the flexibility of App classification is improved.
In some embodiments, the classifying the Apps into SAs and DAs may include: obtaining an amount of data exchange of each App, comparing the amount of data exchange with a specified value, classifying the App as the SA in a case that the amount of data exchange of the App is greater than the specified value, and classifying the App as the DA in a case that the amount of data exchange of the App is less than the specified value. Specifically, an amount of data exchange is designed during the development of an App. If the amount of data exchange is large, it indicates that the App is closely associated with a field device; otherwise, the App is not closely associated with a field device. The specified value can be used as a criterion to measure the amount of data exchange. Therefore, whether the type of an App is an SA or a DA can be automatically determined.
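By way of illustration only, the two classification options above can be combined as in the following Python sketch. The field names label and data_exchange and the helper classify are assumptions made for the sketch, not part of the claimed method; a developer-provided label is preferred, with the data-exchange comparison as a fallback.

```python
from dataclasses import dataclass
from typing import Optional

SA, DA = "SA", "DA"


@dataclass
class App:
    name: str
    label: Optional[str] = None   # classification label optionally added by the developer
    data_exchange: float = 0.0    # amount of data exchange designed for the App


def classify(app: App, specified_value: float) -> str:
    """Return "SA" or "DA" for one App."""
    # Option 1: use the classification label added during development, if present.
    if app.label in (SA, DA):
        return app.label
    # Option 2: compare the amount of data exchange with the specified value.
    return SA if app.data_exchange > specified_value else DA
```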
Step 120: Deploy a plurality of Apps to the plurality of edge devices.
After the development is completed, a plurality of Apps are deployed to a plurality of edge devices. Because SAs are associated with field devices, and cannot be scheduled after being deployed, the SAs are deployed to corresponding edge devices. For example, data monitoring software of a sensor is deployed to an edge device corresponding to the sensor. Then, DAs can be deployed to a plurality of edge devices randomly, or the DAs can be deployed to the plurality of edge devices according to other rules.
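A minimal sketch of this deployment step is given below, assuming that Apps are referred to by name and that each SA records the edge device controlling its field device; the round-robin placement of the DAs merely stands in for the "other rules" mentioned above.

```python
from itertools import cycle


def deploy(sas, das, edge_devices, bound_device):
    """Build an initial placement mapping each edge device to its list of App names.

    sas, das     -- App names already classified in step 110
    edge_devices -- list of edge device names
    bound_device -- dict mapping each SA to the edge device controlling its field device
    """
    placement = {device: [] for device in edge_devices}
    # SAs are bound to field devices: each SA goes to its corresponding edge device
    # and is not rescheduled afterwards.
    for sa in sas:
        placement[bound_device[sa]].append(sa)
    # DAs are not bound to field devices; they are placed round-robin here
    # (random placement or any other rule would serve equally well).
    targets = cycle(edge_devices)
    for da in das:
        placement[next(targets)].append(da)
    return placement
```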
FIG. 2 is a schematic diagram of an industrial automation system 200 according to an embodiment of the present invention. As shown in FIG. 2, the industrial automation system 200 includes an edge management device 210, a plurality of edge devices 220-1, 220-2, 220-3, …, and 220-N, and a plurality of field devices 230-1, 230-2, 230-3, …, and 230-N. The edge devices control the corresponding field devices. That is, the edge device 220-1 controls the field device 230-1, the edge device 220-2 controls the field device 230-2, the edge device 220-3 controls the field device 230-3, and the edge management device 210 deploys Apps to the plurality of edge devices. As shown in FIG. 2, an SA 1.1, an SA 1.2, a DA 1.1, and a DA 1.2 are deployed on the edge device 220-1, an SA 2.1 and a DA 2.1 are deployed on the edge device 220-2, an SA 3.1 and a DA 3.1 are deployed on the edge device 220-3, and an SA N. 1 and a DA N. 1 are deployed on the edge device 220-N.
Step 130: Calculate a load of each of the edge devices when the Apps thereon are running, compare the load with a threshold, and in a case that the load is greater than the threshold, schedule DAs deployed on the edge device to run on another edge device, until the loads of all the edge devices are less than the threshold.
After the Apps are deployed to the edge devices, the Apps run by each edge device can be determined, and the load of each edge device is calculated from the Apps running on it. The threshold is deployed to the edge devices together with the Apps. The load calculated for each edge device is compared with the threshold. In a case that the load of an edge device is less than the threshold, it indicates that the edge device is not overloaded. In a case that the load of an edge device is greater than the threshold, it indicates that the edge device is overloaded. At this time, the DAs on the overloaded edge device are scheduled to run on other edge devices until the loads of all the edge devices are less than the threshold.
In some embodiments, the calculating a load of each of the edge devices when the Apps thereon are running may include: summing up memory usage of the SAs, memory usage of the DAs, and bandwidth usage of the DAs on the edge devices. For example, in a case that an SA or a DA needs to occupy 20% of memory resources, then memory usage of the App is 20%, and in a case that the DA needs to occupy 10% of bandwidth resources, then bandwidth usage of the DA is 10%. The loads of the edge devices when the Apps thereon are running can be calculated by summing up the memory usage and the bandwidth usage, that is, by using the following formula:
W_i = Σ W_s + Σ (W_d + k·N_d·D_d),
where the first sum runs over the SAs on the edge device i and the second sum runs over the DAs on the edge device i, W_i is the load of the edge device i, W_s is the memory usage of an SA, W_d is the memory usage of a DA, N_d is the bandwidth consumption of the DA, D_d is the amount of data of the DA, k is a conversion coefficient, and k·N_d·D_d is the bandwidth usage of the DA.
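A minimal sketch of this load calculation is given below; the record fields memory, bandwidth and data and the helper device_load are assumptions made for the sketch, not terminology of the invention.

```python
from dataclasses import dataclass


@dataclass
class StaticApp:
    memory: float       # W_s: memory usage of the SA


@dataclass
class DynamicApp:
    memory: float       # W_d: memory usage of the DA
    bandwidth: float    # N_d: bandwidth consumption of the DA
    data: float         # D_d: amount of data of the DA


def device_load(sas, das, k=1.0):
    """W_i = sum of W_s over the SAs plus sum of (W_d + k*N_d*D_d) over the DAs."""
    return (sum(sa.memory for sa in sas)
            + sum(da.memory + k * da.bandwidth * da.data for da in das))


# Edge device 321 from FIG. 3: two SAs (memory usage 0.2 each) and two DAs
# (memory usage 0.2, bandwidth usage k*N_d*D_d = 0.01 each).
print(device_load([StaticApp(0.2), StaticApp(0.2)],
                  [DynamicApp(0.2, 0.01, 1.0), DynamicApp(0.2, 0.01, 1.0)]))  # 0.82 (up to rounding)
```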
In some embodiments, the scheduling DAs deployed on the edge device to run on another edge device may include: calculating load consumption of the DAs, and scheduling the DAs to run on the another edge device according to the load consumption. Specifically, the load consumption of the DAs is calculated. The load consumption indicates the amount of load that the DAs need to consume. Other edge devices are searched. If remaining resources of the edge devices can accommodate the load consumption of the DAs, the DAs are scheduled to run on the edge devices with the remaining resources. Preferably, the DAs are scheduled to run on the nearest edge devices. The load consumption of the DAs can be calculated according to the following formula:
C_D = W_d - k·N_d·D_d,
where C_D is the load consumption of the DA, W_d is the memory usage of the DA, N_d is the bandwidth consumption of the DA, D_d is the amount of data of the DA, and k is a conversion coefficient.
In some embodiments, after the calculating load consumption of the DAs, the method may further include: sorting the DAs in descending order according to the load consumption, and preferentially scheduling top-ranked DAs to run on the another edge device. The DA with the larger load consumption has a greater priority. That is, the scheduling of the DA consuming more load is prioritized, which can reduce the overall scheduling complexity and improve the processing efficiency of scheduling.
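Putting steps 110 to 130 together, the rebalancing loop can be sketched as follows. This is a simplified illustration under the assumptions of the earlier sketches, not the claimed implementation: App records expose memory, bandwidth and data attributes, and the target edge device is conservatively required to stay at or below the threshold after taking on the DA's full load, whereas the description above phrases this check in terms of the load consumption C_D.

```python
def load_consumption(da, k=1.0):
    """C_D = W_d - k * N_d * D_d: load consumption of one DA."""
    return da.memory - k * da.bandwidth * da.data


def rebalance(placement, threshold, k=1.0):
    """Reschedule DAs until no edge device's load exceeds the threshold.

    placement maps an edge device name to {"sas": [...], "das": [...]},
    where every App record exposes .memory (and, for DAs, .bandwidth and
    .data) as in the load formula above. The dict is modified in place.
    """
    def load(dev):
        # W_i of one edge device: memory of the SAs plus memory and bandwidth usage of the DAs.
        p = placement[dev]
        return (sum(sa.memory for sa in p["sas"])
                + sum(da.memory + k * da.bandwidth * da.data for da in p["das"]))

    def added_load(da):
        # Load the DA contributes on whichever edge device runs it.
        return da.memory + k * da.bandwidth * da.data

    moved = True
    while moved and any(load(dev) > threshold for dev in placement):
        moved = False
        for dev in list(placement):
            if load(dev) <= threshold:
                continue
            # Schedule the DAs consuming more load first (descending C_D).
            for da in sorted(placement[dev]["das"],
                             key=lambda a: load_consumption(a, k), reverse=True):
                target = next((t for t in placement if t != dev
                               and load(t) + added_load(da) <= threshold), None)
                if target is not None:
                    placement[dev]["das"].remove(da)
                    placement[target]["das"].append(da)
                    moved = True
                if load(dev) <= threshold:
                    break
    return placement
```

Applied to the deployment of FIG. 3 described below (threshold 0.7, bandwidth usage 0.01 for every DA), the loop first moves one of the two equal-consumption DAs on the edge device 321 (the DA 1.1 in FIG. 4) to the edge device 322, and then moves the DA 3.1 from the edge device 323 to the edge device 322 as well, leaving loads of 0.61, 0.63, and 0.65, consistent with FIG. 4 and FIG. 5.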
FIG. 3 to FIG. 5 are schematic diagrams of scheduling Apps in an industrial automation system according to an embodiment of the present invention. The industrial automation system 300 includes an edge management device 310, three edge devices 321, 322, and 323, and three corresponding field devices 331, 332, and 333. The edge device 321 is deployed with an SA 1.1 (memory usage 0.2) , an SA 1.2 (memory usage 0.2) , a DA 1.1 (memory usage 0.2, and bandwidth usage 0.01) , and a DA 1.2 (memory usage 0.2, and bandwidth usage 0.01) . The edge device 322 is deployed with an SA 2.1 (memory usage 0.2) , and a DA 2.1 (memory usage 0.1, and bandwidth usage 0.01) . The edge device 323 is deployed with an SA 3.1 (memory usage 0.65) , and a DA 3.1 (memory usage 0.1, and bandwidth usage 0.01) . The threshold is 0.7. The load of the edge device 321 is 0.82, and the load of the edge device 323 is 0.76, which are greater than the threshold. The load of the edge device 322 is 0.31, which is less than the threshold. If run according to the deployment mode, the edge devices 321 and 323 are overloaded. According to the method for scheduling provided by the embodiments of the present invention, the load consumption required by the DA 1.1 on the edge device 321 is 0.19 (memory usage 0.2 - bandwidth usage 0.01) . The edge device 322 has enough space to run the DA 1.1, and the DA 1.1 can be scheduled to the edge device 322, as shown in FIG. 3 and FIG. 4. For the edge device 323, the load consumption required by the DA 3.1 on the edge device 323 is 0.09 (memory usage 0.1 - bandwidth usage 0.01) . The edge device 322 still has enough space to run the DA 3.1, and the DA 3.1 can be scheduled to the edge device 322, as shown in FIG. 4 and FIG. 5. Therefore, the loads of the edge devices 321, 322, and 323 are all less than the threshold, and the edge devices 321, 322, and 323 are not overloaded.
The embodiments of the present invention provide a method for scheduling Apps. The Apps are classified into SAs and DAs, loads of edge devices when the Apps thereon are running are calculated, and DAs on edge devices with excessively large loads are scheduled to run on edge devices with free loads, so that loads of edge devices are balanced, overloading of some edge devices is avoided, and the performance and real-time performance of the edge devices are improved.
The present invention further provides an apparatus of scheduling, which is used for scheduling Apps among a plurality of edge devices. FIG. 6 is a schematic diagram of an apparatus 600 of scheduling according to an embodiment of the present invention. As shown in FIG. 6, the apparatus 600 of scheduling Apps includes:
a classification module 610, configured to classify the Apps into SAs and DAs, where the SAs are associated with the edge devices, and the DAs are not associated with the edge devices;
a deployment module 620, configured to deploy a plurality of Apps to the plurality of edge devices; and
a scheduling module 630, configured to calculate a load of each of the edge devices when the Apps thereon are running, compare the load with a threshold, and in a case that the load is greater than the threshold, schedule DAs deployed on the edge device to run on another edge device, until the loads of all the edge devices are less than the threshold.
In some embodiments, the classifying, by the classification module 610, the Apps into SAs and DAs includes: obtaining classification labels of the Apps during development, and classifying the Apps into the SAs and the DAs according to the classification labels.
In some embodiments, the classifying, by the classification module 610, the Apps into SAs and DAs includes: obtaining an amount of data exchange of the App, comparing the amount of data exchange with a specified value, classifying the App as the SA in a case that the amount of data exchange of the App is greater than the specified value, and classifying the App as the DA in a case that the amount of data exchange of the App is less than the specified value.
In some embodiments, the calculating, by the scheduling module 630, a load of each of the edge devices when the Apps thereon are running includes: summing up memory usage of the SAs, memory usage of the DAs, and bandwidth usage of the DAs on the edge devices.
In some embodiments, the scheduling, by the scheduling module 630, DAs deployed on the edge device to run on another edge device includes: the scheduling module 630 calculates load consumption of the DAs, and schedules the DAs to run on the another edge device according to the load consumption.
In some embodiments, after the calculating load consumption of the DAs, the method further includes: sorting the DAs in descending order according to the load consumption, and preferentially scheduling top-ranked DAs to run on the another edge device.
The present invention further provides an electronic device 700. FIG. 7 is a schematic diagram of an electronic device 700 according to an embodiment of the present invention. As shown in FIG. 7, the electronic device 700 includes a processor 710 and a memory 720. The memory 720 stores instructions, and the instructions are executed by the processor 710 to implement the method 100 described above.
The present invention further provides a computer-readable storage medium, storing computer instructions thereon, where the computer instructions, when executed, implement the method 100 as described above.
Some aspects of the method and apparatus of the present invention may be entirely  executed by hardware, may be entirely executed by software (including firmware, resident software, microcode, and the like) , or may be executed by a combination of hardware and software. The foregoing hardware or software may be referred to as "data block" , "module" , "engine" , "unit" , "component" or "system" . The processor may be one or more application specific integrated circuits (ASICs) , digital signal processors (DSPs) , digital signal processing devices (DSPDs) , programmable logic devices (PLDs) , field programmable gate arrays (FPGAs) , processors, controllers, microcontrollers, microprocessors, or a combination thereof. In addition, various aspects of the present invention may be embodied as computer products located in one or more computer-readable media, the product including computer-readable program code. For example, the computer-readable medium may include, but is not limited to, a magnetic storage device (for example, a hard disk, a floppy disk, a magnetic tape, etc. ) , an optical disk (for example, a compact disk (CD) , a digital versatile disk (DVD) , etc. ) , a smart card, and a flash memory device (for example, a card, a stick, a key driver, etc. ) .
Flow diagrams are used herein to illustrate operations performed by the method according to the embodiments of the present application. It should be understood that the foregoing operations are not necessarily performed precisely in order. On the contrary, the steps may be performed in reverse order or simultaneously. In addition, other operations may be alternatively added into the processes, or one or more steps are removed from the processes.
It should be understood that, although this specification is described according to each embodiment, each embodiment may not include only one independent technical solution. The description manner of this specification is merely for clarity. This specification should be considered as a whole by a person skilled in the art, and the technical solution in each embodiment may also be properly combined, to form other implementations that can be understood by the person skilled in the art.
The foregoing are merely specific schematic implementations of the present invention, and are not intended to limit the scope of the present invention. Any equivalent change, modification, and combination made by the person skilled in the art without departing from the conception and principles of the present invention should all fall within the protection scope of the present invention.

Claims (14)

  1. A method (100) of scheduling applications (Apps) , used for scheduling Apps among a plurality of edge devices, the method (100) of scheduling Apps comprising:
    classifying the Apps into static Apps (SAs) and dynamic Apps (DAs) (110) , wherein the SAs are associated with the edge devices, and the DAs are not associated with the edge devices;
    deploying a plurality of Apps to the plurality of edge devices (120) ; and
    calculating a load of each of the edge devices when the Apps thereon are running, comparing the load with a threshold, and in a case that the load is greater than the threshold, scheduling DAs deployed on the edge device to run on another edge device, until the loads of all the edge devices are less than the threshold (130) .
  2. The method (100) of scheduling Apps according to claim 1, wherein the classifying the Apps into SAs and DAs comprises: obtaining classification labels of the Apps during development and classifying the Apps into the SAs and the DAs according to the classification labels.
  3. The method (100) of scheduling Apps according to claim 1, wherein the classifying the Apps into SAs and DAs comprises: obtaining an amount of data exchange of each App, comparing the amount of data exchange with a specified value, classifying the App as the SA in a case that the amount of data exchange of the App is greater than the specified value, and classifying the App as the DA in a case that the amount of data exchange of the App is less than the specified value.
  4. The method (100) of scheduling Apps according to claim 1, wherein the calculating a load of each of the edge devices when the Apps thereon are running comprises: summing up memory usage of the SAs, memory usage of the DAs, and bandwidth usage of the DAs on the edge devices.
  5. The method (100) of scheduling Apps according to claim 4, wherein the scheduling DAs deployed on the edge device to run on another edge device comprises: calculating load consumption of the DAs and scheduling the DAs to run on the another edge device according to the load consumption.
  6. The method (100) of scheduling Apps according to claim 5, wherein after the calculating load consumption of the DAs, the method further comprises: sorting the DAs in descending order according to the load consumption, and preferentially scheduling  top-ranked DAs to run on the another edge device.
  7. An apparatus (600) of scheduling applications (Apps) , configured to schedule the Apps among a plurality of edge devices, the apparatus (600) of scheduling Apps comprising:
    a classification module (610) , configured to classify the Apps into static Apps (SAs) and dynamic Apps (DAs) , wherein the SAs are associated with the edge devices, and the DAs are not associated with the edge devices;
    a deployment module (620) , configured to deploy a plurality of Apps to the plurality of edge devices; and
    a scheduling module (630) , configured to calculate a load of each of the edge devices when the Apps thereon are running, compare the load with a threshold, and in a case that the load is greater than the threshold, schedule DAs deployed on the edge device to run on another edge device, until the loads of all the edge devices are less than the threshold.
  8. The apparatus (600) of scheduling Apps according to claim 7, wherein the classifying, by the classification module (610) , the Apps into SAs and DAs comprises: obtaining classification labels of the Apps during development and classifying the Apps into the SAs and the DAs according to the classification labels.
  9. The apparatus (600) of scheduling Apps according to claim 7, wherein the classifying, by the classification module (610) , the Apps into SAs and DAs comprises: obtaining an amount of data exchange of each App, comparing the amount of data exchange with a specified value, classifying the App as the SA in a case that the amount of data exchange of the App is greater than the specified value, and classifying the App as the DA in a case that the amount of data exchange of the App is less than the specified value.
  10. The apparatus (600) of scheduling Apps according to claim 7, wherein the calculating, by the scheduling module (630) , a load of each of the edge devices when the Apps thereon are running comprises: summing up memory usage of the SAs, memory usage of the DAs, and bandwidth usage of the DAs on the edge devices.
  11. The apparatus (600) of scheduling Apps according to claim 10, wherein the scheduling, by the scheduling module (630) , DAs deployed on the edge device to run on another edge device comprises: calculating load consumption of the DAs and scheduling the DAs to run on the another edge device according to the load consumption.
  12. The apparatus (600) of scheduling Apps according to claim 11, wherein after the calculating, by the scheduling module (630) , load consumption of the DAs, the method further comprises: sorting the DAs in descending order according to the load consumption, and preferentially scheduling top-ranked DAs to run on the another edge device.
  13. An electronic device (700) , comprising a processor (710) , a memory (720) , and instructions stored in the memory (720) , wherein the instructions, when executed by the processor (710) , implement the method according to any one of claims 1 to 6.
  14. A computer-readable storage medium, storing computer instructions thereon, wherein the computer instructions, when executed, implement the method according to any one of claims 1 to 6.
PCT/CN2022/078314 2022-02-28 2022-02-28 Method and apparatus of scheduling applications WO2023159576A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/078314 WO2023159576A1 (en) 2022-02-28 2022-02-28 Method and apparatus of scheduling applications

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/078314 WO2023159576A1 (en) 2022-02-28 2022-02-28 Method and apparatus of scheduling applications

Publications (1)

Publication Number Publication Date
WO2023159576A1 true WO2023159576A1 (en) 2023-08-31

Family

ID=87764424

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/078314 WO2023159576A1 (en) 2022-02-28 2022-02-28 Method and apparatus of scheduling applications

Country Status (1)

Country Link
WO (1) WO2023159576A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190123959A1 (en) * 2017-10-24 2019-04-25 Honeywell International Inc. Systems and methods for adaptive industrial internet of things (iiot) edge platform
US20200142753A1 (en) * 2018-11-02 2020-05-07 EMC IP Holding Company LLC Dynamic reallocation of resources in accelerator-as-a-service computing environment
US20200327371A1 (en) * 2019-04-09 2020-10-15 FogHorn Systems, Inc. Intelligent Edge Computing Platform with Machine Learning Capability
CN112799789A (en) * 2021-03-22 2021-05-14 腾讯科技(深圳)有限公司 Node cluster management method, device, equipment and storage medium
US20210297891A1 (en) * 2020-03-18 2021-09-23 Equinix, Inc. Application workload routing and interworking for network defined edge routing

Similar Documents

Publication Publication Date Title
US11567795B2 (en) Minimizing impact of migrating virtual services
US7979864B2 (en) Apparatus for setting used license of executing job into unused license state and allocating the set unused license to a to be executed job based on priority
Yadwadkar et al. Wrangler: Predictable and faster jobs using fewer resources
EP3198429B1 (en) Heterogeneous thread scheduling
US10365994B2 (en) Dynamic scheduling of test cases
US9575548B2 (en) Apparatus for migrating virtual machines to another physical server, and method thereof
US20110072437A1 (en) Computer job scheduler with efficient node selection
US20170149690A1 (en) Resource Aware Classification System
US20060250301A1 (en) Method and apparatus for managing executions of a management program within a data processing system
CN111381970B (en) Cluster task resource allocation method and device, computer device and storage medium
US10120721B2 (en) Pluggable engine for application specific schedule control
US9436517B2 (en) Reliability-aware application scheduling
US8745622B2 (en) Standalone software performance optimizer system for hybrid systems
JP2015026197A (en) Job delaying detection method, information processor and program
US20230168895A1 (en) Automated runtime configuration for dataflows
WO2023159576A1 (en) Method and apparatus of scheduling applications
US20140122403A1 (en) Loading prediction method and electronic device using the same
US11347557B2 (en) Method and system for predicting optimal number of threads for application running on electronic device
US8255642B2 (en) Automatic detection of stress condition
US20230222012A1 (en) Method for scaling up microservices based on api call tracing history
CN114697213A (en) Upgrading method and device
JP2009048358A (en) Information processor and scheduling method
CN115061811A (en) Resource scheduling method, device, equipment and storage medium
CN116185772B (en) File batch detection method and device
JP2716019B2 (en) Job class determination method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22927857

Country of ref document: EP

Kind code of ref document: A1