AU2020102381A4 - Efficient resource utilization and less complex technique for mapping machine learning algorithms in to embedded systems - Google Patents

Efficient resource utilization and less complex technique for mapping machine learning algorithms in to embedded systems Download PDF

Info

Publication number
AU2020102381A4
Authority
AU
Australia
Prior art keywords
cache
machine learning
resource utilization
svm
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020102381A
Inventor
Priyanka Kumari Bhansali
Ravaleedhar Murthy
Ashish Sharma
Penumathsa Suresh Varma
Sesha Bhargavi Velagaleti
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bhansali Priyanka Kumari Mrs
Varma Penumathsa Suresh Dr
Original Assignee
Bhansali Priyanka Kumari Mrs
Varma Penumathsa Suresh Dr
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bhansali Priyanka Kumari Mrs, Varma Penumathsa Suresh Dr filed Critical Bhansali Priyanka Kumari Mrs
Priority to AU2020102381A priority Critical patent/AU2020102381A4/en
Application granted granted Critical
Publication of AU2020102381A4 publication Critical patent/AU2020102381A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Debugging And Monitoring (AREA)

Abstract

EFFICIENT RESOURCE UTILIZATION AND LESS COMPLEX TECHNIQUE FOR MAPPING MACHINE LEARNING ALGORITHMS IN TO EMBEDDED SYSTEMS

ABSTRACT: Resource sharing becomes critical when the amount of data to be shared is large, so scheduling must be carried out efficiently. The remedy is to utilize the resources properly and to implement a less complicated technique for mapping machine learning algorithms onto embedded systems using an embedded Support Vector Machine. The factors that affect resource utilization, such as power and memory, are managed adequately to achieve good execution time and accuracy. Power consumption is controlled through a smart grid implementation, and the memory constraint is reduced using a multicore architecture. Classification is then carried out in machine learning using a Support Vector Machine (SVM), and the clusters are generated and evaluated using K-means clustering to maintain accuracy. Finally, the model is mapped to the embedded SVM by gathering the data, passing it to libsvm, and running it with the available applet.

Description

EFFICIENT RESOURCE UTILIZATION AND LESS COMPLEX TECHNIQUE FOR MAPPING MACHINE LEARNING ALGORITHMS IN TO EMBEDDED SYSTEMS
Field of the Invention:
This invention relates to the proper utilization of system resources and to a less complicated technique for mapping machine learning algorithms onto embedded systems using an embedded Support Vector Machine. The factors that affect resource utilization, such as power and memory, are managed adequately to achieve good execution time and accuracy. Power consumption is controlled through a smart grid implementation, and the memory constraint is reduced using a multicore architecture. The resulting model is then mapped to an embedded Support Vector Machine (SVM).
Background of the invention:
Raj Kumar et al. suggest that for effective utilization of resources, resource sharing plays a vital role when the number of resources is high, and that scheduling of the resources is essential for the same. This is achieved using a min-max algorithm within Particle Swarm Optimization (PSO) based scheduling, so that resource utilization can be performed effectively.
Jing Huang et al. propose an energy utilization scheme that addresses both power consumption and load balancing. Based on this scheme, the utilization of the resources is adequately controlled. They implemented it with the help of the Lagrange method.
Hamid Sarbazi-Azad et al. proposed the effective utilization of resources in cloud computing. Cloud computing offers an enormous amount of space for sharing and distributing resources, but resource sharing becomes critical when the amount of data to be shared is large, so scheduling must be carried out efficiently. This is effectively achieved using a mechanism named task consolidation: as the tasks are consolidated, they can be executed concurrently, and thus the power consumption is low.
Gamal Eldin I. Selim et al. proposed an algorithm that reduces power consumption through effective virtual resource utilization. Because service level agreements exist between the resources and the cloud environment, it also increases Central Processing Unit (CPU) utilization.
Fernando H. L. Buzato et al. proposed a scheme that effectively utilizes the CPU, memory, and disk, for which they used various deployment schemes. With this, they achieved good network utilization.
Michael Borkowski et al. proposed a mechanism for predicting the utilization of resources. They used machine learning to analyze the prediction for each task and each resource, so that the prediction error that occurred previously was eliminated.
Tajwar Mehmood et al. predicted resource utilization more accurately by introducing a combination of algorithms implemented through stacked generalization. Based on this scheme, good accuracy in resource utilization prediction was attained.
Zhe Li et al. introduced cogent confabulation, which predicts resource utilization more accurately. The correlation between predictions in multiple dimensions was analyzed, yielding an accuracy improvement greater than 26% compared with previous algorithms.
Sergio Branco et al. introduced a machine learning mechanism for resource utilization and incorporated it into an embedded system. They also discussed optimization, compression, applications, and future trends for machine learning on embedded devices.
Moisés Arredondo-Velázquez et al. explain how a Convolutional Neural Network (CNN) is mapped onto an embedded system. Their strategy provides a two-way approach: one algorithmic and the other hardware-based.
Objects of the Invention:
• The objective is to utilize the resources properly and to implement a less complicated technique for machine learning algorithms, embedding it using an embedded Support Vector Machine.
• The various resources, such as power and memory, will be adequately managed to attain good execution time and accuracy.
• Power consumption is controlled using a smart grid implementation.
• Memory is utilized correctly, and the memory constraint is reduced using a multicore architecture.
• Accuracy is maintained by checking with the Root Mean Square Error (RMSE) method.
Summary of the Invention
Resources such as memory and power affect the accuracy, execution time, and power consumption of the system. These effects are mitigated by implementing several measures.
Memory is utilized effectively by adopting a multicore architecture. Because multicore architectures provide both L2 and L3 caches for storing data, the memory constraint is relaxed: each L2 cache stores private data, while the shared L3 cache stores shared data, which improves access time. If any transmitted data is lost, the shared copy is updated immediately so the lost data can be fetched from the L3 cache, improving memory reliability.
Because the multicore architecture has multiple cores, execution is carried out in parallel. A fork-and-join mechanism, similar to divide and conquer, is implemented so that the input is processed in parallel, improving the execution time.
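The following is a minimal Java sketch, not taken from the patent, of how such a fork-and-join computation can be expressed with the standard ForkJoinPool and RecursiveTask classes; the task (summing an array of readings) and the split threshold are illustrative assumptions.

```java
// Minimal fork/join sketch: a RecursiveTask splits the input, processes the halves
// on separate cores, and joins the partial results (divide and conquer).
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ParallelSum extends RecursiveTask<Long> {
    private static final int SPLIT_THRESHOLD = 1_000;  // assumed cut-off for splitting
    private final double[] data;
    private final int from, to;

    public ParallelSum(double[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= SPLIT_THRESHOLD) {            // small enough: conquer directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += (long) data[i];
            return sum;
        }
        int mid = (from + to) / 2;                     // divide the range in half
        ParallelSum left = new ParallelSum(data, from, mid);
        ParallelSum right = new ParallelSum(data, mid, to);
        left.fork();                                   // fork: run the left half asynchronously
        long rightSum = right.compute();               // compute the right half in this thread
        return left.join() + rightSum;                 // join: combine the partial results
    }

    public static void main(String[] args) {
        double[] readings = new double[100_000];       // illustrative input
        java.util.Arrays.fill(readings, 1.0);
        long total = new ForkJoinPool().invoke(new ParallelSum(readings, 0, readings.length));
        System.out.println("Parallel sum = " + total);
    }
}
```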
Power consumption is reduced by implementing a smart grid. The grid contains smart meters, and proper maintenance of the system further reduces consumption. Each smart meter automatically takes readings, which are then analyzed to reduce power consumption.
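As a purely illustrative sketch of that analysis step, the snippet below (class name, threshold, and readings are assumptions, not part of the patent) averages automatically collected smart meter readings and flags a peak load that exceeds a limit.

```java
// Sketch of analyzing automatically collected smart meter readings: compute the
// average and peak consumption and flag usage above an assumed limit.
import java.util.List;

public class SmartMeterAnalysis {
    // Hypothetical threshold above which consumption is flagged for reduction (kW).
    private static final double HIGH_LOAD_KW = 5.0;

    public static void analyze(List<Double> readingsKw) {
        double sum = 0.0, peak = Double.NEGATIVE_INFINITY;
        for (double r : readingsKw) {
            sum += r;
            peak = Math.max(peak, r);
        }
        double averageKw = sum / readingsKw.size();
        System.out.printf("average = %.2f kW, peak = %.2f kW%n", averageKw, peak);
        if (peak > HIGH_LOAD_KW) {
            // In the proposed system this is where a load-reduction action would be triggered.
            System.out.println("Peak load exceeds limit: schedule load reduction.");
        }
    }

    public static void main(String[] args) {
        // Illustrative readings taken automatically by the smart meter (kW).
        analyze(List.of(3.2, 4.1, 5.6, 2.9, 4.8));
    }
}
```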
Accuracy must be adequately maintained for resource utilization, and this is achieved through the RMSE method. The training data is considered as Ve, and the running data is considered as Pref. The predictions made are compared with the actual reference values; the model is considered to fit well if the Root Mean Square Error (RMSE) is very low, and otherwise it is rejected. In the SVM, the clusters are formed based on the grouping, which is calculated through the K-means clustering method.
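A minimal sketch of this RMSE check is shown below; the variable names ve and pref follow the text above, while the sample values and the acceptance threshold are illustrative assumptions.

```java
// RMSE check sketch: RMSE = sqrt( (1/n) * sum_i (predicted_i - reference_i)^2 ).
// The model is accepted when the error is below an assumed threshold.
public class RmseCheck {
    public static double rmse(double[] predicted, double[] reference) {
        double sumSquaredError = 0.0;
        for (int i = 0; i < predicted.length; i++) {
            double diff = predicted[i] - reference[i];
            sumSquaredError += diff * diff;
        }
        return Math.sqrt(sumSquaredError / predicted.length);
    }

    public static void main(String[] args) {
        double[] ve   = {1.0, 2.1, 2.9, 4.2};   // predictions from the trained model
        double[] pref = {1.0, 2.0, 3.0, 4.0};   // actual reference values
        double error = rmse(ve, pref);
        double threshold = 0.2;                  // assumed acceptance threshold
        System.out.println(error <= threshold ? "model accepted" : "model rejected");
    }
}
```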
The whole pipeline is then subjected to the embedded SVM. The data is gathered and trained using libsvm, with the minimum and maximum value of each feature found for scaling. The SVM model is then run on the trained data using Java, the gathered data is dumped onto the microcontroller, and the data is scaled and predicted.
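The snippet below is a hedged sketch of the "find the minimum and maximum value, then scale" step only; it does not call libsvm itself, and the feature matrix is an illustrative assumption. Scaling each feature into [-1, 1] mirrors the usual svm-scale convention.

```java
// Min-max scaling sketch: find per-feature minimum and maximum over the gathered
// data, then map each feature linearly into [-1, 1] before SVM training/prediction.
public class MinMaxScaler {
    public static double[][] scale(double[][] features) {
        int rows = features.length, cols = features[0].length;
        double[] min = new double[cols], max = new double[cols];
        java.util.Arrays.fill(min, Double.POSITIVE_INFINITY);
        java.util.Arrays.fill(max, Double.NEGATIVE_INFINITY);

        // Pass 1: per-feature minimum and maximum.
        for (double[] row : features)
            for (int j = 0; j < cols; j++) {
                min[j] = Math.min(min[j], row[j]);
                max[j] = Math.max(max[j], row[j]);
            }

        // Pass 2: linear mapping into [-1, 1]; constant features map to 0.
        double[][] scaled = new double[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++) {
                double range = max[j] - min[j];
                scaled[i][j] = range == 0 ? 0.0 : -1.0 + 2.0 * (features[i][j] - min[j]) / range;
            }
        return scaled;
    }

    public static void main(String[] args) {
        double[][] gathered = {{10, 200}, {20, 400}, {30, 600}};  // illustrative data
        for (double[] row : scale(gathered))
            System.out.println(java.util.Arrays.toString(row));
    }
}
```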
Detailed Description of the Invention:
Resource sharing becomes critical when the amount of data to be shared is large, so scheduling must be carried out efficiently. The remedy is to utilize the resources properly.
Resources such as memory and power affect the accuracy, execution time, and power consumption of the system. These effects are mitigated by implementing several measures.
Figure 1 depicts the overall mechanism of the proposed system. Memory is utilized effectively by adopting a multicore architecture. Because multicore architectures provide both L2 and L3 caches for storing data, the memory constraint is relaxed: each L2 cache stores private data, while the shared L3 cache stores shared data, which improves access time. If any transmitted data is lost, the shared copy is updated immediately so the lost data can be fetched from the L3 cache, improving memory reliability.
Because the multicore architecture depicted in figure 2 has multiple cores, execution is carried out in parallel. A fork-and-join mechanism, similar to divide and conquer, is implemented so that the input is processed in parallel, improving the execution time.
Power consumption is reduced by implementing the smart grid depicted in figure 3. The grid contains smart meters, and proper maintenance of the system further reduces consumption. Each smart meter automatically takes readings, which are then analyzed to reduce power consumption.
Accuracy must be adequately maintained for resource utilization, and this is achieved through the RMSE method. The training data is considered as Ve, and the running data is considered as Pref. The predictions made are compared with the actual reference values; the model is considered to fit well if the Root Mean Square Error (RMSE) is very low, and otherwise it is rejected. In the SVM, the clusters are formed based on the grouping, which is calculated through the K-means clustering method.
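The following is a minimal, self-contained K-means sketch of that grouping step; the number of clusters, iteration count, and sample points are assumptions used only for illustration.

```java
// K-means sketch: alternate between assigning each point to its nearest centroid
// and moving each centroid to the mean of its assigned points.
import java.util.Arrays;
import java.util.Random;

public class KMeans {
    public static double[][] cluster(double[][] points, int k, int iterations) {
        Random rng = new Random(42);
        double[][] centroids = new double[k][];
        for (int c = 0; c < k; c++)                       // initialize centroids from random points
            centroids[c] = points[rng.nextInt(points.length)].clone();

        int[] assignment = new int[points.length];
        for (int it = 0; it < iterations; it++) {
            // Assignment step: attach each point to its nearest centroid.
            for (int i = 0; i < points.length; i++) {
                double best = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double d = squaredDistance(points[i], centroids[c]);
                    if (d < best) { best = d; assignment[i] = c; }
                }
            }
            // Update step: recompute each centroid as the mean of its cluster.
            double[][] sums = new double[k][points[0].length];
            int[] counts = new int[k];
            for (int i = 0; i < points.length; i++) {
                counts[assignment[i]]++;
                for (int j = 0; j < points[i].length; j++) sums[assignment[i]][j] += points[i][j];
            }
            for (int c = 0; c < k; c++)
                if (counts[c] > 0)
                    for (int j = 0; j < sums[c].length; j++) centroids[c][j] = sums[c][j] / counts[c];
        }
        return centroids;
    }

    private static double squaredDistance(double[] a, double[] b) {
        double s = 0;
        for (int j = 0; j < a.length; j++) s += (a[j] - b[j]) * (a[j] - b[j]);
        return s;
    }

    public static void main(String[] args) {
        double[][] samples = {{1, 1}, {1.2, 0.8}, {5, 5}, {5.1, 4.9}};  // illustrative points
        for (double[] c : cluster(samples, 2, 10)) System.out.println(Arrays.toString(c));
    }
}
```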
The whole pipeline is then subjected to the embedded SVM. The data is gathered and trained using libsvm, with the minimum and maximum value of each feature found for scaling. The SVM model is then run on the trained data using Java, the gathered data is dumped onto the microcontroller, and the data is scaled and predicted using the applet depicted in figure 4.
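As a hedged illustration of the on-device prediction step (not the patent's actual firmware), the sketch below assumes a linear model whose weights, bias, and per-feature minimum/maximum were exported after libsvm training and dumped as constants; a new reading is scaled and classified with the linear decision function sign(w·x + b). All constant values are illustrative.

```java
// On-device prediction sketch: constants assumed to be exported from the trained
// model; a raw reading is min-max scaled and classified with a linear decision rule.
public class EmbeddedSvmPredict {
    private static final double[] WEIGHTS = {0.8, -1.3};     // assumed model weights
    private static final double   BIAS    = 0.25;            // assumed bias term
    private static final double[] MIN     = {10.0, 200.0};   // training-time feature minima
    private static final double[] MAX     = {30.0, 600.0};   // training-time feature maxima

    // Scale a raw feature vector into [-1, 1] using the training-time min/max.
    private static double[] scale(double[] raw) {
        double[] x = new double[raw.length];
        for (int j = 0; j < raw.length; j++) {
            double range = MAX[j] - MIN[j];
            x[j] = range == 0 ? 0.0 : -1.0 + 2.0 * (raw[j] - MIN[j]) / range;
        }
        return x;
    }

    // Linear SVM decision: sign(w . x + b) selects the class.
    public static int predict(double[] rawFeatures) {
        double[] x = scale(rawFeatures);
        double score = BIAS;
        for (int j = 0; j < x.length; j++) score += WEIGHTS[j] * x[j];
        return score >= 0 ? +1 : -1;
    }

    public static void main(String[] args) {
        System.out.println("class = " + predict(new double[] {22.0, 350.0}));
    }
}
```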
EFFICIENT RESOURCE UTILIZATION AND LESS COMPLEX TECHNIQUE FOR MAPPING MACHINE LEARNING ALGORITHMS IN TO EMBEDDED SYSTEMS

CLAIMS:
The proposed method is capable of:
1. Embedding the machine learning algorithm using an embedded Support Vector Machine.
2. Maintaining the various resources, such as power and memory, adequately to attain good execution time and accuracy.
3. Controlling power consumption using a smart grid implementation.
4. Utilizing memory correctly and reducing the memory constraint using a multicore architecture.
5. Maintaining accuracy by checking with the Root Mean Square Error method.
EFFICIENT RESOURCE UTILIZATION AND LESS COMPLEX TECHNIQUE FOR MAPPING MACHINE LEARNING ALGORITHMS IN TO EMBEDDED SYSTEMS
Drawings
Figure 1: Overall architecture of the proposed system (multicore architecture for execution time and memory; smart grid for power consumption; RMSE calculation for accuracy; SVM and clustering using K-means; mapping using embedded SVM)
Figure 2: Multicore architecture for execution time and memory
Figure 3: Smart grid for power consumption reduction
Figure 4: Applet of SVM
AU2020102381A 2020-09-23 2020-09-23 Efficient resource utilization and less complex technique for mapping machine learning algorithms in to embedded systems Ceased AU2020102381A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2020102381A AU2020102381A4 (en) 2020-09-23 2020-09-23 Efficient resource utilization and less complex technique for mapping machine learning algorithms in to embedded systems

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2020102381A AU2020102381A4 (en) 2020-09-23 2020-09-23 Efficient resource utilization and less complex technique for mapping machine learning algorithms in to embedded systems

Publications (1)

Publication Number Publication Date
AU2020102381A4 true AU2020102381A4 (en) 2020-11-05

Family

ID=73016616

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2020102381A Ceased AU2020102381A4 (en) 2020-09-23 2020-09-23 Efficient resource utilization and less complex technique for mapping machine learning algorithms in to embedded systems

Country Status (1)

Country Link
AU (1) AU2020102381A4 (en)

Similar Documents

Publication Publication Date Title
Alresheedi et al. Improved multiobjective salp swarm optimization for virtual machine placement in cloud computing
San Miguel et al. Load value approximation
Yang et al. Intelligent resource scheduling at scale: a machine learning perspective
Wang et al. A task scheduling strategy in edge-cloud collaborative scenario based on deadline
CN108845886B (en) Cloud computing energy consumption optimization method and system based on phase space
Shen et al. Host load prediction with bi-directional long short-term memory in cloud computing
Xu et al. Laser: A deep learning approach for speculative execution and replication of deadline-critical jobs in cloud
CN104572501A (en) Access trace locality analysis-based shared buffer optimization method in multi-core environment
Kalantari et al. Dynamic software rejuvenation in web services: a whale optimizationalgorithm-based approach
Netaji Vhatkar et al. Self‐improved moth flame for optimal container resource allocation in cloud
AU2020102381A4 (en) Efficient resource utilization and less complex technique for mapping machine learning algorithms in to embedded systems
Pabitha et al. Proactive Fault Prediction and Tolerance in Cloud Computing
Martinez-Alvarez et al. Multi-objective adaptive evolutionary strategy for tuning compilations
Haghshenas et al. CO 2 Emission Aware Scheduling for Deep Neural Network Training Workloads
Shuang et al. Task Scheduling Based on Grey Wolf Optimizer Algorithm for Smart Meter Embedded Operating System
Banicescu et al. Towards the robustness of dynamic loop scheduling on large-scale heterogeneous distributed systems
Ghiasi et al. Smart virtual machine placement using learning automata to reduce power consumption in cloud data centers
Westerlund et al. A generalized scalable software architecture for analyzing temporally structured big data in the cloud
Du et al. OctopusKing: A TCT-aware task scheduling on spark platform
Carroll et al. Applied on-chip machine learning for dynamic resource control in multithreaded processors
WO2021262139A1 (en) Distributed machine learning models
Gao et al. An improved selection method based on crowded comparison for multi-objective optimization problems in intelligent computing
Nammouchi et al. Quantum Machine Learning in Climate Change and Sustainability: A Short
Son et al. Multi-objective optimization method for resource scaling in cloud computing
El Motaki et al. A prediction-based model for virtual machine live migration monitoring in a cloud datacenter

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry