AU2020102026A4 - ELSM- Quantum Computing: EXTREMELY LARGE DATABASES STORE INTO A SMALL MEMORY USING QUANTUM COMPUTING. - Google Patents


Info

Publication number
AU2020102026A4
Authority
AU
Australia
Prior art keywords
data
quantum
task
computational
tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020102026A
Inventor
N Sandeep Chaitanya
Somavarapu Jahnavi
Shaik Mahammad Jameeruddin
A Aruna Kumari
Ravikanth Motupalli
Kandula Neha
Tejaswi Potluri
Pinamala Sruthi
Chekuri Sri Sumanth
Ramesh. Vatambeti
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to AU2020102026A
Application granted
Publication of AU2020102026A4
Legal status: Ceased
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N10/00Quantum computing, i.e. information processing based on quantum-mechanical phenomena
    • G06N10/80Quantum programming, e.g. interfaces, languages or software-development kits for creating or handling programs capable of running on quantum computers; Platforms for simulating or accessing quantum computers, e.g. cloud-based quantum computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Mathematics (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Patent Title: ELSM- Quantum Computing: EXTREMELY LARGE DATABASES STORE INTO A SMALL MEMORY USING QUANTUM COMPUTING. ABSTRACT Our invention "ELSM- Quantum Computing" is a method, system, process technique, and apparatus for validating and training a machine learning model to multi-route received computational tasks in a system including at least one quantum computing resource. The invention includes obtaining a first set of large data comprising data representing multiple real-time computational tasks previously performed by the system, and obtaining input data for those previously performed tasks, including data representing the type of computing resource each task was routed to. It further includes obtaining a second set of large data comprising data representing properties associated with using the one or more quantum computing resources to solve the multiple computational tasks, and validating and training the machine learning model to route received data representing a computational task to be performed using the (i) first set of large data, (ii) input data, and (iii) second set of data. Also described herein are methods, systems, techniques, and media for generating a quantum-ready or quantum-enabled real-time software development kit (RSDK) for an advanced quantum computing system. The invented methods may comprise accepting user input from an application at an application interface, which application is executed on a digital computer, and implementing the one or more best required algorithms at an algorithms layer that may be solved intelligently, heuristically, or exactly depending on the requirements of the user input per the software specification. The invented system transforms the one or more algorithms, which abstract away the complexity of the application, from the application space into one or more instructions in intelligent polynomial unconstrained binary optimization (IPUBO) form. Inventors: Mr. N Sandeep Chaitanya (Assistant Professor), Dr. Ramesh. Vatambeti, Mr. Ravikanth Motupalli (Assistant Professor), Mr. Chekuri Sri Sumanth (Assistant Professor), Mrs. A Aruna Kumari (Assistant Professor), Mrs. Somavarapu Jahnavi (Assistant Professor), Mrs. Tejaswi Potluri (Assistant Professor), Mrs. Pinamala Sruthi (Associate Professor), Ms. Kandula Neha (Assistant Professor), Mr. Shaik Mahammad Jameeruddin. TOTAL NO OF SHEETS: 05 NO OF FIGS: 05. [Drawing sheet omitted; recoverable panel labels: Forecasting, Input, Global, Output, Computation engine, Monitoring, Machine learning module.] FIG. 1A: DEPICTS AN EXAMPLE SYSTEM FOR PERFORMING COMPUTATIONAL TASKS.

Description

Mr. N Sandeep Chaitanya (Assistant Professor), Dr. Ramesh. Vatambeti, Mr. Ravikanth Motupalli (Assistant Professor), Mr. Chekuri Sri Sumanth (Assistant Professor), Mrs. A Aruna Kumari (Assistant Professor), Mrs. Somavarapu Jahnavi (Assistant Professor), Mrs. Tejaswi Potluri (Assistant Professor), Mrs. Pinamala Sruthi (Associate Professor), Ms. Kandula Neha (Assistant Professor), Mr. Shaik Mahammad Jameeruddin. TOTAL NO OF SHEETS: 05 NO OF FIGS: 05
[Drawing sheet for FIG. 1A omitted; recoverable panel labels: Forecasting, Input, Global, Output, Computation engine, Monitoring, Machine learning module.]
FIG. 1A: DEPICTS AN EXAMPLE SYSTEM FOR PERFORMING COMPUTATIONAL TASKS.
Australian Government IP Australia Innovation Patent Australia
Patent Title: ELSM- Quantum Computing: EXTREMELY LARGE DATABASES STORE INTO A SMALL MEMORY USING QUANTUM COMPUTING.
Name and address of patentees(s):
Mr. N Sandeep Chaitanya (Assistant Professor) Address: Dept. of CSE, Vallurupalli Nageswara Rao Vignana Jyothi Institute of Engineering &Technology Hyderabad, Telangana, India.
Dr. Ramesh. Vatambeti Address: Dept. of Computer Science and Engineering, Presidency University, Bengaluru, Karnataka, India.
Mr. Ravikanth Motupalli (Assistant Professor) Address: Dept. of CSE, Vallurupalli Nageswara Rao Vignana Jyothi Institute of Engineering &Technology Hyderabad, Telangana, India.
Mr. Chekuri Sri Sumanth (Assistant Professor) Address: Dept. of CSE, Vallurupalli Nageswara Rao Vignana Jyothi Institute of Engineering &Technology Hyderabad, Telangana, India.
Mrs. A Aruna Kumari (Assistant Professor) Address: Dept. of CSE, Vallurupalli Nageswara Rao Vignana Jyothi Institute of Engineering &Technology Hyderabad, Telangana, India.
Mrs. Somavarapu Jahnavi (Assistant Professor) Address: Dept. of CSE, Vallurupalli Nageswara Rao Vignana Jyothi Institute of Engineering &Technology Hyderabad, Telangana, India.
Mrs. Tejaswi Potluri (Assistant Professor) Address: Dept. of CSE, Vallurupalli Nageswara Rao Vignana Jyothi Institute of Engineering &Technology Hyderabad, Telangana, India.
Mrs. Pinamala Sruthi (Associate Professor) Address: CMR College of Engineering & Technology, Hyderabad, Telangana, India.
Ms. Kandula Neha (Assistant Professor) Address: Vidya Jyothi Institute of Technology, Hyderabad, Telangana, India.
Mr. Shaik Mahammad Jameeruddin Address: Research Assistant, ICRISAT, Hyderabad, Telangana, India.
Complete Specification: Australian Government.
FIELD OF THE INVENTION
Our invention "ELSM- Quantum Computing" relates to storing extremely large databases in a small memory using quantum computing.
BACKGROUND OF THE INVENTION
For some computational tasks, quantum computing devices may offer a computational speed up compared to classical devices. For example, quantum computers may achieve a speed up for tasks such as database search, evaluating NAND trees, integer factorization or the simulation of quantum many-body systems.
As another example, adiabatic quantum annealers may achieve a computational speed up compared to classical annealers for some optimization tasks. To perform an optimization task, quantum hardware may be constructed and programmed to encode the solution to a corresponding optimization task into an energy spectrum of a many-body quantum Hamiltonian characterizing the quantum hardware. For example, the solution is encoded in the ground state of the Hamiltonian.
Recognized herein is the need for a quantum-ready or quantum-enabled software development kit (SDK) that can broaden access to quantum computing while shielding users from the quantum 'machine code.' Fast and efficient analysis of large data sets may be essential in many fields, including financial analysis, social media, drug discovery, and job scheduling. Applications in these fields may frequently solve computationally expensive NP-complete and NP-hard problems. With the fast development of quantum computers, such as those by D-Wave Systems, Nippon Telegraph and Telephone (NTT), IBM, and Google, platform-agnostic software, which may be compatible with both classical and quantum hardware, may be needed to enable users to leverage a pre-built library of algorithms and solvers.
Described herein is a software development kit (SDK) which may enable users to build quantum-ready or quantum-enabled solutions. Such solutions may be executed on any of a number of suitable platforms capable of performing quantum computing operations, such as a quantum computing system (e.g., a quantum annealer, an Ising solver, an optical parametric oscillator (OPO), a gate model of quantum computing, or another type of quantum computer). There are many challenges faced by the use of a quantum computer, such as embedding issues, manual errors, implementation effort cost, interfacing with hardware and optimizing to a polynomial unconstrained binary optimization (PUBO), e.g., a quadratic unconstrained binary optimization (QUBO).
SDKs of the present disclosure may address these challenges. A quantum-ready or quantum-enabled software development kit (SDK) is developed to be usable out of the box or customized by advanced users, in which at least two or at least three layers, comprising an algorithms layer, a binary polynomial layer, and a solver layer comprising a common interface, work together to provide solutions to an application input. The SDK is designed so that it may be implemented in various layers, such as an algorithms layer, a polynomial (e.g., binary) layer, and/or a common solver layer.
Described herein is a method for generating one or more instructions for execution by a solver layer comprising a common interface, wherein the one or more instructions are generated by a digital computer comprising at least one computer processor and memory, the digital computer coupled to a quantum-ready or quantum-enabled computing system comprising the solver layer, and wherein the solver layer executes the one or more instructions to generate an output, the method comprising: a. accepting user input from an application at an application interface, which application is executed on the digital computer;
b. implementing one or more algorithms, at an algorithms layer, that are solved heuristically or exactly depending at least in part on requirements of the user input, wherein the one or more algorithms abstract away a complexity of the application; c. transforming the one or more algorithms from the application space into the one or more instructions in polynomial unconstrained binary optimization (PUBO) form; and d. executing the one or more instructions in PUBO form at the common interface of the solver layer, wherein the common interface comprises one or more polynomial unconstrained binary optimization (PUBO) solvers that provide an interface that is agnostic to quantum or classical computers.
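To make the four steps concrete, the following Python sketch walks a toy problem through the layers. It is a minimal illustration under assumed names (Pubo, algorithms_layer, to_pubo, solve); the patent does not define an API, and a real solver layer would dispatch to quantum or classical backends rather than brute force.

```python
# A minimal, assumed sketch of the four steps (a-d) above. Names are
# illustrative; a real solver layer would dispatch to quantum/classical solvers.
from dataclasses import dataclass, field

@dataclass
class Pubo:
    """PUBO instructions: minimize sum of coeff * product of binary variables."""
    terms: dict = field(default_factory=dict)  # e.g. {(0, 1): -1.0, (0,): -1.0}

def algorithms_layer(user_input: dict) -> str:
    """Step b: choose exact or heuristic solving based on the user input."""
    return "exact" if user_input.get("n_vars", 0) <= 20 else "heuristic"

def to_pubo(user_input: dict) -> Pubo:
    """Step c: transform the application-space problem into PUBO form."""
    # Toy objective: minimize -x0 - x1 - x0*x1 (unique minimum at x0 = x1 = 1).
    return Pubo(terms={(0,): -1.0, (1,): -1.0, (0, 1): -1.0})

def solve(pubo: Pubo, n_vars: int) -> dict:
    """Step d: brute-force stand-in for the solver layer's common interface."""
    best, best_energy = None, float("inf")
    for assignment in range(2 ** n_vars):
        x = [(assignment >> i) & 1 for i in range(n_vars)]
        energy = sum(c * all(x[i] for i in vs) for vs, c in pubo.terms.items())
        if energy < best_energy:
            best, best_energy = x, energy
    return {"solution": best, "energy": best_energy}

user_input = {"n_vars": 2}                      # step a: accepted user input
print(algorithms_layer(user_input), solve(to_pubo(user_input), n_vars=2))
```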
The quantum-ready or quantum-enabled computing system comprises a quantum annealer, an Ising solver, an optical parametric oscillator (OPO), a gate model of quantum computing, or another type of quantum computer. In some embodiments, the polynomial unconstrained binary optimization (PUBO) form is a quadratic unconstrained binary optimization (QUBO) form. In some embodiments, the one or more polynomial unconstrained binary optimization (PUBO) solvers of the common interface of the solver layer comprise one or more quadratic unconstrained binary optimization (QUBO) solvers. In some embodiments, the one or more algorithms are transformed at the algorithms layer. In some embodiments, the one or more algorithms are transformed using a binary polynomial layer. In some embodiments, the one or more algorithms are transformed at a polynomial constrained integer optimization layer.
In some embodiments, the one or more algorithms are transformed at a polynomial constrained binary optimization layer. In some embodiments, the one or more algorithms are transformed by the common interface of the solver layer. In some embodiments, the algorithms layer comprises one or more of: max (maximum) k-quasi clique, chromatic number, graph similarity, coloring feasibility, max co-k-plex, minimum clique cover, k-clique cover feasibility, linear knapsack, and balanced partitioning. In some embodiments, transforming the one or more algorithms comprises the use of one or more of: a transformation of the polynomial unconstrained binary optimization (PUBO) form to a quadratic unconstrained binary optimization (QUBO) form, binary polynomial operations, and efficient search in binary space. In some embodiments, the one or more polynomial unconstrained binary optimization (PUBO) solvers comprise one or more of: D-Wave, multi-agent Tabu 1-Opt solver, Tabu 1-Opt solver, PTICM solver, path-relinking solver, and a GPU-based simulated quantum annealing solver. In some embodiments, the one or more algorithms are implemented using a classical optimization system or a quantum oracle.
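The PUBO-to-QUBO transformation mentioned above is not spelled out in the specification; a standard way to perform it is Rosenberg-style quadratization, where each higher-order product is replaced by a fresh auxiliary binary variable plus a quadratic penalty. The sketch below assumes that technique, with illustrative names throughout.

```python
# Illustrative Rosenberg-style quadratization: repeatedly replace a product
# x*y inside any term of degree > 2 by a fresh auxiliary variable z, adding
# the quadratic penalty M*(x*y - 2*x*z - 2*y*z + 3*z), which is 0 iff z == x*y.
# This is a standard textbook reduction; the patent does not specify one.

def pubo_to_qubo(terms: dict, n_vars: int, penalty: float = 10.0):
    terms = {tuple(sorted(t)): c for t, c in terms.items()}
    while any(len(t) > 2 for t in terms):
        big = next(t for t in terms if len(t) > 2)
        x, y = big[0], big[1]
        z = n_vars                       # fresh auxiliary variable index
        n_vars += 1
        # Substitute z for x*y in every term containing both x and y.
        new_terms = {}
        for t, c in terms.items():
            if x in t and y in t:
                t = tuple(sorted((set(t) - {x, y}) | {z}))
            new_terms[t] = new_terms.get(t, 0.0) + c
        terms = new_terms
        # Penalty enforcing z == x*y on minimizers.
        for t, c in [((x, y), penalty), ((x, z), -2 * penalty),
                     ((y, z), -2 * penalty), ((z,), 3 * penalty)]:
            terms[t] = terms.get(t, 0.0) + c
    return terms, n_vars

# Cubic term x0*x1*x2 becomes quadratic with one auxiliary variable x3.
print(pubo_to_qubo({(0, 1, 2): 1.0}, n_vars=3))
```

The penalty is zero exactly when z = xy and at least the penalty weight otherwise, so for a large enough weight the minimizers of the resulting QUBO coincide with those of the original PUBO.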
Also described herein is a system for generating one or more instructions for execution by a solver layer comprising a common interface, comprising: a. a quantum-ready or quantum- enabled computing system comprising the solver layer; b. a digital computer comprising at least one computer processor, the digital computer coupled to the quantum-ready or quantum-enabled computing system; c. a computer memory storing computer processor executable instructions which, when executed by the at least one computer processor, implement a method comprising: i. accepting user input from an application at an application interface, which application is executed on the digital computer; ii. implementing one or more algorithms, at an algorithms layer, that are solved heuristically or exactly depending at least in part on requirements of the user input, wherein the one or more algorithms abstract away a complexity of the application;
iii.transforming the one or more algorithms from the application space into the one or more instructions in polynomial unconstrained binary optimization (PUBO) form; and iv. executing the one or more instructions in PUBO form at the common interface of the solver layer, wherein the common interface comprises one or more polynomial unconstrained binary optimization (PUBO) solvers that provide an interface that is agnostic to quantum or classical computers.
Within the field of cryptography, it is well recognized that the strength of any cryptographic system depends, among other things, on the key distribution technique employed. For existing encryption to be effective, such as a symmetric key system, two communicating parties must share the same key and that key must be protected from access by others. The key must, therefore, be distributed to each of the parties.
PRIOR ART SEARCH
  • US20090070402A1 (2007-09-11 / 2009-03-12) Geordie Rose: Systems, methods, and apparatus for a distributed network of quantum computers.
  • US20090228888A1 (2008-03-10 / 2009-09-10) Sun Microsystems, Inc.: Dynamic scheduling of application tasks in a distributed task based system.
  • US20140298343A1 (2013-03-26 / 2014-10-02) Xerox Corporation: Method and system for scheduling allocation of tasks.
  • US9537953B1 (2016-06-13 / 2017-01-03) 1QB Information Technologies Inc.: Methods and systems for quantum ready computations on the cloud.
  • US20060225165A1 (2004-12-23 / 2006-10-05) Maassen Van Den Brink, Alec: Analog processor comprising quantum devices.
  • US20120326720A1 (2011-06-14 / 2012-12-27) International Business Machines Corporation: Modular array of fixed-coupling quantum systems for quantum information processing.
  • US20130144925A1* (2011-11-15 / 2013-06-06) D-Wave Systems Inc.: Systems and methods for solving computational problems.
  • US20140025606A1* (2010-11-19 / 2014-01-23) William G. Macready: Methods for solving computational problems using a quantum processor.
  • WO2015060915A2* (2013-07-29 / 2015-04-30) President And Fellows Of Harvard College: Quantum processor problem compilation.
  • US20150193692A1* (2013-11-19 / 2015-07-09) D-Wave Systems Inc.: Systems and methods of finding quantum binary optimization problems.
  • US8832165B2* (2007-12-12 / 2014-09-09) Lockheed Martin Corporation: Computer systems and methods for quantum verification and validation.
  • US10599988B2* (2016-03-02 / 2020-03-24) D-Wave Systems Inc.: Systems and methods for analog processing of problem graphs having arbitrary size and/or connectivity.
  • US20070016534A1* (2005-06-16 / 2007-01-18) Harrison, Keith A: Secure transaction method and transaction terminal for use in implementing such method.
  • US20070110247A1* (2005-08-03 / 2007-05-17) Murphy, Cary R: Intrusion detection with the key leg of a quantum key distribution system.
  • US20070113268A1* (2005-08-03 / 2007-05-17) Murphy, Cary R: Intrusion resistant passive fiber optic components.
  • US20070140495A1* (2003-11-13 / 2007-06-21) MagiQ Technologies, Inc.: QKD with classical bit encryption.
  • US20090168015A1* (2005-06-20 / 2009-07-02) Essilor International (Compagnie Generale D'Optique).
OBJECTIVES OF THE INVENTION
1. The objective of the invention is to provide a method, system, process technique, and apparatus for validating and training a machine learning model to multi-route received computational tasks in a system including at least one quantum computing resource.
2. Another objective is to obtain a first set of large data comprising data representing multiple real-time computational tasks previously performed by the system, and to obtain input data for those previously performed tasks.
3. Another objective is to obtain, together with data representing the type of computing resource each task was routed to, a second set of large data comprising data representing properties associated with using the one or more quantum computing resources to solve the multiple computational tasks.
4. Another objective is to validate and train the machine learning model to route received data representing a computational task to be performed using the (i) first set of large data, (ii) input data, and (iii) second set of data.
5. Another objective is to describe methods, systems, techniques, and media for generating a quantum-ready or quantum-enabled real-time software development kit (RSDK) for an advanced quantum computing system.
6. Another objective is that the invented methods may comprise accepting user input from an application at an application interface, which application is executed on a digital computer, and implementing the one or more best required algorithms.
7. Another objective is an algorithms layer that may be solved intelligently, heuristically, or exactly depending on the requirements of the user input per the software specification, with the invented system transforming the one or more algorithms, which abstract away the complexity of the application, from the application space into one or more instructions in intelligent polynomial unconstrained binary optimization (IPUBO) form.
8. Another objective is that the data representing multiple computational tasks previously performed by the system comprises data indicating a frequency of changes to input data sets associated with each computational task.
9. Another objective is to further train the machine learning model using data representing properties associated with using the one or more quantum computing resources to solve the multiple computational tasks, wherein the polynomial unconstrained binary optimization (PUBO) form is a quadratic unconstrained binary optimization (QUBO) form.
10. Another objective is that the one or more polynomial unconstrained binary optimization (PUBO) solvers of the common interface of the solver layer comprise one or more quadratic unconstrained binary optimization (QUBO) solvers, and that the one or more algorithms are transformed at the algorithms layer.
11. Another objective is that the one or more algorithms are transformed using a binary polynomial layer.
SUMMARY OF THE INVENTION
This specification describes a machine learning module that may be used to route received computational tasks to one or more quantum computing devices or one or more classical computing devices. The machine learning module uses machine learning techniques to determine when and how to leverage the power of quantum computing.
The innovative aspect of the subject matter described in this specification can be implemented in a computer-implemented method for training a machine learning model to route received computational tasks in a system including at least one quantum computing resource, the method including the actions of: obtaining a first set of data, the first set of data comprising data representing multiple computational tasks previously performed by the system; obtaining a second set of data, the second set of data comprising data representing properties associated with using the one or more quantum computing resources to solve the multiple computational tasks; obtaining input data for the multiple computational tasks previously performed by the system, comprising data representing a type of computing resource the task was routed to; and training the machine learning model to route received data representing a computational task to be performed using the (i) first set of data, (ii) input data, and (iii) second set of data.
Other implementations of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination thereof installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
The foregoing and other implementations can each optionally include one or more of the following features, alone or in combination. In some implementations the quantum computing resources comprise one or more of (i) quantum gate computers, (ii) adiabatic annealers, or (iii) quantum simulators.
The system further comprises one or more classical computing resources. In some implementations the computational tasks comprise optimization tasks. In some implementations properties of using the one or more quantum computing resources to solve the multiple computational tasks comprise, for each computational task, one or more of: (i) approximate qualities of solutions generated by the one or more quantum computing resources; (ii) computational times associated with solutions generated by the one or more quantum computing resources; or (iii) computational costs associated with solutions generated by the one or more quantum computing resources.
The data representing properties of using the one or more quantum computing resources to solve multiple computational tasks further comprises, for each quantum computing resource, one or more of (i) a number of qubits available to the quantum computing resource; and (ii) a cost associated with using the quantum computing resource. In some implementations the obtained input data for the multiple computational tasks previously performed by the system further comprises, for each computational task, one or more of: (i) data representing a size of an input data set associated with the computational task; (ii) data indicating whether an input data set associated with the computational task comprised static, real time or both static and real time input data; (iii) data representing an error tolerance associated with the computational task; and (iv) data representing a required level of confidence associated with the computational task.
The obtained input data for the multiple computational tasks previously performed by the system further comprises data indicating a frequency of changes to the input data sets associated with each computational task. In some implementations training the machine learning model to route received computational tasks comprises: generating a set of training examples using the (i) first set of data, (ii) input data, and (iii) second set of data, wherein each training example comprises a machine learning model input paired with a known machine learning model output; and training the machine learning model using the set of training examples.
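A minimal sketch of that training procedure follows, assuming each historical task record carries a few pre-normalized features and a routing label; the field names (input_size, is_real_time, error_tolerance) and the plain logistic-regression router are illustrative stand-ins for whatever model the specification leaves open.

```python
# A minimal sketch, assuming pre-normalized task features in [0, 1]; the
# schema and the logistic-regression model are assumptions, not the patent's.
import numpy as np

def make_training_examples(task_records):
    """Pair a model input (task features) with a known output (routing label)."""
    X, y = [], []
    for r in task_records:
        X.append([r["input_size"], r["is_real_time"], r["error_tolerance"]])
        y.append(1.0 if r["routed_to"] == "quantum" else 0.0)
    return np.array(X), np.array(y)

def train_router(X, y, lr=0.5, steps=1000):
    """Gradient descent on the log loss of a logistic-regression router."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted P(route to quantum)
        w -= lr * X.T @ (p - y) / len(y)       # gradient of the log loss
    return w

records = [
    {"input_size": 0.05, "is_real_time": 0, "error_tolerance": 0.10, "routed_to": "classical"},
    {"input_size": 0.90, "is_real_time": 1, "error_tolerance": 0.30, "routed_to": "quantum"},
]
X, y = make_training_examples(records)
w = train_router(X, y)
print(w)
```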
The innovative aspect of the subject matter described in this specification can be implemented in a computer-implemented method for obtaining a solution to a computational task, the method comprising: receiving data representing a computational task to be performed by a system including one or more quantum computing resources and one or more classical computing resources; processing the received data using a machine learning model to determine which of the one or more quantum computing resources or the one or more classical computing resources to route the data representing the computational task to, wherein the machine learning model has been configured through training to route received data representing computational tasks to be performed in a system including at least one quantum computing resource; and routing the data representing the computational task to the determined computing resource to obtain, from the determined computing resource, data representing a solution to the computational task.
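The inference path can then be sketched as below, reusing the weights w from the training sketch above; the solver callables and the 0.5 threshold are assumptions for illustration, not part of the specification.

```python
# Hedged sketch of the inference path: score the incoming task with the
# trained weights and dispatch it to the chosen resource (names illustrative).
import numpy as np

def route_task(task_features, w, quantum_solver, classical_solver):
    p_quantum = 1.0 / (1.0 + np.exp(-np.dot(task_features, w)))
    resource = quantum_solver if p_quantum > 0.5 else classical_solver
    return resource(task_features)     # solution from the determined resource

solution = route_task(
    np.array([0.8, 1.0, 0.25]), w,
    quantum_solver=lambda f: "solved on quantum annealer",
    classical_solver=lambda f: "solved on classical cluster",
)
```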
Other implementations of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination thereof installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
The foregoing and other implementations can each optionally include one or more of the following features, alone or in combination. In some implementations the quantum computing resources comprise one or more of (i) quantum gate computers, (ii) adiabatic annealers, or (iii) quantum simulators. In some implementations the computational tasks comprise optimization tasks. In some implementations training the machine learning model to route received data representing computational tasks to be performed comprises training the machine learning model using (i) data representing multiple computational tasks previously performed by the system, and (ii) data representing a type of computing resource the task was routed to.
The data representing multiple computational tasks previously performed by the system comprises, for each computational task, one or more of: (i) data representing a size of an input data set associated with the computational task; (ii) data indicating whether an input data set associated with the computational task comprised static, real time or both static and real time input data; (iii) data representing an error tolerance associated with the computational task; and (iv) data representing a required level of confidence associated with the computational task.
The data representing multiple computational tasks previously performed by the system comprises data indicating a frequency of changes to input data sets associated with each computational task.
The method further comprises training the machine learning model using data representing properties associated with using the one or more quantum computing resources to solve the multiple computational tasks.
The properties associated with using the one or more quantum computing resources to solve the multiple computational tasks comprise: for each computational task, one or more of: (i) approximate qualities of solutions generated by the one or more quantum computing resources; (ii) computational times associated with solutions generated by the one or more quantum computing resources; or (iii) computational costs associated with solutions generated by the one or more quantum computing resources, and for each quantum computing resource, one or more of (i) a number of qubits available to the quantum computing resource; and (ii) a cost associated with using the quantum computing resource. The subject matter described in this specification can be implemented in particular ways so as to realize one or more of the following advantages.
For some optimization tasks, quantum computing devices may offer an improvement in computational speed compared to classical devices. For example, quantum computers may achieve an improvement in speed for tasks such as database search or evaluating NAND trees. As another example, quantum annealers may achieve an improvement in computational speed compared to classical annealers for some optimization tasks. For example, determining a global minimum or maximum of a complex manifold associated with the optimization task is an extremely challenging task. In some cases, using a quantum annealer to solve an optimization task can be an accurate and efficient alternative to using classical computing devices.
Conversely, for some optimization tasks, quantum computing devices may not offer an improvement compared to classical devices. For example, whilst quantum computing devices may offer computational speedups for some computational tasks, the costs associated with using the quantum devices to perform the computational tasks may be higher than the costs associated with using classical computing devices to perform the computational tasks. Such costs can include computational costs, i.e., the cost of resources required to build and use a computing device, and financial costs, i.e., monetary costs and fees of renting computational time on an external computing resource. Therefore, a tradeoff between the benefits of using quantum computing resources and classical computing resources exists.
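One illustrative way to make this tradeoff operational is a net-benefit score that routes to the quantum resource only when the estimated time saving outweighs the extra cost. The value-per-second weight and all inputs below are placeholder assumptions, not values from the specification.

```python
# Illustrative net-benefit score for the speed/cost tradeoff described above.
def net_benefit(t_classical_s, t_quantum_s, cost_classical, cost_quantum,
                value_per_second=0.01):
    time_saving = (t_classical_s - t_quantum_s) * value_per_second
    extra_cost = cost_quantum - cost_classical
    return time_saving - extra_cost            # route to quantum iff positive

print(net_benefit(t_classical_s=3600, t_quantum_s=60,
                  cost_classical=1.0, cost_quantum=25.0))   # 11.4 -> quantum
```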
A quantum computing machine learning module, as described in this specification, balances this tradeoff and learns optimal routings of computational tasks to classical or quantum computing resources. By learning when and how to utilize the power of quantum computing, a system implementing the quantum computing machine learning module may perform computational tasks more efficiently and/or accurately compared to systems that do not include quantum computing resources, or to systems that do not learn optimal routings of computational tasks to classical or quantum resources.
In addition, a quantum computing module, as described in this specification, can adapt over time as more efficient quantum and classical systems are introduced. For example, while current implementations of quantum computation or current quantum hardware may involve a significant classical overhead, e.g., leveraging classical computing capabilities, well-founded evidence supports that future quantum computing hardware will be able to perform exponentially more challenging tasks in less time than current quantum computing hardware or classical computing hardware. Conversely, as classical computers continue to be horizontally scalable, the additional capacity made available for computation will be factored into the machine learning to determine the best use of resources.
Also described herein is a non-transitory computer-readable medium comprising machine-executable code that, upon execution by one or more computer processors, generates an application that is executable by a digital computer comprising at least one computer processor and memory to generate one or more instructions for execution by a solver layer of a quantum-ready or quantum-enabled computing system, the solver layer comprising a common interface, to generate an output, the application comprising: a. a software module programmed or otherwise configured to accept user input from an application at an application interface, which application is executed on the digital computer; b. a software module programmed or otherwise configured to implement one or more algorithms, at an algorithms layer, that are solved heuristically or exactly at least in part depending on the requirements of the user input, wherein the one or more algorithms abstract away a complexity of the application; c. a software module programmed or otherwise configured to transform the one or more algorithms from the application space into one or more instructions in polynomial unconstrained binary optimization (PUBO) form; and d. a software module programmed or otherwise configured to execute the one or more instructions in PUBO form at the common interface of the solver layer, wherein the common interface comprises one or more polynomial unconstrained binary optimization (PUBO) solvers that provide an interface that is agnostic to quantum or classical computers.
The polynomial unconstrained binary optimization (PUBO) form is a quadratic unconstrained binary optimization (QUBO) form. In some embodiments, the one or more polynomial unconstrained binary optimization (PUBO) solvers of the common interface of the solver layer comprise one or more quadratic unconstrained binary optimization (QUBO) solvers. In some embodiments, the one or more algorithms are transformed at the algorithms layer. In some embodiments, the one or more algorithms are transformed using a binary polynomial layer. In some embodiments, the one or more algorithms are transformed by the common interface of the solver layer.
The algorithms layer comprises one or more of: max (maximum) k-quasi clique, chromatic number, graph similarity, coloring feasibility, max co-k-plex, minimum clique cover, k-clique cover feasibility, linear knapsack, and balanced partitioning. In some embodiments, the transforming of the one or more algorithms comprises the use of one or more of: a transformation of the polynomial unconstrained binary optimization (PUBO) form to a quadratic unconstrained binary optimization (QUBO) form, binary polynomial operations, and efficient search in binary space. In some embodiments, the one or more polynomial unconstrained binary optimization (PUBO) solvers comprise one or more of: D-Wave, multi-agent Tabu 1-Opt solver, Tabu 1-Opt solver, PTICM solver, path-relinking solver, and a GPU-based simulated quantum annealing solver.
Additional aspects and advantages of the present disclosure will become readily apparent to those skilled in this art from the following detailed description, wherein only illustrative embodiments of the present disclosure are shown and described. As will be realized, the present disclosure is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the disclosure.
A method and system for distributing quantum cryptographic keys among a group of user devices through a switch connected to the user devices are provided. In one aspect of the invention, a switch establishes a connection between two of a group of user devices. A Quantum Key Distribution (QKD) session is established between the two user devices to facilitate sharing of secret key material between the two user devices. Connections and QKD sessions are established for different pairs of the user devices.
In another aspect of the invention, a group of user devices and a switch are provided. Each of the user devices is configured to have a connection through the switch. The switch includes a connection establisher configured to establish a connection between pairs of the user devices according to a schedule. One of the user devices includes a QKD session manager and a secret sharer. The QKD session manager is configured to establish a QKD session with another one of the user devices via the established connection. The secret sharer is configured to obtain shared secret information with that other user device.
In another aspect of the invention, a switch is provided to establish a connection between pairs of quantum cryptographic user devices. The switch includes a connection establisher configured to establish a connection between pairs of the quantum cryptographic user devices according to a schedule. In a further aspect of the invention, a user device is provided. The user device is configured to communicate with a second user device via a QKD switch, which is configured to switch connections among a group of user devices, including the user device and the second user device, according to a schedule. The user device includes a QKD session manager and a secret sharer. The QKD session manager is configured to establish a QKD session with the second user device via the QKD switch. The secret sharer is configured to obtain shared secret information with the second user device over the QKD session passing through the QKD switch.
In another aspect of the invention, a system for distributing quantum cryptographic keys in an untrusted network is provided. The system includes means for establishing a connection between pairs of the user devices according to a schedule, means for establishing a QKD session between a pair of the user devices via the established connection, and means for agreeing on secret information during the QKD session. In a sixth aspect of the invention, a computer-readable medium is provided. The computer-readable medium has instructions stored thereon for at least one processor to perform a method. The method includes successively establishing a connection between pairs of quantum cryptographic user devices according to a schedule.
In a seventh aspect of the invention, a computer-readable medium is provided. The computer-readable medium has instructions stored thereon for at least one processor to perform a method. The method includes establishing a QKD session between a first user device and a second user device via a QKD switch, which is configured to switch connections among a group of user devices, including the first and the second user devices, according to a schedule, and agreeing on secret information derived from the QKD session between the first user device and the second user device.
BRIEF DESCRIPTION OF THE DIAGRAM
FIG. 1A: depicts an example system for performing computational tasks.
FIG. 1B: depicts an example visualization of a global search space and local search space.
FIG. 2: depicts an example quantum computing machine learning module.
FIG. 3: is a flow diagram of an example process for training a machine learning model to route received computational tasks in a system including one or more quantum computing resources.
FIG. 4: is a flow diagram of an example process for obtaining a solution to a computational task using a system including one or more quantum computing resources.
DESCRIPTION OF THE INVENTION
FIG. 1: shows one form of a conventional key distribution process. As shown in FIG. 1, for a party, Bob, to decrypt ciphertext encrypted by a party, Alice, Alice or a third party must share a copy of the key with Bob. This distribution process can be implemented in a number of conventional ways including the following: 1) Alice can select a key and physically deliver the key to Bob; 2) a third party can select a key and physically deliver the key to Bob; 3) if Alice and Bob both have an encrypted connection to a third party, the third party can deliver a key on the encrypted links to Alice and Bob; 4) if Alice and Bob have previously used an old key, Alice can transmit a new key to Bob by encrypting the new key with the old; and 5) Alice and Bob may agree on a shared key via a one-way mathematical algorithm, such as Diffie-Hellman key agreement.
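As a concrete instance of method 5, a minimal Diffie-Hellman exchange is sketched below; the 32-bit prime is for illustration only and offers no real security.

```python
# Minimal Diffie-Hellman key agreement (method 5 above). The 32-bit prime is
# for illustration only; real deployments use large, standardized groups.
import secrets

p, g = 4294967291, 5                  # toy prime modulus and base (NOT secure)
a = secrets.randbelow(p - 2) + 1      # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1      # Bob's secret exponent
A = pow(g, a, p)                      # Alice -> Bob over the public channel
B = pow(g, b, p)                      # Bob -> Alice over the public channel
assert pow(B, a, p) == pow(A, b, p)   # both sides derive the same shared key
```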
All of these distribution methods are vulnerable to interception of the distributed key by an eavesdropper, Eve, or by Eve "cracking" the supposedly one-way algorithm. Eve can eavesdrop and intercept or copy a distributed key and then subsequently decrypt any intercepted ciphertext that is sent between Bob and Alice. In conventional cryptographic systems, this eavesdropping may go undetected, with the result being that any ciphertext sent between Bob and Alice is compromised.
To combat these inherent deficiencies in the key distribution process, researchers have developed a key distribution technique called quantum cryptography. Quantum cryptography employs quantum systems and applicable fundamental principles of physics to ensure the security of distributed keys. Heisenberg's uncertainty principle mandates that any attempt to observe the state of a quantum system will necessarily induce a change in the state of the quantum system. Thus, when very low levels of matter or energy, such as individual photons, are used to distribute keys, the techniques of quantum cryptography permit the key distributor and receiver to determine whether any eavesdropping has occurred during the key distribution. Quantum cryptography, therefore, prevents an eavesdropper, like Eve, from copying or intercepting a key that has been distributed from Alice to Bob without a significant probability of Bob's or Alice's discovery of the eavesdropping.
An existing quantum key distribution (QKD) scheme involves a quantum channel, through which Alice and Bob send keys using polarized or phase encoded photons, and a public channel, through which Alice and Bob send ordinary messages. Since these polarized or phase encoded photons are employed for QKD, they are often termed QKD photons. The quantum channel is a path, such as through air or an optical fiber, that attempts to minimize the QKD photons' interaction with the environment. The public channel may comprise a channel on any type of communication network, such as a Public Switched Telephone network, the Internet, or a wireless network.
An eavesdropper, Eve, may attempt to measure the photons on the quantum channel. Such eavesdropping, however, will induce a measurable disturbance in the photons in accordance with the Heisenberg uncertainty principle. Alice and Bob use the public channel to discuss and compare the photons sent through the quantum channel. If, through their discussion and comparison, they determine that there is no evidence of eavesdropping, then the key material distributed via the quantum channel can be considered completely secret.
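The sift-and-compare procedure described above can be illustrated with a toy BB84-style simulation; the eavesdropping model here is a deliberate simplification for clarity, not a statement about real optics.

```python
# Toy BB84-style sifting: Alice sends bits in random bases, Bob measures in
# random bases, and the public-channel comparison keeps only matching-basis
# positions. Illustrative only; real QKD involves photons, not lists.
import random

def bb84_sift(n=32, eavesdrop=False):
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice("+x") for _ in range(n)]
    bob_bases   = [random.choice("+x") for _ in range(n)]
    bob_bits = []
    for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop and random.random() < 0.5:
            bit = random.randint(0, 1)  # Eve's measurement disturbs the photon
        # A wrong-basis measurement yields a random outcome.
        bob_bits.append(bit if ab == bb else random.randint(0, 1))
    key_a = [b for b, pa, pb in zip(alice_bits, alice_bases, bob_bases) if pa == pb]
    key_b = [b for b, pa, pb in zip(bob_bits,  alice_bases, bob_bases) if pa == pb]
    return key_a, key_b   # comparing a sample of these reveals eavesdropping

key_a, key_b = bb84_sift()
assert key_a == key_b     # no eavesdropper: the sifted keys agree
```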
FIG. 1A: depicts an example system 100 for performing computational tasks. The system 100 is an example of a system implemented as computer programs on one or more computers in one or more locations, in which the systems, components, and techniques described below can be implemented.
The system 100 for performing computational tasks is configured to receive as input data representing a computational task to be solved, e.g., input data 102. For example, in some cases the system 100 may be configured to solve multiple computational tasks, e.g., including optimization tasks, simulation tasks, arithmetic tasks, database search, machine learning tasks, or data compression tasks. In these cases, the input data 102 may be data that specifies one of the multiple computational tasks. The input data 102 representing the computational task to be solved may specify one or more properties of the computational task. For example, in cases where the computational task is an optimization task, the input data 102 may include data representing parameters associated with the optimization task, e.g., parameters over which an objective function representing the optimization task is to be optimized, and one or more values of the parameters.
In some cases, the input data 102 may include static input data and dynamic input data, e.g., real-time input data. As an example, the input data 102 may be data that represents the task of optimizing the design of a water network in order to optimize the amount of water distributed by the network. In this example, the input data 102 may include static input data representing one or more properties of the water network, e.g., a total number of available water pipes, a total number of available connecting nodes or a total number of available water tanks.
In addition, the input data 102 may include data representing one or more parameters associated with the optimization task, e.g., level of water pressure in each pipe, level of water pressure at each connecting node, height of water level in each water tank, concentration of chemicals in the water throughout the network, water age or water source. Furthermore, the input data 102 may include dynamic input data representing one or more current properties or values of parameters of the water network, e.g., a current number of water pipes in use, a current level of water pressure in each pipe, a current concentration of chemicals in the water, or a current temperature of the water.
The input data 102 may further include data specifying one or more task objectives associated with the computational task. The task objectives may include local task objectives and global task objectives. Local task objectives may include local targets to be considered when solving the computational task, e.g., local objectives of a solution to the computational task. For example, local objectives may include constraints on values of subsets of computational task variables. Global task objectives may include global targets to be considered when solving the computational task, e.g., global objectives of a solution to the computational task.
For example, continuing the above example of the task of optimizing a water network, the input data 102 may further include data specifying local task objectives such as a constraint on the concentration of chemicals in the water, e.g., constraining the chemical concentration to between 0.2% and 0.5%, and on the number of water pipes in use, e.g., constraining the total number of water pipes to less than 1000. Another example local task objective may be to optimize a particular portion of the water network. In addition, the input data 102 may further include data specifying global task objectives such as one or more global targets, e.g., a target of keeping water wastage to below 2% or a target of distributing at least 10 million gallons of water per day.
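For illustration, the static input data, dynamic input data, and local and global task objectives of the water-network example might be packaged as follows; every field name is an assumed schema, not one given in the specification.

```python
# How the water-network task input might be packaged; the schema below is an
# assumption for illustration only.
task_input = {
    "static": {"available_pipes": 1200, "connecting_nodes": 340, "water_tanks": 12},
    "dynamic": {"pipes_in_use": 950, "chemical_concentration_pct": 0.35},
    "local_objectives": [
        {"param": "chemical_concentration_pct", "min": 0.2, "max": 0.5},
        {"param": "pipes_in_use", "max": 1000},
    ],
    "global_objectives": [
        {"target": "water_wastage_pct", "max": 2.0},
        {"target": "gallons_per_day", "min": 10_000_000},
    ],
}
```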
The data specifying one or more task objectives associated with the computational task may be stored in the system 100 for performing computational tasks, e.g., in task objective data store 112. For example, as described above, the system 100 for performing computational tasks may be configured to solve multiple computational tasks and the input data 102 may be data that specifies one of the multiple computational tasks. In this example, the system 100 for performing computational tasks may be configured to store task objectives corresponding to each computational task that it is configured to perform. For convenience, data specifying one or more task objectives associated with the computational task is described as being stored in task objective data store 112 throughout the remainder of this document.
The system 100 for performing computational tasks is configured to process the received input data 102 to generate output data 104. In some implementations, the generated output data 104 may include data representing a solution to the computational task specified by the input data 102, e.g., a global solution to the computational task based on one or more global task objectives 112 b.
The output data 104 may include data representing one or more local solutions to the computational task, e.g., one or more initial solutions to the optimization task that are based on local task objectives 112 a and global task objectives 112 b. Local solutions to the optimization task may include solutions to sub tasks of the optimization task. For example, local solutions may include solutions that are optimal over a subset of the parameters associated with the optimization task, e.g., where the subset is specified by the local task objectives 112 a. That is, local solutions may include solutions that are optimal over a subspace, or local space, of a global search space for the optimization task. For example, a local space may be the result of a projection of a multi-dimensional spline representing the global search space to a two-dimensional base space. An example visualization of a global search space and local space 150 is shown in FIG. 1B. In FIG. 1B, multi-dimensional spline 152 represents a global search space, and two-dimensional base space 154 represents a local space.
As another example, in cases where the optimization task is a separable task, e.g., a task that may be written as the sum of multiple sub tasks, local solutions may include optimal solutions to each of the sub tasks in the sum of sub tasks, e.g., where the sub tasks are specified by the local task objectives 112 a.
For example, continuing the above example of the task of optimizing a water network, the output data 104 may include data representing a globally optimal configuration (with respect to global task objectives, e.g., wastage targets and distribution targets) of the above described parameters associated with the water network optimization task. Alternatively, or in addition, the output data 104 may include data representing multiple local solutions to the water network optimization task, e.g., data specifying an optimal number of water pipes to use, an associated water pressure in each pipe, or a concentration of chemicals in the water flowing through the network. In some implementations, parameter values specified by local solutions may be the same as parameter values specified by a global solution. In other implementations, parameter values specified by local solutions may differ from parameter values specified by a global solution, e.g., a local solution may suggest a chemical concentration of 0.4%, whereas a global solution may suggest a chemical concentration of 0.3%.
The output data 104 may be used to initiate one or more actions associated with the optimization task specified by the input data 102, e.g., actions 138. For example, continuing the above example of the task of optimizing a water network, the output data 104 may be used to adjust one or more parameters in the water network, e.g., increase or decrease a current water chemical concentration, increase or decrease a number of water pipes in use, or increase or decrease one or more water pipe pressures.
Optionally, the system 100 for performing computational tasks may include an integration layer 114 and a broker 136. The integration layer 114 may be configured to manage received input data, e.g., input data 102. For example, the integration layer 114 may manage data transport connectivity, manage data access authorization, or monitor data feeds coming into the system 100.
The broker 136 may be configured to receive output data 104 from the system 100 for performing optimization tasks and to generate one or more actions to be taken, e.g., actions 138. The actions may include local actions, e.g., adjustments to a subset of optimization parameters, which contribute towards achieving local and global targets of the optimization task.
The system 100 for performing computational tasks includes a computation engine 106. The computation engine 106 is configured to process the received data to obtain solutions to the computational task. The obtained solutions may include a global solution to the computational task that is based on one or more global task objectives 112 b. Alternatively or in addition, the obtained solutions may include one or more initial solutions to the optimization task that are based on the one or more local task objectives 112 a, e.g., one or more local solutions to the computational task. In some implementations, the computation engine 106 may process received input data to obtain one or more initial solutions to the optimization task that are based on local task objectives 112 a, then further process the one or more initial solutions to the optimization task to generate a global solution to the optimization task based on the global task objectives 112 b.
The computation engine 106 may be configured to process received data using one or more computing resources included in the computation engine 106 or otherwise included in the system 100 for performing computational tasks. In other implementations, the computation engine 106 may be configured to process received data using one or more external computing resources, e.g., additional computing resources 110 a-110 d. For example, the computation engine 106 may be configured to analyze the received input data 102 representing the computational task to be solved and the data representing corresponding task objectives 112 a and 112 b, and outsource one or more computations associated with solving the computational task based on the task objectives 112 a and 112 b to the additional computing resources 110 a-110 d.
The additional computing resources 110 a-110 d may include quantum annealer computing resources, e.g., quantum annealer 110 a. A quantum annealer is a device configured to perform quantum annealing, a procedure for finding the global minimum of a given objective function over a given set of candidate states using quantum tunneling. Quantum tunneling is a quantum mechanical phenomenon by which a quantum mechanical system overcomes localized barriers in the energy landscape that cannot be overcome by classically described systems.
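To make the annealing objective concrete, the following is a minimal sketch, not taken from the patent: a tiny QUBO (quadratic unconstrained binary optimization) objective of the kind typically submitted to a quantum annealer such as resource 110 a, with hypothetical coefficients, minimized here by exhaustive classical search over the candidate states. An annealer would search the same energy landscape via quantum tunneling.

```python
# Minimal sketch: a 3-variable QUBO objective x^T Q x over binary x, with
# hypothetical coupling/bias values, minimized by brute-force enumeration.
import itertools
import numpy as np

Q = np.array([[-1.0, 2.0, 0.0],
              [ 0.0, -1.0, 2.0],
              [ 0.0,  0.0, -1.0]])  # hypothetical QUBO coefficients

best_x, best_e = None, float("inf")
for bits in itertools.product([0, 1], repeat=Q.shape[0]):
    x = np.array(bits)
    e = float(x @ Q @ x)          # energy of this candidate state
    if e < best_e:
        best_x, best_e = x, e

print(f"global minimum {best_e} at x = {best_x}")
```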
The additional computing resources 110 a-110 d may include one or more quantum gate processors, e.g., quantum gate processor 110 b. A quantum gate processor includes one or more quantum circuits, i.e., models for quantum computation in which a computation is performed using a sequence of quantum logic gates, operating on a number of qubits (quantum bits).
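As an illustration of the gate model, the following sketch assumes the open-source Qiskit SDK, which the patent does not name: a two-qubit circuit built from a short sequence of quantum logic gates of the kind a quantum gate processor such as 110 b would execute.

```python
# Minimal sketch, assuming Qiskit: a two-qubit entangling circuit.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                     # Hadamard gate puts qubit 0 into superposition
qc.cx(0, 1)                 # CNOT gate entangles qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])  # read both qubits out into classical bits
print(qc.draw())            # text rendering of the gate sequence
```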
The additional computing resources 110 a-110 d may include one or more quantum simulators, e.g., quantum simulator 110 c. A quantum simulator is a quantum computer that may be programmed to simulate other quantum systems and their properties. Example quantum simulators include experimental platforms such as systems of ultracold quantum gases, trapped ions, photonic systems or superconducting circuits. The additional computing resources 110 a-110 d may include one or more classical processors, e.g., classical processor 110 d. In some implementations, the one or more classical processors, e.g., classical processor 110 d, may include supercomputers, i.e., computers with high levels of computational capacity. For example, the classical processor 110 d may represent a computational system with a large number of processors, e.g., a distributed computing system or a computer cluster.
The system 100 for performing computational tasks includes a machine learning module 132 that is configured to learn which, if any, computations to route to the additional computing resources 110 a-110 d. For example, the machine learning module 132 may include a machine learning model that may be trained using training data to determine when and where to outsource certain computations. The training data may include labeled training examples, e.g., a machine learning model input paired with a respective known machine learning model output, where each training example includes data from multiple resources, as described in more detail below.
The machine learning model may process each machine learning model input to generate a respective machine learning model output, compute a loss function between the generated machine learning model output and the known machine learning model output, and backpropagate gradients to adjust machine learning model parameters from initial values to trained values. An example machine learning module is described in more detail below with reference to FIG. 2. Training a machine learning model to route received computations to one or more external computing resources is described in more detail below with reference to FIG. 3.
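A minimal sketch of this training step, assuming PyTorch and hypothetical task features and resource labels (none of which are specified in the patent): the model output is a score per candidate resource, the known output is the resource a past task was routed to, and gradients are backpropagated to adjust the parameters.

```python
# Minimal sketch, assuming PyTorch: one training step of a routing classifier.
import torch
import torch.nn as nn

NUM_FEATURES = 4   # hypothetical: input size, real-time flag, error tolerance, confidence
NUM_RESOURCES = 4  # hypothetical: annealer 110a, gate 110b, simulator 110c, classical 110d

model = nn.Sequential(nn.Linear(NUM_FEATURES, 16), nn.ReLU(),
                      nn.Linear(16, NUM_RESOURCES))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(32, NUM_FEATURES)         # stand-in model inputs
labels = torch.randint(0, NUM_RESOURCES, (32,))  # known routing decisions

logits = model(features)        # generated machine learning model output
loss = loss_fn(logits, labels)  # loss between generated and known outputs
optimizer.zero_grad()
loss.backward()                 # backpropagate gradients
optimizer.step()                # adjust parameters toward trained values
```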
The system 100 for performing computational tasks includes a cache 124. The cache 124 is configured to store different types of data relating to the system 100 and to computational tasks performed by the system 100. For example, the cache 124 may be configured to store data representing multiple computational tasks previously performed by the system 100. In some cases, the cache 124 may be further configured to store previously generated solutions to computational tasks that the system 100 has previously solved. In some cases, this may include solutions to a same computational task, e.g., with different task objectives or different dynamic input data. In other cases, this may include solutions to different computational tasks. The cache 124 may be configured to store previously generated solutions to previously received computational tasks from within a specified time frame of interest, e.g., solutions generated within the last 24 hours.
The cache 124 may also be configured to label previously generated solutions. For example, a previously generated solution may be labelled as a successful solution if the solution was generated within a predetermined acceptable amount of time, and/or if a cost associated with generating the solution was lower than a predetermined threshold. Conversely, a previously generated solution may be labelled as an unsuccessful solution if the solution was not generated within a predetermined acceptable amount of time, and/or if a cost associated with generating the solution was higher than a predetermined threshold. Labelling the solution as successful or unsuccessful may include storing data representing the cost associated with generating the solution or data representing a time taken to generate the solution. Such information may be provided for input into the machine learning module 132. In some cases, stored unsuccessful data may be cleaned from the cache 124, e.g., to free storage space for data representing newly generated solutions.
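A minimal sketch of this labelling rule; the time and cost thresholds below are hypothetical placeholders, not values from the patent.

```python
# Minimal sketch: label a cached solution as successful or unsuccessful.
MAX_TIME_S = 60.0  # hypothetical predetermined acceptable generation time
MAX_COST = 10.0    # hypothetical predetermined cost threshold

def label_solution(generation_time_s: float, cost: float) -> str:
    """Apply the time and cost thresholds to a previously generated solution."""
    ok = generation_time_s <= MAX_TIME_S and cost <= MAX_COST
    return "successful" if ok else "unsuccessful"

print(label_solution(generation_time_s=12.5, cost=3.0))   # -> successful
print(label_solution(generation_time_s=95.0, cost=3.0))   # -> unsuccessful
```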
The cache 124 may also be configured to store system input data associated with the multiple computational tasks previously performed by the system. For example, the input data may include data representing a type of computing resource that each computational task was routed to. In addition, the input data associated with the multiple computational tasks previously performed by the system may further include, for each computational task, one or more of: (i) data representing a size of an input data set associated with the computational task, (ii) data indicating whether an input data set associated with the computational task comprised static, real time or both static and real time input data, (iii) data representing an error tolerance associated with the computational task, or (iv) data representing a required level of confidence associated with the computational task. In some implementations, the cache 124 may further store data indicating a frequency of changes to input data sets associated with each computational task. Examples of the different types of input data stored in the cache 124 are described in more detail below with reference to FIG. 3.
Optionally, the system 100 for performing computational tasks may include a monitoring module 128. The monitoring module 128 is configured to monitor interactions between, and transactions to and from, the one or more additional computing resources 110 a-d. For example, the monitoring module 128 may be configured to detect failed or stuck calls to one or more of the additional computing resources 110 a-d. Example failures that can cause a call to one or more of the additional computing resources 110 a-d to fail or get stuck include issues with a transport layer included in the system 100, i.e., issues with data being moved through the cloud, security login failures, or issues with the additional computing resources 110 a-d themselves, such as their performance or availability.
The monitoring module 128 may be configured to process detected failed or stuck calls to one or more of the additional computing resources 110 a-d and determine one or more corrective actions to be taken by the system 100 in response to the failed or stuck calls. Alternatively, the monitoring module 128 may be configured to notify other components of the system 100, e.g., the computation engine 106 or machine learning module 132, of detected failed or stuck calls to one or more of the additional computing resources 110 a-d.
For example, if one or more computations are outsourced to a particular quantum computing resource and that resource suddenly becomes unavailable or is processing outsourced computations too slowly, the monitoring module 128 may be configured to notify relevant components of the system 100, e.g., the machine learning module 132. The machine learning module 132 may then be configured to determine one or more suggested corrective actions, e.g., instructing the system 100 to outsource the computation to a different computing resource or to retry the computation using the same computing resource. Generally, the suggested corrective actions may include actions that keep the system 100 successfully operating in real time, e.g., even when resource degradations outside of the system 100 are occurring.
Optionally, the system 100 for performing computational tasks may include a security component 130. The security component 130 is configured to perform operations relating to the security of the system 100. Example operations include, but are not limited to, preventing system intrusions, detecting system intrusions, providing authentication to external systems, encrypting data received by and output by the system 100, and preventing and/or remedying denial of service (DoS) attacks.
Optionally, the system 100 for performing computational tasks may include a subgraph module 122. The subgraph module 122 may be configured to partition a computational task into multiple sub tasks. For example, the subgraph module 122 may be configured to analyze data specifying a computational task to be solved, and to map the computational task to multiple minimally connected subgraphs. The minimally connected subgraphs may be provided to the computation engine 106 for processing, e.g., to be routed to the additional computing resources 110 a-110 d via the machine learning module 132.
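A minimal sketch of this kind of partitioning, assuming the networkx library and a stand-in task graph; the patent does not specify a partitioning algorithm, so greedy modularity clustering is used here as one way to obtain weakly coupled ("minimally connected") subgraphs.

```python
# Minimal sketch, assuming networkx: split a task graph into subgraphs that
# keep few edges between partitions, so sub tasks can be routed independently.
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()  # stand-in for a graph encoding a computational task
parts = community.greedy_modularity_communities(G)  # tends to minimize cross edges

subgraphs = [G.subgraph(nodes).copy() for nodes in parts]
for i, sg in enumerate(subgraphs):
    print(f"sub task {i}: {sg.number_of_nodes()} nodes, {sg.number_of_edges()} edges")
```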
During operation, the computation engine 106 may be configured to query the cache 124 to determine whether existing solutions to a received computational task exist in the cache. If it is determined that existing solutions do exist, the computation engine 106 may retrieve the solutions and provide them directly as output, e.g., as output data 104. If it is determined that existing solutions do not exist, the computation engine 106 may process the received data as described above.
The system 100 may be configured to determine whether a solution to a similar optimization task is stored in the cache 124. For example, the system 100 may be configured to compare a received optimization task to one or more other optimization tasks, e.g., optimization tasks that have previously been received by the system 100, and determine one or more respective optimization task similarity scores. If one or more of the determined similarity scores exceed a predetermined similarity threshold, the system 100 may determine that the optimization task is similar to another optimization task, and may use a previously obtained solution to that task as an initial solution to the received optimization task, or as a final solution to the received optimization task. In some cases similarity thresholds may be predetermined as part of an initial learning and parameter configuration process.
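A minimal sketch of this similarity check, assuming tasks are summarized as numeric feature vectors; the cosine measure and the threshold value are hypothetical choices, not taken from the patent.

```python
# Minimal sketch: return a cached solution if a stored task is similar enough.
import numpy as np

SIMILARITY_THRESHOLD = 0.9  # hypothetical; set during initial configuration

def similarity(a, b):
    """Cosine similarity between two task feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_similar_solution(task_features, cached_tasks):
    """Return a cached solution whose task similarity score exceeds the threshold."""
    for cached_features, solution in cached_tasks:
        if similarity(task_features, cached_features) > SIMILARITY_THRESHOLD:
            return solution  # usable as an initial or final solution
    return None

cache = [([1.0, 0.0, 2.0], "solution-A"), ([0.0, 1.0, 0.0], "solution-B")]
print(find_similar_solution([1.0, 0.1, 2.0], cache))  # -> solution-A
```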
Optionally, the system 100 for performing computational tasks may include a forecasting module 120. The forecasting module 120 forecasts future global solutions and their impact on data entering the system 100, e.g., their impact on future input data 102. In some implementations the forecasting module 120 may be configured to forecast future global solutions within a remaining time of a particular time frame of interest, e.g., for the next 10 hours of a current 24-hour period.
For example, the forecasting module 120 may include forecast data from historical periods of time. Forecast data may be compared to current conditions and optimization task objectives to determine whether a current optimization task and corresponding task objectives are similar to previously seen optimization tasks and corresponding task objectives. For example, the system 100 may include forecast data for a period of interest, e.g., a 24-hour period of interest on a particular day of the week. In this example, on a similar day of the week at a later time, the system 100 may use forecast data for the period of interest to determine whether conditions and optimization task objectives for the current period of interest are similar to the conditions and optimization task objectives for the previous period of interest. If so, the system 100 may leverage previous results of previously seen optimization tasks as future forecast data points until the forecast data points are replaced by real results from current calculations.
The forecasting module 120 may be configured to receive real time input data that may be used to forecast future global solutions and their impact on data entering the system 100. For example, current weather conditions may be used to forecast future global solutions to optimization tasks related to water network optimization or precision agriculture.

Optionally, the system 100 for performing computational tasks may include a data quality module 116. The data quality module 116 is configured to receive the input data 102 and to analyze the input data 102 to determine a quality of the input data 102. For example, the data quality module 116 may score the received input data 102 with respect to one or more data quality measures, e.g., completeness, uniqueness, timeliness, validity, accuracy or consistency.
For example, in some implementations the system 100 may be configured to receive a data feed from an internet of things (IoT) sensor, e.g., that tracks the position of an object or entity within an environment. If the data quality module 116 determines that one of these objects or entities has moved an unrealistic distance in a particular period of time, the data quality module 116 may determine that the quality of the received data feed is questionable and that the data feed may need to be further analyzed or suspended.
Each measure may be associated with a respective predetermined score threshold that may be used to determine whether data is of acceptable quality or not. For example, the data quality module 116 may determine that the input data 102 is of an acceptable quality if the scored input data 102 exceeds a majority of the predetermined score thresholds. If it is determined that the input data 102 is of an acceptable quality, the data quality module 116 may be configured to provide the input data 102 to an aggregation module 118. The aggregation module 118 is configured to receive repeated data inputs, e.g., including input data 102, and to combine the data inputs. The aggregation module 118 may be configured to provide combined data inputs to other components of the system 100. For example, in some implementations the system 100 may include an IoT sensor that receives input data readings every 500 ms. Typically, the system 100 or an optimization task corresponding to the input data readings may only require that input data readings be received every 5 seconds. Therefore, in this example, the aggregation module 118 may be configured to combine and aggregate the input readings in order to generate a simpler data input. In some cases, this may improve the efficiency of downstream calculations performed by the system 100.
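A minimal sketch of this aggregation, assuming pandas and a hypothetical sensor feed: readings arriving every 500 ms are combined into one value per 5-second window, producing the simpler data input described above.

```python
# Minimal sketch, assuming pandas: downsample a 500 ms IoT feed to 5 s means.
import numpy as np
import pandas as pd

idx = pd.date_range("2020-08-28", periods=100, freq="500ms")  # hypothetical feed
readings = pd.Series(np.random.rand(100), index=idx)          # stand-in sensor values

aggregated = readings.resample("5s").mean()  # one combined input per 5-second window
print(aggregated.head())
```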
If it is determined that the input data 102 is not of an acceptable quality, the data quality module 116 may be configured to instruct the system 100 to process an alternative data input, e.g., a data input that is an average of previous data inputs or extrapolated from the current data stream. Alternatively, if the accuracy of a particular data input is determined to be critical to the system's ability to perform one or more computations, the data quality module 116 may be configured to enter an error condition. In these examples, the data quality module 116 may learn when and how to instruct the system 100 to process alternative data inputs through a machine learning training process.
Optionally, the system 100 may include an analytics platform 126. The analytics platform 126 is configured to process received data, e.g., input data 102 or data representing one or more local or global solutions to an optimization task, and provide analytics and actionable insights relating to the received data.
Optionally, the system 100 may include a workflow module 134. The workflow module 134 may be configured to provide a user interface for assigning values to optimization task parameters, defining optimization task objectives, and managing the learning process by which the system 100 may be trained. The workflow module 134 may be further configured to allow users of the system 100 to coordinate on complex objective-related tasks so that the system 100 may be used efficiently. The workflow module 134 may also be configured to allow for various levels of role-based access controls. For example, the workflow module 134 may be configured to allow a junior team member to modify some of the task objectives while preventing them from modifying critical ones. In this manner, the workflow module 134 may help ensure that critical undesirable actions, such as the opening of large water mains in a water network, are avoided.
FIG. 2 depicts an example machine learning module 132, as introduced above with reference to FIG. 1A. The example machine learning module 132 includes a training example generator 202, a machine learning model 204 and a database 206. As described above with reference to FIG. 1A, the machine learning module 132 is in communication with at least one or more additional computing resources 110 a-110 d, a cache 124 and a subgraph component 122.
The machine learning model 204 is a predictive model that may be trained to perform one or more machine learning tasks, e.g., classification tasks. For example, the machine learning model 204 may be an artificial neural network, e.g., a deep neural network such as a recurrent neural network, a decision tree, a support vector machine or a Bayesian network. The machine learning module 132 may support multiple software implementations of the machine learning model 204 depending on the environment scale and runtime, e.g., the model could be written in C for a large distributed environment, in R for a cloud-based implementation, and in Python for a small environment.
The machine learning module 132 is configured to train the machine learning model 204 to route computations or sub-computations received by the machine learning module 132 to the one or more additional computing resources 110 a-110 d. The machine learning module 132 is configured to train the machine learning model 204 using a set of training examples generated by the training example generator 202 and using data stored in the database 206.
The database 206 is configured to store data representing properties associated with using the one or more additional computing resources 110 a-110 d, e.g., one or more quantum computing resources, to solve the multiple computational tasks. For example, properties of using the one or more additional computing resources 110 a-110 d to solve the multiple computational tasks may include, for each computational task, one or more of (i) approximate qualities of solutions generated by the one or more additional computing resources 110 a-110 d, (ii) computational times associated with solutions generated by the one or more additional computing resources 110 a-110 d, or (iii) computational costs associated with solutions generated by the one or more additional computing resources 110 a-110 d. In the cases where the additional computing resources are quantum computing resources, the properties of using the one or more quantum computing resources to solve multiple computational tasks may include, for each quantum computing resource, one or more of (i) a number of qubits available to the quantum computing resource, and (ii) a cost associated with using the quantum computing resource.
The training example generator 202 may be configured to access the database 206 and the cache 124 to generate the set of training examples. For example, the training example generator may be configured to generate a set of training examples using (i) data representing multiple computational tasks previously performed by the system, (ii) input data for the multiple computational tasks previously performed by the system, including data representing a type of computing resource each task was routed to, and (iii) data representing properties associated with using the one or more quantum computing resources to solve the multiple computational tasks. A process for training a machine learning model 204 to route received computational tasks or sub tasks to one or more additional computing resources 110 a-110 d is described in more detail below with reference to FIG. 3.
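A minimal sketch of how the training example generator 202 might join cache rows with resource properties from the database 206; every field name below is a hypothetical placeholder, not a schema given in the patent.

```python
# Minimal sketch: pair task features with the resource each task was routed to.
def generate_training_examples(cache_rows, resource_properties):
    """Build (features, label) pairs from cached tasks and resource properties."""
    examples = []
    for row in cache_rows:
        props = resource_properties[row["resource"]]
        features = [
            float(row["input_size"]),          # size of the input data set
            1.0 if row["real_time"] else 0.0,  # static vs. real-time input flag
            float(row["error_tolerance"]),     # task error tolerance
            float(props["cost_per_run"]),      # property of the resource used
        ]
        examples.append((features, row["resource"]))  # label = routed resource
    return examples

cache_rows = [{"resource": "quantum_annealer_110a", "input_size": 128,
               "real_time": True, "error_tolerance": 0.05}]
resource_properties = {"quantum_annealer_110a": {"cost_per_run": 2.0}}
print(generate_training_examples(cache_rows, resource_properties))
```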
Once the machine learning model 204 has been trained to route received computational tasks to the one or more additional computing resources 110 a-110 d, during operation (A), the machine learning module 132 is configured to receive input data 102 specifying a computational task to be solved. Optionally, the input data 102 may further include data representing one or more properties of the computational task and parameters of the computational task, as described above with reference to FIG. 1A. The input data may include static data and dynamic data. In some implementations the machine learning module 132 may be configured to receive the input data 102 directly, e.g., in a form in which the input data 102 was provided to the system 100 for performing computational tasks as described above with reference to FIG. 1A. In other implementations the machine learning module 132 may be configured to receive the input data 102 from another component of the system 100 for performing computational tasks, e.g., from an integration layer 114 or data quality module 116.
The machine learning module 132 may be configured to partition the computational task into one or more sub tasks. For example, as described above with reference to FIG. 1A, the machine learning module may be in data communication with a subgraph component 122 of the system 100 for performing computational tasks, and may be configured to provide the subgraph component 122 with data representing the computational task, and to receive data representing multiple minimally connected subgraphs representing sub tasks of the computational task.
The machine learning module 132 is configured to provide data representing the computational task or computational sub tasks to the trained machine learning model 204. The machine learning model 204 is configured to process the received data and to determine which of the one or more additional computing resources 110 a-110 d to route the received data representing the computational task or sub tasks to. Although not illustrated in FIG. 2, in some implementations the machine learning model 204 may determine that the received data should not be routed to the additional computing resources 110 a-110 d, and that the computation engine 106 of FIG. 1A should process the received data in order to obtain a solution to the computational task or sub tasks.
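A minimal sketch of this routing decision; the resource names mirror 110 a-110 d, while the score function and the keep-local cutoff are hypothetical stand-ins for the trained model 204.

```python
# Minimal sketch: dispatch a task to the highest-scoring resource, or keep it
# local (return None) so the computation engine 106 processes it instead.
RESOURCES = ["quantum_annealer_110a", "quantum_gate_processor_110b",
             "quantum_simulator_110c", "classical_processor_110d"]

def route(score_fn, task_features):
    """Pick the resource whose score is highest, or None to keep the task local."""
    scores = score_fn(task_features)
    best = max(range(len(RESOURCES)), key=lambda i: scores[i])
    if scores[best] < 0.25:  # hypothetical cutoff: no resource is a clear fit
        return None
    return RESOURCES[best]

# Usage with a dummy scorer that favours the quantum simulator:
print(route(lambda f: [0.1, 0.1, 0.7, 0.1], {"size": 10}))
```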
During operation (B), the machine learning model 204 is configured to provide the determined additional computing resource or resources with instructions for performing the respective computational task or computational sub tasks, e.g., data 208. For example, the machine learning model 204 may determine that a received optimization task should be routed to a quantum annealer, e.g., quantum annealer 110 a. In this example, during operation (B), the machine learning model 204 may provide the quantum annealer 110 a with instructions for performing the optimization task. As another example, the machine learning model 204 may determine that a received simulation task should be routed to a quantum simulator, e.g., quantum simulator 110 c. In this example, during operation (B), the machine learning model 204 may provide the quantum simulator 110 c with instructions for performing the simulation task. In some implementations, the machine learning model may provide multiple determined additional computing resources with instructions for performing multiple respective computational tasks or computational sub tasks in parallel.
During operation (C), the machine learning module 132 is configured to receive data representing a solution to the computational task, e.g., data 210, and data representing properties of using the corresponding computing resource to solve the computational task, e.g., data 212. For example, data representing properties of using the corresponding computing resource to solve the computational task may include data representing an approximate quality of the generated solution, a computational time associated with the generated solution, or a computational cost associated with the generated solution. In some implementations, the machine learning model may receive data representing multiple solutions to multiple computational tasks and data representing properties of using the corresponding computing resources to solve the multiple computational tasks in parallel.
During operation (D), the machine learning module 132 may be configured to directly provide data representing an obtained solution to the computational task as output, e.g., as output data 210. In other implementations, the machine learning module 132 may be configured to first process received data representing solutions to sub tasks of the computational task in order to generate an overall solution to the computational task. The machine learning module 132 may then provide data representing the overall solution to the computational task as output. The data representing the obtained solution to the computational task may be provided as output from the system 100, or to a broker component 136, as described above with reference to FIG. 1A.
In addition, during operation (D), the machine learning module 132 may be configured to provide the data representing properties of using the corresponding computing resource to solve the computational task, e.g., data 212, to the cache 124. In this manner, the cache 124 may be regularly updated and used to generate updated training examples, e.g., for retraining or fine tuning the machine learning model 204.
FIG. 3 is a flow diagram of an example process 300 for training a machine learning model to route received computational tasks in a system including one or more quantum computing resources. For example, the system may include one or more of (i) quantum gate computers, (ii) adiabatic annealers, or (iii) quantum simulators. In some examples the system may further include one or more classical computing resources, e.g., one or more classical processors or supercomputers. For convenience, the process 300 will be described as being performed by a system of one or more computers located in one or more locations. For example, a machine learning module, e.g., the machine learning module of FIG. 1A, appropriately programmed in accordance with this specification, can perform the process 300.
The system obtains a first set of data, the first set of data including data representing multiple computational tasks previously performed by the system (step 302). In some implementations the multiple computational tasks previously performed by the system may include optimization tasks, e.g., the task of designing a water network that distributes an optimal amount of water or the task of devising a radiation plan to treat a tumor that minimizes collateral damage to tissue and body parts surrounding the tumor. In some implementations, the multiple computational tasks previously performed by the system may include computational tasks such as integer factorization, database search, arithmetic computations, machine learning tasks or video compression. In some implementations, the multiple computational tasks previously performed by the system may include simulation tasks, e.g., the task of simulating chemical reactions, materials or protein folding.
The system obtains input data for the multiple computational tasks previously performed by the system, including data representing a type of computing resource the task was routed to (step 304). For example, previously performed optimization tasks may have been routed to one or more quantum annealers or to one or more classical computers. In this example, the input data may include data representing the optimization task and an identifier of the corresponding quantum annealer or classical computer used to perform the optimization task.
As another example, previously performed integer factoring tasks, database search tasks, arithmetic tasks or machine learning tasks may have been routed to one or more quantum gate computers or to one or more classical computers. In this example, the input data may include data representing the integer factoring task or database task and an identifier of the corresponding quantum gate computer or classical computer used to perform the task. As another example, previously performed simulation tasks may have been routed to one or more quantum simulators. In this example, the input data may include data representing the simulation task and an identifier of the corresponding quantum simulator used to perform the simulation task.
The system may obtain input data for multiple computational tasks that were successfully performed by the system, including data representing a type of computing resource the task was routed to. For example, previously performed computational tasks may be assigned a success score, e.g., based on a computational cost, computational efficiency or monetary cost associated with performing the computational task. If an assigned success score is above a predetermined threshold, the obtained input data may include input data for the computational task, including data representing the type of computing resource the task was routed to.
The obtained input data may also include, for each of the computational tasks, data representing a size of an input data set associated with the computational task. For example, the size of an input data set associated with a computational task may include a size of the input data 102 described above with reference to FIGS. 1 and 2. The size of an input data set may be used to determine which computing resource to route the computational task to. For example, some computing resources may be limited as to the size of input data they are configured to receive and/or process. Therefore, a computational task with an associated input data set of a given size should be routed to a computing resource that is capable of receiving and efficiently processing the input data set.
The quantum computing resources may have associated classical overheads when performing computational tasks. In cases where the input data set is small, e.g., easily manageable by a particular quantum computing resource, the classical overhead may negate any benefits achieved by using the quantum computing resource, e.g., in terms of computational time or efficiency. In these cases, it may be more efficient to use a classical computer or other quantum computing resource to process the computational task.
As another example, in cases where the input data set is small, e.g., easily manageable by a particular quantum computing resource, the monetary cost of using the quantum computing resource, e.g., an associated rental fee, may negate the benefits of using the quantum computing resource. In these cases it may be more desirable, e.g., to minimize costs associated with performing the computational task, to use a classical computer or other quantum computing resource to process the computational task.
As another example, in cases where the input data set is large, e.g., requiring a long computational processing time, the monetary cost of using a quantum computing resource to perform the computational task may be too large. In these cases it may be more desirable, e.g., in order to minimize costs associated with performing the computational task, to use a classical computer or other quantum computing resource, to process the computational task.
Alternatively, or in addition, the obtained input data may also include data indicating whether an input data set associated with the computational task included static, real time or both static and real time input data. Typically, real time data may be likely to have more variability from data point to data point. For example, data signals from an IoT device may indicate dynamic information about a user or component in a system represented by the optimization task, e.g., a dynamic location of a user or a system component. Such dynamic data, combined with other rapidly changing data signals, may influence how hard a computation is to perform and therefore which computing resource should be used to perform the computation.
Static data, e.g., demographics of a user in a system represented by the optimization task or static information about components of a system, may also influence how hard a computation is to perform and therefore which computing resource should be used to perform a computation. For example, some static data may be easier to incorporate in algorithms running on certain computing resources compared to other computing resources, e.g., depending on algorithm sensitivity and the variability of the static data. As another example, a quantum computing device may be involved in processing certain frames from a real time analysis to provide deeper insights than a classical counterpart. Alternatively, a classical computing device may be used for large scale distributed static data analysis in cases where data movement to a quantum computer would decrease overall result time.
Alternatively, or in addition, the obtained input data may also include data representing an error tolerance associated with the computational task. An error tolerance associated with a computational task may be used to determine which computing resource to route the computational task to. For example, some computational tasks may have smaller error tolerances than others, e.g., an error tolerance of a solution to the task of optimizing a cancer radiotherapy treatment may be smaller than an error tolerance of a solution to the task of optimizing the wastage of water in a water network. Computational tasks with smaller error tolerances may therefore be routed to computing resources that are more accurate than other computing resources, e.g., to computing resources that are less likely to introduce errors when performing a computational task.
In some cases, machine learning techniques applied to specific use cases may be used to teach the system what an acceptable error tolerance might be. In some cases this may further provide an opportunity for a feedback loop within the system 100 that uses quantum machine learning not only to increase the efficiency and accuracy of the system but also to effectively deal with anomalies in the data signals being fed into the system.
Alternatively, or in addition, the obtained input data may also include data representing a required level of confidence associated with the computational task. For example, certain types of quantum computers will provide a probabilistic rather than a deterministic result, and based on the number of cycles run on the quantum computer the confidence in the result can be increased. A required level of confidence associated with a computational task may be used to determine which computing resource to route the computational task to. For example, some computing resources may be configured to generate solutions to computational tasks that are more likely to be accurate than solutions generated by other computing resources. Solutions to computational tasks that require high levels of confidence may therefore be routed to computing resources that are more likely to produce accurate solutions to the computational tasks. For example, such computational tasks may not be provided to an adiabatic quantum processor that may, in some cases, produce a range of solutions with varying degrees of confidence.
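A minimal sketch of this point: a probabilistic resource is run for an increasing number of cycles and the empirical confidence in the majority answer rises; the answer distribution below is a hypothetical stand-in for a real quantum device.

```python
# Minimal sketch: confidence in a probabilistic result grows with more cycles.
import random
from collections import Counter

def run_cycles(sample_answer, cycles):
    """Run the probabilistic resource repeatedly and report the majority answer."""
    counts = Counter(sample_answer() for _ in range(cycles))
    answer, hits = counts.most_common(1)[0]
    return answer, hits / cycles  # empirical confidence in the majority result

noisy = lambda: random.choices(["x=011", "x=110"], weights=[0.8, 0.2])[0]
for cycles in (10, 100, 1000):
    answer, confidence = run_cycles(noisy, cycles)
    print(f"{cycles} cycles -> {answer} with confidence {confidence:.2f}")
```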
FIG. 4 is a flow diagram of an example process 400 for obtaining a solution to a computational task using a system including one or more quantum computing resources. For convenience, the process 400 will be described as being performed by a system of one or more computers located in one or more locations. For example, a system for performing computational tasks, e.g., the system 100 of FIG. 1A, appropriately programmed in accordance with this specification, can perform the process 400.
The system receives data representing a computational task to be performed by a system including one or more quantum computing resources, e.g., one or more quantum gate computers, adiabatic annealers, or quantum simulators, and one or more classical computing resources, e.g., one or more classical computers or supercomputers (step 402).
The system processes the received data using a machine learning model to determine which of the one or more quantum computing resources or the one or more classical computing resources to route the data representing the computational task to (step 404). As described above with reference to FIG. 2, the machine learning model is a predictive model that has been configured through training to route received data representing computational tasks to be performed in the system including at least one quantum computing resource.
WE CLAIM:

1. Our invention "ELSM-Quantum Computing" is a method, system, process, technique and apparatus for validating and training a machine learning model to multi-route received computational tasks in a system including at least one quantum computing resource. The invention includes obtaining a first set of large data, the first set of intelligent data comprising data representing multiple real-time computational tasks previously performed by the system, and obtaining input large data for the multiple computational tasks previously performed by the system, comprising data representing a type of computing resource each task was routed to, and obtaining a second set of large data, the second set of data comprising data representing properties associated with using the one or more quantum computing resources to solve the multiple computational tasks. The machine learning model is validated and trained to route received data representing a computational task to be performed using (i) the first set of large data, (ii) the input data, and (iii) the second set of data. Also described herein are methods, systems, techniques and media for generating a quantum-ready or quantum-enabled real-time software development kit (RSDK) for an advanced quantum computing system. The invented methods may comprise accepting user input of any required size from an application at an application interface, which application is executed on a digital computer, and implementing one or more best required algorithms. The algorithms layer may be solved intelligently, heuristically or exactly, depending on the requirements of the user input as per the software specification. The invented system transforms the one or more algorithms from the application space into one or more instructions in intelligent polynomial unconstrained binary optimization (IPUBO) form.

2. The invention according to claim 1, being a method, system, process, technique and apparatus for validating and training a machine learning model to multi-route received computational tasks in a system including at least one quantum computing resource.

3. The invention according to claim 2, including obtaining a first set of large data, the first set of intelligent data comprising data representing multiple real-time computational tasks previously performed by the system, and obtaining input large data for the multiple computational tasks previously performed by the system.

4. The invention according to claims 2 and 3, further comprising data representing a type of computing resource each task was routed to, and obtaining a second set of large data, the second set of data comprising data representing properties associated with using the one or more quantum computing resources to solve the multiple computational tasks.

5. The invention according to claims 2 and 4, wherein the machine learning model is validated and trained to route received data representing a computational task to be performed using (i) the first set of large data, (ii) the input data, and (iii) the second set of data.

6. The invention according to claims 1, 2, 3 and 4, wherein methods, systems, techniques and media are described for generating a quantum-ready or quantum-enabled real-time software development kit (RSDK) for an advanced quantum computing system.

7. The invention according to claims 2 and 5, wherein methods, systems, techniques and media are described for generating a quantum-ready or quantum-enabled real-time software development kit (RSDK) for an advanced quantum computing system.

8. The invention according to claims 2 and 5, wherein the invented methods may comprise accepting user input of any required size from an application at an application interface, which application is executed on a digital computer, and implementing one or more best required algorithms.

9. The invention according to claims 2, 5 and 7, wherein the algorithms layer may be solved intelligently, heuristically or exactly, depending on the requirements of the user input as per the software specification, and wherein the invented system transforms the one or more algorithms from the application space into one or more instructions in intelligent polynomial unconstrained binary optimization (IPUBO) form.

10. The invention according to claims 2, 5 and 9, wherein the data representing multiple computational tasks previously performed by the system comprises data indicating a frequency of changes to input data sets associated with each computational task.

11. The invention according to claims 2, 5 and 8, further comprising training the machine learning model using data representing properties associated with using the one or more quantum computing resources to solve the multiple computational tasks, wherein the polynomial unconstrained binary optimization (PUBO) form is a quadratic unconstrained binary optimization (QUBO) form.

12. The invention according to claims 2, 5, 7 and 9, wherein the one or more polynomial unconstrained binary optimization (PUBO) solvers of the common interface of the solver layer comprise one or more quadratic unconstrained binary optimization (QUBO) solvers, wherein the one or more algorithms are transformed at the algorithms layer, and wherein the one or more algorithms are transformed using a binary polynomial layer.
Date: 27/8/2020

Mr. N Sandeep Chaitanya (Assistant Professor)
Dr. Ramesh. Vatambeti
Mr. Ravikanth Motupalli (Assistant Professor)
Mr. Chekuri Sri Sumanth (Assistant Professor)
Mrs. A Aruna Kumari (Assistant Professor)
Mrs. Somavarapu Jahnavi (Assistant Professor)
Mrs. Tejaswi Potluri (Assistant Professor)
Mrs. Pinamala Sruthi (Associate Professor)
Ms. Kandula Neha (Assistant Professor)
Mr. Shaik Mahammad Jameeruddin
FIG. 1A: DEPICTS AN EXAMPLE SYSTEM FOR PERFORMING COMPUTATIONAL TASKS.
FIG. 1B: DEPICTS AN EXAMPLE VISUALIZATION OF A GLOBAL SEARCH SPACE AND LOCAL SEARCH SPACE.
FIG. 2: DEPICTS AN EXAMPLE QUANTUM COMPUTING MACHINE LEARNING MODULE.
FIG. 3: IS A FLOW DIAGRAM OF AN EXAMPLE PROCESS FOR TRAINING A MACHINE LEARNING MODEL TO ROUTE RECEIVED COMPUTATIONAL TASKS IN A SYSTEM INCLUDING ONE OR MORE QUANTUM COMPUTING RESOURCES.
FIG. 4: IS A FLOW DIAGRAM OF AN EXAMPLE PROCESS FOR OBTAINING A SOLUTION TO A COMPUTATIONAL TASK USING A SYSTEM INCLUDING ONE OR MORE QUANTUM COMPUTING RESOURCES.
Application AU2020102026A (family ID 72608288), filed 2020-08-28, was granted and published as AU2020102026A4 on 2020-10-01 (Australia); the patent has since ceased.
Cited By (1)

CN115719047A (priority 2022-11-14, published 2023-02-28), 沐曦集成电路(上海)有限公司: Joint simulation system based on waveform GPU.



Legal Events

FGI: Letters patent sealed or granted (innovation patent).
MK22: Patent ceased under section 143A(d), or expired; non-payment of renewal fee or expiry.