US20200364599A1 - Quantum machine learning algorithm for knowledge graphs - Google Patents


Info

Publication number
US20200364599A1
Authority
US
United States
Prior art keywords
entity
tensor
quantum
register
values
Prior art date
Legal status
Granted
Application number
US16/414,065
Other versions
US11562278B2 (en)
Inventor
Yunpu Ma
Volker Tresp
Current Assignee
Siemens AG
Original Assignee
Siemens AG
Priority date
Filing date
Publication date
Application filed by Siemens AG filed Critical Siemens AG
Priority to US16/414,065 priority Critical patent/US11562278B2/en
Assigned to SIEMENS AKTIENGESELLSCHAFT reassignment SIEMENS AKTIENGESELLSCHAFT ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MA, YUNPU, TRESP, VOLKER
Publication of US20200364599A1 publication Critical patent/US20200364599A1/en
Application granted granted Critical
Publication of US11562278B2 publication Critical patent/US11562278B2/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N10/00 Quantum computing, i.e. information processing based on quantum-mechanical phenomena
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/18 Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G06N20/00 Machine learning
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/02 Knowledge representation; Symbolic representation
    • G06N5/04 Inference or reasoning models

Definitions

  • the present invention relates to a computer-implemented method of performing an inference task on a knowledge graph comprising semantic triples of entities, wherein entity types are subject, object and predicate, and wherein each semantic triple comprises one of each entity type, using a quantum computing device, wherein a first entity of a first type and a second entity of a second type are given and the inference task is to infer a third entity of the third type.
  • the invention also relates to a computing system for performing said computer-implemented method.
  • Semantic knowledge graphs are graph-structured databases consisting of semantic triples (subject, predicate, object), where subject and object are nodes in the graph and the predicate is the label of a directed link between subject and object.
  • An existing triple normally represents a fact, e.g., (California, located in, USA) with “California” being the subject, “located in” being the predicate and “USA” being the object. Missing triples stand for triples known to be false (closed-world assumption) or with unknown truth value.
  • a number of sizable knowledge graphs have been developed. For example, the currently largest KGs contain more than 100 billion facts and hundreds of millions of distinguishable entities.
  • quantum machine learning is becoming an active research area which attracts researchers from different communities.
  • quantum machine learning exhibits great potential for accelerating classical algorithms.
  • quantum machine learning algorithms contain subroutines for singular value decomposition, singular value estimation and singular value projection of data matrices that are prepared and presented as quantum density matrices.
  • most tensor problems are NP-hard and there is no existing quantum computation method which can handle tensorized data. Since knowledge graphs comprise at least triplets of entities, and are thus modelled by at least three-dimensional tensors, such a quantum computation method is desired.
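The three-dimensional tensor model of a knowledge graph mentioned above can be made concrete with a small classical sketch; the entity and predicate names are purely illustrative:

```python
import numpy as np

# Hypothetical toy knowledge graph; names are illustrative only.
subjects = ["California", "Munich"]
predicates = ["located_in"]
objects = ["USA", "Germany"]
triples = [("California", "located_in", "USA"),
           ("Munich", "located_in", "Germany")]

# Binary semantic tensor of shape (d1, d2, d3) = (#subjects, #predicates, #objects).
chi = np.zeros((len(subjects), len(predicates), len(objects)), dtype=np.int8)
for s, p, o in triples:
    chi[subjects.index(s), predicates.index(p), objects.index(o)] = 1

print(chi[0, 0, 0])  # 1: (California, located_in, USA) is a known fact
print(chi[0, 0, 1])  # 0: missing triple (false or unknown under closed world)
```

Each observed fact sets one entry of the binary tensor to 1; every missing entry is 0 in accordance with the closed-world assumption.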
  • the present invention provides, according to a first aspect, a computer-implemented method of performing an inference task on a knowledge graph comprising semantic triples of entities, wherein entity types are subject, object and predicate, and wherein each semantic triple comprises one of each entity type, using a quantum computing device, wherein a first entity of a first type and a second entity of a second type are given and the inference task is to infer a third entity of the third type, comprising at least the steps of:
  • auxiliary register has a first eigenstate
  • the canonical basis comprises tensor products of a basis connected to the second entity type and of a basis connected to the third entity type;
  • a quantum machine learning method on tensorized data is proposed, e.g., on data derived from large knowledge graphs.
  • the presented tensor factorization method advantageously has a polylogarithmic runtime complexity.
  • Quantum machine learning algorithms on two-dimensional matrix data may be performed in any known way, for example as has been described in “Kerenidis et al.”.
  • the partially observed tensor χ̂ may be interpreted as a sub-sampled (or: sparsified) tensor of a theoretically completely filled tensor χ which comprises the information about all semantic triples for all subjects, objects and predicates of given sets.
  • Providing the cutoff threshold τ may comprise calculating the cutoff threshold as will be described in the following.
  • Creating the density operator in the quantum random access memory may be performed in any known way, for example as has been described in “Giovannetti et al.”.
  • the quantum computing device may be implemented in any known way, for example as has been described in “Ma et al.”.
  • Preparing the unitary operator and applying it to the first entity state may be performed in any known way, for example as has been described in “Lloyd et al.” and/or “Rebentrost et al.”.
  • Performing the quantum phase estimation on the clock register may be performed in any known way, for example as has been described in “Kitaev”.
  • Performing the computation to retrieve the singular values may be performed in any known way, for example as has been described in “Nielsen et al.”.
  • the quantum singular value projection and/or the tracing out of the clock register may be performed in any known way, for example as has been described in “Rebentrost et al.”.
  • the method described herein is highly advantageous because it shows how a meaningful cutoff for a useful approximation of a partially known tensor can be achieved.
  • in matrix cutoff schemes based on matrix singular value decomposition, essentially the singular values larger than, or equal to, a cutoff threshold are kept and those that are smaller are disregarded.
  • the partially observed tensor χ̂ is obtained such that, for each entry of the partially observed tensor, the entry is with a probability p directly proportional to a corresponding entry of a complete tensor χ modelling a complete knowledge graph and equal to 0 with a probability of 1 − p, with p being smaller than 1. This allows determining the cutoff threshold τ in a suitable way so that the required computing power and memory are minimized.
  • the cutoff threshold is chosen as smaller than or equal to a quantity which is inversely proportional to the probability p.
  • the inventors have found this to be a useful criterion for choosing a suitable cutoff threshold.
  • the probability p is chosen to be larger than or equal to a maximum value out of a set of values. That set may be designated as a “lower bound set”.
  • the set of values comprises at least a value of 0.22.
  • the partially observed tensor χ̂ is expressible as the sum of the complete tensor χ and a noise tensor N.
  • a desired value ε̃ > 0 may be defined such that the Frobenius norm ∥N_r∥_F of a rank-r approximation N_r of the noise tensor is bounded such that ∥N_r∥_F ≤ ε̃ ∥A∥_F.
  • the set of values comprises at least one value that is proportional to r and inversely proportional to ε̃ to the n-th power, with n integer and n ≥ 1.
  • the set of values comprises at least one value that is proportional to r and that is inversely proportional to the square of ε̃.
  • the set of values comprises at least one value that is proportional to the square root of r and that is inversely proportional to ε̃.
  • the set of values comprises at least one value that is independent of r and that is inversely proportional to the square of ε̃.
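The selection rule for p sketched in the preceding bullets can be summarized in a few lines. The functional forms mirror the description (proportional to r or √r, inversely proportional to powers of ε̃); the constants c1, c2, c3 are hypothetical placeholders, not values prescribed by the invention:

```python
import math

def sample_probability_lower_bound(r, eps_tilde, c1=1.0, c2=1.0, c3=1.0):
    """Lower bound for the subsample probability p as the maximum over the
    'lower bound set'.  The constants c1, c2, c3 are hypothetical
    placeholders for the proportionality factors."""
    candidates = [
        0.22,                           # minimal subsample probability
        c1 * r / eps_tilde ** 2,        # proportional to r, ~ 1/eps~^2
        c2 * math.sqrt(r) / eps_tilde,  # proportional to sqrt(r), ~ 1/eps~
        c3 / eps_tilde ** 2,            # independent of r, ~ 1/eps~^2
    ]
    return min(1.0, max(candidates))    # a probability cannot exceed 1

print(sample_probability_lower_bound(r=2, eps_tilde=10.0))  # 0.22
```

For a loose error tolerance the minimal value 0.22 dominates; for tight tolerances the ε̃-dependent candidates take over.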
  • the present invention also provides, according to a second aspect, a computing system comprising a classical computing device and a quantum computing device, wherein the computing system is configured to perform the method according to any embodiment of the method according to the first aspect of the present invention.
  • the computing system comprises an input interface for receiving an inference task (or: query, i.e. a first entity of a first entity type and a second entity of a second entity type), and an output interface for outputting the inferred third entity of the third entity type.
  • the computing device may be realised as any device, or any means, for computing, in particular for executing software, an app, or an algorithm.
  • the computing device may comprise a central processing unit (CPU) and a memory operatively connected to the CPU.
  • the computing device may also comprise an array of CPUs, an array of graphical processing units (GPUs), at least one application-specific integrated circuit (ASIC), at least one field-programmable gate array, or any combination of the foregoing.
  • Some, or even all, parts of the computing device may be implemented by a cloud computing platform.
  • a storage/memory may be a data storage like a magnetic storage/memory (e.g. magnetic-core memory, magnetic tape, magnetic card, magnet strip, magnet bubble storage, drum storage, hard disc drive, floppy disc or removable storage), an optical storage/memory (e.g. holographic memory, optical tape, Tesa tape, Laserdisc, Phasewriter (Phasewriter Dual, PD), Compact Disc (CD), Digital Video Disc (DVD), High Definition DVD (HD DVD), Blu-ray Disc (BD) or Ultra Density Optical (UDO)), a magneto-optical storage/memory (e.g. MiniDisc or Magneto-Optical Disk (MO-Disk)), a volatile semiconductor/solid state memory (e.g. Random Access Memory (RAM), Dynamic RAM (DRAM) or Static RAM (SRAM)), a non-volatile semiconductor/solid state memory (e.g. Read Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically EPROM (EEPROM), Flash-EEPROM (e.g. USB stick), Ferroelectric RAM (FRAM), Magnetoresistive RAM (MRAM) or Phase-change RAM) or a data carrier/medium.
  • FIG. 1 shows a schematic flow diagram illustrating a method according to an embodiment of the first aspect of the present invention
  • FIG. 2 shows a schematic block diagram of a computing system according to an embodiment of the second aspect of the present invention
  • FIG. 3 shows results of measurements of the performance of the present method
  • FIG. 4 shows results of measurements of the present method.
  • FIG. 1 shows a schematic flow diagram illustrating a method according to an embodiment of the first aspect of the present invention.
  • the first part contributes to the classical binary tensor sparsification method.
  • the first binary tensor sparsification condition is derived under which the original tensor can be well approximated by a truncated (or: projected) tensor SVD of its subsampled tensor.
  • the second part contributes to the method of performing knowledge graph inference on universal quantum computers.
  • a quantum tensor contraction subroutine is described.
  • a quantum sampling method on knowledge graphs using quantum principal component analysis, quantum phase estimation and quantum singular value projection is described.
  • the runtime complexity is analyzed, and it is shown that this sampling-based quantum computation method provides exponential acceleration with respect to the size of the knowledge graph during inference.
  • SVD singular value decomposition
  • the spectral norm ∥A∥ of the tensor A is defined as
  • Tensor SVD: a tensor singular value decomposition
  • analogous to matrix singular value decomposition, tensor singular value decomposition is described in detail, e.g., in “Chen et al.”.
  • the partially observed tensor is herein also designated as sub-sampled or sparsified, denoted Â.
  • the following subsampling and rescaling scheme proposed in “Achlioptas et al.” is used: Â_{i1…iN} = (1/p) A_{i1…iN} with probability p, and Â_{i1…iN} = 0 with probability 1 − p (2). The entries of the resulting noise tensor N = Â − A are then N_{i1…iN} = (1/p − 1) A_{i1…iN} with probability p, and N_{i1…iN} = −A_{i1…iN} with probability 1 − p (3).
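A classical toy sketch of the subsampling scheme of Eq. 2 and the induced noise tensor of Eq. 3 (dimensions and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.5
A = rng.integers(0, 2, size=(4, 3, 5)).astype(float)  # toy binary tensor

# Eq. 2: keep and rescale each entry with probability p, zero it otherwise.
mask = rng.random(A.shape) < p
A_hat = np.where(mask, A / p, 0.0)

# Eq. 3: the noise tensor N = A_hat - A has entries (1/p - 1)*A where an
# entry was kept and -A where it was dropped.
N = A_hat - A
print(np.allclose(N, np.where(mask, (1 / p - 1) * A, -A)))  # True

# Note: the scheme is unbiased entrywise, E[A_hat] = A.
```

The rescaling by 1/p is what makes the subsampled tensor an unbiased estimate of the original tensor.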
  • a 3-dimensional semantic tensor χ as one example of a tensor is of particular interest.
  • the present method builds on the assumption that the original semantic tensor χ modelling the complete knowledge graph (or, from a different viewpoint, the tensor χ as the complete knowledge graph) has a low-rank approximation, denoted χ_r, with small rank r.
  • χ̂_r denotes the rank-r approximation of the subsampled tensor χ̂; and χ̂_{≥τ} denotes the projection of χ̂ onto the subspaces whose absolute singular values are larger than a predefined threshold τ > 0.
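The distinction between the rank-r approximation and the threshold projection can be illustrated on an ordinary matrix SVD, where both reduce to selecting singular values (a sketch of the analogy, not the tensor SVD itself):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 5))  # matrix stand-in for the tensor
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Rank-r truncation (analogue of chi_r): keep the top-r singular values.
r = 2
M_r = (U[:, :r] * s[:r]) @ Vt[:r, :]

# Threshold projection (analogue of chi_{>=tau}): keep singular values >= tau.
tau = 1.0
keep = s >= tau
M_tau = (U[:, keep] * s[keep]) @ Vt[keep, :]

print(np.linalg.matrix_rank(M_r))  # 2
print(np.linalg.matrix_rank(M_tau) == np.sum(keep))  # True
```

The truncation fixes the number of retained components in advance, while the projection lets the spectrum itself decide how many components survive.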
  • a knowledge graph is modelled as a partially observed tensor χ̂ in a classical computer-readable memory structure.
  • This classical computer-readable memory structure may be any volatile or non-volatile computer-readable memory structure such as a hard drive, a solid state drive and/or the like.
  • the computer-readable memory structure may be part of a classical computing device which in turn may be part of a computing system for performing an inference task on a knowledge graph.
  • FIG. 2 shows a schematic block diagram of a computing system 1000 for performing an inference task on a knowledge graph according to an embodiment of the second aspect of the present invention.
  • FIG. 2 shows that the computing system 1000 comprises two main components: a classical computing device 100 as well as a quantum computing device 200 which are connected by an exchange interface 300 therebetween.
  • the quantum computing device 200 may be implemented in any known way, for example as has been described in “Ma et al.”.
  • the computing system 1000 comprises an input interface 10 , in particular for receiving an inference task (or: query) with a first entity of a first entity type and a second entity of a second entity type, e.g. (s, p, ?) or (?, p, o) or (s, ?, o) and an output interface 90 for outputting the inferred third entity of the third entity type o, s, or p, respectively.
  • FIG. 1 may be performed using a computing system according to an embodiment of the second aspect of the present invention, in particular the computing system 1000 of FIG. 2 and/or that the computing system 1000 may be adapted to perform the method according to an embodiment of the first aspect of the present invention, in particular the method according to FIG. 1 .
  • the subjects are a first entity type which corresponds to a dimension of size d1 in χ and χ̂;
  • the predicates are a second entity type which corresponds to a dimension of size d2 in χ and χ̂;
  • the objects are a third entity type which corresponds to a dimension of size d3 in χ and χ̂.
  • n is a small integer corresponding to the commonly used Hits@n metric (see e.g. “Ma et al.”).
  • the inference is called successful if the correct object corresponding to the query can be found in the returned list χ̂_{sp1}, . . . , χ̂_{spn}. It can be proven that the probability of a successful inference is high if the reconstruction error is small enough. Therefore, in the following, we provide sub-sampling conditions under which the reconstruction error is sufficiently small.
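The Hits@n success criterion described above can be sketched as follows; the scores stand in for a reconstructed slice χ̂ over candidate objects and are illustrative:

```python
import numpy as np

def hits_at_n(scores, correct_index, n):
    """Rank candidate objects by score (descending) and check whether the
    correct object appears among the top n."""
    top_n = np.argsort(scores)[::-1][:n]
    return bool(correct_index in top_n)

# Illustrative reconstructed scores over 5 candidate objects for a query (s, p, ?).
scores = np.array([0.1, 0.8, 0.3, 0.75, 0.05])
print(hits_at_n(scores, correct_index=3, n=2))  # True: object 3 ranks second
print(hits_at_n(scores, correct_index=4, n=2))  # False
```
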
  • Theorem 1 gives the condition for the subsample probability under which the original tensor can be reconstructed approximately from Â_r.
  • Theorem 1: Let A ∈ {0, 1}^{d1×d2×…×dN}. Suppose that A can be well approximated by its rank-r tensor SVD A_r. Using the subsampling scheme defined in Eq. 2 with the sample probability
  • the original tensor A can be reconstructed from the truncated tensor SVD of the subsampled tensor Â.
  • the error satisfies ∥A − Â_r∥_F ≤ ε ∥A∥_F with probability at least 1 − δ, where δ is a function of ε̃.
  • ε̃ together with the sample probability controls the norm of the noise tensor.
  • the Frobenius norm ∥N_r∥_F of a rank-r approximation N_r of the noise tensor is bounded such that ∥N_r∥_F ≤ ε̃ ∥A∥_F.
  • quantum algorithms are fundamentally different from classical algorithms.
  • classical algorithms for matrix factorization approximate a low-rank matrix by projecting it onto a subspace spanned by the eigenspaces possessing top-r singular values with predefined small r.
  • Quantum methods, e.g. quantum singular value estimation, on the other hand, can read and store all singular values of a unitary operator into a quantum register.
  • Theorem 2 gives the condition under which A can be reconstructed approximately from the projected tensor SVD Â_{≥τ}.
  • l denotes the largest index of singular values of the tensor Â, such that when choosing the threshold τ as
  • the original tensor A can be reconstructed from the projected tensor SVD of Â.
  • the error satisfies ∥A − Â_{≥τ}∥_F ≤ ε ∥A∥_F.
  • ε̃ together with p1 and p2 determines the norm of the noise tensor, and τ together with p3 controls Â's singular values that are located outside the projection boundary.
  • the threshold τ is chosen as smaller than or equal to a quantity which is inversely proportional to the probability p and/or smaller than or equal to a quantity which is inversely proportional to ε̃.
  • the probability p is advantageously chosen to be larger than or equal to a maximum value out of a set of values, which set in the foregoing example comprises four values: 0.22, p1, p2, and p3.
  • p will always be larger than or equal to 0.22.
  • alternatively, another value in the range of 0.2 to 0.24 may be chosen.
  • 0.22 is an ideal value in order to ensure desirable properties for the threshold τ.
  • experiments as well as a numerical proof by the inventors have shown that 0.22 is the minimal subsample probability such that a subsampled tensor can be reconstructed with a bounded error.
  • the set of values comprises:
  • quantum states can be represented as complex-valued vectors in a Hilbert space ℋ.
  • a two-dimensional complex Hilbert space ℋ₂ can describe the quantum state of a spin-½ particle, which provides the physical realization of a qubit.
  • ⟨ψ| is used to represent the conjugate transpose of |ψ⟩. The inner product on the Hilbert space is defined as ⟨φ|ψ⟩ = Σ_i φ_i* ψ_i.
  • a density matrix is a positive semidefinite operator with unit trace which is used to describe the statistics of a quantum system. For example, the density operator of the mixed state
  • the density matrix for the whole system is their tensor product, namely ρ ⊗ ρ′.
  • the time evolution of a quantum state is generated by the Hamiltonian of the system.
  • let |ψ(t)⟩ denote the quantum state at time t under the evolution of a time-invariant Hamiltonian H. Then, according to the Schrödinger equation, |ψ(t)⟩ = e^{−iHt} |ψ(0)⟩.
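A one-qubit sketch of this time evolution; the Hamiltonian is chosen as the Pauli-X matrix so that e^{−iHt} = cos(t)·I − i·sin(t)·X can be computed in closed form (since X² = I):

```python
import numpy as np

# Pauli-X as a toy time-invariant Hamiltonian on a single qubit.
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
t = 0.3

# Closed-form matrix exponential: exp(-i X t) = cos(t) I - i sin(t) X.
U = np.cos(t) * np.eye(2) - 1j * np.sin(t) * X

psi0 = np.array([1.0, 0.0], dtype=complex)  # initial state |0>
psit = U @ psi0                              # |psi(t)> = exp(-iHt)|psi(0)>

print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: evolution is unitary
print(np.isclose(np.linalg.norm(psit), 1.0))   # True: norm is preserved
```

Unitarity of e^{−iHt} is what guarantees that the state stays normalized under the evolution.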
  • the present invention concerns a method for the inference on knowledge graphs using a quantum computing device 200 .
  • In the following, we focus on the semantic tensor χ ∈ {0, 1}^{d1×d2×d3}, with d1, d2, and d3 defined as above, and let χ̂ denote the partially observed tensor.
  • the preference matrix of a recommendation system normally contains multiple nonzero entries in a given user-row; item recommendations are made according to the nonzero entries in the user-row by assuming that the user is ‘typical’.
  • in a knowledge graph there might be only one nonzero entry in the row (s, p, ·). Therefore, advantageously, for the inference on a knowledge graph with a quantum algorithm, triples with the given subject s are sampled and then post-selected on the predicate p. This is a feasible step, especially if the number of semantic triples with s as subject and p as predicate is 𝒪(1).
  • the present method involves preparing and exponentiating a density matrix derived from the tensorized classical data.
  • One of the challenges of quantum machine learning is loading classical data as quantum states and measuring the states, since reading or writing high-dimensional data from quantum states might obliterate the quantum acceleration. Therefore, the quantum Random Access Memory (qRAM) technique was developed (see “Giovannetti et al.”), which can load classical data into quantum states with exponential acceleration. For details about the qRAM technique, the reader is referred to “Giovannetti et al.”.
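The effect of loading a classical vector into quantum amplitudes can be mimicked classically; this sketch only reproduces the resulting normalized state vector, not the qRAM data structure of “Giovannetti et al.”:

```python
import numpy as np

def amplitude_encode(x):
    """Return the normalized amplitude vector of |x> = sum_i (x_i/||x||) |i>.
    This only mimics the state a qRAM would prepare."""
    x = np.asarray(x, dtype=float)
    norm = np.linalg.norm(x)
    if norm == 0.0:
        raise ValueError("cannot encode the zero vector")
    return x / norm

state = amplitude_encode([3.0, 0.0, 4.0, 0.0])  # length 4 -> 2 index qubits
print(state)  # [0.6 0.  0.8 0. ]
```

A vector of length 2^n fits into the amplitudes of an n-qubit register, which is the source of the exponential compression.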
  • the basic idea of the present method is to project the observed data onto the eigenspaces of χ̂ whose corresponding singular values have an absolute value larger than a threshold τ. Therefore, we need to create an operator which can reveal the eigenspaces and singular values of χ̂.
  • a knowledge graph is modelled as a partially observed tensor χ̂ in a classical computer-readable memory structure 110 of a classical computing device 100 , see FIG. 2 .
  • a cutoff threshold τ is provided, which is preferably determined as has been described in the foregoing.
  • In a step S20, the following density operator (or: density matrix) is created, on the quantum computing device 200, from χ̂ via a tensor contraction scheme:
  • ρ_{χ̂χ̂} := Σ_{i2,i3,i2′,i3′} ( Σ_{i1} χ̂_{i1 i2 i3} χ̂_{i1 i2′ i3′} ) |i2 i3⟩⟨i2′ i3′|, (5)
  • ρ_{χ̂χ̂} can be prepared via qRAM (see “Giovannetti et al.”) in time
  • |χ̂⟩⟨χ̂| = Σ_{i1 i2 i3} Σ_{i1′ i2′ i3′} χ̂_{i1 i2 i3} χ̂_{i1′ i2′ i3′} |i1⟩|i2⟩|i3⟩⟨i1′|⟨i2′|⟨i3′|.
  • χ̂ = Σ_{i=1}^{R} σ_i u_1^{(i)} ⊗ u_2^{(i)} ⊗ u_3^{(i)}.
  • the eigenvectors u_2^{(i)} ⊗ u_3^{(i)} of ρ_{χ̂χ̂} form another basis in the Hilbert space of the tensor product of the quantum index registers.
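The tensor contraction of Eq. 5 can be checked classically with a small example; the normalization to unit trace is added here so the result is a valid density operator:

```python
import numpy as np

rng = np.random.default_rng(2)
d1, d2, d3 = 4, 3, 2
chi_hat = rng.integers(0, 2, size=(d1, d2, d3)).astype(float)  # toy binary tensor

# Eq. 5: contract over the first (subject) index, then flatten the (i2, i3)
# index pairs so that rho is a (d2*d3) x (d2*d3) matrix.
rho = np.einsum('aij,akl->ijkl', chi_hat, chi_hat).reshape(d2 * d3, d2 * d3)

# Normalize to unit trace so rho is a valid density operator.
rho = rho / np.trace(rho)

print(rho.shape)                       # (6, 6)
print(np.allclose(rho, rho.T))         # True: Hermitian (real symmetric here)
print(np.isclose(np.trace(rho), 1.0))  # True
```

By construction ρ equals the Gram matrix of the flattened tensor and is therefore positive semidefinite, as a density operator must be.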
  • Σ_{k=0}^{K−1} |kΔt⟩⟨kΔt|_C
  • ρ_{χ̂χ̂} ∈ ℂ^{(d2·d3)×(d2·d3)}.
  • the clock register C is needed for the phase estimation, and Δt determines the precision of the estimated singular values.
  • |χ̂_s^{(1)}⟩_I is created (or: generated) via qRAM in an input data register I, where χ̂_s^{(1)} denotes the s-th row of the flattened tensor χ̂ along the first dimension.
  • Σ_{k=0}^{K−1} |kΔt⟩_C ⊗ |χ̂_s^{(1)}⟩_I
  • |χ̂_s^{(1)}⟩ = Σ_i β_i |u_2^{(i)}⟩|u_3^{(i)}⟩, where the β_i are the coefficients of |χ̂_s^{(1)}⟩ in the eigenbasis of ρ_{χ̂χ̂}.
  • a quantum phase estimation on the clock register C is performed, preferably using the quantum phase estimation algorithm proposed in “Kitaev”.
  • thereby, estimates of the squared singular values σ_i² of ρ_{χ̂χ̂} are written into the clock register C.
  • Õ((1/ε³) · T) = Õ((1/ε³) · polylog(d1·d2·d3))
  • a computation on the clock register C is performed to recover the original singular values of ρ_{χ̂χ̂}, and obtain
  • Σ_{i=1}^{R} β_i |σ_i²⟩_C |u_2^{(i)}⟩_I |u_3^{(i)}⟩_I.
  • the σ_i² stored in the clock register C may be transferred to σ_i, σ_i being a function of σ_i².
  • the threshold operations discussed in the following as applied to the σ_i² may therefore also, in an alternative formulation, be applied to the σ_i, with the threshold τ being appropriately rescaled.
  • a quantum singular value projection on the quantum state obtained from the last step S70 is performed. Notice that, classically, this step corresponds to projecting χ̂ onto the subspace χ̂_{≥τ}
  • Quantum singular value projection given the threshold τ > 0 can be implemented in the following way: in a step S80, a new auxiliary register R is created on the quantum computing device 200 using an auxiliary qubit and a unitary operation that maps
  • the step S90 means performing, on the result of the computation S70 to recover the singular values, a singular value projection conditioned on the state of the auxiliary register R, such that eigenstates whose squared singular values are to one side of (here: smaller than) the squared cutoff threshold τ² are entangled with a first eigenstate
  • In a step S100, the new register R is measured and post-selected on the state
  • In a step S110, the clock register C is traced out such that the following state is obtained:
  • Σ_{i: σ_i² ≥ τ²} β_i |u_2^{(i)}⟩_I |u_3^{(i)}⟩_I.
  • the tracing out may be performed e.g. as has been described in “Nielsen et al.”.
  • In a step S120, the resulting quantum state from the last step S110 is measured in the canonical basis of the input register I to get the triples with subject s.
  • In a step S130, they are post-selected on the predicate p. This will return objects to the inference (s, p, ?) after
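The sequence S70 to S130 has a classical analogue: project the row χ̂_s^{(1)} onto the eigenspaces with σ_i² ≥ τ², square the amplitudes to obtain measurement probabilities, and post-select the predicate. A sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(3)
d1, d2, d3 = 5, 4, 3
chi_hat = rng.random((d1, d2, d3))  # illustrative stand-in for the observed tensor

# Flatten along the first dimension; row s plays the role of chi_hat_s^(1).
F = chi_hat.reshape(d1, d2 * d3)
s = 0
U, sig, Vt = np.linalg.svd(F, full_matrices=False)

# Singular value projection: keep only eigenspaces with sigma_i^2 >= tau^2
# (classical analogue of steps S80 to S110).
tau = 0.5
keep = sig ** 2 >= tau ** 2
projected = Vt[keep].T @ (Vt[keep] @ F[s])

# Measurement in the canonical basis (step S120): squared, normalized
# amplitudes over (predicate, object) pairs.
probs = projected ** 2 / np.sum(projected ** 2)

# Post-selection on predicate p (step S130): renormalized distribution
# over candidate objects.
p_idx = 1
probs_po = probs.reshape(d2, d3)
post = probs_po[p_idx] / probs_po[p_idx].sum()
print(post.shape)  # (3,) -- one probability per candidate object
```

The quantum method performs the same projection and sampling, but on the state prepared via qRAM, which is where the exponential speedup over this classical sketch arises.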
  • One of the main advantages of the present invention is that a method for implementing implicit knowledge inference from tensorized data, e.g., relational databases such as knowledge graphs, on quantum computing devices is proposed.
  • the present method shows that knowledge inference from tensorized data can be implemented with exponential acceleration on quantum computing devices. As has been shown, this is much faster and thus less resource-consuming than classical methods.
  • the present method is based on finding the corresponding quantum counterpart of the classical tensor singular value decomposition method.
  • the present method is verified by investigating the performance of classical tensor SVD on benchmark datasets: Kinship and FB15k-237, see e.g. the scientific publication by Kristina Toutanova and Danqi Chen, “Observed versus latent features for knowledge base and text inference.”, in: Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57-66, 2015.
  • Given a semantic triple (s, p, o), the value function of the tensor SVD (T-SVD) model is defined as
  • u_s, u_p, u_o are vector representations of s, p, o, respectively.
  • the T-SVD model is trained by minimizing the objective function
  • the hyper-parameter λ is used to encourage the orthonormality of the embedding matrices for subjects, predicates, and objects.
  • FIG. 3 and FIG. 4 show the training curves of the T-SVD on FB15k-237. They show that T-SVD performs reasonably well for small rank; hence the projection threshold τ can be estimated according to Theorem 2.
  • noisy intermediate-scale quantum processing units (or: quantum computing devices) are expected to be commercially available in the near future. With the help of these quantum computing devices and the present method, learning and inference on the ever-increasing industrial knowledge graphs can be dramatically accelerated compared to conventional computers.
  • the invention provides a computer-implemented method of performing an inference task on a knowledge graph comprising semantic triples of entities, wherein entity types are subject, object and predicate, and wherein each semantic triple comprises one of each entity type, using a quantum computing device, wherein a first entity of a first type and a second entity of a second type are given and the inference task is to infer a third entity of the third type.
  • an efficient and resource-saving method is developed that utilizes the power of quantum computing systems for inference tasks on large knowledge graphs.
  • an advantageous value for a cutoff threshold for a cutoff based on singular values of a singular value tensor decomposition is prescribed, and a sequence of steps is developed in which only the squares of the singular values are of consequence and their signs are not.

Abstract

A method of performing an inference task on a knowledge graph comprising semantic triples of entities, wherein entity types are subject, object and predicate, and wherein each semantic triple comprises one of each entity type, using a quantum computing device, wherein a first entity of a first type and a second entity of a second type are given and the inference task is to infer a third entity of the third type. By performing specific steps and choosing values according to specific prescriptions, an efficient and resource-saving method is developed that utilizes the power of quantum computing systems for inference tasks on large knowledge graphs. An advantageous value for a cutoff threshold for a cutoff based on singular values of a singular value tensor decomposition is prescribed, and a sequence of steps is developed in which only the squares of the singular values are of consequence and their signs are not.

Description

    FIELD OF TECHNOLOGY
  • The present invention relates to a computer-implemented method of performing an inference task on a knowledge graph comprising semantic triples of entities, wherein entity types are subject, object and predicate, and wherein each semantic triple comprises one of each entity type, using a quantum computing device, wherein a first entity of a first type and a second entity of a second type are given and the inference task is to infer a third entity of the third type.
  • The invention also relates to a computing system for performing said computer-implemented method.
  • BACKGROUND
  • Semantic knowledge graphs (KGs) are graph-structured databases consisting of semantic triples (subject, predicate, object), where subject and object are nodes in the graph and the predicate is the label of a directed link between subject and object. An existing triple normally represents a fact, e.g., (California, located in, USA) with “California” being the subject, “located in” being the predicate and “USA” being the object. Missing triples stand for triples known to be false (closed-world assumption) or with unknown truth value. In recent years a number of sizable knowledge graphs have been developed. For example, the currently largest KGs contain more than 100 billion facts and hundreds of millions of distinguishable entities.
  • The large number of facts and entities in a knowledge graph makes it particularly difficult to scale learning and inference algorithms to the entire knowledge graph.
  • In the fields of tensor decomposition and matrix factorization, among others, the following algorithms have been developed so far:
  • In the publication by Jie Chen and Yousef Saad, “On the tensor svd and the optimal low rank orthogonal approximation of tensors”, SIAM Journal on Matrix Analysis and Applications, 30(4):1709-1734, 2009 (hereafter cited as “Chen et al.”), the singular value decomposition of tensors is described.
  • In the publication by Dimitris Achlioptas and Frank McSherry, “Fast computation of low-rank matrix approximations”, Journal of the ACM (JACM), 54(2):9, 2007 (hereafter cited as “Achlioptas et al.”), methods for fast computation of low-rank matrix approximations of large matrices are described.
  • In the publication by Maximilian Nickel et al., “A review of relational machine learning for knowledge graphs”, Proceedings of the IEEE, 104(1):11-33, 2016 (hereafter cited as “Nickel et al.”), some fundamental mathematical properties regarding knowledge graphs are described.
  • In the field of quantum computing, several methods and algorithms have been developed so far.
  • For example, the publication by S. Lloyd et al., “Quantum principal component analysis”, arXiv:1307.0401v2 of Sep. 16, 2013 (cited hereafter as “Lloyd et al.”), describes a method of using a density matrix (or: a density operator) ρ, for determining eigenvectors of unknown states.
  • In the publication by I. Kerenidis et al., “Quantum Recommendation Systems”, arXiv: 1603.08675v3 of Sep. 22, 2016 (cited hereafter as “Kerenidis et al.”), a quantum algorithm for recommendation systems is described.
  • In the publication by P. Rebentrost, “Quantum singular value decomposition of non-sparse low-rank matrices”, arXiv:1607.05404v1 of Jul. 19, 2016 (cited hereafter as “Rebentrost et al.”), a method for exponentiating non-sparse indefinite low-rank matrices on a quantum computer is proposed.
  • In the publication by A. Kitaev, “Quantum measurements and the Abelian Stabilizer Problem”, arXiv: quant-ph/9511026v1 of Nov. 20, 1995 (hereafter cited as “Kitaev”), a polynomial quantum algorithm for the Abelian stabilizer problem is proposed. The method is based on a procedure for measuring an eigenvalue of a unitary operator.
  • In the publication by V. Giovannetti et al., “Quantum random access memory”, arXiv: 0708.1879v2 of Mar. 26, 2008 (hereafter cited as “Giovannetti et al.”), a method for implementing a robust quantum random access memory, qRAM, algorithm is proposed.
  • In the publication by Ma et al., “Variational Quantum Circuit Model for Knowledge Graphs Embedding”, arXiv: 1903.00556v1 of Feb. 19, 2019 (hereafter cited as “Ma et al.”), variational quantum circuits for knowledge graph embedding and related methods are proposed.
  • A basic textbook about quantum information theory is the textbook by Nielsen et al., “Quantum computation and quantum information”, Cambridge University Press, ISBN 9780521635035, hereafter cited as “Nielsen et al.”.
  • SUMMARY
  • It is an object of the present invention to provide an improved method of performing an inference task and an improved system for performing an inference task, in particular by utilizing the intrinsically parallel computing power of quantum computation and by providing quantum algorithms which can dramatically accelerate the inference task.
  • Thanks to the rapid development of quantum computing technologies, quantum machine learning is becoming an active research area which attracts researchers from different communities. In general, quantum machine learning exhibits great potential for accelerating classical algorithms.
  • Most quantum machine learning algorithms contain subroutines for singular value decomposition, singular value estimation and singular value projection of data matrices that are prepared and presented as quantum density matrices. However, unlike the matrix case, most tensor problems are NP-hard, and no existing quantum computation method can handle tensorized data. Since knowledge graphs comprise at least triples of entities, and are thus modelled by at least three-dimensional tensors, such a quantum computation method is desired.
  • The above objectives are solved by the subject-matter of the independent claims. Advantageous options, refinements and variants are described in the dependent claims.
  • Therefore, the present invention provides, according to a first aspect, a computer-implemented method of performing an inference task on a knowledge graph comprising semantic triples of entities, wherein entity types are subject, object and predicate, and wherein each semantic triple comprises one of each entity type, using a quantum computing device, wherein a first entity of a first type and a second entity of a second type are given and the inference task is to infer a third entity of the third type, comprising at least the steps of:
  • providing a query comprising the first entity and the second entity;
  • modelling the knowledge graph as a partially observed tensor χ̂ in a classical computer-readable memory structure;
  • providing a cutoff threshold, τ;
  • creating, from the partially observed tensor χ̂, a density operator in a quantum random access memory, qRAM, on the quantum computing device;
  • preparing a unitary operator, U, based on the created density operator, comprising states on the clock register, C;
  • creating a first entity state |χs(1)⟩ indicating the first entity on an input data register of the qRAM, wherein the first entity state is entangled with a maximally entangled clock register, C;
  • applying the prepared unitary operator to at least the first entity state and the entangled clock register C;
  • performing thereafter a quantum phase estimation on the clock register, C;
  • performing thereafter a computation on the clock register, C, to recover singular values;
  • creating an auxiliary qubit in an auxiliary register, R, which is entangled with the state resulting from the previous step;
  • wherein the auxiliary register, R, has a first eigenstate |1⟩R and a second eigenstate |0⟩R;
  • performing, on the result of the computation to recover the singular values, a singular value projection conditioned on the state of the auxiliary register, R, such that eigenstates whose squared singular values are to one side of the squared cutoff threshold, τ², are entangled with the first eigenstate |1⟩R of the auxiliary register, R, and such that eigenstates whose squared singular values are to the other side of the squared cutoff threshold, τ², or equal to the squared cutoff threshold, τ², are entangled with the second eigenstate |0⟩R of the auxiliary register, R;
      • measuring on the auxiliary register, R, and post-selecting one of the two eigenstates, |0⟩R;
  • tracing out the clock register, C;
  • measuring the result thereof in a canonical basis of the input register, wherein the canonical basis comprises tensor products of a basis connected to the second entity type and of a basis connected to the third entity type;
  • post-selecting the second entity in the basis connected to the second entity type to infer the third entity.
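  • The sequence of steps above has a classical counterpart that can be sketched in a few lines: unfold the tensor along the subject mode, keep only the eigenspaces whose squared singular values exceed τ² (the sign-insensitive cutoff), project the queried subject's row, and read off scores over candidate objects. The following Python sketch uses assumed toy dimensions and an assumed threshold; it illustrates the computation, not the quantum implementation:

```python
import numpy as np

# Classical sketch of the pipeline for a query (s, p, ?).
rng = np.random.default_rng(0)
d1, d2, d3 = 4, 3, 5                        # toy dimensions (assumption)
chi_hat = (rng.random((d1, d2, d3)) < 0.4).astype(float)

M = chi_hat.reshape(d1, d2 * d3)            # unfold along the subject mode
_, sigma, Vt = np.linalg.svd(M, full_matrices=False)

tau = 1.0                                   # toy cutoff threshold (assumption)
keep = sigma ** 2 > tau ** 2                # cutoff acts on squared singular values
P = Vt[keep].T @ Vt[keep]                   # projector onto the retained subspace

s, p = 0, 1                                 # queried subject and predicate
projected = (M[s] @ P).reshape(d2, d3)
scores = projected[p]                       # scores over candidate objects
best_object = int(np.argmax(scores))
```

On a quantum computing device the projection and the final measurement are carried out on superposed states, which is the source of the claimed acceleration.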
  • Therefore, in this work a quantum machine learning method on tensorized data is proposed, e.g., on data derived from large knowledge graphs. The presented tensor factorization method advantageously has a polylogarithmic runtime complexity.
  • Quantum machine learning algorithms on two-dimensional matrix data, such as a preference matrix in a recommendation system, may be performed in any known way, for example as has been described in “Kerenidis et al.”.
  • The partially observed tensor χ̂ may be interpreted as a sub-sampled (or: sparsified) tensor of a theoretically completely filled tensor χ which comprises the information about all semantic triples for all subjects, objects and predicates of given sets.
  • Providing the cutoff threshold τ may comprise calculating the cutoff threshold as will be described in the following.
  • Creating the density operator in the quantum random access memory may be performed in any known way, for example as has been described in “Giovannetti et al.”.
  • The quantum computing device may be implemented in any known way, for example as has been described in “Ma et al.”.
  • Preparing the unitary operator and applying it to the first entity state may be performed in any known way, for example as has been described in “Lloyd et al.” and/or “Rebentrost et al.”.
  • Performing the quantum phase estimation on the clock register may be performed in any known way, for example as has been described in “Kitaev”.
  • Performing the computation to retrieve the singular values may be performed in any known way, for example as has been described in “Nielsen et al.”.
  • The quantum singular value projection and/or the tracing out of the clock register may be performed in any known way, for example as has been described in “Rebentrost et al.”.
  • The method described herein is highly advantageous because it shows how a meaningful cutoff for a useful approximation of a partially known tensor can be achieved. In matrix cutoff schemes based on matrix singular value decomposition, essentially the singular values larger than, or equal to, a cutoff threshold are kept and those that are smaller are disregarded.
  • However, in the tensor case, negative singular values can arise. This ordinarily creates the problem that, in a normal ordering, singular values with large absolute value but negative sign would be placed behind singular values with small positive absolute value. The classical cutoff scheme is then no longer meaningful, as it would disregard singular values with large negative values which may potentially be important.
  • In the present invention, this issue is overcome by performing the method with the described steps so that the cutoff threshold is applied to the squares of the singular values, thus avoiding the above-discussed sign problem.
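  • The effect can be illustrated with a minimal Python sketch using hypothetical singular values: thresholding the signed values discards a large-magnitude negative singular value, while thresholding their squares retains it:

```python
import numpy as np

# Hypothetical singular values of a tensor SVD, one of them negative.
singular_values = np.array([3.0, 0.4, -2.5])
tau = 1.0

# Thresholding the signed values drops the large-magnitude negative value ...
kept_signed = singular_values[singular_values >= tau]
# ... while thresholding the squares, as in the present method, keeps it.
kept_squared = singular_values[singular_values ** 2 > tau ** 2]
```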
  • In some advantageous embodiments or refinements of embodiments, the partially observed tensor χ̂ is obtained such that, for each entry of the partially observed tensor, the entry is with a probability p directly proportional to a corresponding entry of a complete tensor χ modelling a complete knowledge graph, and equal to 0 with a probability of 1−p, with p being smaller than 1. This makes it possible to determine the cutoff threshold τ in a way that minimizes the required computing power and memory.
  • In some advantageous embodiments or refinements of embodiments, the cutoff threshold is chosen as smaller than or equal to a quantity which is indirectly proportional to the probability p. The inventors have found this to be a useful criterion for choosing a suitable cutoff threshold.
  • In some advantageous embodiments or refinements of embodiments, the probability p is chosen to be larger than or equal to a maximum value out of a set of values. That set may be designated as a “lower bound set”.
  • In some advantageous embodiments or refinements of embodiments, the set of values comprises at least a value of 0.22.
  • In some advantageous embodiments or refinements of embodiments, the partially observed tensor χ̂ is expressible as the sum of the complete tensor χ and a noise tensor 𝒩. A desired value ϵ>0 may be defined such that the Frobenius norm ∥⋅∥F of a rank-r approximation 𝒩r of the noise tensor 𝒩 is bounded such that ∥𝒩r∥F ≤ ϵ∥χ∥F. The set of values comprises at least one value that is proportional to r and indirectly proportional to ϵ to the n-th power, with n integer and n≥1.
  • In some advantageous embodiments or refinements of embodiments, the set of values comprises at least one value that is proportional to r and that is indirectly proportional to the square of ϵ.
  • In some advantageous embodiments or refinements of embodiments, the set of values comprises at least one value that is proportional to a square root of r and that is indirectly proportional to ϵ.
  • In some advantageous embodiments or refinements of embodiments, the set of values comprises at least one value that is independent of r and that is indirectly proportional to the square of ϵ.
  • The present invention also provides, according to a second aspect, a computing system comprising a classical computing device and a quantum computing device, wherein the computing system is configured to perform the method according to any embodiment of the method according to the first aspect of the present invention. The computing system comprises an input interface for receiving an inference task (or: query, i.e. a first entity of a first entity type and a second entity of a second entity type), and an output interface for outputting the inferred third entity of the third entity type.
  • The computing device may be realised as any device, or any means, for computing, in particular for executing a software, an app, or an algorithm. For example, the computing device may comprise a central processing unit (CPU) and a memory operatively connected to the CPU. The computing device may also comprise an array of CPUs, an array of graphical processing units (GPUs), at least one application-specific integrated circuit (ASIC), at least one field-programmable gate array, or any combination of the foregoing.
  • Some, or even all, parts of the computing device may be implemented by a cloud computing platform.
  • A storage/memory may be a data storage like a magnetic storage/memory (e.g. magnetic-core memory, magnetic tape, magnetic card, magnet strip, magnet bubble storage, drum storage, hard disc drive, floppy disc or removable storage), an optical storage/memory (e.g. holographic memory, optical tape, Tesa tape, Laserdisc, Phasewriter (Phasewriter Dual, PD), Compact Disc (CD), Digital Video Disc (DVD), High Definition DVD (HD DVD), Blu-ray Disc (BD) or Ultra Density Optical (UDO)), a magneto-optical storage/memory (e.g. MiniDisc or Magneto-Optical Disk (MO-Disk)), a volatile semiconductor/solid state memory (e.g. Random Access Memory (RAM), Dynamic RAM (DRAM) or Static RAM (SRAM)), a non-volatile semiconductor/solid state memory (e.g. Read Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically EPROM (EEPROM), Flash-EEPROM (e.g. USB-Stick), Ferroelectric RAM (FRAM), Magnetoresistive RAM (MRAM) or Phase-change RAM) or a data carrier/medium.
  • The invention will be explained in yet greater detail with reference to exemplary embodiments depicted in the drawings as appended.
  • The accompanying drawings are included to provide a further understanding of the present invention and are incorporated in and constitute a part of the specification. The drawings illustrate the embodiments of the present invention and together with the description serve to illustrate the principles of the invention. Other embodiments of the present invention and many of the intended advantages of the present invention will be readily appreciated as they become better understood by reference to the following detailed description. Like reference numerals designate corresponding similar parts.
  • The numbering of method steps is intended to facilitate understanding and should not be construed, unless explicitly stated otherwise, to mean that the designated steps have to be performed according to the numbering of their reference signs. In particular, several or even all of the method steps may be performed simultaneously, in an overlapping way or sequentially.
  • BRIEF DESCRIPTION
  • FIG. 1 shows a schematic flow diagram illustrating a method according to an embodiment of the first aspect of the present invention;
  • FIG. 2 shows a schematic block diagram of a computing system according to an embodiment of the second aspect of the present invention;
  • FIG. 3 shows results of measurements of the performance of the present method; and
  • FIG. 4 shows further results of measurements of the present method.
  • DETAILED DESCRIPTION
  • Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described without departing from the scope of the present invention. Generally, this application is intended to cover any adaptations or variations of the specific embodiments discussed herein.
  • FIG. 1 shows a schematic flow diagram illustrating a method according to an embodiment of the first aspect of the present invention.
  • This description contains two parts. The first part concerns the classical binary tensor sparsification method. In particular, a first binary tensor sparsification condition is derived under which the original tensor can be well approximated by a truncated (or: projected) tensor SVD of its subsampled tensor.
  • The second part contributes to the method of performing knowledge graph inference on universal quantum computers. In order to handle the tensorized data, a quantum tensor contraction subroutine is described. Then, a quantum sampling method on knowledge graphs using quantum principal component analysis, quantum phase estimation and quantum singular value projection is described. The runtime complexity is analyzed, and it is shown that this sampling-based quantum computation method provides exponential acceleration with respect to the size of the knowledge graph during inference.
  • All state-of-the-art algorithms for statistical relational learning on knowledge graphs are implemented on classical computing hardware, e.g., CPUs or GPUs. The major difference between the classical and the quantum approach is that classical algorithms are learning-based, e.g., by back-propagating the gradients of a loss function, while the proposed quantum algorithm is sampling-based. The present method for implicit knowledge inference on knowledge graphs is implemented at least by measuring the quantum states returned by the quantum algorithm, without requiring any particular loss function or gradient update rules.
  • In the first part, the conditions are shown under which the classical tensor singular value decomposition (tSVD) can be applied to recover a subsampled tensor. These conditions ensure that the quantum counterpart is feasible and performs well in comparison with benchmark classical algorithms. The second part explains the method of implicit knowledge inference from tensorized data on universal quantum computers. Furthermore, the runtime complexity of the quantum method is analyzed.
  • As an overview, first some theoretical and practical foundations and considerations for the method are described, and then the method according to an embodiment of the first aspect of the present invention is described in more detail. In addition, we also incidentally describe a computing system according to an embodiment of the second aspect of the present invention.
  • Part 1: Classical Tensor Singular Value Decomposition
  • First, singular value decomposition (SVD) of matrices is described. Then, a tensor SVD is introduced and it is shown that a given tensor can be reconstructed with a small error from the low-rank tensor SVD of the subsampled tensor.
  • The singular value decomposition, SVD, can be defined as follows: let A ∈ ℝ^(m×n); the SVD is a factorization of A of the form A = UΣVᵀ, where Σ is a rectangular diagonal matrix with the singular values on the diagonal, and U ∈ ℝ^(m×m) and V ∈ ℝ^(n×n) are orthogonal matrices with UᵀU = UUᵀ = Im and VᵀV = VVᵀ = In, wherein Im is an m×m identity matrix.
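  • As a quick numerical illustration of this definition, the factorization and the orthogonality properties can be checked for an arbitrary example matrix:

```python
import numpy as np

# Arbitrary example matrix A in R^(3x2).
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

U, s, Vt = np.linalg.svd(A)        # A = U Sigma V^T
Sigma = np.zeros_like(A)           # rectangular diagonal matrix
np.fill_diagonal(Sigma, s)

# Reconstruction and orthogonality, as in the definition above.
assert np.allclose(U @ Sigma @ Vt, A)
assert np.allclose(U.T @ U, np.eye(3))
assert np.allclose(Vt @ Vt.T, np.eye(2))
```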
  • An N-way tensor is defined as 𝒜 = (𝒜_{i1 i2 … iN}) ∈ ℝ^(d1×d2×…×dN), where dk is the k-th dimension. Given two tensors 𝒜 and ℬ with the same dimensions, the inner product is defined as
  • ⟨𝒜, ℬ⟩F := Σ_{i1=1}^{d1} ⋯ Σ_{iN=1}^{dN} 𝒜_{i1 i2 … iN} ℬ_{i1 i2 … iN}
  • The Frobenius norm is defined as ∥𝒜∥F := √⟨𝒜, 𝒜⟩F. The spectral norm ∥𝒜∥σ of the tensor 𝒜 is defined as
  • max{𝒜 ⊗1 x1 … ⊗N xN | xk ∈ S^(dk−1), k = 1, …, N},  (1)
  • where the tensor-vector product is defined as
  • 𝒜 ⊗1 x1 … ⊗N xN := Σ_{i1=1}^{d1} ⋯ Σ_{iN=1}^{dN} 𝒜_{i1 i2 … iN} x_{1,i1} x_{2,i2} … x_{N,iN}
  • and S^(dk−1) denotes the unit sphere in ℝ^(dk).
  • In the following, the tensor singular value decomposition, tensor SVD, is described. In analogy to the matrix singular value decomposition, the tensor singular value decomposition is described in detail e.g. in “Chen et al.”.
  • Definition 1. If a tensor 𝒜 ∈ ℝ^(d1×d2×…×dN) can be written as a sum of rank-1 outer product tensors 𝒜 = Σ_{i=1}^{R} σi u1^(i) ⊗ u2^(i) ⊗ … ⊗ uN^(i), with singular values σ1 ≥ σ2 ≥ … ≥ σR and ⟨uk^(i), uk^(j)⟩ = δij for k = 1, …, N, then 𝒜 has a tensor singular value decomposition with rank R.
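  • A tensor in the sense of Definition 1 can be constructed numerically, e.g. as follows (toy dimensions are an assumption; the factor vectors are made orthonormal per mode via a QR decomposition). For such an orthogonal decomposition, the squared Frobenius norm equals the sum of the squared singular values:

```python
import numpy as np

# Toy construction: orthonormal factor vectors per mode, ordered
# singular values sigma_1 >= sigma_2, rank R = 2.
rng = np.random.default_rng(1)
d, R = 4, 2
U1, _ = np.linalg.qr(rng.standard_normal((d, R)))
U2, _ = np.linalg.qr(rng.standard_normal((d, R)))
U3, _ = np.linalg.qr(rng.standard_normal((d, R)))
sigma = np.array([2.0, 1.0])

# Sum of rank-1 outer products, as in Definition 1.
A = sum(
    sigma[i] * np.einsum("a,b,c->abc", U1[:, i], U2[:, i], U3[:, i])
    for i in range(R)
)

# ||A||_F^2 = sigma_1^2 + sigma_2^2, since cross terms vanish by orthonormality.
assert np.isclose(np.linalg.norm(A), np.sqrt(5.0))
```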
  • In real-world applications, we can only observe part of the non-zero entries in a given tensor 𝒜, and the task is to infer unobserved non-zero entries with high probability. This corresponds to recommending items to users given an observed preference matrix, or to inferring implicit knowledge given a partially observed relational database. In other words, a “partially observed” tensor representing a knowledge graph is only partially known, since not all semantic triples are known a priori, and the inference task is to infer interesting entities (subject, predicate, or object) of semantic triples which are not contained in the “partially observed” tensor but which would be obtained in a hypothetical complete tensor 𝒜. The partially observed tensor is herein also designated as sub-sampled or sparsified, denoted 𝒜̂. In particular, without further specifying the dimensionality of the tensor, the following subsampling and rescaling scheme proposed in “Achlioptas et al.” is used:
  • 𝒜̂_{i1 i2 … iN} = 𝒜_{i1 i2 … iN}/p with probability p, and 0 otherwise.  (2)
  • This means that the non-zero elements of a hypothetical complete tensor 𝒜 are independently and identically sampled with the probability p and rescaled afterwards. The sub-sampled tensor can be rewritten as 𝒜̂ = 𝒜 + 𝒩, where 𝒩 is a noise tensor. Entries of 𝒩 are thus independent random variables with distribution
  • 𝒩_{i1 … iN} = (1/p − 1)·𝒜_{i1 … iN} with probability p, and −𝒜_{i1 … iN} with probability 1 − p.  (3)
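  • The subsampling and rescaling scheme of Eq. 2, together with the resulting noise tensor of Eq. 3, can be sketched as follows (the toy tensor and probability are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 0.5                                            # subsample probability (toy value)
chi = (rng.random((3, 3, 3)) < 0.5).astype(float)  # toy complete binary tensor

# Eq. 2: keep each entry with probability p, rescaled by 1/p, else 0.
mask = rng.random(chi.shape) < p
chi_hat = np.where(mask, chi / p, 0.0)

# The noise tensor of Eq. 3 is simply the difference.
noise = chi_hat - chi
```

Because of the 1/p rescaling, the subsampled tensor is an unbiased estimate of the complete tensor in expectation.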
  • In the following, a 3-dimensional semantic tensor χ as one example of a tensor 𝒜 is of particular interest. The present method builds on the assumption that the original semantic tensor χ modelling the complete knowledge graph (or, from a different viewpoint, the tensor χ as the complete knowledge graph) has a low-rank approximation, denoted χr, with small rank r.
  • This is a plausible assumption if the knowledge graph contains global and well-defined relational patterns, as has been shown in “Nickel et al.”. Therefore, the question may be posed under what conditions the original tensor χ can be reconstructed approximately from the low-rank approximation of the subsampled semantic tensor χ̂ derived from the incomplete knowledge graph. In the following, χ̂r denotes the rank-r approximation of the subsampled tensor χ̂, and χ̂|⋅|≥τ denotes the projection of χ̂ onto the subspaces whose absolute singular values are larger than a predefined threshold τ > 0.
  • Thus, with reference to FIG. 1, in a first step S10, a knowledge graph is modelled as a partially observed tensor χ̂ in a classical computer-readable memory structure.
  • This classical computer-readable memory structure may be any volatile or non-volatile computer-readable memory structure such as a hard drive, a solid state drive and/or the like. The computer-readable memory structure may be part of a classical computing device which in turn may be part of a computing system for performing an inference task on a knowledge graph.
  • FIG. 2 shows a schematic block diagram of a computing system 1000 for performing an inference task on a knowledge graph according to an embodiment of the second aspect of the present invention.
  • FIG. 2 shows that the computing system 1000 comprises two main components: a classical computing device 100 as well as a quantum computing device 200 which are connected by an exchange interface 300 therebetween. The quantum computing device 200 may be implemented in any known way, for example as has been described in “Ma et al.”. Moreover, the computing system 1000 comprises an input interface 10, in particular for receiving an inference task (or: query) with a first entity of a first entity type and a second entity of a second entity type, e.g. (s, p, ?) or (?, p, o) or (s, ?, o) and an output interface 90 for outputting the inferred third entity of the third entity type o, s, or p, respectively.
  • It shall be understood that the method that is being described with respect to FIG. 1 may be performed using a computing system according to an embodiment of the second aspect of the present invention, in particular the computing system 1000 of FIG. 2 and/or that the computing system 1000 may be adapted to perform the method according to an embodiment of the first aspect of the present invention, in particular the method according to FIG. 1.
  • In further description of the method of FIG. 1, in the next two theorems, subsample conditions are shown under which the original semantic tensor χ can be reconstructed approximately from χ̂r or χ̂|⋅|>τ. The analysis is conducted by bounding the reconstruction errors ∥χ−χ̂r∥F and ∥χ−χ̂|⋅|>τ∥F, respectively. Bounding the reconstruction error ensures a good implicit knowledge inference.
  • Suppose that we infer on a knowledge graph given the query (s, p, ?), i.e. subject s and predicate p are given, and the inference task at hand is to infer the object o which completes the semantic triple, in other words, which makes the semantic statement (s, p, o) true.
  • In the present context, the subjects are a first entity type which corresponds to a dimension of size d1 in χ and {circumflex over (χ)}; the predicates are a second entity type which corresponds to a dimension of size d2 in χ and {circumflex over (χ)}; and the objects are a third entity type which corresponds to a dimension of size d3 in χ and {circumflex over (χ)}.
  • Then, given an incomplete semantic triple (or: query, or: inference task) such as (s, p, ?), the running time for inferring the correct objects for the query scales, in classical systems, as O(d3). This is because the same algorithm is repeated at least d3 times in order to determine possible answers, leading to a huge waste of computing power, especially as the sizes of knowledge graphs are constantly growing.
  • Advantageously, only the top-n returns from the reconstructed tensor χ̂, written as χ̂sp1, . . . , χ̂spn, are read out, where n is a small integer corresponding to the commonly used Hits@n metric (see e.g. “Ma et al.”). The inference is called successful if the correct object corresponding to the query can be found in the returned list χ̂sp1, . . . , χ̂spn. It can be proven that the probability of a successful inference is high if the reconstruction error is small enough. Therefore, in the following, sub-sampling conditions are provided under which the reconstruction error is sufficiently small.
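  • The Hits@n criterion described above can be sketched as follows (the scores are hypothetical, and `hits_at_n` is an illustrative helper, not part of the claimed method):

```python
import numpy as np

# Hits@n for a query (s, p, ?): success if the true object is among
# the top-n scores read out of the reconstructed tensor slice.
def hits_at_n(scores, true_object, n):
    top_n = np.argsort(scores)[::-1][:n]
    return int(true_object in top_n)

scores = np.array([0.1, 0.7, 0.3, 0.9])             # hypothetical object scores
assert hits_at_n(scores, true_object=3, n=1) == 1   # object 3 scores highest
assert hits_at_n(scores, true_object=2, n=2) == 0   # object 2 not in top-2
```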
  • Without further specifying the dimensions of the tensor, let us consider a high-dimensional tensor 𝒜. Theorem 1 gives the condition for the subsample probability under which the original tensor 𝒜 can be reconstructed approximately from 𝒜̂r.
  • Theorem 1. Let 𝒜 ∈ {0, 1}^(d1×d2×…×dN). Suppose that 𝒜 can be well approximated by its rank-r tensor SVD 𝒜r. Using the subsampling scheme defined in Eq. 2 with the sample probability
  • p ≥ max{0.22, 8r(log(2N·N0)·Σ_{k=1}^{N} dk + log(2/δ))/(ϵ̃∥𝒜∥F)²},
  • where N0 = log 3/2, then the original tensor 𝒜 can be reconstructed from the truncated tensor SVD of the subsampled tensor 𝒜̂. The error satisfies ∥𝒜−𝒜̂r∥F ≤ ϵ∥𝒜∥F with probability at least 1−δ, where ϵ is a function of ϵ̃. Especially, ϵ̃ together with the sample probability controls the norm of the noise tensor.
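  • Purely as a numeric illustration, the lower bound on the sample probability stated in Theorem 1 can be evaluated for assumed toy values (the grouping log(2N·N0) follows the theorem as printed; none of the numbers below come from the specification):

```python
import numpy as np

# Transcription of the Theorem 1 sample-probability bound.
def theorem1_sample_bound(r, dims, delta, eps_tilde, frob_norm):
    N = len(dims)
    N0 = np.log(3 / 2)
    numerator = 8 * r * (np.log(2 * N * N0) * sum(dims) + np.log(2 / delta))
    return max(0.22, numerator / (eps_tilde * frob_norm) ** 2)

p_min = theorem1_sample_bound(r=3, dims=(100, 20, 100), delta=0.1,
                              eps_tilde=0.5, frob_norm=200.0)
```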
  • In particular, it is desired that the Frobenius norm ∥⋅∥F of a rank-r approximation 𝒩r of the noise tensor 𝒩 is bounded such that ∥𝒩r∥F ≤ ϵ̃∥𝒜∥F.
  • Now, it is briefly discussed why the tensor 𝒜̂|⋅|>τ is introduced, before the reconstruction error caused by it is described. Note that quantum algorithms are fundamentally different from classical algorithms. For example, classical algorithms for matrix factorization approximate a low-rank matrix by projecting it onto a subspace spanned by the eigenspaces possessing the top-r singular values with predefined small r. Quantum methods, e.g., quantum singular value estimation, on the other hand, can read and store all singular values of a unitary operator into a quantum register.
  • However, singular values stored in the quantum register cannot be read out and compared simultaneously, since the quantum state collapses after one measurement; measuring the singular values one by one would also break the quantum advantage. Therefore, a projection is performed onto the union of the operator's subspaces whose singular values are larger than a threshold; this step can be implemented on the quantum register without destroying the superposition. Moreover, since quantum principal component analysis is used herein as a subroutine, which ignores the sign of singular values during the projection, the reconstruction error given by 𝒜̂|⋅|≥τ for the quantum algorithm is analyzed.
  • The following Theorem 2 gives the condition under which A can be reconstructed approximately from Â_{|⋅|>τ}.
  • Theorem 2. Let A ∈ {0, 1}^(d_1×d_2×⋯×d_N). Suppose that A can be well approximated by its rank-r tensor SVD A_r. Using the subsampling scheme defined in Eq. 2 with the sample probability
  • p ≥ max{0.22, p_1 := l_1 C_0/(ϵ̃∥A∥_F)², p_2 := r C_0/(ϵ̃∥A∥_F)², p_3 := √(2r) C_0/(ϵ_1 ϵ̃∥A∥_F)},
  • wherein p<1, wherein
  • C_0 = 8(log(2N N_0) Σ_{k=1}^N d_k + log(2/δ)),
  • N_0 = log 3/2, and l_1 denotes the largest index of singular values of the tensor Â with σ_{l_1} ≥ τ. Then, when choosing the threshold as
  • τ ≤ √(2C_0/p)/ϵ̃,
  • the original tensor A can be reconstructed from the projected tensor SVD of Â. The error satisfies ∥A − Â_{|⋅|≥τ}∥_F ≤ ϵ∥A∥_F with probability at least 1−δ, where ϵ is a function of ϵ̃ and ϵ_1. Especially, ϵ̃ together with p_1 and p_2 determines the norm of the noise tensor, and ϵ_1 together with p_3 controls the value of Â's singular values that are located outside the projection boundary.
  • Thus, as shown in the equation above, the threshold τ is advantageously chosen as smaller than or equal to a quantity which is indirectly proportional to the probability p and/or smaller than or equal to a quantity which is indirectly proportional to ϵ̃.
  • On the other hand, the probability p is advantageously chosen to be larger than or equal to a maximum value out of a set of values, which set in the foregoing example comprises four values: 0.22, p_1, p_2, and p_3. In other words, p will always be larger than or equal to at least 0.22. Instead of 0.22, another value in the range of 0.2 to 0.24 may be chosen. However, experiments by the inventors have shown that 0.22 is an ideal value in order to ensure desirable properties for the threshold τ. Specifically, experiments as well as a numerical proof by the inventors have shown that 0.22 is the minimal subsample probability with which a subsampled tensor can be reconstructed with a bounded error.
  • Thus, the set of values comprises:
    • a) at least one value in the range of between 0.2 and 0.24, preferably between 0.21 and 0.23, most preferably of 0.22;
    • b) at least one value p_2 that is proportional to r and indirectly proportional to ϵ̃ to the n-th power, with n integer and n≥1, in particular proportional to r and indirectly proportional to the square of ϵ̃;
    • c) at least one value p_3 that is proportional to a square root of r and that is indirectly proportional to ϵ̃;
    • and/or
    • d) at least one value p_1 that is independent of r and that is indirectly proportional to the square of ϵ̃.
  • In the bodies of Theorems 1 and 2 there exist data-dependent parameters r and l_1 which are unknown a priori. These parameters can be estimated by performing tensor SVD on the original and subsampled tensors explicitly. However, in practice, mostly the subsampled tensor is given without knowing the subsample probability. For example, given an incomplete semantic tensor, it is usually not known what percentage of information is missing, and therefore the entries in the incomplete tensor cannot be easily rescaled. Fortunately, unlike the prior art, the present invention provides a rational initial guess for the subsample probability numerically, and from it an initial guess for the lower rank r and the projection threshold τ as well.
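The subsampling-and-reconstruction behavior described by Theorems 1 and 2 can be illustrated numerically. The following sketch is illustrative only: it uses a matrix SVD of the mode-1 flattening as a classical stand-in for the tensor SVD, and the tensor shape, rank, and sample probability are chosen for demonstration, not prescribed by the theorems.

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-1 binary "semantic tensor" chi of shape d1 x d2 x d3.
d1, d2, d3, p = 20, 15, 15, 0.5
a = (np.arange(d1) % 2 == 0).astype(float)
b = ((np.arange(d2)[:, None] + np.arange(d3)) % 2).astype(float)
chi = np.einsum('i,jk->ijk', a, b)

# Subsampling scheme: keep an entry (rescaled by 1/p) with probability p,
# set it to 0 otherwise (cf. the scheme referenced as Eq. 2).
mask = rng.random(chi.shape) < p
chi_hat = np.where(mask, chi / p, 0.0)

# Truncated SVD of the mode-1 flattening as a proxy for the tensor SVD.
M = chi_hat.reshape(d1, d2 * d3)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
r = 1  # the rank is known here; in practice it must be estimated
M_r = (U[:, :r] * s[:r]) @ Vt[:r]
chi_r = M_r.reshape(d1, d2, d3)

# Relative Frobenius reconstruction error of the form ||chi_r - chi|| / ||chi||.
rel_err = np.linalg.norm(chi_r - chi) / np.linalg.norm(chi)
```

Despite roughly half of the entries being dropped, the truncated reconstruction recovers the original tensor with a bounded relative error, in the spirit of the error bounds above.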
  • Part 2: Inference on Knowledge Graphs Using Quantum Computers
  • Quantum Mechanics
  • For ease of understanding, the Dirac notation of quantum mechanics as used herein is briefly described. Under Dirac's convention, quantum states can be represented as complex-valued vectors in a Hilbert space ℋ. For example, a two-dimensional complex Hilbert space ℋ_2 can describe the quantum state of a spin-½ particle, which provides the physical realization of a qubit.
  • By default, the basis states in ℋ_2 for a spin-½ qubit read |0⟩ = [1, 0]^T and |1⟩ = [0, 1]^T. The Hilbert space of an n-qubit system has dimension 2^n, and its computational basis can be chosen as the canonical basis |i⟩ ∈ {|0⟩, |1⟩}^⊗n, where ⊗ represents the tensor product. Hence any quantum state |ϕ⟩ ∈ ℋ_{2^n} can be written as a quantum superposition
  • |ϕ⟩ = Σ_{i=1}^{2^n} ϕ_i |i⟩,
  • wherein the squared coefficients |ϕ_i|² can be interpreted as the probability of observing the canonical basis state |i⟩ after measuring |ϕ⟩ in the canonical basis.
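The bra-ket conventions above can be emulated with plain linear algebra; the following sketch is illustrative only, using NumPy arrays as classical stand-ins for kets.

```python
import numpy as np

# Canonical qubit basis in H_2: |0> = [1, 0]^T and |1> = [0, 1]^T.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# n-qubit canonical basis states are tensor (Kronecker) products, e.g. |01>.
ket01 = np.kron(ket0, ket1)

# A normalized superposition |phi> = sum_i phi_i |i> in H_{2^n} with n = 2.
phi = np.array([0.5, 0.5j, -0.5, 0.5j])

# The probability of observing basis state |i> is |phi_i|^2.
probs = np.abs(phi) ** 2
```

The squared coefficients of the (normalized) superposition sum to one, matching their interpretation as measurement probabilities.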
  • Moreover, ⟨ϕ| is used to represent the conjugate transpose of |ϕ⟩, i.e., (|ϕ⟩)† = ⟨ϕ|. Given two states |ϕ⟩ and |ψ⟩, the inner product on the Hilbert space is defined as ⟨ϕ|ψ⟩, with ⟨ϕ|ψ⟩* = ⟨ψ|ϕ⟩. A density matrix is a projection operator which is used to describe the statistics of a quantum system. For example, the density operator of the mixed state |ϕ⟩ in the canonical basis reads ρ = Σ_{i=1}^{2^n} |ϕ_i|² |i⟩⟨i|. Moreover, given two subsystems with density matrices ρ and σ, the density matrix for the whole system is their tensor product, namely ρ ⊗ σ.
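A minimal numerical sketch of the density-matrix statements above (the state |ϕ⟩ and the second subsystem σ are illustrative values):

```python
import numpy as np

# Mixed-state statistics in the canonical basis: rho = sum_i |phi_i|^2 |i><i|.
phi = np.array([0.6, 0.8j])
rho = np.diag(np.abs(phi) ** 2)

# A second subsystem and the composite density matrix rho (x) sigma.
sigma = np.diag([0.5, 0.5])
rho_total = np.kron(rho, sigma)
```

Both ρ and ρ ⊗ σ have unit trace, as required of density matrices.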
  • The time evolution of a quantum state is generated by the Hamiltonian of the system. The Hamiltonian H is a Hermitian operator with H = H†. Let |ϕ(t)⟩ denote the quantum state at time t under the evolution of a time-invariant Hamiltonian H. Then, according to the Schrödinger equation, |ϕ(t)⟩ = e^{−iHt}|ϕ(0)⟩,
  • where the unitary operator e^{−iHt} can be written as the matrix exponentiation of the Hermitian matrix H, i.e.,
  • e^{−iHt} = e^{−it Σ_i λ_i |u_i⟩⟨u_i|} = Σ_i e^{−iλ_i t} |u_i⟩⟨u_i|.  (4)
  • Eigenvectors of the Hamiltonian H, denoted |u_i⟩, also form a basis of the Hilbert space. Then the spectral decomposition of the Hamiltonian H reads H = Σ_i λ_i |u_i⟩⟨u_i|, where λ_i is the eigenvalue, or energy level, of the system. Therefore, the evolution operator of a time-invariant Hamiltonian can be rewritten as
  • e^{−iHt} = Σ_{n=0}^∞ (−iHt)^n/n!,
  • where, to obtain Eq. (4), the observation (|u_i⟩⟨u_i|)^n = |u_i⟩⟨u_i| for n = 1, . . . , ∞ is used.
  • When applying it to an arbitrary initial state |ϕ(0)⟩, we obtain |ϕ(t)⟩ = e^{−iHt}|ϕ(0)⟩ = Σ_i e^{−iλ_i t} β_i |u_i⟩, where β_i indicates the overlap between the initial state and the eigenbasis of H, i.e., β_i := ⟨u_i|ϕ(0)⟩. Implementing the time evolution operator e^{−iHt} and simulating the dynamics of a quantum system using universal quantum circuits is a challenging task, since it involves the matrix exponentiation of a possibly dense matrix.
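The equivalence of the spectral form (Eq. 4) and the series form of e^{−iHt} can be checked numerically. The following sketch is illustrative, using a small random Hermitian H and a truncated series; it also evaluates |ϕ(t)⟩ through the overlaps β_i.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random Hermitian Hamiltonian H = H^dagger.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2
t = 0.7

# Spectral form: e^{-iHt} = sum_i e^{-i lambda_i t} |u_i><u_i|  (Eq. 4).
lam, U = np.linalg.eigh(H)
U_spec = (U * np.exp(-1j * lam * t)) @ U.conj().T

# Series form: e^{-iHt} = sum_n (-iHt)^n / n!, truncated at n = 40.
U_series = np.zeros_like(H)
term = np.eye(4, dtype=complex)
for n in range(40):
    U_series = U_series + term
    term = term @ (-1j * H * t) / (n + 1)

# Evolving |phi(0)> gives sum_i e^{-i lambda_i t} beta_i |u_i>,
# with overlaps beta_i = <u_i|phi(0)>.
phi0 = np.array([1.0, 0, 0, 0], dtype=complex)
beta = U.conj().T @ phi0
phi_t = U @ (np.exp(-1j * lam * t) * beta)
```

Both forms agree to numerical precision and yield a unitary operator, illustrating why exponentiating a dense H is the expensive step on quantum hardware.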
  • The present invention concerns a method for the inference on knowledge graphs using a quantum computing device 200. In the following, we focus on the semantic tensor χ ∈ {0, 1}^(d_1×d_2×d_3), with d_1, d_2, and d_3 defined as above, and let χ̂ denote the partially observed tensor.
  • Since knowledge graphs contain global relational patterns, χ can be approximated by a lower-rank tensor χ_r, reconstructed approximately from χ̂ via tensor SVD according to Theorems 1 and 2. Since our quantum method is sampling-based instead of learning-based, without loss of generality we consider sampling the correct objects given the query (s, p, ?) as an example and discuss the runtime complexity of one inference. Herein we therefore designate the given subject as a first entity of a first entity type ("subjects"), the predicate as a second entity of a second entity type ("predicates"), and the unknown object as a third entity of a third entity type ("objects").
  • The preference matrix of a recommendation system normally contains multiple nonzero entries in a given user row; item recommendations are made according to the nonzero entries in the user row by assuming that the user is 'typical'. However, in a knowledge graph there might be only one nonzero entry in the row (s, p, ⋅). Therefore, advantageously, for the inference on a knowledge graph the quantum algorithm samples triples with the given subject s and then post-selects on the predicate p. This is a feasible step especially if the number of semantic triples with s as subject and p as predicate is 𝒪(1).
  • The present method comprises preparing and exponentiating a density matrix derived from the tensorized classical data. One of the challenges of quantum machine learning is loading classical data as quantum states and measuring the states, since reading or writing high-dimensional data from quantum states might obliterate the quantum acceleration. Therefore, the technique of quantum Random Access Memory (qRAM) was developed (see "Giovannetti et al."), which can load classical data into quantum states with exponential acceleration. For details about the qRAM technique, reference is made to "Giovannetti et al.". The basic idea of the present method is to project the observed data onto the eigenspaces of χ̂ whose corresponding singular values have an absolute value larger than a threshold τ. Therefore, an operator is needed which can reveal the eigenspaces and singular values of χ̂.
  • As mentioned in the foregoing, in a step S10, a knowledge graph is modelled as a partially observed tensor {circumflex over (χ)} in a classical computer-readable memory structure 110 of a classical computing device 100, see FIG. 2.
  • In a step S12, which does not necessarily have to be performed in this order, a cutoff threshold τ is provided, which is preferably determined as has been described in the foregoing.
  • In a step S20, the following density operator (or: density matrix) is created, on the quantum computing device 200, from χ̂ via a tensor contraction scheme:
  • ρ_{χ̂†χ̂} := Σ_{i_2 i_3, i_2′ i_3′} (Σ_{i_1} χ̂_{i_1,i_2 i_3} χ̂_{i_1,i_2′ i_3′}) |i_2 i_3⟩⟨i_2′ i_3′|,  (5)
  • where Σ_{i_1} χ̂_{i_1,i_2 i_3} χ̂_{i_1,i_2′ i_3′} denotes tensor contraction along the first dimension (here: the subject dimension, since the exemplary inference task is (s, p, ?)); a normalization factor is neglected temporarily.
  • Especially, ρ_{χ̂†χ̂} can be prepared via qRAM (see "Giovannetti et al.") in time
  • 𝒪(polylog(d_1 d_2 d_3))  (6)
  • in the following way: First, the quantum state
  • Σ_{i_1 i_2 i_3} χ̂_{i_1,i_2 i_3} |i_1⟩|i_2 i_3⟩ = Σ_{i_1 i_2 i_3} χ̂_{i_1,i_2 i_3} |i_1⟩⊗|i_2⟩⊗|i_3⟩
  • is prepared via qRAM, which can be implemented in time 𝒪(polylog(d_1 d_2 d_3)), where |i_1⟩⊗|i_2⟩⊗|i_3⟩ represents the tensor product of index registers in the canonical basis.
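What the qRAM preparation step achieves can be emulated classically as amplitude encoding; the following sketch uses an illustrative toy tensor (a real qRAM performs this loading in polylogarithmic time on quantum hardware, which the classical emulation does not reproduce):

```python
import numpy as np

# Amplitude encoding of the tensor entries into one state over the joint
# index register |i1>|i2>|i3> (what qRAM prepares; here emulated classically).
chi_hat = np.array([[[1.0, 0.0], [0.0, 1.0]],
                    [[0.0, 1.0], [1.0, 0.0]]])  # toy 2x2x2 tensor
amplitudes = chi_hat.reshape(-1)
state = amplitudes / np.linalg.norm(amplitudes)

# Measuring in the canonical basis returns index (i1, i2, i3) with
# probability |chi_hat[i1,i2,i3]|^2 / ||chi_hat||_F^2.
probs = np.abs(state) ** 2
```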
  • The corresponding density matrix of the quantum state reads
  • ρ = Σ_{i_1 i_2 i_3} Σ_{i_1′ i_2′ i_3′} χ̂_{i_1 i_2 i_3} χ̂_{i_1′ i_2′ i_3′} |i_1 i_2 i_3⟩⟨i_1′ i_2′ i_3′|.
  • After preparation, a partial trace implemented on the first index register of the density matrix,
  • tr_1(ρ) = Σ_{i_2 i_3, i_2′ i_3′} (Σ_{i_1} χ̂_{i_1 i_2 i_3} χ̂_{i_1 i_2′ i_3′}) |i_2 i_3⟩⟨i_2′ i_3′|,
  • gives the desired operator ρ_{χ̂†χ̂}.
  • Suppose that χ̂ has a tensor SVD approximation with
  • χ̂ ≈ Σ_{i=1}^R σ_i u_1^(i) ⊗ u_2^(i) ⊗ u_3^(i).
  • Then the spectral decomposition of the density operator can be written as
  • ρ_{χ̂†χ̂} = (1/Σ_{i=1}^R σ_i²) Σ_{i=1}^R σ_i² |u_2^(i)⟩⊗|u_3^(i)⟩ ⟨u_2^(i)|⊗⟨u_3^(i)|.
  • Especially, the eigenstates |u_2^(i)⟩⊗|u_3^(i)⟩ of ρ_{χ̂†χ̂} form another basis of the Hilbert space of the tensor product of quantum index registers.
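The contraction of Eq. 5, its identity with the partial trace, and the relation between the eigenvalues of ρ_{χ̂†χ̂} and the squared singular values can all be verified classically. A minimal sketch (illustrative random tensor; the matrix SVD of the mode-1 flattening serves as a stand-in for the tensor SVD):

```python
import numpy as np

rng = np.random.default_rng(2)
d1, d2, d3 = 3, 2, 2
chi_hat = rng.random((d1, d2, d3))

# Contraction along the subject dimension (Eq. 5), unnormalized:
# rho[(i2,i3),(i2',i3')] = sum_{i1} chi[i1,i2,i3] * chi[i1,i2',i3'].
rho = np.einsum('ijk,ilm->jklm', chi_hat, chi_hat).reshape(d2 * d3, d2 * d3)

# Equivalently: the partial trace over the first index register of |chi><chi|.
vec = chi_hat.reshape(d1, d2 * d3)
rho_tr1 = sum(np.outer(vec[i1], vec[i1]) for i1 in range(d1))

# Eigenvalues of rho are the squared singular values of the mode-1 flattening.
sing = np.linalg.svd(vec, compute_uv=False)          # descending
eig = np.linalg.eigvalsh(rho)[::-1]                  # descending
```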
  • Then the singular values of ρ_{χ̂†χ̂} need to be read out and written into another quantum register, preferably via the density matrix exponentiation method proposed in "Lloyd et al.". This operation is also referred to as quantum principal component analysis (qPCA).
  • In order to write the singular values into a quantum register, in a step S30 the unitary operator
  • U := Σ_{k=0}^{K−1} |kΔt⟩⟨kΔt|_C ⊗ exp(−ikΔt ρ̃_{χ̂†χ̂})
  • is prepared, which is the tensor product of a maximally mixed state
  • Σ_{k=0}^{K−1} |kΔt⟩⟨kΔt|_C
  • with the exponentiation of the rescaled density matrix
  • ρ̃_{χ̂†χ̂} := ρ_{χ̂†χ̂}/(d_2 d_3).
  • Especially, the clock register C is needed for the phase estimation, and Δt determines the precision of the estimated singular values.
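The structure of the operator U can be emulated as a block-diagonal matrix. The following sketch uses illustrative dimensions and exact Hermitian exponentiation via eigendecomposition (rather than the quantum simulation techniques of "Rebentrost et al.") and checks that U is unitary:

```python
import numpy as np

rng = np.random.default_rng(3)
d1, d2, d3 = 3, 2, 2
chi_hat = rng.random((d1, d2, d3))

# Rescaled density matrix rho_tilde = rho / (d2 * d3), unnormalized as in Eq. 5.
vec = chi_hat.reshape(d1, d2 * d3)
rho_tilde = (vec.T @ vec) / (d2 * d3)

def expm_hermitian(M, s):
    """exp(-1j * s * M) for a Hermitian M, via its eigendecomposition."""
    lam, V = np.linalg.eigh(M)
    return (V * np.exp(-1j * s * lam)) @ V.conj().T

# U = sum_k |k dt><k dt|_C (x) exp(-i k dt rho_tilde): block-diagonal unitary.
K, dt = 4, 0.1
U = np.zeros((K * d2 * d3, K * d2 * d3), dtype=complex)
for k in range(K):
    E_kk = np.zeros((K, K))
    E_kk[k, k] = 1.0
    U += np.kron(E_kk, expm_hermitian(rho_tilde, k * dt))
```

Each clock value |kΔt⟩ controls how long the exponentiated density matrix acts, which is exactly what the phase estimation in the next steps exploits.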
  • Recall that the query is (s, p, ?) on the knowledge graph, and that the present method should return triples with subject s. Hence, in a step S40, the quantum state |χ̂_s^(1)⟩_I is created (or: generated) via qRAM in an input data register I, where χ̂_s^(1) denotes the s-th row of the flattened tensor χ̂ along the first dimension.
  • After preparing S40 the quantum state |χ̂_s^(1)⟩_I, in a step S50 the prepared unitary operator U is applied onto
  • Σ_{k=0}^{K−1} |kΔt⟩_C ⊗ |χ̂_s^(1)⟩_I.
  • Implementing the unitary operator U is nontrivial, since the exponent ρ_{χ̂†χ̂} in the operator U could be a dense matrix, and exponentiating a dense matrix can be very involved. Therefore, one can use the dense matrix exponentiation method recently proposed in "Rebentrost et al.". Especially, one can show that the unitary operator e^{−itρ_{χ̂†χ̂}} can be applied to any quantum state up to an arbitrary simulation time t. The total number of steps for the simulation is
  • 𝒪(t²/ϵ · T_ρ),  (7)
  • where ϵ is the desired accuracy and T_ρ is the time for accessing the density matrix ρ_{χ̂†χ̂}. Hence the unitary operator U can be applied to any quantum state given simulation time t in 𝒪(t²/ϵ · T_ρ) steps on quantum computers.
  • After applying the unitary operator U onto
  • Σ_{k=0}^{K−1} |kΔt⟩_C ⊗ |χ̂_s^(1)⟩_I,
  • we have the following quantum state:
  • Σ_{i=1}^R β_i (Σ_{k=0}^{K−1} e^{−ikΔt σ̃_i²} |kΔt⟩_C) ⊗ |u_2^(i)⟩_I ⊗ |u_3^(i)⟩_I,  (8)
  • where
  • σ̃_i := σ_i/√(d_2 d_3)
  • are the rescaled singular values of ρ_{χ̂†χ̂} (see Eq. 4). Moreover, β_i are the coefficients of |χ̂_s^(1)⟩_I decomposed in the eigenbasis |u_2^(i)⟩_I ⊗ |u_3^(i)⟩_I of ρ_{χ̂†χ̂}, namely |χ̂_s^(1)⟩_I = Σ_{i=1}^R β_i |u_2^(i)⟩_I ⊗ |u_3^(i)⟩_I.
  • In a step S60, a quantum phase estimation on the clock register C is performed, preferably using the quantum phase estimation algorithm proposed in "Kitaev". The resulting state after phase estimation reads Σ_{i=1}^R β_i |λ_i⟩_C ⊗ |u_2^(i)⟩_I ⊗ |u_3^(i)⟩_I, where
  • λ_i := 2π/σ̃_i².
  • In fact, it can be shown that the probability amplitude of measuring the register C is maximized when
  • kΔt = └2π/σ̃_i²┐,
  • where └⋅┐ represents the nearest integer. Therefore, the small time step Δt determines the accuracy of the quantum phase estimation. We may choose
  • Δt = 𝒪(1/ϵ),
  • and the total run time is
  • 𝒪(1/ϵ³ · T_ρ) = 𝒪(1/ϵ³ · polylog(d_1 d_2 d_3))
  • according to Eq. (6) and Eq. (7).
  • In a step S70, a computation on the clock register C is performed to recover the original singular values of ρ_{χ̂†χ̂} and obtain
  • Σ_{i=1}^R β_i |σ_i²⟩_C ⊗ |u_2^(i)⟩_I ⊗ |u_3^(i)⟩_I.
  • For example, in this step S70 the λ_i stored in the clock register C may be transformed into σ_i², λ_i being a function of σ_i². The threshold operations discussed in the following as applied to the σ_i² may therefore also, in an alternative formulation, be applied to the λ_i, with the threshold τ being appropriately rescaled.
  • In a step S90, a quantum singular value projection on the quantum state obtained from the last step S70 is performed. Notice that, classically, this step corresponds to projecting χ̂ onto the subspace χ̂_{|⋅|≥τ}. In this way, observed entries are smoothed and unobserved entries are boosted, from which unobserved triples (s, p, ?) in the test dataset can be inferred (see Theorem 2).
  • Quantum singular value projection given the threshold τ>0 can be implemented in the following way. In a step S80, a new auxiliary register R is created on the quantum computing device 200 using an auxiliary qubit and a unitary operation that maps |σ_i²⟩_C ⊗ |0⟩_R to |σ_i²⟩_C ⊗ |1⟩_R only if σ_i² < τ², while otherwise |0⟩_R remains unchanged. This step of projection gives the state
  • Σ_{i: σ_i² ≥ τ²} β_i |σ_i²⟩_C ⊗ |u_2^(i)⟩_I ⊗ |u_3^(i)⟩_I ⊗ |0⟩_R + Σ_{i: σ_i² < τ²} β_i |σ_i²⟩_C ⊗ |u_2^(i)⟩_I ⊗ |u_3^(i)⟩_I ⊗ |1⟩_R.  (9)
  • In other words, the step S90 means performing, on the result of the computation S70 to recover the singular values, a singular value projection conditioned on the state of the auxiliary register, R, such that eigenstates whose squared singular values are to one side of (here: smaller than) the squared cutoff threshold, τ², are entangled with a first eigenstate |1⟩_R of the auxiliary register, R, and such that eigenstates whose squared singular values are to another side of (here: larger than) the squared cutoff threshold, τ², or equal to the squared cutoff threshold, τ², are entangled with the second eigenstate |0⟩_R of the auxiliary register, R.
  • One of the major advantages here is that the individual singular values themselves are not used in any decision, but only their squares. This means that possible negative singular values, which may occur in the case of tensors (unlike in the case of matrices), do not have any adverse impact on the present method.
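The sign-insensitivity can be seen directly: the projection decision compares σ_i² with τ², so flipping the sign of any singular value leaves the kept set unchanged. A minimal sketch with illustrative values:

```python
import numpy as np

# Singular values from a tensor SVD may be negative; the projection only
# compares squares against tau^2, so the sign is irrelevant.
sigma = np.array([-0.9, 0.3, 0.6])  # illustrative values
tau = 0.5
keep = sigma ** 2 >= tau ** 2

# The same mask results if every sign is flipped.
keep_flipped = (-sigma) ** 2 >= tau ** 2
```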
  • In a step S100, the new register R is measured and post-selected on the state |0⟩_R. This gives the projected state
  • Σ_{i: σ_i² ≥ τ²} β_i |σ_i²⟩_C ⊗ |u_2^(i)⟩_I ⊗ |u_3^(i)⟩_I.
  • In a step S110, the clock register C is traced out such that the following state is obtained:
  • Σ_{i: σ_i² ≥ τ²} β_i |u_2^(i)⟩_I ⊗ |u_3^(i)⟩_I.
  • The tracing out may be performed e.g. as has been described in “Nielsen et al.”.
  • In a step S120, the resulting quantum state from the last step S110 is measured in the canonical basis of the input register I to get the triples with subject s.
  • In a step S130, they are post-selected on the predicate p. This will return objects to the inference (s, p, ?) after
  • 𝒪(1/ϵ³ · polylog(d_1 d_2 d_3))
  • steps.
  • The quantum algorithm is also summarized in the following table, Algorithm 1.
  • Algorithm 1 Quantum Tensor SVD on KGs
    Input: Inference task (s, p, ?)
    Output: Possible objects to the inference task
    Require: Quantum access to χ̂ stored in a classical memory structure; threshold τ for the singular value projection
    1: Create ρ_{χ̂†χ̂} via qRAM
    2: Create the state |χ̂_s^(1)⟩_I on the input data register I via qRAM
    3: Prepare the unitary operator U := Σ_{k=0}^{K−1} |kΔt⟩⟨kΔt|_C ⊗ exp(−ikΔt ρ̃_{χ̂†χ̂}) and apply it to |χ̂_s^(1)⟩_I
    4: Quantum phase estimation on the clock register C to obtain Σ_{i=1}^R β_i |λ_i⟩_C ⊗ |u_2^(i)⟩_I ⊗ |u_3^(i)⟩_I
    5: Controlled computation on the clock register C to obtain Σ_{i=1}^R β_i |σ_i²⟩_C ⊗ |u_2^(i)⟩_I ⊗ |u_3^(i)⟩_I
    6: Singular value projection given the threshold τ to obtain Σ_{i: σ_i² ≥ τ²} β_i |σ_i²⟩_C ⊗ |u_2^(i)⟩_I ⊗ |u_3^(i)⟩_I ⊗ |0⟩_R + Σ_{i: σ_i² < τ²} β_i |σ_i²⟩_C ⊗ |u_2^(i)⟩_I ⊗ |u_3^(i)⟩_I ⊗ |1⟩_R
    7: Measure the register R and post-select the state |0⟩_R
    8: Trace out the clock register C
    9: Measure the resulting state Σ_{i: σ_i² ≥ τ²} β_i |u_2^(i)⟩_I ⊗ |u_3^(i)⟩_I in the canonical basis of the input register I
    10: Post-select on the predicate p from the sampled triples (s, ⋅, ⋅)
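Classically emulated end to end, Algorithm 1 amounts to projecting the subject row onto the retained eigenspaces of ρ_{χ̂†χ̂} and post-selecting the predicate. The following sketch uses an illustrative toy knowledge graph in which two subjects behave identically, so a missing triple becomes inferable; it is a classical emulation, not the quantum implementation itself:

```python
import numpy as np

# Toy KG: 2 subjects, 2 predicates, 3 objects; the two subjects behave
# identically, so the missing triple (s=0, p=1, o=1) should be inferable.
d1, d2, d3 = 2, 2, 3
chi_hat = np.zeros((d1, d2, d3))
chi_hat[0, 0, 0] = 1.0
chi_hat[1, 0, 0] = 1.0
chi_hat[1, 1, 1] = 1.0  # observed for subject 1, missing for subject 0

M = chi_hat.reshape(d1, d2 * d3)
rho = M.T @ M  # unnormalized rho_{chi^dagger chi}

# "Singular value projection": keep eigenspaces with sigma^2 >= tau^2.
tau2 = 1.0
eigval, eigvec = np.linalg.eigh(rho)
P = eigvec[:, eigval >= tau2]

# Project the subject row (step 9), then post-select predicate p = 1 (step 10).
row = M[0]
projected = P @ (P.T @ row)
scores = projected.reshape(d2, d3)[1]  # scores over objects for p = 1
inferred_object = int(np.argmax(scores))
```

The projection boosts the unobserved entry (0, 1, 1), so the query (s=0, p=1, ?) returns object 1, mirroring the smoothing behavior described for step S90.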
  • One of the main advantages of the present invention is that a method for implementing implicit knowledge inference from tensorized data, e.g., relational databases such as knowledge graphs, on quantum computing devices is proposed.
  • The present method shows that knowledge inference from tensorized data can be implemented with exponential acceleration on quantum computing devices. As has been shown, this is much faster and thus less resource-consuming than classical methods.
  • We also test the classical part of our method, namely the tensor singular value decomposition, on a classical device, since due to technical challenges current quantum devices only have a few universal physical qubits. The simulation results are comparable to those of other benchmark algorithms, which supports the expected performance of implementing the quantum TSVD on future quantum computers.
  • The acceleration is given by the intrinsic parallel computing of quantum computing devices as described in the foregoing which, however, is only made applicable by the specific technical implementation of the present invention.
  • In some sense, the present method is based on finding the corresponding quantum counterpart of classical tensor singular value decomposition method. To show that tensor singular value decomposition has comparable performance with other classical algorithms, the present method is verified by investigating the performance of classical tensor SVD on benchmark datasets: Kinship and FB15k-237, see e.g. the scientific publication by Kristina Toutanova and Danqi Chen, “Observed versus latent features for knowledge base and text inference.”, in: Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57-66, 2015.
  • Given a semantic triple (s, p, o), the value function of TSVD is defined as
  • η_spo = Σ_{i=1}^R σ_i u_s^(i) u_p^(i) u_o^(i),
  • where u_s, u_p, u_o are vector representations of s, p, o, respectively. The TSVD is trained by minimizing the objective function
  • ℒ := (1/|𝒯_train|) Σ_{(s,p,o)∈𝒯_train} (y_spo − η_spo)² + γ(∥U_s†U_s − 𝟙_R∥_F + ∥U_p†U_p − 𝟙_R∥_F + ∥U_o†U_o − 𝟙_R∥_F)
  • via stochastic gradient descent. The hyper-parameter γ is used to encourage the orthonormality of the embedding matrices for subjects, predicates, and objects.
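The training of the classical TSVD baseline can be sketched with explicit gradients of the objective above. The dimensions, hyper-parameters, and dense toy data are illustrative; full-batch gradient descent with a squared-Frobenius penalty is used here for simplicity instead of stochastic gradient descent over 𝒯_train:

```python
import numpy as np

rng = np.random.default_rng(4)
d1, d2, d3, R, gamma, lr = 4, 3, 4, 2, 0.1, 0.05

# Dense toy targets y_spo in {0, 1}; real training iterates over triples.
y = (rng.random((d1, d2, d3)) < 0.3).astype(float)

Us = 0.1 * rng.standard_normal((d1, R))
Up = 0.1 * rng.standard_normal((d2, R))
Uo = 0.1 * rng.standard_normal((d3, R))
sig = np.ones(R)

def value():
    # eta_spo = sum_i sig_i * Us[s,i] * Up[p,i] * Uo[o,i]
    return np.einsum('r,sr,pr,or->spo', sig, Us, Up, Uo)

def loss():
    res = value() - y
    # Squared Frobenius penalty for differentiability; the un-squared
    # variant in the text behaves similarly.
    pen = sum(np.linalg.norm(U.T @ U - np.eye(R)) ** 2 for U in (Us, Up, Uo))
    return (res ** 2).mean() + gamma * pen

loss_start = loss()
n = y.size
for _ in range(200):
    res = value() - y
    gUs = 2 / n * np.einsum('spo,r,pr,or->sr', res, sig, Up, Uo) \
        + gamma * 4 * Us @ (Us.T @ Us - np.eye(R))
    gUp = 2 / n * np.einsum('spo,r,sr,or->pr', res, sig, Us, Uo) \
        + gamma * 4 * Up @ (Up.T @ Up - np.eye(R))
    gUo = 2 / n * np.einsum('spo,r,sr,pr->or', res, sig, Us, Up) \
        + gamma * 4 * Uo @ (Uo.T @ Uo - np.eye(R))
    gsig = 2 / n * np.einsum('spo,sr,pr,or->r', res, Us, Up, Uo)
    Us -= lr * gUs
    Up -= lr * gUp
    Uo -= lr * gUo
    sig -= lr * gsig
loss_end = loss()
```

The penalty drives each embedding matrix toward orthonormal columns while the squared residual fits the observed triples, so the total objective decreases over the run.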
    • In the following Table 1, the performance of the tensor SVD model is compared with that of other benchmark models, e.g., RESCAL (proposed in Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel, "A three-way model for collective learning on multi-relational data", in: ICML, volume 11, pages 809-816, 2011), Tucker (L. R. Tucker, "Some mathematical notes on three-mode factor analysis", Psychometrika, September 1966, Vol. 31, Issue 3, pp. 279-311), and ComplEx (Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard, "Complex embeddings for simple link prediction", in: International Conference on Machine Learning, pages 2071-2080, 2016).
  • FIG. 3 and FIG. 4 show the training curves of the TSVD on FB15k-237. They show that TSVD performs reasonably well already for small rank; hence the projection threshold τ can be estimated according to Theorem 2.
  • TABLE 1
    Mean Rank, Hits@3, Hits@10 scores of various models compared
    on the Kinship and FB15k-237 datasets.
    KINSHIP FB15K-237
    Methods MR @3 @10 MR @3 @10
    RESCAL 3.2 88.8 95.5 291.3 20.7 35.1
    TUCKER 2.9 89.8 95.0 276.1 20.9 35.7
    COMPLEX 2.2 90.0 97.7 242.7 25.2 39.7
    TSVD 2.7 84.8 96.6 365.5 19.4 35.8
  • FIG. 3 shows the mean rank over epochs for the rank values R=32, R=128, R=256, and R=512.
  • FIG. 4 shows the Hits@10 scores over epochs for the rank values R=32, R=128, R=256, and R=512.
  • Noisy intermediate-scale quantum processing units (or: quantum computing devices) are expected to be commercially available in the near future. With the help of these quantum computing devices and the present method, learning and inference on the ever-increasing industrial knowledge graphs can be dramatically accelerated compared to conventional computers.
  • In short, the invention provides a computer-implemented method of performing an inference task on a knowledge graph comprising semantic triples of entities, wherein entity types are subject, object and predicate, and wherein each semantic triple comprises one of each entity type, using a quantum computing device, wherein a first entity of a first type and a second entity of a second type are given and the inference task is to infer a third entity of the third type.
  • By performing specific steps and choosing values according to specific prescriptions, an efficient and resource-saving method is developed that utilizes the power of quantum computing systems for inference tasks on large knowledge graphs. In particular, an advantageous value for a cutoff threshold for a cutoff based on singular values of a singular value tensor decomposition is prescribed, and a sequence of steps is developed in which only the squares of the singular values are of consequence and their signs are not.

Claims (10)

1. A computer-implemented method of performing an inference task on a knowledge graph comprising semantic triples of entities, wherein entity types are subject, object and predicate, and wherein each semantic triple comprises one of each entity type, using a quantum computing device, wherein a first entity of a first type and a second entity of a second type are given and the inference task is to infer a third entity of the third type, comprising at least the steps of:
providing a query comprising the first entity and the second entity;
modelling the knowledge graph as a partially observed tensor {circumflex over (χ)} in a classical computer-readable memory structure;
providing a cutoff threshold, τ;
creating, from the partially observed tensor {circumflex over (χ)}, a density operator in a quantum random access memory, qRAM, on the quantum computing device;
preparing a unitary operator, U, based on the created density operator, comprising states on the clock register, C;
creating a first entity state |χ_s^(1)⟩ indicating the first entity on an input data register of the qRAM, wherein the first entity state is entangled with a maximally entangled clock register, C;
applying the prepared unitary operator to at least the first entity state;
performing thereafter a quantum phase estimation on the clock register, C;
performing thereafter a computation on the clock register, C, to recover singular values;
creating an auxiliary qubit in an auxiliary register, R, which is entangled with the state resulting from the previous step;
wherein the auxiliary register, R, has a first eigenstate |1⟩_R and a second eigenstate |0⟩_R;
performing, on the result of the computation to recover the singular values, a singular value projection conditioned on the state of the auxiliary register, R, such that eigenstates whose squared singular values are to one side of the squared cutoff threshold, τ², are entangled with the first eigenstate |1⟩_R of the auxiliary register, R, and such that eigenstates whose squared values are to another side of the squared cutoff threshold, τ², or equal to the squared cutoff threshold, τ², are entangled with the second eigenstate |0⟩_R of the auxiliary register, R;
measuring on the auxiliary register, R, and post-selecting one of the two eigenstates, |0⟩_R;
tracing out the clock register, C;
measuring the result thereof in a canonical basis of the input register, wherein the canonical basis comprises tensor products of a basis connected to the second entity type and of a basis connected to the third entity type;
post-selecting the second entity in the basis connected to the second entity type to infer the third entity.
2. The method of claim 1,
wherein the partially observed tensor is obtained such that, for each entry of the partially observed tensor {circumflex over (χ)}, the entry is with a probability p directly proportional to a corresponding entry of a complete tensor χ modelling a complete knowledge graph and equal to 0 with a probability of 1−p, with p being smaller than 1.
3. The method of claim 2,
wherein the cutoff threshold τ is chosen as smaller or equal to a quantity which is indirectly proportional to the probability p.
4. The method of claim 2,
wherein the probability p is chosen to be larger than or equal to a maximum value out of a set of values.
5. The method of claim 4,
wherein the set of values comprises at least a value of 0.22.
6. The method of claim 4,
wherein the partially observed tensor {circumflex over (χ)} is expressible as the sum of the complete tensor χ and a noise tensor N, and wherein a desired value ϵ>0 is defined such that the Frobenius norm ∥⋅∥F of a rank-r approximation Nr of the noise tensor N is bounded such that ∥Nr∥F≤ϵ∥A∥F, and
wherein the set of values comprises at least one value that is proportional to r and indirectly proportional to ϵ to the n-th power, with n integer and n≥1.
7. The method of claim 6,
wherein the set of values comprises at least one value that is proportional to r and that is indirectly proportional to the square of ϵ.
8. The method of claim 6,
wherein the set of values comprises at least one value that is proportional to a square root of r and that is indirectly proportional to ϵ.
9. The method of claim 6,
wherein the set of values comprises at least one value that is independent of r and that is indirectly proportional to the square of ϵ.
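Claims 6 through 9 hinge on the condition that the best rank-r approximation of the noise tensor stays small in Frobenius norm relative to ∥A∥F. A hedged NumPy check of that condition on matricized toy data follows; taking A as the matricized tensor, and the sizes and noise scale used here, are assumptions about the patent's notation made for illustration.

```python
import numpy as np

def rank_r_approx(M, r):
    """Best rank-r approximation of M in Frobenius norm
    (Eckart-Young theorem), via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def noise_bound_holds(A, N, r, eps):
    """Check the claim-6 style condition ||N_r||_F <= eps * ||A||_F."""
    N_r = rank_r_approx(N, r)
    return np.linalg.norm(N_r) <= eps * np.linalg.norm(A)

rng = np.random.default_rng(0)
A = rng.random((8, 8))           # toy matricized tensor
N = 1e-3 * rng.random((8, 8))    # small toy noise
ok = noise_bound_holds(A, N, r=2, eps=0.1)
```

By Eckart-Young, the truncated SVD gives the Frobenius-optimal rank-r approximation, so this check is the tightest possible for a fixed r.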
10. A computing system comprising a classical computing device and a quantum computing device, wherein the computing system is configured to perform the method according to claim 1.
US16/414,065 2019-05-16 2019-05-16 Quantum machine learning algorithm for knowledge graphs Active 2041-11-19 US11562278B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/414,065 US11562278B2 (en) 2019-05-16 2019-05-16 Quantum machine learning algorithm for knowledge graphs
Publications (2)

Publication Number Publication Date
US20200364599A1 true US20200364599A1 (en) 2020-11-19
US11562278B2 US11562278B2 (en) 2023-01-24

Family

ID=73228704

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/414,065 Active 2041-11-19 US11562278B2 (en) 2019-05-16 2019-05-16 Quantum machine learning algorithm for knowledge graphs

Country Status (1)

Country Link
US (1) US11562278B2 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200258008A1 (en) * 2019-02-12 2020-08-13 NEC Laboratories Europe GmbH Method and system for adaptive online meta learning from data streams
CN113051404A (en) * 2021-01-08 2021-06-29 中国科学院自动化研究所 Knowledge reasoning method, device and equipment based on tensor decomposition
US11082438B2 (en) 2018-09-05 2021-08-03 Oracle International Corporation Malicious activity detection by cross-trace analysis and deep learning
US11120368B2 (en) 2017-09-27 2021-09-14 Oracle International Corporation Scalable and efficient distributed auto-tuning of machine learning and deep learning models
US11176487B2 (en) 2017-09-28 2021-11-16 Oracle International Corporation Gradient-based auto-tuning for machine learning and deep learning models
US11218498B2 (en) 2018-09-05 2022-01-04 Oracle International Corporation Context-aware feature embedding and anomaly detection of sequential log data using deep recurrent neural networks
CN114579851A (en) * 2022-02-25 2022-06-03 电子科技大学 Information recommendation method based on adaptive node feature generation
US11429895B2 (en) 2019-04-15 2022-08-30 Oracle International Corporation Predicting machine learning or deep learning model training time
US11451565B2 (en) 2018-09-05 2022-09-20 Oracle International Corporation Malicious activity detection by cross-trace analysis and deep learning
US11487976B2 (en) * 2018-12-27 2022-11-01 Bull Sas Method of classification of images among different classes
US11544494B2 (en) 2017-09-28 2023-01-03 Oracle International Corporation Algorithm-specific neural network architectures for automatic machine learning model selection
US11580409B2 (en) * 2016-12-21 2023-02-14 Innereye Ltd. System and method for iterative classification using neurophysiological signals
US20230054480A1 (en) * 2021-08-19 2023-02-23 International Business Machines Corporation Viewpoint analysis of video data
US11868854B2 (en) 2019-05-30 2024-01-09 Oracle International Corporation Using metamodeling for fast and accurate hyperparameter optimization of machine learning and deep learning models

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10747740B2 (en) * 2015-03-24 2020-08-18 Kyndi, Inc. Cognitive memory graph indexing, storage and retrieval
US11727243B2 (en) * 2019-01-30 2023-08-15 Baidu Usa Llc Knowledge-graph-embedding-based question answering
Also Published As

Publication number Publication date
US11562278B2 (en) 2023-01-24

Similar Documents

Publication Publication Date Title
US11562278B2 (en) Quantum machine learning algorithm for knowledge graphs
Zhao et al. Bayesian deep learning on a quantum computer
US20200057957A1 (en) Quantum Computer with Improved Quantum Optimization by Exploiting Marginal Data
Chuang et al. Prescription for experimental determination of the dynamics of a quantum black box
Terhal et al. Problem of equilibration and the computation of correlation functions on a quantum computer
Coyle et al. Quantum versus classical generative modelling in finance
Granade et al. QInfer: Statistical inference software for quantum applications
Klich et al. Measuring entanglement entropies in many-body systems
US20200279185A1 (en) Quantum relative entropy training of boltzmann machines
Wu et al. Davinz: Data valuation using deep neural networks at initialization
Carpentier et al. Uncertainty quantification for matrix compressed sensing and quantum tomography problems
US20200226487A1 (en) Measurement Reduction Via Orbital Frames Decompositions On Quantum Computers
Cornelissen et al. Scalable benchmarks for gate-based quantum computers
US20230147890A1 (en) Quantum enhanced word embedding for natural language processing
Jung et al. Guide to exact diagonalization study of quantum thermalization
Fano et al. Quantum chemistry on a quantum computer
Diao et al. Quantum counting: Algorithm and error distribution
Zoufal Generative quantum machine learning
Ma et al. Quantum machine learning algorithm for knowledge graphs
Bishwas et al. An investigation on support vector clustering for big data in quantum paradigm
Skavysh et al. Quantum monte carlo for economics: Stress testing and macroeconomic deep learning
Heredge et al. Permutation invariant encodings for quantum machine learning with point cloud data
Coyle Machine learning applications for noisy intermediate-scale quantum computers
Doran et al. Learning instance concepts from multiple-instance data with bags as distributions
Aurell Global Estimates of Errors in Quantum Computation by the Feynman–Vernon Formalism

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: SIEMENS AKTIENGESELLSCHAFT, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MA, YUNPU;TRESP, VOLKER;REEL/FRAME:050959/0109

Effective date: 20190524

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE