WO2020162985A1 - Knowledge-driven digital companion - Google Patents

Knowledge-driven digital companion Download PDF

Info

Publication number
WO2020162985A1
Authority
WO
WIPO (PCT)
Prior art keywords
knowledge
chunks
service
text
answer
Prior art date
Application number
PCT/US2019/053767
Other languages
French (fr)
Inventor
Dan Yu
John HODGES JR.
Rafael ANICET ZANINI
Christoph JENTZSCH
Mariam ZARRABI
Andreas Gebert
Original Assignee
Siemens Aktiengesellschaft
Siemens Industry Software Inc.
Priority date
Filing date
Publication date
Application filed by Siemens Aktiengesellschaft and Siemens Industry Software Inc.
Publication of WO2020162985A1

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A knowledge-driven digital companion is provided and includes a mapping engine configured to map chunks of input text, which are derived from a question asked by a user, to one or more knowledge triples, and a search engine, which is receptive of the chunks of the input text and matches between the chunks of the input text and the one or more knowledge triples. The search engine is configured to generate an answer to the question based on the one or more knowledge triples and to output the answer toward the user.

Description

KNOWLEDGE-DRIVEN DIGITAL COMPANION
BACKGROUND
[0001] The present invention relates generally to a knowledge-driven digital companion, and more particularly, to methods of operating a knowledge-driven digital companion for use in at least industrial applications.
[0002] When they are sold to customers, many industrial products are accompanied by certain warranties and service agreements. This is especially true for products that cannot easily be shipped back to a manufacturer, such as a large machine, or where customers cannot afford the downtime and effort spent in disassembly, shipping and reassembly. In these or similar cases, customers often need to maintain a team of knowledgeable service engineers ready and available to execute on-site services. Such service engineers gain their knowledge from product development, training and available reference documents, from years of service experience, or from other service colleagues, directly or indirectly.
[0003] When the service engineers go to service sites, most of the aforementioned knowledge is either in a form that is not easily portable, not easily accessible or not always readily available. In fact, it is very common for service engineers to spend a lot of time onsite just figuring out where the knowledge is located or stored. It has been estimated that instant access to expert product knowledge and field service records would save substantial amounts of money, with additional benefits from increased productivity (a service engineer can undertake more service work), relieving product experts from the routine work of helping field service engineers, higher service quality and lower downtime of customers' machines.
SUMMARY
[0004] According to an aspect of the disclosure, a knowledge-driven digital companion is provided and includes a mapping engine configured to map chunks of input text, which are derived from a question asked by a user, to one or more knowledge triples, and a search engine, which is receptive of the chunks of the input text and matches between the chunks of the input text and the one or more knowledge triples. The search engine is configured to generate an answer to the question based on the one or more knowledge triples and to output the answer toward the user.
[0005] In accordance with additional or alternative embodiments, a knowledge graph repository is receptive of knowledge and knowledge modeling to generate a library of knowledge triples and a joint embedding engine is receptive of chunks of language derived from service manuals and logs and the library of knowledge triples and is configured to train a mapping of the chunks of the language to a vector space.
[0006] In accordance with additional or alternative embodiments, the mapping engine is receptive of the one or more knowledge triples from the knowledge graph repository and the knowledge-driven digital companion further includes a chunk repository interposed between the joint embedding engine and the mapping engine.
[0007] In accordance with additional or alternative embodiments, a selection repository is provided where user selections are stored to enhance the knowledge graph repository.
[0008] In accordance with additional or alternative embodiments, a first syntactic parser is configured to parse digitalized service manuals and logs into the chunks of language.
[0009] In accordance with additional or alternative embodiments, a second syntactic parser is configured to parse the question asked by the user into the chunks of the input text.
[0010] In accordance with additional or alternative embodiments, a speech human-machine interface (HMI) is configured to convert the question asked by the user into text, a text synthesizer is configured to convert the answer into answer text and a text-to-speech (TTS) engine is configured to generate speech corresponding to the answer text.
[0011] According to another aspect of the disclosure, a service or manufacturing line or machine is provided and includes one or more service or manufacturing stations at which users or service engineers are deployed to execute service or manufacturing operations and a digital companion configured to interface with each of the users or service engineers. The digital companion includes a mapping engine configured to map chunks of input text, which are derived from a question asked by one of the users or service engineers, to one or more knowledge triples and a search engine, which is receptive of the chunks of the input text and matches between the chunks of the input text and the one or more knowledge triples. The search engine is configured to generate an answer to the question based on the one or more knowledge triples and to output the answer toward the one of the users or service engineers.
[0012] In accordance with additional or alternative embodiments, the knowledge-driven digital companion further includes a knowledge graph repository, which is receptive of knowledge and knowledge modeling to generate a library of knowledge triples, and a joint embedding engine, which is receptive of chunks of language derived from service manuals and logs and the library of knowledge triples, and which is configured to train a mapping of the chunks of the language to a vector space.
[0013] In accordance with additional or alternative embodiments, the mapping engine is receptive of the one or more knowledge triples from the knowledge graph repository and the knowledge-driven digital companion further comprises a chunk repository interposed between the joint embedding engine and the mapping engine.
[0014] In accordance with additional or alternative embodiments, the knowledge-driven digital companion further includes a selection repository where user or service engineer selections are stored to enhance the knowledge graph repository.
[0015] In accordance with additional or alternative embodiments, the knowledge-driven digital companion further includes a first syntactic parser configured to parse digitalized service manuals and logs into the chunks of language.
[0016] In accordance with additional or alternative embodiments, the knowledge-driven digital companion further includes a second syntactic parser configured to parse the question asked by the user into the chunks of the input text.
[0017] In accordance with additional or alternative embodiments, the knowledge-driven digital companion further includes a speech human-machine interface (HMI) configured to convert the question asked by the user into text, a text synthesizer configured to convert the answer into answer text and a text-to-speech (TTS) engine configured to generate speech corresponding to the answer text.
[0018] According to another aspect of the disclosure, a method of operating a knowledge-driven digital companion is provided. The method includes mapping chunks of input text, which are derived from a question asked by a user, to one or more knowledge triples, matching the chunks of the input text and the one or more knowledge triples, generating an answer to the question based on the one or more knowledge triples and outputting the answer toward the user.
[0019] In accordance with additional or alternative embodiments, the method further includes generating a library of knowledge triples and training a mapping of the chunks of the language to a vector space.
[0020] In accordance with additional or alternative embodiments, the method further includes storing user selections in a selection repository.
[0021] In accordance with additional or alternative embodiments, the method further includes parsing digitalized service manuals and logs into the chunks of language.
[0022] In accordance with additional or alternative embodiments, the method further includes parsing the question asked by the user into the chunks of the input text.
[0023] In accordance with additional or alternative embodiments, the method further includes converting the question asked by the user into text, converting the answer into answer text and generating speech corresponding to the answer text.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] The detailed description is set forth with reference to the accompanying drawings. The drawings are provided for purposes of illustration only and merely depict example embodiments of the invention. The drawings are provided to facilitate understanding of the invention and shall not be deemed to limit the breadth, scope, or applicability of the invention. In the drawings, the left-most digit(s) of a reference numeral identifies the drawing in which the reference numeral first appears. The use of the same reference numerals indicates similar, but not necessarily the same or identical components. However, different reference numerals may be used to identify similar components as well. Various embodiments may utilize elements or components other than those illustrated in the drawings, and some elements and/or components may not be present in various embodiments. The use of singular terminology to describe a component or element may, depending on the context, encompass a plural number of such components or elements and vice versa.
[0025] FIG. 1 is a schematic diagram of a knowledge-driven digital companion in accordance with embodiments;
[0026] FIG. 2 is a depiction of an operation of a syntactic parser that parses input text into chunks of text in accordance with embodiments;
[0027] FIG. 3 is a schematic diagram of a service or manufacturing line including the knowledge-driven digital companion of FIG. 1 in accordance with embodiments; and
[0028] FIG. 4 is a flow diagram illustrating a method of operating a knowledge-driven digital companion in accordance with embodiments.
DETAILED DESCRIPTION
[0029] As will be described below, a knowledge-driven digital companion is provided to be situated between service engineers and the huge amount of data related to their service duties and requirements. The knowledge-driven digital companion helps the service engineers research and deliver relevant knowledge and presents several important benefits.
[0030] While various attempts have been made to build digital companions to relieve service engineers from having to search through enormous amounts of knowledge, few have been successful, as noted below.
[0031] Given that documents are often available in natural language formats, it is intuitive and natural to access the knowledge stored in those documents with text or speech interfaces. Speech is especially favorable in that it does not distract service engineers from the current work at hand. Although certain devices that allow for this type of text or speech interfacing are currently available and are sometimes regarded as types of digital companions, they lack depth of knowledge and can only follow certain templates.
[0032] For example, one device is just a template-based question and answer system where each intent is represented by many utterances and its developers are encouraged to provide as many utterances as possible to match intents. This is a very manual and inflexible approach that requires the developers to personally understand various problems, anticipate user questions and prepare a template for each response. Given the depth and complexity of industrial applications, however, it is often very hard to predict what will happen and what the right action is in a particular service instance. Thus, this solution is not at all scalable and is far from complete.
[0033] Another example is one in which a device syntactically parses a question, then uses full-text searching to match syntactic tokens with concepts in an existing knowledge graph. A query is then generated based on potential matches in the full-text search and query results are visually presented in a localized knowledge graph. This type of device demonstrates the potential of using knowledge graphs to serve a specific domain, but the use of full-text searching severely limits its capability to understand natural language questions.
[0034] As will be described below, the knowledge-driven digital companion addresses the shortcomings of the systems and devices mentioned above. It offers depth of knowledge, scalability and ease of use. The knowledge-driven digital companion includes a mapping engine configured to map chunks of input text, which are derived from a question asked by a user, to entities and relations in a knowledge repository, and a search engine, which is receptive of the chunks of the input text and matches the chunks of the input text to a query to the knowledge repository. The search engine is configured to execute the query on the portion of knowledge relevant to the input text and to generate an answer (or answers) to the question based on the query results, which is transformed into natural language and output toward the user. There are many ways to represent knowledge in a knowledge repository; triples are among the most commonly used forms.
Knowledge can additionally be represented in the form of a graph, where nodes represent entities or concepts and relationships are represented as links between nodes. A knowledge graph repository is receptive of knowledge and knowledge schemas to host knowledge representations, and a joint embedding engine is receptive of chunks of language derived from existing textual sources from industrial applications (e.g., service manuals and service logs) as well as the library of knowledge (which may be represented by triples) and is configured to map the chunks of the language and the knowledge entities/relations to a vector space. The distance between entities in the vector space can indicate the closeness of their concepts. The mapping engine is receptive of the knowledge from the knowledge graph repository, and the knowledge-driven digital companion further includes a chunk repository interposed between the joint embedding engine and the mapping engine. The chunk repository includes language chunks acquired from parsing language sources, together with their embedding vectors. The mapping engine then uses the embedding information to decide which concept is closest to the textual chunks from the user’s input, uses that information to infer what the user wants from the knowledge graph, and finally generates a suitable query to extract the suitable knowledge content from the knowledge graph repository. A selection repository is provided where user selections are stored to further enhance the knowledge graph repository and the accuracy of future executions. A first syntactic parser is configured to parse natural language sources (e.g., digitalized service manuals and logs) into the chunks of language and a second syntactic parser is configured to parse the question asked by the user into the chunks of the input text. A speech human-machine interface (HMI) is configured to convert the question asked by the user into text, a text synthesizer is configured to convert the answer into answer text and a text-to-speech (TTS) engine is configured to generate speech corresponding to the answer text. This speech step is optional, as the user may also directly enter text on a computer or smartphone screen.
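To make the mapping idea concrete, the following is a minimal, hedged sketch (not taken from the disclosure) of how a mapping engine could compare an input chunk’s embedding against the vectors stored in the chunk repository and pick the closest concept; the cosine similarity measure, the vector values and the concept names are illustrative assumptions only.

```python
# Illustrative sketch: nearest-concept lookup over a chunk repository of
# pre-trained embedding vectors (values and names are invented for the example).
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Chunk repository: language chunks with their embedding vectors.
chunk_repository = {
    "programmable logic controller": np.array([0.9, 0.1, 0.0]),
    "power supply": np.array([0.1, 0.8, 0.2]),
}

def map_to_concept(input_vector):
    """Return the stored concept whose embedding is closest to the input chunk."""
    return max(chunk_repository, key=lambda c: cosine(chunk_repository[c], input_vector))

question_chunk_vector = np.array([0.85, 0.15, 0.05])  # e.g. embedding of "the CPU"
print(map_to_concept(question_chunk_vector))  # -> "programmable logic controller"
```

The selected concept would then drive generation of a query against the knowledge graph repository, as described above.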
[0035] With reference to FIGS. 1 and 2, a knowledge-driven digital companion 101 is provided and includes a knowledge graph repository 110, which is receptive of domain know-how and knowledge modeling, and a first syntactic parser 120, which is receptive of data from digitized service manuals and logs. Prior to an engineering phase, the knowledge graph repository 110 is created based on the existing domain know-how, in the form of norms, standards, conventions, etc. Some know-how may already have well-defined information models (e.g., in XML), which can be converted to an ontology, but other knowledge may need to be curated by knowledge engineers working together with domain experts. Knowledge graphs are stored in the knowledge graph repository 110 in various formats (e.g., RDF, JSON-LD, etc.). The digitized service manuals and logs form natural language sentences and are fed into the first syntactic parser 120 to generate language chunks as shown by the highlighted phrases in FIG. 2.
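As an illustration of the chunking performed by the first syntactic parser 120 (the highlighted phrases of FIG. 2), the sketch below uses the open-source spaCy library to extract noun chunks from a sample sentence; the library, the model name and the sentence are assumptions made for illustration and are not specified in the disclosure.

```python
# Minimal chunking sketch using spaCy noun chunks (illustrative only; the
# disclosure does not specify a particular parser or library).
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model, assumed to be installed

sentence = "Mount the PLC CPU on the DIN rail before wiring the power supply."
doc = nlp(sentence)

# Each noun chunk is a candidate "language chunk" for the chunk repository.
chunks = [chunk.text for chunk in doc.noun_chunks]
print(chunks)  # e.g. ['the PLC CPU', 'the DIN rail', 'the power supply']
```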
[0036] The knowledge-driven digital companion 101 further includes a joint embedding engine 130 that is receptive of chunked sentences from the first syntactic parser 120 and knowledge from the knowledge graph repository 110 to jointly train a mapping of chunks to a vector space. For this, each fact from the first syntactic parser 120 or piece of knowledge from the knowledge graph repository 110 is represented by a triple in the form of:
(Subject Entity) (Predicate) (Object Entity)
where (Subject Entity) and (Object Entity) represent two vectors in the vector space R^k, and (Predicate) is a k-dimensional transformation in R^k. The purpose of the training is to minimize a global loss function over all entities and relations. The training results are stored in the knowledge graph repository 110 for subsequent use.
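One common way to realize such a joint embedding is a translation-based (TransE-style) model, in which a triple is scored by how closely the subject vector plus the predicate vector lands on the object vector. The sketch below is an assumption about what the training objective of the joint embedding engine 130 could look like, not the specific model of the disclosure; the entities, relations and margin value are invented for the example.

```python
# TransE-style joint embedding sketch (illustrative assumption): entities and
# predicates are vectors in R^k, and a global margin ranking loss is minimized.
import numpy as np

rng = np.random.default_rng(0)
k = 16  # embedding dimension

entities = ["PLC", "CPU", "DIN_rail", "power_supply"]
relations = ["mounted_on", "powered_by"]
E = {e: rng.normal(scale=0.1, size=k) for e in entities}
R = {r: rng.normal(scale=0.1, size=k) for r in relations}

# Knowledge triples (subject, predicate, object), e.g. from parsed service-manual
# facts or from the knowledge graph repository.
triples = [("CPU", "mounted_on", "DIN_rail"), ("CPU", "powered_by", "power_supply")]

def global_loss(triples, E, R, margin=1.0):
    """Margin ranking loss: a true triple should score better than a corrupted one."""
    total = 0.0
    for s, p, o in triples:
        o_neg = rng.choice([e for e in entities if e != o])  # corrupt the object
        pos = np.linalg.norm(E[s] + R[p] - E[o])
        neg = np.linalg.norm(E[s] + R[p] - E[o_neg])
        total += max(0.0, margin + pos - neg)
    return total

print("initial global loss:", global_loss(triples, E, R))
# In practice this loss is minimized by gradient descent over all embeddings, and
# the trained vectors are stored back into the knowledge graph repository.
```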
[0037] With continued reference to FIG. 1, the knowledge-driven digital companion 101 also includes a speech human-machine interface (HMI) 140, a second syntactic parser 150, a mapping engine 160, a search engine 170, a selection repository 180, a text synthesizer 190 and a text to speech (TTS) engine 200.
[0038] Any question asked vocally by a user or a service engineer is captured by the speech HMI 140, where the question is digitalized for transcription into text. The second syntactic parser 150 then performs the same type of task as the first syntactic parser 120 to thereby split the textual sentences into chunks that can be fed into the mapping engine 160. The mapping engine 160 maps the chunks to a triple or a combination of triples in the knowledge graph repository 110. The mapping serves to ground the question into some context frames in the knowledge graph repository 110. For example, a question like “show me how to mount the CPU” may narrow down the knowledge graph search to a certain proximity of the phrase “programmable logic controller” or “PLC”. At this point, it is likely that there may be some ambiguity in the question, and the search engine 170, which can be configured as a disambiguation engine, can generate questions to ask for more input from the user or the service engineer. For example, the knowledge graph repository 110 may contain references to CPUs and PLCs whereby a request like “disambiguate (CPU, PLC)” might be generated.
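The following is a small, hypothetical sketch of how such a disambiguation decision could be made: if the two best grounded concepts score too closely, the search engine 170 emits a clarification request instead of an answer. The scores, threshold and request format are assumptions for illustration only.

```python
# Illustrative disambiguation sketch: clarify when the top grounded concepts are
# too close to call (scores, threshold and request format are invented here).
def ground_or_disambiguate(candidates, threshold=0.05):
    """candidates: list of (concept, score) pairs."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if best[1] - runner_up[1] < threshold:
        # Ambiguous: generate a clarification request for the speech HMI.
        return f"disambiguate({best[0]}, {runner_up[0]})"
    return f"answer_with({best[0]})"

# "show me how to mount the CPU" may match both a PLC CPU and a PC CPU.
print(ground_or_disambiguate([("PLC CPU", 0.62), ("PC CPU", 0.60)]))
# -> disambiguate(PLC CPU, PC CPU)
```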
[0039] The text synthesizer 190 is receptive of the request from the search engine 170 and converts it to natural language text, which is converted to speech by the TTS engine 200 and then fed into the speech HMI 140 to form a vocal output.
[0040] A user’s or service engineer’s selection will follow a similar path to the search engine 170, where the search engine 170 has sufficient information to make a decision that is expected to be accurate to within a predefined degree (e.g., to show the user or service engineer the first step in mounting a CPU). Additionally, the search engine 170 may store the user or service engineer selection into the selection repository 180, where user input is used for future learning to enhance the knowledge graph repository 110, for example, by removing references to a PC CPU from the domain knowledge.
[0041] With reference to FIG. 3, a service or manufacturing line or machine 301 is provided and includes one or more service or manufacturing stations 310 at which users or service engineers are deployed to execute service or manufacturing operations and the digital companion 101 of FIG. 1, which, in this case, is configured to interface with each of the users or service engineers. As the users or service engineers complete their service or manufacturing operations, any questions that they have can be asked to the knowledge-driven digital companion 101. The knowledge-driven digital companion 101 will then formulate an answer as described above and provide the users or service engineers with that answer without requiring them to stop working. In this way, the knowledge-driven digital companion 101 can relieve a foreman or manager of the requirement to at least initially address each question any of the users or service engineers might have.
[0042] With reference back to FIG. 1 and with additional reference to FIG. 4, a method of operating the knowledge-driven digital companion 101 of FIG. 1 is provided and includes an initial operation during which the knowledge graphs of the knowledge graph repository 110 are created (block 401) and an operation of digitizing service manuals and logs to form natural language sentences (block 402) to be fed into the first syntactic parser 120 whereupon the first syntactic parser 120 generates language chunks (block 403). The joint embedding engine 130 will then receive chunked sentences from the first syntactic parser 120 and knowledge from the knowledge graph repository 110 to jointly train the mapping of chunks to a vector space (block 404). Here, each fact from the first syntactic parser or knowledge piece from the knowledge graph repository 110 is represented by a triple as noted above.
[0043] Any question that is asked is captured by the speech HMI 140, where it is digitalized and transcribed into text (block 405) whereupon the second syntactic parser 150 performs the same task as block 403 to split the sentences into chunks and feed them into the mapping engine 160 (block 406). The mapping engine 160 is then responsible for mapping the chunks to a triple or a combination of triples in the knowledge graph repository 110 (block 407). This mapping helps to ground the question into some context frames in the knowledge graph repository 110. The search engine 170 then generates questions to ask for more input if need be and subsequently, the text synthesizer 190 takes the request from the search engine 170 and converts it to natural language text, which is converted to speech by the TTS engine 200 and then fed into the speech HMI 140 to form an output (block 408).
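As a sketch of the kind of knowledge extraction that can follow the mapping of block 407, the example below builds a tiny RDF graph and runs a SPARQL query for mounting steps once the question has been grounded to a PLC CPU. The rdflib library, the example namespace and the mounting steps are all illustrative assumptions; the disclosure only states that knowledge graphs may be stored in formats such as RDF or JSON-LD.

```python
# Illustrative query sketch against a toy knowledge graph (namespace, facts and
# query are invented; they stand in for the knowledge graph repository 110).
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/service#")
g = Graph()
g.add((EX.PLC_CPU, RDF.type, EX.Component))
g.add((EX.PLC_CPU, EX.hasMountingStep, Literal("1. Hook the CPU onto the DIN rail.")))
g.add((EX.PLC_CPU, EX.hasMountingStep, Literal("2. Press until the latch clicks.")))

# A query the mapping/search engines might generate for "how to mount the CPU"
# after grounding "CPU" to the PLC CPU concept.
q = """
SELECT ?step WHERE {
  <http://example.org/service#PLC_CPU> <http://example.org/service#hasMountingStep> ?step .
}
"""
for row in g.query(q):
    print(row.step)  # these steps would then be synthesized into the spoken answer
```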
[0044] User selections will follow the same path as above until the search engine 170 determines that enough information is available to make a decision (e.g., to show the user the first step in mounting a CPU) (block 409). The search engine 170 can then store the user selection to the selection repository 180, where the user input is used for future learning to enhance the knowledge graph repository 110.
[0045] Technical effects and benefits of the knowledge-driven digital companion include enabling the build-out of a knowledge service platform based on ontologies and models derived from domain know-how. This offers a unique advantage in providing a subscription base of knowledge to in turn enable new functions for applications. An end customer can also subscribe to the knowledge base to enable an import of his engineered project so that he can use natural language to monitor the status of a production line.
[0046] The operations described above may be carried out or performed in any suitable order as desired in various example embodiments of the invention. Additionally, in certain example embodiments, at least a portion of the operations may be carried out in parallel. Furthermore, in certain example embodiments, fewer, more, or different operations than those depicted may be carried out.
[0047] Although specific embodiments of the invention have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the invention. For example, any of the functionality and/or processing capabilities described with respect to a particular system, system component, device, or device component may be performed by any other system, device, or component. Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the invention, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative
implementations and architectures described herein are also within the scope of this invention. In addition, it should be appreciated that any operation, element, component, data, or the like described herein as being based on another operation, element, component, data, or the like may be additionally based on one or more other operations, elements, components, data, or the like. Accordingly, the phrase “based on,” or variants thereof, should be interpreted as “based at least in part on.”
[0048] The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
[0049] The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
[0050] Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
[0051] Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, Python, NodeJS or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example,
programmable logic circuitry, field-programmable gate arrays (FPGA), programmable logic controllers (PLC), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
[0052] Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
[0053] These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a
programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
[0054] The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0055] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention.
In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
[0056] The descriptions of the various embodiments of the present invention have been presented for purposes of illustration but are not intended to be exhaustive or limited to the embodiments of the invention described. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments of the invention. The terminology used herein was chosen to best explain the principles of the embodiments of the invention, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments of the invention described herein.

Claims

What is claimed is:
1. A knowledge-driven digital companion, comprising: a mapping engine configured to map chunks of input text, which are derived from a question asked by a user, to one or more knowledge triples; and a search engine, which is receptive of the chunks of the input text and matches between the chunks of the input text and the one or more knowledge triples, the search engine being configured to generate an answer to the question based on the one or more knowledge triples and to output the answer toward the user.
2. The knowledge-driven digital companion according to claim 1, further comprising: a knowledge graph repository, which is receptive of knowledge and knowledge modeling to generate a library of knowledge triples; and a joint embedding engine, which is receptive of chunks of language derived from service manuals and logs and the library of knowledge triples, and which is configured to train a mapping of the chunks of the language to a vector space.
3. The knowledge-driven digital companion according to claim 2, wherein: the mapping engine is receptive of the one or more knowledge triples from the knowledge graph repository, and the knowledge-driven digital companion further comprises a chunk repository interposed between the joint embedding engine and the mapping engine.
4. The knowledge-driven digital companion according to claim 3, further comprising a selection repository where user selections are stored to enhance the knowledge graph repository.
5. The knowledge-driven digital companion according to claim 2, further comprising a first syntactic parser configured to parse digitalized service manuals and logs into the chunks of language.
6. The knowledge-driven digital companion according to claim 2, further comprising a second syntactic parser configured to parse the question asked by the user into the chunks of the input text.
7. The knowledge-driven digital companion according to claim 2, further comprising: a speech human-machine interface (HMI) configured to convert the question asked by the user into text; a text synthesizer configured to convert the answer into answer text; and a text-to-speech (TTS) engine configured to generate speech corresponding to the answer text.
8. A service or manufacturing line or machine, comprising: one or more service or manufacturing stations or machines at which users or service engineers are deployed to execute service or manufacturing operations; and a digital companion configured to interface with each of the users or service engineers, the digital companion comprising: a mapping engine configured to map chunks of input text, which are derived from a question asked by one of the users or service engineers, to one or more knowledge triples; and a search engine, which is receptive of the chunks of the input text and matches between the chunks of the input text and the one or more knowledge triples, the search engine being configured to generate an answer to the question based on the one or more knowledge triples and to output the answer toward the one of the users or service engineers.
9. The service or manufacturing line or machine according to claim 8, wherein the knowledge-driven digital companion further comprises: a knowledge graph repository, which is receptive of knowledge and knowledge modeling to generate a library of knowledge triples; and a joint embedding engine, which is receptive of chunks of language derived from service manuals and logs and the library of knowledge triples, and which is configured to train a mapping of the chunks of the language to a vector space.
10. The service or manufacturing line or machine according to claim 9, wherein: the mapping engine is receptive of the one or more knowledge triples from the knowledge graph repository, and the knowledge-driven digital companion further comprises a chunk repository interposed between the joint embedding engine and the mapping engine.
11. The service or manufacturing line or machine according to claim 10, wherein the knowledge-driven digital companion further comprises a selection repository where user or service engineer selections are stored to enhance the knowledge graph repository.
12. The service or manufacturing line or machine according to claim 9, wherein the knowledge-driven digital companion further comprises a first syntactic parser configured to parse digitalized service manuals and logs into the chunks of language.
13. The service or manufacturing line or machine according to claim 9, wherein the knowledge-driven digital companion further comprises a second syntactic parser configured to parse the question asked by the user into the chunks of the input text.
14. The service or manufacturing line or machine according to claim 9, wherein the knowledge-driven digital companion further comprises: a speech human-machine interface (HMI) configured to convert the question asked by the user into text; a text synthesizer configured to convert the answer into answer text; and a text-to-speech (TTS) engine configured to generate speech corresponding to the answer text.
15. A method of operating a knowledge-driven digital companion, the method comprising: mapping chunks of input text, which are derived from a question asked by a user, to one or more knowledge triples; matching the chunks of the input text and the one or more knowledge triples; generating an answer to the question based on the one or more knowledge triples; and outputting the answer toward the user.
16. The method according to claim 15, further comprising: generating a library of knowledge triples; and training a mapping of the chunks of the language to a vector space.
17. The method according to claim 16, further comprising storing user selections in a selection repository.
18. The method according to claim 16, further comprising parsing digitalized service manuals and logs into the chunks of language.
19. The method according to claim 16, further comprising parsing the question asked by the user into the chunks of the input text.
20. The method according to claim 16, further comprising: converting the question asked by the user into text; converting the answer into answer text; and generating speech corresponding to the answer text.
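By way of illustration only, and not as part of the claimed subject matter, the mapping and matching operations recited in claims 1 and 15 may be sketched in Python as follows. The chunking routine, the hashed bag-of-words embedding, and the toy triple library are assumptions made for the sake of the example and do not reflect any particular implementation of the disclosed engines.

# Illustrative sketch only: maps question chunks to knowledge triples and
# generates an answer from the best match (cf. claims 1 and 15). The chunking,
# embedding, and triple data below are assumptions, not the disclosed method.
import hashlib
from dataclasses import dataclass

import numpy as np


@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str


# Toy library of knowledge triples (cf. the knowledge graph repository of claim 2).
KNOWLEDGE_TRIPLES = [
    Triple("conveyor motor", "has fault code", "E42"),
    Triple("fault E42", "is resolved by", "replacing the drive belt"),
]


def chunk_text(text: str) -> list[str]:
    # Stand-in for the syntactic parser of claim 6: overlapping two-word chunks.
    words = text.lower().replace("?", "").split()
    return [" ".join(words[i:i + 2]) for i in range(max(len(words) - 1, 1))]


def embed(text: str, dim: int = 64) -> np.ndarray:
    # Stand-in for the trained joint embedding of claim 2: hashed bag of words.
    vec = np.zeros(dim)
    for word in text.lower().split():
        idx = int(hashlib.sha256(word.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec


def map_and_match(question: str) -> Triple:
    # Mapping engine: embed each chunk of the input text.
    chunk_vecs = [embed(c) for c in chunk_text(question)]

    # Search engine: rank triples by best cosine similarity against any chunk.
    def score(triple: Triple) -> float:
        t_vec = embed(f"{triple.subject} {triple.predicate} {triple.obj}")
        return max(float(c @ t_vec) for c in chunk_vecs)

    return max(KNOWLEDGE_TRIPLES, key=score)


def answer(question: str) -> str:
    # Generate the answer from the best-matching triple and output it toward the user.
    best = map_and_match(question)
    return f"{best.subject} {best.predicate} {best.obj}."


if __name__ == "__main__":
    print(answer("How do I fix fault E42 on the conveyor motor?"))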
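The joint embedding engine of claims 2, 9 and 16 may likewise be sketched, again purely for illustration: chunks of language mined from service manuals or logs and the knowledge triples they describe are pulled toward one another in a shared vector space, so that at query time a chunk can be matched to nearby triples. The additive triple representation, the training pairs, and all hyperparameters below are assumptions of the sketch, not the disclosed training procedure.

# Illustrative sketch only: trains a joint embedding of language chunks and
# knowledge triples (cf. claims 2, 9 and 16). Data, model, and hyperparameters
# are assumptions for the example.
import numpy as np

DIM = 16
LR = 0.05
rng = np.random.default_rng(0)

entities = ["conveyor motor", "fault E42", "drive belt"]
relations = ["has fault code", "is resolved by"]
chunks = ["motor reports fault E42", "replace the drive belt"]

# Randomly initialized embeddings for graph elements and text chunks.
E = {e: rng.normal(scale=0.1, size=DIM) for e in entities}
R = {r: rng.normal(scale=0.1, size=DIM) for r in relations}
C = {c: rng.normal(scale=0.1, size=DIM) for c in chunks}

# Supervision: chunks from service manuals/logs paired with the triples they describe.
pairs = [
    ("motor reports fault E42", ("conveyor motor", "has fault code", "fault E42")),
    ("replace the drive belt", ("fault E42", "is resolved by", "drive belt")),
]


def triple_vec(h: str, r: str, t: str) -> np.ndarray:
    # Simple additive representation of a (head, relation, tail) triple.
    return E[h] + R[r] + E[t]


for _ in range(500):
    for chunk, (h, r, t) in pairs:
        # Gradient step on the squared distance between chunk and triple vectors,
        # moving both sides toward each other (the "joint" part of the embedding).
        diff = C[chunk] - triple_vec(h, r, t)
        C[chunk] -= LR * diff
        for vec in (E[h], R[r], E[t]):
            vec += LR * diff / 3.0

# After training, each chunk lies near the triple it describes.
for chunk, (h, r, t) in pairs:
    dist = float(np.linalg.norm(C[chunk] - triple_vec(h, r, t)))
    print(f"{chunk!r} -> ({h}, {r}, {t}): distance {dist:.4f}")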
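The speech front end of claims 7, 14 and 20 may be sketched with off-the-shelf components: a speech recognizer converts the spoken question to text and a text-to-speech engine speaks the answer text back to the user. The use of the third-party speech_recognition and pyttsx3 packages is an assumption of the sketch; any speech human-machine interface and TTS engine could take their place.

# Illustrative sketch only: speech HMI and TTS around the question-answering core
# (cf. claims 7, 14 and 20). The speech_recognition and pyttsx3 packages are
# assumed stand-ins for the speech HMI and TTS engine, not mandated components.
import pyttsx3
import speech_recognition as sr


def listen_for_question() -> str:
    # Speech HMI: capture audio from the microphone and convert it to text.
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)


def speak(answer_text: str) -> None:
    # TTS engine: generate speech corresponding to the answer text.
    engine = pyttsx3.init()
    engine.say(answer_text)
    engine.runAndWait()


if __name__ == "__main__":
    question = listen_for_question()
    # In a full pipeline the reply would come from the mapping and search engines
    # (see the first sketch); here the recognized question is simply echoed.
    speak(f"You asked: {question}")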
FIG. 4

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962802727P 2019-02-08 2019-02-08
US62/802,727 2019-02-08

Publications (1)

Publication Number Publication Date
WO2020162985A1 2020-08-13

Family

ID=68290352

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/053767 WO2020162985A1 (en) 2019-02-08 2019-09-30 Knowledge-driven digital companion

Country Status (1)

Country Link
WO (1) WO2020162985A1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002010980A1 (en) * 2000-07-27 2002-02-07 Science Applications International Corporation Concept-based search and retrieval system
US20160371253A1 (en) * 2015-06-22 2016-12-22 International Business Machines Corporation Augmented Text Search with Syntactic Information
US20170177715A1 (en) * 2015-12-21 2017-06-22 Adobe Systems Incorporated Natural Language System Question Classifier, Semantic Representations, and Logical Form Templates
US20170228372A1 (en) * 2016-02-08 2017-08-10 Taiger Spain Sl System and method for querying questions and answers

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114116989A (en) * 2022-01-28 2022-03-01 京华信息科技股份有限公司 Formatted document generation method and system based on OCR recognition

Similar Documents

Publication Publication Date Title
JP7387714B2 (en) Techniques for building knowledge graphs within limited knowledge domains
US11734584B2 (en) Multi-modal construction of deep learning networks
US11093707B2 (en) Adversarial training data augmentation data for text classifiers
US10394963B2 (en) Natural language processor for providing natural language signals in a natural language output
US10498858B2 (en) System and method for automated on-demand creation of and execution of a customized data integration software application
US20180210879A1 (en) Translating Structured Languages to Natural Language Using Domain-Specific Ontology
US10679000B2 (en) Interpreting conversational authoring of information models
US10394861B2 (en) Natural language processor for providing natural language signals in a natural language output
CN110554875B (en) Code conversion method and device, electronic equipment and storage medium
US11157533B2 (en) Designing conversational systems driven by a semantic network with a library of templated query operators
CN111104796B (en) Method and device for translation
US20220035596A1 (en) Apparatuses, Methods and Computer Programs for a User Device and for a Server
US20240061833A1 (en) Techniques for augmenting training data for aggregation and sorting database operations in a natural language to database query system
Malode Benchmarking public large language model
US20230395076A1 (en) Methods and systems for application integration and macrosystem aware integration
US9208194B2 (en) Expanding high level queries
CN112130830A (en) Interface generation method and device and electronic equipment
US11544478B2 (en) Generating dialog system workspaces
WO2020162985A1 (en) Knowledge-driven digital companion
US20240061834A1 (en) Detecting out-of-domain, out-of-scope, and confusion-span (oocs) input for a natural language to logical form model
US11521065B2 (en) Generating explanations for context aware sequence-to-sequence models
US12094459B2 (en) Automated domain-specific constrained decoding from speech inputs to structured resources
US11669307B2 (en) Code injection from natural language derived intent
US20240289124A1 (en) Context aware code snippet recommendation
US11971887B2 (en) Identifying and replacing logically neutral phrases in natural language queries for query processing

Legal Events

Date Code Title Description
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19790362; Country of ref document: EP; Kind code of ref document: A1)