WO2023250188A1 - Systems and methods for semantic parsing with primitive level enumeration - Google Patents

Systems and methods for semantic parsing with primitive level enumeration Download PDF

Info

Publication number
WO2023250188A1
Authority
WO
WIPO (PCT)
Prior art keywords
primitives
entity
knowledge base
natural language
database
Prior art date
Application number
PCT/US2023/026146
Other languages
French (fr)
Inventor
Ye Liu
Semih Yavuz
Yingbo Zhou
Rui Meng
Original Assignee
Salesforce, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US18/059,691 external-priority patent/US20240176805A1/en
Application filed by Salesforce, Inc. filed Critical Salesforce, Inc.
Publication of WO2023250188A1 publication Critical patent/WO2023250188A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/242Query formulation
    • G06F16/243Natural language query formulation

Definitions

  • ranker 106 may generate an output value associated with each of the enumerated primitives. Based on those values, the primitives may be ranked. In some embodiments, first-hop/second-hop primitives are separately ranked (or column names/cell values for databases). This is illustrated as ranked lists 406 and 408 in FIG. 4 for knowledge bases.
  • the top k primitives of each category may be selected to be passed on to a generator (e.g., generator 108). For example, k may be set to 5.
  • the top ranked first-hop primitives and second-hop primitives can be formed into two-hop paths by combining one first-hop primitive with each of the second-hop primitives. However, to provide valid primitive candidates to the generator 108, they may be further filtered to remove the second-hop primitives that cannot be reached from any of the first-hop primitives, as in the sketch below.
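A minimal sketch of this per-category top-k selection and reachability filtering, assuming scored primitive lists and a precomputed reachability index (the data shapes here are illustrative assumptions, not the disclosure's implementation):

```python
from typing import Dict, List, Set, Tuple

def select_and_filter(first_hop: List[Tuple[str, float]],
                      second_hop: List[Tuple[str, float]],
                      reachable: Dict[str, Set[str]],
                      k: int = 5) -> Tuple[List[str], List[str]]:
    """Keep the top-k primitives per category, then drop second-hop
    primitives unreachable from every retained first-hop primitive.
    `reachable` maps a first-hop primitive to its connected second hops."""
    top_first = [p for p, _ in sorted(first_hop, key=lambda x: -x[1])[:k]]
    top_second = [p for p, _ in sorted(second_hop, key=lambda x: -x[1])[:k]]
    valid_second = [p for p in top_second
                    if any(p in reachable.get(f, set()) for f in top_first)]
    return top_first, valid_second
```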
  • FIG. 5 is a simplified diagram illustrating ranking primitives for a database according to some embodiments.
  • Enumerated lists 502 and 504 are lists of column names and cell value primitives as enumerated by an enumerator (e.g., enumerator 104) as discussed with reference to FIGS. 1 and 3.
  • the enumerated lists provided by the enumerator may be concatenated (either by the enumerator, ranker, or as another unit) with the question, a special token indicating whether the primitive is a column name or a cell value, and the primitive itself.
  • enumerated list 502 includes a single item which starts with the text of question 302 of FIG. 3, a special cell-value token, and the primitive. Enumerated list 504 includes similar items which are concatenated with question 302, a column-name token, followed by the primitive. These are exemplary patterns of concatenation, and others may be used. The enumerated lists are provided to ranker 106.
  • ranker 106 may be generally the same as described with reference to knowledge bases in FIG. 4. Rather than the enumerated knowledge base primitives, the enumerated database primitives are applied.
  • for column-name primitives, the training method may treat candidates having the same table name as the ground truth but different column names as hard negatives.
  • for cell-value primitives, the training method may treat candidates with the same table and column name as the ground truth, but a different cell value, as hard negatives; a selection sketch for column-name primitives follows below.
  • Ranked lists 506 and 508 similarly provide a ranking of the database primitives as discussed with respect to ranked lists 406 and 408 in FIG. 4 for knowledge bases.
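A minimal sketch of the hard-negative selection described above, assuming a simple (table, column) tuple representation; padding with random negatives when too few same-table candidates exist is an assumption added for completeness:

```python
import random
from typing import List, NamedTuple

class ColumnPrimitive(NamedTuple):
    table: str
    column: str

def hard_negatives(gold: ColumnPrimitive,
                   candidates: List[ColumnPrimitive],
                   n: int) -> List[ColumnPrimitive]:
    """Prefer negatives that share the gold table but differ in column;
    pad with random other candidates if too few hard ones exist."""
    hard = [c for c in candidates
            if c.table == gold.table and c.column != gold.column]
    easy = [c for c in candidates if c != gold and c not in hard]
    random.shuffle(easy)
    return (hard + easy)[:n]
```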
  • FIG. 6 is a simplified diagram illustrating generating a logical form for a knowledge base according to some embodiments.
  • generator 108 receives input 602 which is the top ranked primitives (concatenated with the question and special tokens) from the ranker 106.
  • the order of the primitives may be randomized so that generator 108 does not misuse positional information to simply learn to select the top-ranked primitives, and instead must learn to use their semantic meaning.
  • Generator 108 may learn to generate logical forms (queries which are composed of primitives and may be executed on the corresponding data structure) by understanding the meaning of its elements (primitives and operations) and composing them. Generator 108 may be trained to generate the output logical form token by token, optimized by cross-entropy loss. At inference, beam search may be used to decode the top-k target logical forms in an autoregressive manner, as in the sketch below. [0043] An exemplary logical form output 604 is illustrated. As shown, output 604 is: (JOIN money_unit.currency (JOIN architecture.construction_cost central_park)).
  • the logical form is composed of primitives including entities of the knowledge base, and operations performed on those entities (e.g., JOIN).
  • This logical form may be used as a query against the knowledge base to provide an answer to the question (e.g., question 102 or 202).
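As an illustration of the beam-search decoding described above, the following sketch uses an off-the-shelf T5 model from the transformers library; the prompt format and primitive strings are illustrative assumptions, not the disclosure's exact input encoding:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

question = "the construction cost of central park is paid in which currency"
top_primitives = ["architecture.construction_cost central_park",
                  "money_unit.currency"]
prompt = question + " ; " + " ; ".join(top_primitives)  # illustrative format

inputs = tokenizer(prompt, return_tensors="pt")
# Beam search decodes the top-k logical forms autoregressively, token by token.
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=5,
                         max_length=128)
logical_forms = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```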
  • FIG. 7 is a simplified diagram illustrating generating a logical form for a database according to some embodiments.
  • the training and utilization of generator 108 is the same for databases as described for knowledge bases in FIG. 6.
  • generator 108 receives input 702 which includes the top ranked primitives from ranker 106, but formatted as: [Question;
  • an output may be generated, such as output 704: SELECT count(*) FROM Courses JOIN Course_Attendance ON Courses.CourseId = Course_Attendance.CourseId WHERE Courses.CourseName = "Statistics".
  • this logical form output 704 may be used as a query against the database to provide an answer to the question (e.g., question 102 or 302); a minimal execution sketch follows.
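A minimal sketch of executing a generated logical form such as output 704 against a toy SQLite copy of the database in FIG. 3; the sample rows are invented for illustration:

```python
import sqlite3

# Toy copy of the database from FIG. 3; the rows are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Courses (CourseId INTEGER, CourseName TEXT);
    CREATE TABLE Course_Attendance (StudentId INTEGER, CourseId INTEGER);
    INSERT INTO Courses VALUES (1, 'Statistics'), (2, 'History');
    INSERT INTO Course_Attendance VALUES (10, 1), (11, 1), (12, 2);
""")

# Generated logical form (output 704); standard SQL string literals use
# single quotes when executed directly.
logical_form = ("SELECT count(*) FROM Courses "
                "JOIN Course_Attendance "
                "ON Courses.CourseId = Course_Attendance.CourseId "
                "WHERE Courses.CourseName = 'Statistics'")
print(conn.execute(logical_form).fetchone()[0])  # -> 2
```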
  • FIG. 8 is a simplified diagram illustrating a computing device implementing the Uni-Parser framework described in FIGS. 1-7, according to one embodiment described herein.
  • computing device 800 includes a processor 810 coupled to memory 820. Operation of computing device 800 is controlled by processor 810.
  • processor 810 may be representative of one or more central processing units, multi-core processors, microprocessors, microcontrollers, digital signal processors, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), graphics processing units (GPUs) and/or the like in computing device 800.
  • Computing device 800 may be implemented as a stand-alone subsystem, as a board added to a computing device, and/or as a virtual machine.
  • Memory 820 may be used to store software executed by computing device 800 and/or one or more data structures used during operation of computing device 800.
  • Memory 820 may include one or more types of machine-readable media. Some common forms of machine-readable media may include floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, and/or any other medium from which a processor or computer is adapted to read.
  • Processor 810 and/or memory 820 may be arranged in any suitable physical arrangement.
  • processor 810 and/or memory 820 may be implemented on a same board, in a same package (e.g., system-in-package), on a same chip (e.g., system-on-chip), and/or the like.
  • processor 810 and/or memory 820 may include distributed, virtualized, and/or containerized computing resources. Consistent with such embodiments, processor 810 and/or memory 820 may be located in one or more data centers and/or cloud computing facilities.
  • memory 820 may include non-transitory, tangible, machine readable media that includes executable code that when run by one or more processors (e.g., processor 810) may cause the one or more processors to perform the methods described in further detail herein.
  • memory 820 includes instructions for Uni-Parser module 830 that may be used to implement and/or emulate the systems and models, and/or to implement any of the methods described further herein.
  • A Uni-Parser module 830 may receive input 840 such as input training data (e.g., questions and corresponding logical forms) via the data interface 815 and generate an output 850 which may be a logical form based on a question for a given data structure.
  • the data interface 815 may comprise a communication interface, a user interface (such as a voice input interface, a graphical user interface, and/or the like).
  • the computing device 800 may receive the input 840 (such as a training dataset) from a networked database via a communication interface.
  • the computing device 800 may receive the input 840, such as a question and/or a data structure, from a user via the user interface.
  • the Uni-Parser module 830 is configured to produce a logical form for answering a question with a provided data structure.
  • the Uni-Parser module 830 may further include an enumerator submodule 831 (e.g., similar to enumerator 104 in FIG. 1), a ranker submodule 832 (e.g., similar to ranker 106 in FIG. 1), and a generator submodule 833 (e.g., similar to generator 108 in FIG. 1).
  • the Uni-Parser module 830 and its submodules 831-833 may be implemented by hardware, software and/or a combination thereof.
  • computing devices such as computing device 800 may include non-transitory, tangible, machine readable media that include executable code that, when run by one or more processors (e.g., processor 810), may cause the one or more processors to perform the processes of the methods described herein.
  • Some common forms of machine-readable media that may include the processes of the methods are, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, and/or any other medium from which a processor or computer is adapted to read.
  • FIG. 9 is a simplified block diagram of a networked system suitable for implementing the Uni-Parser framework described in FIGS. 1-7 and other embodiments described herein.
  • block diagram 900 shows a system including the user device 910 which may be operated by user 940, data vendor servers 945, 970 and 980, server 930, and other forms of devices, servers, and/or software components that operate to perform various methodologies in accordance with the described embodiments.
  • Exemplary devices and servers may include device, stand-alone, and enterprise-class servers which may be similar to the computing device 800 described in FIG. 8.
  • The devices and/or servers illustrated in FIG. 9 may be deployed in other ways, and the operations performed and/or the services provided by such devices and/or servers may be combined or separated for a given embodiment and may be performed by a greater or fewer number of devices and/or servers.
  • One or more devices and/or servers may be operated and/or maintained by the same or different entities.
  • the user device 910, data vendor servers 945, 970 and 980, and the server 930 may communicate with each other over a network 960.
  • User device 910 may be utilized by a user 940 (e.g., a driver, a system admin, etc.) to access the various features available for user device 910, which may include processes and/or applications associated with the server 930 to receive an output data anomaly report.
  • User device 910, data vendor server 945, and the server 930 may each include one or more processors, memories, and other appropriate components for executing instructions such as program code and/or data stored on one or more computer readable mediums to implement the various applications, data, and steps described herein.
  • instructions may be stored in one or more computer readable media such as memories or data storage devices internal and/or external to various components of system 900, and/or accessible over network 960.
  • User device 910 may be implemented as a communication device that may utilize appropriate hardware and software configured for wired and/or wireless communication with data vendor server 945 and/or the server 930.
  • user device 910 may be implemented as an autonomous driving vehicle, a personal computer (PC), a smartphone, a laptop/tablet computer, a wristwatch with appropriate computer hardware resources, eyeglasses with appropriate computer hardware (e.g., GOOGLE GLASS®), other type of wearable computing device, implantable communication devices, and/or other types of computing devices capable of transmitting and/or receiving data, such as an IPAD® from APPLE®.
  • a plurality of communication devices may function similarly.
  • User device 910 contains a user interface (UI) application 912, and/or other applications 916, which may correspond to executable processes, procedures, and/or applications with associated hardware.
  • the user device 910 may receive a message indicating a logical form or a direct answer to a question from the server 930 and display the message via the UI application 912.
  • user device 910 may include additional or different modules having specialized hardware and/or software as required.
  • user device 910 includes other applications 916 as may be desired in particular embodiments to provide features to user device 910.
  • other applications 916 may include security applications for implementing client-side security features, programmatic client applications for interfacing with appropriate application programming interfaces (APIs) over network 960, or other types of applications.
  • Other applications 916 may also include communication applications, such as email, texting, voice, social networking, and IM applications that allow a user to send and receive emails, calls, texts, and other notifications through network 960.
  • the other application 916 may be an email or instant messaging application that receives a query result message from the server 930.
  • Other applications 916 may include device interfaces and other display modules that may receive input and/or output information.
  • other applications 916 may contain software programs for asset management, executable by a processor, including a graphical user interface (GUI) configured to provide an interface to the user 940 to view an answer to a question.
  • User device 910 may further include database 918 stored in a transitory and/or non-transitory memory of user device 910, which may store various applications and data and be utilized during execution of various modules of user device 910.
  • Database 918 may store user profile relating to the user 940, predictions previously viewed or saved by the user 940, historical data received from the server 930, and/or the like.
  • database 918 may be local to user device 910. However, in other embodiments, database 918 may be external to user device 910 and accessible by user device 910, including cloud storage systems and/or databases that are accessible over network 960.
  • User device 910 includes at least one network interface component 917 adapted to communicate with data vendor server 945 and/or the server 930.
  • network interface component 917 may include a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency, infrared, Bluetooth, and near field communication devices.
  • Data vendor server 945 may correspond to a server that hosts database 919 to provide training datasets including questions and corresponding logical forms to the server 930.
  • the database 919 may be implemented by one or more relational databases, distributed databases, cloud databases, and/or the like.
  • the data vendor server 945 includes at least one network interface component 926 adapted to communicate with user device 910 and/or the server 930.
  • network interface component 926 may include a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency, infrared, Bluetooth, and near field communication devices.
  • the server 930 may be housed with the Uni-Parser module 830 and its submodules described in FIG. 1.
  • Uni-Parser module 830 may receive data from database 919 at the data vendor server 945 via the network 960 to generate logical forms and/or direct answers to questions. The generated logical forms and/or answers may also be sent to the user device 910 for review by the user 940 via the network 960.
  • the database 932 may be stored in a transitory and/or non-transitory memory of the server 930.
  • the database 932 may store data obtained from the data vendor server 945.
  • the database 932 may store parameters of the Uni-Parser module 830.
  • the database 932 may store previously generated logical forms, and the corresponding input feature vectors.
  • database 932 may be local to the server 930. However, in other embodiments, database 932 may be external to the server 930 and accessible by the server 930, including cloud storage systems and/or databases that are accessible over network 960.
  • the server 930 includes at least one network interface component 933 adapted to communicate with user device 910 and/or data vendor servers 945, 970 or 980 over network 960.
  • network interface component 933 may comprise a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency (RF), and infrared (IR) communication devices.
  • Network 960 may be implemented as a single network or a combination of multiple networks.
  • network 960 may include the Internet or one or more intranets, landline networks, wireless networks, and/or other appropriate types of networks.
  • network 960 may correspond to small scale communication networks, such as a private or local area network, or a larger scale network, such as a wide area network or the Internet, accessible by the various components of system 900.
  • FIG. 10 is an example logic flow diagram illustrating a method 1000 of semantic parsing based on the Uni-Parser framework shown in FIGS. 1-9, according to some embodiments described herein.
  • method 1000 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the processes.
  • method 1000 corresponds to the operation of the Uni-Parser module 830 (e.g., FIGS. 8-9) that performs semantic parsing.
  • the method 1000 includes a number of enumerated steps, but aspects of the method 1000 may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order.
  • a system receives, via a communication interface, a natural language question (e.g., question 102, 202, or 302).
  • the question may be input to a user interface by a user.
  • the communication interface may be, for example, a network interface of a computer, where the natural language question is received at a server (e.g., server 930) via a network interface (e.g., network interface 933) from a user device (e.g., user device 910).
  • the system identifies, via a named entity recognition (NER) procedure, a first entity from the natural language question.
  • the first entity may be a node in a knowledge graph such as "Central Park” as discussed in FIG. 2.
  • the first entity may be a cell value such as "Statistics” as described in FIG. 3.
  • the system enumerates (e.g., by an enumerator 104), based on the first entity, a plurality of primitives indicative of entities or entity relations in a database or knowledge base.
  • primitives may include table column names and cell values (including the cell value matched to the first entity described at step 1002).
  • primitives may include entities and entity relations of the knowledge base (including the knowledge base entity matched to the first entity described at step 1002).
  • the knowledge base entity matched to the first entity is a starting entity, from which the other primitives are enumerated, being one hop or two hops from the starting entity.
  • the system ranks (e.g., by a ranker 106) the plurality of primitives based on their respective relevance to the natural language question.
  • separate ranked lists are generated for each class of primitive (e.g., first-hop and second-hop or column name and cell value).
  • Primitives enumerated at step 1004 may be provided to the ranker in a form where they are concatenated with the natural language question and/or a special token identifying the class of the primitive (e.g., first-hop, second-hop, column name, or cell value).
  • the ranker may be a model (e.g., a neural network model) which is trained to minimize a contrastive loss.
  • the contrastive loss may be based on a positive primitive sample extracted from a ground-truth logical form, and a negative primitive sample.
  • the method for selecting the negative primitive sample may be random, or may be selected as a "hard” sample which may improve training.
  • the negative sample may be chosen from primitives connected in the knowledge base to a first hop entity extracted from a ground-truth logical form.
  • the negative sample may be selected by selecting a cell value in the same table and column as a cell value extracted from the ground-truth logical form.
  • the system selects a subset of top-ranked primitives based on the ranking. This may be performed by a ranker 106, generator 108, or another component of the system. The number of primitives selected may be based on a predetermined value.
  • if the ranked list includes multiple ranked lists (e.g., a "first-hop" list and a "second-hop" list), then each list may be ranked individually and the top-ranked primitives from each list may be selected.
  • the system generates (e.g., by a generator 108) a logical form that is executable on the database or the knowledge base based on the natural language question and the subset of the plurality of primitives.
  • the executable logical form may include at least one primitive from the subset.
  • Logical forms may be, for example, logical forms 604 or 704.
  • the system transmits, via a communication interface, the logical form to a database system or a knowledge base system.
  • the database system or knowledge base system may be a server such as data vendor server 945 described in FIG. 9, to which the logical form is transmitted so that the server may execute the logical form on the database (or knowledge base).
  • the system receives, via the communication interface, a query result in response to the natural language question based on the logical form.
  • the answer may be displayed to a user, for example on the same user interface as was used to input the question.
  • FIGS. 11-15 provide charts illustrating exemplary performance of different embodiments described herein. Comparisons are made to BERT Ranking as described in Gu et al., Beyond I.I.D.: three levels of generalization for question answering on knowledge bases, in Proceedings of the Web Conference, pages 3477-3488, 2021; ReTraCk as described in Chen et al., ReTraCk: a flexible and efficient framework for knowledge base question answering, in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 325-336, 2021; UnifiedSKG as described in Xie et al., UnifiedSKG: Unifying and multi-tasking structured knowledge grounding with text-to-text language models, arXiv:2201.05966, 2022; RNG-KBQA as described in Ye et al., RNG-KBQA: Generation augmented iterative ranking for knowledge base question answering, arXiv:2109.08678, 2021;
  • Spider as described in Yu et al., Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task, arXiv:1809.08887, 2018; and WikiSQL as described in Zhong et al., Seq2SQL: Generating structured queries from natural language using reinforcement learning, arXiv:1709.00103, 2017.
  • the primitive ranker was initialized using BERT-base-uncased. 48 negative candidates were sampled for each primitive category. The ranker was trained for 10 epochs using a learning rate of 1e-5 and a batch size of 8. Bootstrap hard negative sampling was conducted after every two epochs. Ground truth entity linking was used for enumerating training candidates. The generator was trained using T5-base and T5-3B on the Spider dataset, with the top-15 ranked primitives used as input.
  • For the T5-3B model, it was run on 16 A100 GPUs with 100 epochs using a batch size of 1024. On the WikiSQL dataset, T5-base and T5-large were used, with the top-5 ranked primitives used as input.
  • FIGS. 11-12 illustrate exact match (EM) and F1 scores on the test/dev splits of GRAILQA, comparing alternative methods to an embodiment of the methods described herein (Uni-Parser).
  • the reported models are based on the BERT-base model for the ranker and T5-base for the generator. Best dev results are bolded, and test results better than dev are underlined.
  • the embodiment tested (Uni-Parser) generally outperformed the prior methods.
  • the approach described herein achieves better performance in compositional and zero-shot settings. In particular, a 3.1% improvement over baselines on F1 was demonstrated in the compositional setting.
  • FIGS. 13-14 summarize the results on the Spider and WikiSQL datasets respectively.
  • Uni-Parser achieves competitive performance against all baseline models.
  • compared with methods that use the whole DB table schema as input, like BRIDGE and UnifiedSKG, Uni-Parser achieves a 3% improvement, suggesting the advantage of the method.
  • Uni-Parser achieves comparable performance with fewer training epochs.
  • the upper block of FIG. 14 shows the comparison between small pre-trained models, and the lower block shows the comparison between large pre-trained models.
  • FIG. 15 illustrates the effects of hard negative strategies in ranking on WebQSP and WikiSQL datasets.
  • No CG means that primitives are not differentiated by their categories.
  • No CG shows lower performance than the setting using categories in the input.
  • the proposed hard negative sampling can help the ranker to better determine the positive primitive from the negative ones.
  • FIG. 16 provides an exemplary comparison of an output of the semantic parsing framework described in embodiments herein to another semantic parser.
  • the top-5 ranked logical forms using RNG-KBQA contain much redundant information among one another, and none of them equals the gold logical expression.
  • the Uni-Parser method's top-5 ranked expressions find the correct first- and second-hop primitives and generate the correct logical form. Note that even though the proper primitive is not ranked top-1, the generator has the capability to correctly select it.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments described herein provide a semantic parsing framework which may be referred to as Uni-Parser. The Uni-Parser framework may be applied to question answering on both knowledge bases and databases. The three main stages of the Uni-Parser framework are enumeration, ranking, and generation. At the enumeration stage, primitives are enumerated based on matching the question to the data structure. After enumerating primitives, the Uni-Parser framework may rank the primitives using a trained ranker model. The top ranked primitives may then be used as inputs to a generator, which is a learned sequence-to-sequence model that produces a logical form.

Description

SYSTEMS AND METHODS FOR SEMANTIC PARSING WITH PRIMITIVE LEVEL ENUMERATION
Inventors: Ye Liu, Semih Yavuz, Yingbo Zhou, Rui Meng
CROSS REFERENCE
[0001] The present disclosure claims priority to U.S. non-provisional patent application no. 18/059,691, filed on November 29, 2022, which in turn claims priority under 35 U.S.C. 119 to U.S. provisional application no. 63/355,438, filed June 24, 2022, which are hereby expressly incorporated by reference herein in their entirety.
TECHNICAL FIELD
[0002] The embodiments relate generally to natural language processing and machine learning systems, and more specifically to systems and methods for semantic parsing with primitive level enumeration for question answering.
BACKGROUND
[0003] Database operations, such as searching for a result in response to a search query, often require a command in a specific form. Writing the command in a specific logical form requires a user to master a high level of database language. Machine learning systems have been widely used in turning a natural language question into a query in a format used for a database or knowledge base. In this way, a user may enter a natural language query, such as "what is the month having the highest revenue in the past five years?" Parsing natural language questions into executable logical forms is a useful and interpretable way to perform question answering on structured data. Existing approaches enumerate executable logical forms, where the number of logical forms enumerated grows exponentially when dealing with complex questions with multi-hop/multi-table relations. Further, existing approaches rely on different model structures for different data modalities (e.g., knowledge bases and databases) rather than a unified adaptable structure. Therefore, there is a need for improved systems and methods of semantic parsing.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] FIG. 1 is a simplified diagram illustrating a semantic parsing framework according to some embodiments.
[0005] FIG. 2 is a simplified diagram illustrating enumeration of primitives for a knowledge base according to some embodiments.
[0006] FIG. 3 is a simplified diagram illustrating enumeration of primitives for a database according to some embodiments.
[0007] FIG. 4 is a simplified diagram illustrating ranking primitives for a knowledge base according to some embodiments.
[0008] FIG. 5 is a simplified diagram illustrating ranking primitives for a database according to some embodiments.
[0009] FIG. 6 is a simplified diagram illustrating generating logical forms for a knowledge base according to some embodiments.
[0010] FIG. 7 is a simplified diagram illustrating generating logical forms for a database according to some embodiments.
[0011] FIG. 8 is a simplified diagram illustrating a computing device implementing the semantic parsing framework described in FIGS. 1-7, according to one embodiment described herein.
[0012] FIG. 9 is a simplified block diagram of a networked system suitable for implementing the semantic parsing framework described in FIGS. 1-7 and other embodiments described herein.
[0013] FIG. 10 is an example logic flow diagram illustrating a method of semantic parsing based on the framework shown in FIGS. 1-9, according to some embodiments described herein.
[0014] FIGS. 11-15 provide charts illustrating exemplary performance of different embodiments described herein.
[0015] FIG. 16 provides an exemplary comparison of an output of the semantic parsing framework described in embodiments herein to another semantic parser.
[0016] Embodiments of the disclosure and their advantages are best understood by referring to the detailed description that follows. It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures, wherein showings therein are for purposes of illustrating embodiments of the disclosure and not for purposes of limiting the same.
DETAILED DESCRIPTION
[0017] As used herein, the term "network" may comprise any hardware or software-based framework that includes any artificial intelligence network or system, neural network or system and/or any training or learning models implemented thereon or therewith.
[0018] As used herein, the term "module" may comprise hardware or software-based framework that performs one or more functions. In some embodiments, the module may be implemented on one or more neural networks.
[0019] Parsing natural language questions to generate executable logical forms that can be executed on a database or knowledge base is a useful and interpretable way to perform question answering on structured data. For example, a natural language question such as "How many students are enrolled in Statistics?" may be answerable given a database which contains student data. A semantic parser may generate a logical form which may be executed by the database in order to provide an answer, e.g., SELECT count(*) FROM Courses JOIN Course_Attendance ON Courses.CourseId = Course_Attendance.CourseId WHERE Courses.CourseName = "Statistics". Existing approaches for semantic parsing enumerate executable logical forms. For example, when the data is structured as a knowledge base, entities may be interconnected by edges which represent relationships. An existing approach enumerates logical forms starting with the entity and including relations two "hops" away in the structure. Given M connections to entities one hop away from the starting entity, and N connections from each of those entities, this results in M*N logical forms. For sufficiently complex data structures, this results in inefficient resource usage and poor performance. Further, existing approaches do not provide a unified structure that is applicable to both knowledge bases and databases.
[0020] In view of the need for improved systems and methods for semantic parsing, embodiments described herein provide a semantic parsing framework (referred to as "Uni-Parser") that may be applied to question answering on both knowledge bases and databases. The Uni-Parser framework may generate one or more logical forms that can be executed on a data structure in response to a natural language input, through enumeration, ranking, and generation. At the enumeration stage, primitives are enumerated based on matching the natural language question to the data structure (e.g., either a database or knowledge base). For example, primitives in a database may be a column name of a specific table, or a value of a specific cell in a table (or an operation which would provide the relevant cell value as an output), etc. In a knowledge base, a primitive may be represented by a node in the graph. This is opposed to a logical form, which is composed of primitives and corresponding operations upon the primitives, and which for a database, for example, may be: SELECT count(*) FROM Courses JOIN Course_Attendance ON Courses.CourseId = Course_Attendance.CourseId WHERE Courses.CourseName = "Statistics". This logical form includes a number of primitives corresponding to column names for specific tables, such as "Courses.CourseId", and cell values, such as "Statistics." Rather than enumerating all the possible logical forms with all different variations and/or combinations of the primitives, Uni-Parser enumerates the primitives themselves. In this way, rather than M*N logical forms being enumerated (where M is the number of first hop entities and N is the number of second hop entities from a specific entity in the natural language question in the knowledge base), Uni-Parser may enumerate M+N primitives.
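As a back-of-the-envelope illustration of this difference (toy numbers only, not from the disclosure), the candidate pool for primitive enumeration grows additively while logical-form enumeration grows multiplicatively:

```python
# Candidate pool sizes around the matched entity: M first-hop relations,
# N second-hop relations.
def candidate_counts(m: int, n: int) -> tuple:
    logical_forms = m * n  # every first-hop relation composed with every second hop
    primitives = m + n     # each relation enumerated once; composition deferred
    return logical_forms, primitives

for m, n in [(10, 10), (100, 100), (1000, 1000)]:
    print(m, n, candidate_counts(m, n))
# 10 10 (100, 20)
# 100 100 (10000, 200)
# 1000 1000 (1000000, 2000)
```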
[0021] After enumerating primitives, the Uni-Parser framework may rank the primitives using a trained ranker model. The top ranked primitives may then be used as inputs to a generator, which is a learned sequence-to-sequence model. The sequence-to-sequence model may then produce an output sequence of a logical form in response to an input sequence of the top ranked primitives. Details of the enumeration, ranking, and generation stages are described in more detail below with respect to FIGS. 1-7.
[0022] Embodiments described herein provide a number of benefits. For example, the same semantic parsing framework may be used for both database and knowledge base question answering. By enumerating primitives rather than logical forms, the search space is greatly reduced, which makes candidate generation and ranking more efficient and scalable. Further, the composition of logical forms from primitives and operations is postponed to the generation phase. This leads to a more generalized model that can work on complex logical forms and generalize to questions involving unseen schema.
[0023] FIG. 1 is a simplified diagram illustrating a semantic parsing framework 100 according to some embodiments. The framework 100 comprises an enumerator 104, operatively connected to a ranker 106, which is operatively connected to a generator 108. A natural language question 102 (e.g., "How many students are enrolled in Statistics?") may be an input to enumerator 104. Given the question 102 and a known data structure (e.g., such as a knowledge base comprising a plurality of entities that are interconnected by edges representing the relationships between the entities, or a database comprising one or more tables each with respective columns and rows which store values), enumerator 104 may enumerate a number of primitives. For example, for a given entity indicated in question 102, the first-hop and second-hop entities from the given entity in the knowledge base may be enumerated as primitives. Enumeration of primitives is described in more detail with respect to FIG. 2 for knowledge bases, and FIG. 3 for databases.
[0024] Enumerated primitives may be passed to ranker 106 which ranks the primitives based on the question 102. Ranker 106 may be a learned model which is trained on a similar or the same data structure. Ranker 106 is described in more detail with respect to FIG. 4 for knowledge bases, and FIG. 5 for databases.
[0025] The top ranked primitives may be passed to generator 108. Based on the question 102 and the ranked primitives, generator 108 may produce logical form 110. Logical form 110 may be executed on the data structure to provide an answer to question 102. Generator 108 may be a learned model which is trained on a similar or the same data structure. Generator 108 is described in more detail with respect to FIG. 6 for knowledge bases, and FIG. 7 for databases.
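A structural sketch of how the three stages of framework 100 might be wired together; the function names and signatures below are illustrative assumptions, not the disclosure's interfaces:

```python
from typing import Callable, List

def uni_parser(question: str,
               enumerate_primitives: Callable[[str], List[str]],
               rank: Callable[[str, List[str]], List[float]],
               generate: Callable[[str, List[str]], str],
               top_k: int = 5) -> str:
    """Enumerate -> rank -> generate, mirroring enumerator 104,
    ranker 106, and generator 108 of FIG. 1."""
    # Stage 1: enumerate candidate primitives from the data structure.
    primitives = enumerate_primitives(question)
    # Stage 2: score each primitive against the question; keep the top k.
    scores = rank(question, primitives)
    ranked = sorted(zip(primitives, scores), key=lambda pair: -pair[1])
    top_primitives = [p for p, _ in ranked[:top_k]]
    # Stage 3: compose an executable logical form from question + primitives.
    return generate(question, top_primitives)
```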
[0026] FIG. 2 is a simplified diagram illustrating enumeration of primitives for a knowledge base according to some embodiments. Based on the provided question 202, an enumerator (e.g., enumerator 104 of FIG. 1) may enumerate primitives from knowledge base 204. The enumerator may start by detecting entities in the question 202, for example by using a named entity recognition (NER) algorithm. For example, named entities such as "currency," "New York," "Central Park," and "construction" may be identified in the question 202. In some embodiments, the NER algorithm identifies entities in the question 202 based on belonging to a predetermined class of entities (e.g., location, person, etc.).
[0027] The identified entities in question 202 may then be matched (or fuzzy matched) to entities in knowledge base 204. For example, "Central Park" may be selected as a starting entity based on matching this entity to words "central park" in question 202.
[0028] In some embodiments, only one entity is selected from knowledge base 204, while in other embodiments two or more may be selected. The matched entity (or entities) may be used as a starting point for enumeration. To alleviate the issue of entity disambiguation, a ranker model may be used to select entity candidates based on the similarity between the question and the one-hop in/out relations of the entity.
[0029] After selecting one or more starting entities, the enumerator may enumerate all primitives within two hops of the starting entities on a knowledge base. Entities one hop away from "Central Park", shown enumerated in list 206 as reflected in knowledge base 204, may include items such as "architecture.construction_cost central_park" and "architecture.landscape_project central_park." A second hop from "central park" may include additional entities such as "travel.get_destination" and "money_unit.currency." As shown in list 206, the enumeration is done at the level of primitives, the base unit of the knowledge base, rather than on queries which are constructed of primitives and operations on the primitives.
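A minimal sketch of two-hop primitive enumeration over a knowledge base, assuming a toy adjacency-list representation (the disclosure does not specify a storage format):

```python
from typing import Dict, List, Set, Tuple

def enumerate_kb_primitives(start_entities: List[str],
                            edges: Dict[str, List[Tuple[str, str]]]
                            ) -> Tuple[Set[str], Set[str]]:
    """Collect relation primitives within two hops of the starting entities.
    `edges` maps an entity to (relation, neighbor) pairs."""
    first_hop: Set[str] = set()
    second_hop: Set[str] = set()
    for entity in start_entities:
        for relation, neighbor in edges.get(entity, []):
            first_hop.add(f"{relation} {entity}")
            for relation2, _ in edges.get(neighbor, []):
                second_hop.add(relation2)
    return first_hop, second_hop

# Toy neighborhood around the matched entity "central_park":
edges = {
    "central_park": [("architecture.construction_cost", "cost_node"),
                     ("architecture.landscape_project", "project_node")],
    "cost_node": [("money_unit.currency", "usd")],
}
first, second = enumerate_kb_primitives(["central_park"], edges)
# first  -> {"architecture.construction_cost central_park",
#            "architecture.landscape_project central_park"}
# second -> {"money_unit.currency"}
```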
[0030] FIG. 3 is a simplified diagram illustrating enumeration of primitives for a database according to some embodiments. Based on the provided question 302, an enumerator (e.g., enumerator 104 of FIG. 1) may enumerate primitives from database 304. For databases, a primitive for any input question may include a column name or a cell value. For example, for a database that includes one or more data tables, all column names of all data tables may be enumerated first, and only relevant cell values within the one or more data tables may be enumerated next based on the input question. As the number of column names is generally manageable even for very large databases, every column name for every table in the database may be enumerated. For example, as shown in enumerated list 310, column names (as associated with their tables) are enumerated, including Students.StudentId and Students.StudentName (both from table 306); not shown are the columns from table 308, which would also be included; and the column names from table 310 are also included, as illustrated by the exemplary Courses.CourseName entry.
[0031] For the enumeration of individual cells, the number of individual cells may be too large to enumerate efficiently. Rather than enumerating every cell, the enumerator may detect entities in the question 302, for example by using a named entity recognition algorithm; e.g., "students," "course," and "statistics" may be detected from question 302. The entities in question 302 may then be matched (or fuzzy matched) to cell values in database 304. For example, as illustrated, question 302 contains the word "statistics," which matches the cell value "Statistics" in the CourseName column of table 308 of all courses. This is therefore shown in enumerated list 310 as "Courses.CourseName <op> Statistics," which maintains its relation to table 308 and the CourseName column. In some embodiments, if question 302 contains a numeric value and string matching fails to detect a match, the value is paired with all column names in the database.
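A hedged sketch of this cell-value matching step follows; the `column_values` mapping is a hypothetical stand-in for scanning the database, and the fuzzy-match cutoff is an illustrative choice:

```python
from difflib import get_close_matches

def enumerate_cell_primitives(question_spans, column_values):
    """Fuzzy-match NER spans from the question to cell values, keeping
    each matched value tied to its Table.Column origin.

    `column_values` is a hypothetical mapping from "Table.Column" to the
    cell values stored under that column.
    """
    primitives = []
    for span in question_spans:
        for table_column, values in column_values.items():
            # A cutoff of 0.85 lets "statistics" still match "Statistics".
            hits = get_close_matches(span, [str(v) for v in values],
                                     n=1, cutoff=0.85)
            if hits:
                primitives.append(f"{table_column} <op> {hits[0]}")
    return primitives

cols = {"Courses.CourseName": ["Statistics", "Algebra"]}
print(enumerate_cell_primitives(["statistics"], cols))
# ['Courses.CourseName <op> Statistics']
```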
[0032] FIG. 4 is a simplified diagram illustrating ranking primitives for a knowledge base according to some embodiments. Enumerated lists 402 and 404 are lists of first hop and second hop primitives as enumerated by an enumerator (e.g., enumerator 104) as discussed with reference to FIGS. 1-2. In some embodiments, the same ranker 106 model may be used for both classes of primitives (e.g., both first hop and second hop).
[0033] In some embodiments, and as illustrated in FIG. 4, the enumerated lists provided by the enumerator may be concatenated (either by the enumerator, the ranker, or another unit) with the question, a special token indicating first/second hop, and the primitive itself. For example, enumerated list 402 includes items which start with the text of question 202 of FIG. 2, followed by a special first-hop token "<|firsthop|>" and a primitive such as "architecture.construction_cost central_park." List 404 includes similar items in which the question 202 is concatenated with a second-hop token, followed by the primitive. These are exemplary patterns of concatenation, and others may be used. The enumerated lists are provided to ranker 106.
[0034] Ranker 106 (as discussed in FIG. 1) may be trained to filter out irrelevant primitives by measuring the similarity between questions and enumerated primitive candidates (e.g., as shown in lists 402, 404). In some embodiments the ranker 106 utilizes a cross-encoder architecture. Given a question X and a primitive p with a category token pc, ranker 106 may use a BERT-based encoder that takes as input the concatenation of their vector representations, and outputs a logit representing the similarity between the primitive and the question:

s(X, p, pc) = FFN(ψθ(pc ⊕ X ⊕ p))

where ⊕ denotes a concatenation operation; ψθ(·) denotes the [CLS] representation of the concatenated input after BERT embedding; and FFN is a projection layer reducing the representation to a scalar similarity score. pc is the special token that distinguishes the category of the primitive (in knowledge bases, pc ∈ {<|firsthop|>, <|secondhop|>}, and in databases, pc ∈ {<|tb_cl|>, <|tb_cl_vl|>}). The ranker is optimized to minimize the contrastive loss:

L(X, p+) = −log [ exp(s(X, p+, pc)) / ( exp(s(X, p+, pc)) + Σ p− ∈ P− exp(s(X, p−, pc)) ) ]

where p+ is the positive primitive extracted from the ground-truth logical form and P− is the set of negative primitives from the same category pc.
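A minimal sketch of such a cross-encoder ranker and its contrastive objective, assuming PyTorch and the Hugging Face transformers library (this is not necessarily the authors' exact implementation):

```python
import torch
import torch.nn as nn
from transformers import BertModel  # assumes the Hugging Face transformers library

class PrimitiveRanker(nn.Module):
    """Cross-encoder ranker: score a (question, primitive) pair with BERT."""

    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        # FFN projection reducing the [CLS] representation to a scalar logit.
        self.ffn = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        # [CLS] representation of the concatenated "pc (+) X (+) p" input.
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.ffn(cls).squeeze(-1)  # one logit per (question, primitive)

def contrastive_loss(scores):
    """scores[0] is s(X, p+, pc); scores[1:] are negatives of the same category."""
    return -torch.log_softmax(scores, dim=0)[0]
```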
[0035] In order to pair negative primitive samples with positive samples when training ranker 106, negative sampling may be performed. In some embodiments, random sampling is used to select negative samples. In other embodiments, a sampling strategy samples hard negative candidates for training the ranker 106. In cases of knowledge bases, the hard negative candidates of the second hop may be sampled from the primitives connected to the ground truth first hop. Moreover, a bootstrap negative sampling strategy may be leveraged; that is, the model may be trained recursively using the false positive candidates generated from the last training epoch.
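The bootstrap step might be sketched as follows: the highest-scoring non-gold candidates from the previous epoch (i.e., false positives) become the next epoch's hard negatives. All inputs are hypothetical; the sample size of 48 mirrors the experimental setup reported later:

```python
def bootstrap_hard_negatives(candidates, scores, gold_primitive, k=48):
    """Keep the k highest-scoring non-gold candidates -- the false
    positives of the previous epoch -- as hard negatives.

    `candidates`, `scores`, and `gold_primitive` are hypothetical: the
    enumerated primitives of one category, the ranker's logits for them,
    and the primitive extracted from the ground-truth logical form.
    """
    ranked = sorted(zip(candidates, scores), key=lambda cs: -cs[1])
    return [c for c, _ in ranked if c != gold_primitive][:k]
```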
[0036] At inference, ranker 106 may generate an output value associated with each of the enumerated primitives. Based on those values, the primitives may be ranked. In some embodiments, first-hop and second-hop primitives are separately ranked (or column names and cell values for databases). This is illustrated as ranked lists 406 and 408 in FIG. 4 for knowledge bases. In some embodiments, the top k primitives of each category may be selected to be passed on to a generator (e.g., generator 108). For example, k may be set to 5. In knowledge bases, the top-ranked first hop primitives and second hop primitives can be formed into two-hop paths by combining one first hop primitive with each of the second hop primitives. However, to provide valid primitive candidates to the generator 108, they may be further filtered to remove the second hop primitives that cannot be reached from any of the first hop primitives.
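A sketch of this combination-and-filtering step is shown below, where `kb_reachable` is a hypothetical predicate standing in for an actual reachability lookup against the knowledge base:

```python
def build_two_hop_paths(first_hops, second_hops, kb_reachable, k=5):
    """Combine top-k first-hop and second-hop primitives into candidate
    two-hop paths, dropping any second hop unreachable from every first hop.

    `kb_reachable(h1, h2)` is a hypothetical stand-in for a knowledge
    base check that relation h2 can follow relation h1.
    """
    paths = []
    for h1 in first_hops[:k]:
        for h2 in second_hops[:k]:
            if kb_reachable(h1, h2):
                paths.append((h1, h2))
    return paths
```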
[0037] FIG. 5 is a simplified diagram illustrating ranking primitives for a database according to some embodiments. Enumerated lists 502 and 504 are lists of cell-value and column-name primitives as enumerated by an enumerator (e.g., enumerator 104) as discussed with reference to FIGS. 1 and 3.
[0038] In some embodiments, and as illustrated in FIG. 5, the enumerated lists provided by the enumerator may be concatenated (either by the enumerator, the ranker, or another unit) with the question, a special token indicating whether the primitive is a column name or a cell value, and the primitive itself. For example, enumerated list 502 includes a single item which starts with the text of question 302 of FIG. 3, followed by a special cell-value token "<|tb_cl_vl|>" and the primitive "Courses.CourseName <op> Statistics." List 504 includes similar items in which the question 302 is concatenated with a column-name token, followed by the primitive. These are exemplary patterns of concatenation, and others may be used. The enumerated lists are provided to ranker 106.
[0039] The training and utilization of ranker 106 may be generally the same as described with reference to knowledge bases in FIG. 4, except that the enumerated database primitives are used rather than the enumerated knowledge base primitives. In selecting hard negatives for training ranker 106 on databases, for <|tb_cl|> category primitives, the training method may treat candidates having the same table name as the ground truth but different column names as hard negatives. For the <|tb_cl_vl|> category, the training method may treat candidates with the same table and column name as the ground truth, but a different cell value, as hard negatives.
[0040] Ranked lists 506 and 508 similarly provide a ranking of the database primitives as discussed with respect to ranked lists 406 and 408 in FIG. 4 for knowledge bases.
[0041] FIG. 6 is a simplified diagram illustrating generating a logical form for a knowledge base according to some embodiments. For knowledge bases, generator 108 receives input 602, which is the top-ranked primitives (concatenated with the question and special tokens) from the ranker 106. In some embodiments, the primitive order may be randomized so that generator 108 does not misuse the positional information to simply learn to select the top-ranked primitives, and instead must learn to use the semantic meaning.
[0042] Generator 108 may learn to generate logical forms (queries which are composed of primitives and may be executed on the corresponding data structure) by understanding the meaning of its elements (primitives and operations) and composing them. Generator 108 may be trained to generate the output logical form token-by-token, optimized by cross-entropy loss. At inference, beam search may be used to decode the top-k target logical forms in an autoregressive manner; that is, at each decoding step the k highest-probability partial token sequences are kept and extended, yielding k complete candidate logical forms. [0043] An exemplary logical form output 604 is illustrated. As shown, output 604 is: (JOIN money_unit.currency (JOIN architecture.construction_cost central_park)). Note that the logical form is composed of primitives including entities of the knowledge base, and operations performed on those entities (e.g., JOIN). This logical form may be used as a query against the knowledge base to provide an answer to the question (e.g., question 102 or 202).
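For illustration, beam-search decoding of the top-k logical forms with a T5 generator might look like the following sketch; it assumes the Hugging Face transformers library and a checkpoint already fine-tuned on (input, logical form) pairs with the hop tokens added to the tokenizer, so the model name and the input string are placeholders:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# "uni-parser-t5-base" is a hypothetical fine-tuned checkpoint name.
tokenizer = T5Tokenizer.from_pretrained("uni-parser-t5-base")
model = T5ForConditionalGeneration.from_pretrained("uni-parser-t5-base")

source = ("what is the currency of the construction cost of central park "
          "<|secondhop|> money_unit.currency "
          "<|firsthop|> architecture.construction_cost central_park")
inputs = tokenizer(source, return_tensors="pt")

# Beam search keeps the 5 highest-probability partial sequences at each
# autoregressive decoding step and returns the top-5 complete logical forms.
outputs = model.generate(**inputs, num_beams=5,
                         num_return_sequences=5, max_length=64)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```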
[0044] FIG. 7 is a simplified diagram illustrating generating a logical form for a database according to some embodiments. Generally, the training and utilization of generator 108 is the same for databases as described for knowledge bases in FIG. 6.
[0045] For databases, generator 108 receives input 702, which includes the top-ranked primitives from ranker 106, formatted as: [Question; |table_name1| column_name1, column_name2 <op> value ... |table_name2| ...]. For databases, organizing column names and cell values according to the table they belong to naturally changes their order from the ranked order.
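A sketch of assembling this table-grouped generator input from ranked primitives follows; the splitting convention assumes primitives are serialized as "Table.Column" strings, as in the figures, and the function name is hypothetical:

```python
from collections import defaultdict

def format_generator_input(question, column_primitives, cell_primitives):
    """Group ranked primitives by table into the pattern
    [Question; |table1| col1, col2 <op> value ... |table2| ...]."""
    by_table = defaultdict(list)
    for prim in column_primitives:          # e.g. "Courses.CourseName"
        table, column = prim.split(".", 1)
        by_table[table].append(column)
    for prim in cell_primitives:            # e.g. "Courses.CourseName <op> Statistics"
        table, rest = prim.split(".", 1)
        by_table[table].append(rest)
    parts = [question + ";"]
    for table, items in by_table.items():
        parts.append(f"|{table}| " + ", ".join(items))
    return " ".join(parts)

# e.g. format_generator_input("How many students attend Statistics?",
#                             ["Courses.CourseName", "Students.StudentId"],
#                             ["Courses.CourseName <op> Statistics"])
```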
[0046] As illustrated, an output may be generated such as output 704, which is: SELECT count(*) FROM Courses JOIN Course_Attendance ON Courses.CourseId = Course_Attendance.CourseId WHERE Courses.CourseName = "Statistics". Similar to the logical form described in FIG. 6, this logical form output 704 may be used as a query against the database to provide an answer to the question (e.g., question 102 or 302).
[0047] FIG. 8 is a simplified diagram illustrating a computing device implementing the Uni-Parser framework described in FIGS. 1-7, according to one embodiment described herein. As shown in FIG. 8, computing device 800 includes a processor 810 coupled to memory 820. Operation of computing device 800 is controlled by processor 810. Although computing device 800 is shown with only one processor 810, it is understood that processor 810 may be representative of one or more central processing units, multi-core processors, microprocessors, microcontrollers, digital signal processors, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), graphics processing units (GPUs) and/or the like in computing device 800. Computing device 800 may be implemented as a stand-alone subsystem, as a board added to a computing device, and/or as a virtual machine.
[0048] Memory 820 may be used to store software executed by computing device 800 and/or one or more data structures used during operation of computing device 800. Memory 820 may include one or more types of machine-readable media. Some common forms of machine-readable media may include floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, and/or any other medium from which a processor or computer is adapted to read.
[0049] Processor 810 and/or memory 820 may be arranged in any suitable physical arrangement. In some embodiments, processor 810 and/or memory 820 may be implemented on a same board, in a same package (e.g., system-in-package), on a same chip (e.g., system-on-chip), and/or the like. In some embodiments, processor 810 and/or memory 820 may include distributed, virtualized, and/or containerized computing resources. Consistent with such embodiments, processor 810 and/or memory 820 may be located in one or more data centers and/or cloud computing facilities.
[0050] In some examples, memory 820 may include non-transitory, tangible, machine readable media that includes executable code that when run by one or more processors (e.g., processor 810) may cause the one or more processors to perform the methods described in further detail herein. For example, as shown, memory 820 includes instructions for Uni-Parser module 830 that may be used to implement and/or emulate the systems and models, and/or to implement any of the methods described further herein. A Uni-Parser module 830 may receive input 840 such as input training data (e.g., questions and corresponding logical forms) via the data interface 815 and generate an output 850 which may be a logical form based on a question for a given data structure.
[0051] The data interface 815 may comprise a communication interface, a user interface (such as a voice input interface, a graphical user interface, and/or the like). For example, the computing device 800 may receive the input 840 (such as a training dataset) from a networked database via a communication interface. Or the computing device 800 may receive the input 840, such as a question and/or a data structure, from a user via the user interface.
[0052] In some embodiments, the Uni-Parser module 830 is configured to produce a logical form for answering a question with a provided data structure. The Uni-Parser module 830 may further include an enumerator submodule 831 (e.g., similar to enumerator 104 in FIG. 1), a ranker submodule 832 (e.g., similar to ranker 106 in FIG. 1), and a generator submodule 833 (e.g., similar to generator 108 in FIG. 1). In one embodiment, the Uni-Parser module 830 and its submodules 831-833 may be implemented by hardware, software and/or a combination thereof.
[0053] Some examples of computing devices, such as computing device 800, may include non-transitory, tangible, machine readable media that include executable code that when run by one or more processors (e.g., processor 810) may cause the one or more processors to perform the processes of the methods described herein. Some common forms of machine-readable media that may include the processes of the methods are, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, and/or any other medium from which a processor or computer is adapted to read.
[0054] FIG. 9 is a simplified block diagram of a networked system suitable for implementing the Uni-Parser framework described in FIGS. 1-7 and other embodiments described herein. In one embodiment, block diagram 900 shows a system including the user device 910 which may be operated by user 940, data vendor servers 945, 970 and 980, server 930, and other forms of devices, servers, and/or software components that operate to perform various methodologies in accordance with the described embodiments. Exemplary devices and servers may include device, stand-alone, and enterprise-class servers which may be similar to the computing device 800 described in FIG. 8, operating an OS such as a MICROSOFT® OS, a UNIX® OS, a LINUX® OS, or other suitable device and/or server-based OS. It can be appreciated that the devices and/or servers illustrated in FIG. 9 may be deployed in other ways and that the operations performed, and/or the services provided by such devices and/or servers may be combined or separated for a given embodiment and may be performed by a greater number or fewer number of devices and/or servers. One or more devices and/or servers may be operated and/or maintained by the same or different entities.
[0055] The user device 910, data vendor servers 945, 970 and 980, and the server 930 may communicate with each other over a network 960. User device 910 may be utilized by a user 940 (e.g., a driver, a system admin, etc.) to access the various features available for user device 910, which may include processes and/or applications associated with the server 930 to receive an output data anomaly report.
[0056] User device 910, data vendor server 945, and the server 930 may each include one or more processors, memories, and other appropriate components for executing instructions such as program code and/or data stored on one or more computer readable mediums to implement the various applications, data, and steps described herein. For example, such instructions may be stored in one or more computer readable media such as memories or data storage devices internal and/or external to various components of system 900, and/or accessible over network 960.
[0057] User device 910 may be implemented as a communication device that may utilize appropriate hardware and software configured for wired and/or wireless communication with data vendor server 945 and/or the server 930. For example, in one embodiment, user device 910 may be implemented as an autonomous driving vehicle, a personal computer (PC), a smart phone, laptop/tablet computer, wristwatch with appropriate computer hardware resources, eyeglasses with appropriate computer hardware (e.g., GOOGLE GLASS®), other type of wearable computing device, implantable communication devices, and/or other types of computing devices capable of transmitting and/or receiving data, such as an IPAD® from APPLE®. Although only one communication device is shown, a plurality of communication devices may function similarly. [0058] User device 910 of FIG. 9 contains a user interface (UI) application 912, and/or other applications 916, which may correspond to executable processes, procedures, and/or applications with associated hardware. For example, the user device 910 may receive a message indicating a logical form or a direct answer to a question from the server 930 and display the message via the UI application 912. In other embodiments, user device 910 may include additional or different modules having specialized hardware and/or software as required.
[0059] In various embodiments, user device 910 includes other applications 916 as may be desired in particular embodiments to provide features to user device 910. For example, other applications 916 may include security applications for implementing client-side security features, programmatic client applications for interfacing with appropriate application programming interfaces (APIs) over network 960, or other types of applications. Other applications 916 may also include communication applications, such as email, texting, voice, social networking, and IM applications that allow a user to send and receive emails, calls, texts, and other notifications through network 960. For example, the other application 916 may be an email or instant messaging application that receives a query result message from the server 930. Other applications 916 may include device interfaces and other display modules that may receive input and/or output information. For example, other applications 916 may contain software programs for asset management, executable by a processor, including a graphical user interface (GUI) configured to provide an interface to the user 940 to view an answer to a question.
[0060] User device 910 may further include database 918 stored in a transitory and/or non-transitory memory of user device 910, which may store various applications and data and be utilized during execution of various modules of user device 910. Database 918 may store user profile relating to the user 940, predictions previously viewed or saved by the user 940, historical data received from the server 930, and/or the like. In some embodiments, database 918 may be local to user device 910. However, in other embodiments, database 918 may be external to user device 910 and accessible by user device 910, including cloud storage systems and/or databases that are accessible over network 960.
[0061] User device 910 includes at least one network interface component 917 adapted to communicate with data vendor server 945 and/or the server 930. In various embodiments, network interface component 917 may include a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency, infrared, Bluetooth, and near field communication devices.
[0062] Data vendor server 945 may correspond to a server that hosts database 919 to provide training datasets including questions and corresponding logical forms to the server 930. The database 919 may be implemented by one or more relational databases, distributed databases, cloud databases, and/or the like.
[0063] The data vendor server 945 includes at least one network interface component 926 adapted to communicate with user device 910 and/or the server 930. In various embodiments, network interface component 926 may include a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency, infrared, Bluetooth, and near field communication devices. For example, in one implementation, the data vendor server 945 may send asset information from the database 919, via the network interface 926, to the server 930.
[0064] The server 930 may be housed with the Uni-Parser module 830 and its submodules described in FIG. 1. In some implementations, Uni-Parser module 830 may receive data from database 919 at the data vendor server 945 via the network 960 to generate logical forms and/or direct answers to questions. The generated logical forms and/or answers may also be sent to the user device 910 for review by the user 940 via the network 960. [0065] The database 932 may be stored in a transitory and/or non-transitory memory of the server 930. In one implementation, the database 932 may store data obtained from the data vendor server 945. In one implementation, the database 932 may store parameters of the Uni-Parser module 830. In one implementation, the database 932 may store previously generated logical forms, and the corresponding input feature vectors.
[0066] In some embodiments, database 932 may be local to the server 930. However, in other embodiments, database 932 may be external to the server 930 and accessible by the server 930, including cloud storage systems and/or databases that are accessible over network 960.
[0067] The server 930 includes at least one network interface component 933 adapted to communicate with user device 910 and/or data vendor servers 945, 970 or 980 over network 960. In various embodiments, network interface component 933 may comprise a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency (RF), and infrared (IR) communication devices.
[0068] Network 960 may be implemented as a single network or a combination of multiple networks. For example, in various embodiments, network 960 may include the Internet or one or more intranets, landline networks, wireless networks, and/or other appropriate types of networks. Thus, network 960 may correspond to small scale communication networks, such as a private or local area network, or a larger scale network, such as a wide area network or the Internet, accessible by the various components of system 900.
[0069] The knowledge base and/or database which is used for the enumeration of primitives and querying using the generated logical forms may be stored on one of the illustrated network devices such as data vendor server 945, server 930, or user device 910. Such knowledge base/database may or may not be associated with or incorporated within the databases described above (i.e., databases 918, 919, and 932). [0070] FIG. 10 is an example logic flow diagram illustrating a method 1000 of semantic parsing based on the Uni-Parser framework shown in FIGS. 1-9, according to some embodiments described herein. One or more of the processes of method 1000 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the processes. In some embodiments, method 1000 corresponds to the operation of the Uni-Parser module 830 (e.g., FIGS. 8-9) that performs semantic parsing.
[0071] As illustrated, the method 1000 includes a number of enumerated steps, but aspects of the method 1000 may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order.
[0072] At step 1001, a system receives, via a communication interface, a natural language question (e.g., question 102, 202, or 302). For example, the question may be input via a user interface by a user. The communication interface may be, for example, a network interface of a computer, where the natural language question is received at a server (e.g., server 930) via a network interface (e.g., network interface 933) from a user device (e.g., user device 910).
[0073] At step 1002, the system identifies, via a named entity recognition (NER) procedure, a first entity from the natural language question. For example, the first entity may be a node in a knowledge graph such as "Central Park” as discussed in FIG. 2. For a database, the first entity may be a cell value such as "Statistics" as described in FIG. 3.
[0074] At step 1003, the system enumerates (e.g., by an enumerator 104), based on the first entity, a plurality of primitives indicative of entities or entity relations in a database or knowledge base. In a database, for example, primitives may include table column names and cell values (including the cell value matched to the first entity described at step 1002). In knowledge bases, for example, primitives may include entities and entity relations of the knowledge base (including the knowledge base entity matched to the first entity described at step 1002). In some embodiments, the knowledge base entity matched to the first entity is a starting entity, from which the other primitives are enumerated, being one hop or two hops from the starting entity.
[0075] At step 1004, the system ranks (e.g., by a ranker 106) the plurality of primitives based on their respective relevance to the natural language question. In some embodiments, separate ranked lists are generated for each class of primitive (e.g., first-hop and second-hop, or column name and cell value). Primitives enumerated at step 1003 may be provided to the ranker in a form where they are concatenated with the natural language question and/or a special token identifying the class of the primitive (e.g., first-hop, second-hop, column name, or cell value). The ranker may be a model (e.g., a neural network model) which is trained to minimize a contrastive loss. The contrastive loss may be based on a positive primitive sample extracted from a ground-truth logical form, and a negative primitive sample. The method for selecting the negative primitive sample may be random, or the negative primitive sample may be selected as a "hard" sample, which may improve training. For example, in knowledge bases the negative sample may be chosen from primitives connected in the knowledge base to a first hop entity extracted from a ground-truth logical form. For databases, the negative sample may be selected by selecting a cell value in the same table and column as a cell value extracted from the ground-truth logical form.
[0076] At step 1005, the system selects a subset of top-ranked primitives based on the ranking. This may be performed by a ranker 106, generator 108, or another component of the system. The number of primitives selected may be based on a predetermined value. When the ranking includes multiple ranked lists (e.g., a "first-hop" list and a "second-hop" list), each list may be ranked individually and the top-ranked primitives from each list may be selected.
[0077] At step 1006, the system generates (e.g., by a generator 108) a logical form that is executable on the database or the knowledge base based on the natural language question and the subset of the plurality of primitives. The executable logical form may include at least one primitive from the subset. Logical forms may be, for example, logical forms 604 or 704. [0078] At step 1007, the system transmits, via a communication interface, the logical form to a database system or a knowledge base system. For example, the database system or knowledge base system may be a server such as data vendor server 945 described in FIG. 9, to which the logical form is transmitted so that the server may execute the logical form on the database (or knowledge base).
[0079] At step 1008, the system receives, via the communication interface, a query result in response to the natural language question based on the logical form. The answer may be displayed to a user, for example on the same user interface as was used to input the question.
[0080] FIGS. 11-15 provide charts illustrating exemplary performance of different embodiments described herein. Comparisons are made to Bert Ranking as described in Gu et al., Beyond iid: three levels of generalization for question answering on knowledge bases, In Proceedings of the Web Conference, pages 3477-3488, 2021; ReTrack as described in Chen et al., Retrack: a flexible and efficient framework for knowledge base question answering, In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 325-336, 2021; UnifiedSKG as described in Xie et al., Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models, arXiv:2201.05966, 2022; RNG-KBQA as described in Ye et al., Rng-kbqa: Generation augmented iterative ranking for knowledge base question answering, arXiv:2109.08678, 2021; Topic Units as described in Lan et al., Knowledge base question answering with topic units, 2019; STAGG as described in Yih et al., Semantic parsing via staged query graph generation: Question answering with knowledge base, In Proceedings of the Joint Conference of the 53rd Annual Meeting of the ACL and the 7th International Joint Conference on Natural Language Processing of the AFNLP, 2015; QGG as described in Lan et al., Query graph generation for answering multi-hop complex questions from knowledge bases, Association for Computational Linguistics, 2020; CBR as described in Das et al., Case-based reasoning for natural language queries over knowledge bases, arXiv:2104.08762, 2021; ArcaneQA as described in Gu and Su, Arcaneqa: Dynamic program induction and contextualized encoding for knowledge base question answering, arXiv:2204.08109, 2022; Global-GNN as described in Bogin et al., Global reasoning over database structures for text-to-sql parsing, arXiv:1908.11214, 2019; EditSQL as described in Zhang et al., Editing-based sql query generation for cross-domain context-dependent questions, arXiv:1909.00786, 2019; T5-Base as described in Scholak et al., Picard: Parsing incrementally for constrained auto-regressive decoding from language models, arXiv:2109.05093, 2021; RAT-SQL as described in Wang et al., Rat-sql: Relation-aware schema encoding and linking for text-to-sql parsers, arXiv:1911.04942, 2019; BRIDGE as described in Lin et al., Bridging textual and tabular data for cross-domain text-to-sql semantic parsing, arXiv:2012.12627, 2020; T5-3B as described in Scholak et al., Picard: Parsing incrementally for constrained auto-regressive decoding from language models, arXiv:2109.05093, 2021; SQLova as described in Hwang et al., A comprehensive exploration on wikisql with table-aware word contextualization, arXiv:1902.01069, 2019; X-SQL as described in He et al., X-sql: reinforce schema representation with context, arXiv:1908.08113, 2019; IE-SQL as described in Ma et al., Mention extraction and linking for sql query generation, arXiv:2012.10074, 2020; NL2SQL as described in Guo and Gao, Content enhanced bert-based text-to-sql generation, arXiv:1910.07179, 2019; HydraNet as described in Lyu et al., Hybrid ranking network for text-to-sql, arXiv:2008.04759, 2020; and TAPEX as described in Liu et al., Tapex: Table pre-training via learning a neural sql executor, arXiv:2107.07653, 2021.
[0081] Datasets used in the comparisons include the test and dev datasets of GRAILQA as described in Gu et al., Beyond iid: three levels of generalization for question answering on knowledge bases, In Proceedings of the Web Conference, pages 3477-3488, 2021; Spider as described in Yu et al., Spider: A large-scale human labeled dataset for complex and cross-domain semantic parsing and text-to-sql task, arXiv:1809.08887, 2018; and WikiSQL as described in Zhong et al., Seq2sql: Generating structured queries from natural language using reinforcement learning, arXiv:1709.00103, 2017.
[0082] For the experiments, to construct the <|tb_cl_vl|> category primitives, relevant cell values related to the question were found. Given a question and a database, string matching was computed between phrases of arbitrary length in the question and the list of cell values under each column of all tables. A fuzzy matching algorithm was used to match a question to a possible cell value mentioned in the database. Number values in the question were also detected, and primitives were formed by pairing all column names with the value. Because column names in the WikiSQL dataset are vague (e.g., "No.", "Pick #", and "Rank"), the cell value was used to supplement the meaning of the column name: the matching cell value was used to locate the row, and the column name was paired with the cell value in the same row. The primitive ranker was initialized using BERT-base-uncased. 48 negative candidates were sampled for each primitive category. The ranker was trained for 10 epochs using a learning rate of 1e-5 and a batch size of 8. Bootstrap hard negative sampling was conducted after every two epochs. Ground truth entity linking was used for enumerating training candidates. The generator was trained using T5-base and T5-3B on the Spider dataset; the top-15 <|tb_cl|> category primitives and the top-5 <|tb_cl_vl|> category primitives returned by the ranker were used, and the T5-base model was fine-tuned for 200 epochs using a learning rate of 5e-5 and a batch size of 64. The T5-3B model was run on 16 A100 GPUs for 100 epochs using a batch size of 1024. On the WikiSQL dataset, T5-base and T5-large were used, with the top-5 <|tb_cl|> category primitives and the top-3 <|tb_cl_vl|> category primitives as the input of the generator; the T5-base/large model was fine-tuned for 20 epochs using a learning rate of 3e-5 and a batch size of 16.
[0083] FIGS. 11-12 illustrate exact match (EM) and F1 scores on the test/dev splits of GRAILQA, comparing alternative methods to an embodiment of the methods described herein (Uni-Parser). The reported models are based on the BERT-base model for the ranker and T5-base for the generator. Best results among dev are bolded, and test results better than dev are underlined. As shown, the embodiment tested (Uni-Parser) generally outperformed the prior methods. Compared with methods that enumerate logical forms, such as Bert Ranking and RNG-KBQA, the approach described herein achieves better performance in compositional and zero-shot settings. In particular, a 3.1% improvement over baselines on F1 was demonstrated in the compositional setting. This matches the expectation that the generator learns the composition of the primitives. [0084] FIGS. 13-14 summarize the results on the Spider and WikiSQL datasets, respectively. On the Spider dataset, Uni-Parser achieves competitive performance against all baseline models. Compared with generation models that use the whole DB table schema as input, such as BRIDGE and UnifiedSKG, Uni-Parser achieves a 3% improvement, suggesting the advantage of the method. Compared with the other T5-3B models, Uni-Parser achieves comparable performance with fewer training epochs. The upper block of FIG. 14 shows the comparison among small pre-trained models, and the lower block shows the comparison among large pre-trained models.
[0085] FIG. 15 illustrates the effects of hard negative strategies in ranking on the WebQSP and WikiSQL datasets. "No CG" means that primitives are not differentiated by their categories; this setting shows lower performance than the setting using categories in the input. Comparing the settings with and without hard negatives (right-most two columns) shows that the proposed hard negative sampling can help the ranker to better distinguish the positive primitive from the negative ones.
[0086] FIG. 16 provides an exemplary comparison of an output of the semantic parsing framework described in embodiments herein to another semantic parser. The top-5 ranked logical forms using RNG-KBQA contain much redundant information between each other, and none of them equals the gold logical expression. In contrast, the top-5 ranked expressions of the Uni-Parser method find the correct first and second hop primitives and generate the correct logical form. Note that even though the proper primitive is not ranked as the top-1, the generator has the capability to correctly select it.
[0087] This description and the accompanying drawings that illustrate inventive aspects, embodiments, implementations, or applications should not be taken as limiting. Various mechanical, compositional, structural, electrical, and operational changes may be made without departing from the spirit and scope of this description and the claims. In some instances, well-known circuits, structures, or techniques have not been shown or described in detail in order not to obscure the embodiments of this disclosure. Like numbers in two or more figures represent the same or similar elements. [0088] In this description, specific details are set forth describing some embodiments consistent with the present disclosure. Numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one skilled in the art that some embodiments may be practiced without some or all of these specific details. The specific embodiments disclosed herein are meant to be illustrative but not limiting. One skilled in the art may realize other elements that, although not specifically described here, are within the scope and the spirit of this disclosure. In addition, to avoid unnecessary repetition, one or more features shown and described in association with one embodiment may be incorporated into other embodiments unless specifically described otherwise or if the one or more features would make an embodiment non-functional.
[0089] Although illustrative embodiments have been shown and described, a wide range of modification, change and substitution is contemplated in the foregoing disclosure and in some instances, some features of the embodiments may be employed without a corresponding use of other features. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. Thus, the scope of the invention should be limited only by the following claims, and it is appropriate that the claims be construed broadly and in a manner consistent with the scope of the embodiments disclosed herein.

Claims

WHAT IS CLAIMED IS:
1. A method of generating for a natural language question a logical form that is executable on a database or a knowledge base, the method comprising:
receiving, via a communication interface, the natural language question;
identifying, via a named entity recognition procedure, a first entity from the natural language question;
enumerating, by an enumerator based on the first entity, a plurality of primitives indicative of entities or entity relations in the database or the knowledge base;
ranking, by a ranker module, the plurality of primitives based on their respective relevance to the natural language question;
selecting a subset of top-ranked primitives based on the ranking;
generating, by a generator, the logical form that is executable on the database or the knowledge base based on the natural language question and the subset of the plurality of primitives;
transmitting, via the communication interface, the logical form to a database system or a knowledge base system; and
receiving, via the communication interface, a query result in response to the natural language question based on the logical form.
2. The method of claim 1, further comprising: identifying a starting knowledge base entity based on matching the starting knowledge base entity with the first entity from the natural language question.
3. The method of claim 2, wherein the plurality of primitives includes primitives of the knowledge base which are within a predefined number of hops from the starting knowledge base entity.
4. The method of claim 1, wherein the plurality of primitives includes: column names of tables in the database; and a cell value of the database identified by matching the cell value with the first entity from the natural language question.
5. The method of claim 1, further comprising: providing, by the enumerator to the ranker module, the plurality of primitives respectively concatenated with the natural language question and a special token identifying a class of primitive.
6. The method of claim 1, wherein the ranker module is trained to minimize a contrastive loss, wherein the contrastive loss is based on a positive primitive sample extracted from a ground-truth logical form, and a negative primitive sample.
7. The method of claim 6, wherein the negative primitive sample is selected from the primitives connected in the knowledge base to a first hop entity extracted from the ground-truth logical form.
8. The method of claim 6, wherein the negative primitive sample is selected by selecting a cell value in the same table and column as a cell value extracted from the ground-truth logical form.
9. A system for generating for a natural language question a logical form that is executable on a database or a knowledge base, the system comprising:
a memory that stores the database or the knowledge base and a plurality of processor-executable instructions;
a communication interface that receives the natural language question; and
one or more hardware processors that read and execute the plurality of processor-executable instructions from the memory to perform operations comprising:
identifying, via a named entity recognition procedure, a first entity from the natural language question;
enumerating, by an enumerator based on the first entity, a plurality of primitives indicative of entities or entity relations in the database or the knowledge base;
ranking, by a ranker module, the plurality of primitives based on their respective relevance to the natural language question;
selecting a subset of top-ranked primitives based on the ranking;
generating, by a generator, the logical form that is executable on the database or the knowledge base based on the natural language question and the subset of the plurality of primitives;
transmitting, via the communication interface, the logical form to a database system or a knowledge base system; and
receiving, via the communication interface, a query result in response to the natural language question based on the logical form.
10. The system of claim 9, the operations further comprising: identifying a starting knowledge base entity based on matching the starting knowledge base entity with the first entity from the natural language question.
11. The system of claim 10, wherein the plurality of primitives includes primitives of the knowledge base which are within a predefined number of hops from the starting knowledge base entity.
12. The system of claim 9, wherein the plurality of primitives includes: column names of tables in the database; and a cell value of the database identified by matching the cell value with the first entity from the natural language question.
13. The system of claim 9, the operations further comprising: providing, by the enumerator to the ranker module, the plurality of primitives respectively concatenated with the natural language question and a special token identifying a class of primitive.
14. The system of claim 9, wherein the ranker module is trained to minimize a contrastive loss, wherein the contrastive loss is based on a positive primitive sample extracted from a ground-truth logical form, and a negative primitive sample.
15. The system of claim 14, wherein the negative primitive sample is selected from the primitives connected in the knowledge base to a first hop entity extracted from the ground-truth logical form.
16. The system of claim 14, wherein the negative primitive sample is selected by selecting a cell value in the same table and column as a cell value extracted from the ground-truth logical form.
17. A non-transitory machine-readable medium comprising a plurality of machine-executable instructions which, when executed by one or more processors, are adapted to cause the one or more processors to perform operations comprising:
receiving, via a communication interface, a natural language question;
identifying, via a named entity recognition procedure, a first entity from the natural language question;
enumerating, by an enumerator based on the first entity, a plurality of primitives indicative of entities or entity relations in a database or a knowledge base;
ranking, by a ranker module, the plurality of primitives based on their respective relevance to the natural language question;
selecting a subset of top-ranked primitives based on the ranking;
generating, by a generator, a logical form that is executable on the database or the knowledge base based on the natural language question and the subset of the plurality of primitives;
transmitting, via the communication interface, the logical form to a database system or a knowledge base system; and
receiving, via the communication interface, a query result in response to the natural language question based on the logical form.
18. The non-transitory machine-readable medium of claim 17, the operations further comprising: identifying a starting knowledge base entity based on matching the starting knowledge base entity with the first entity from the natural language question.
19. The non-transitory machine-readable medium of claim 18, wherein the plurality of primitives includes primitives of the knowledge base which are within a predefined number of hops from the starting knowledge base entity.
20. The non-transitory machine-readable medium of claim 17, wherein the plurality of primitives includes: column names of tables in the database; and a cell value of the database identified by matching the cell value with the first entity from the natural language question.
PCT/US2023/026146 2022-06-24 2023-06-23 Systems and methods for semantic parsing with primitive level enumeration WO2023250188A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263355438P 2022-06-24 2022-06-24
US63/355,438 2022-06-24
US18/059,691 US20240176805A1 (en) 2022-11-29 2022-11-29 Systems and methods for semantic parsing with primitive level enumeration
US18/059,691 2022-11-29

Publications (1)

Publication Number Publication Date
WO2023250188A1 true WO2023250188A1 (en) 2023-12-28

Family

ID=87418907

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/026146 WO2023250188A1 (en) 2022-06-24 2023-06-23 Systems and methods for semantic parsing with primitive level enumeration

Country Status (1)

Country Link
WO (1) WO2023250188A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011053755A1 (en) * 2009-10-30 2011-05-05 Evri, Inc. Improving keyword-based search engine results using enhanced query strategies
US20220156251A1 (en) * 2020-11-17 2022-05-19 Salesforce.Com, Inc. Tenant specific and global pretagging for natural language queries

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011053755A1 (en) * 2009-10-30 2011-05-05 Evri, Inc. Improving keyword-based search engine results using enhanced query strategies
US20220156251A1 (en) * 2020-11-17 2022-05-19 Salesforce.Com, Inc. Tenant specific and global pretagging for natural language queries

Non-Patent Citations (21)

* Cited by examiner, † Cited by third party
Title
BOGIN ET AL.: "Global reasoning over database structures for text-to-sql parsing", ARXIV: 1908.11214, 2019
CHEN ET AL.: "Retrack: a flexible and efficient framework for knowledge base question answering", PROCEEDINGS OF THE 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING: SYSTEM DEMONSTRATIONS, 2021, pages 325 - 336
DAS ET AL.: "Case-based reasoning for natural language queries over knowledge bases", ARXIV: 2104.08762, 2021
GU ET AL.: "Beyond iid: three levels of generalization for question answering on knowledge bases", PROCEEDINGS OF THE WEB CONFERENCE, 2021, pages 3477 - 3488
GUO; GAO: "Content enhanced bert-based text-to-sql generation", ARXIV: 1910.07179, 2019
GU; SU: "Arcaneqa: Dynamic program induction and contextualized encoding for knowledge base question answering", ARXIV: 2204.08109, 2022
HE ET AL.: "X-sql: reinforce schema representation with context", ARXIV: 1908.08113, 2019
HWANG ET AL.: "A comprehensive exploration on wikisql with table-aware word contextualization", ARXIV: 1902.01069, 2019
LAN ET AL., KNOWLEDGE BASE QUESTION ANSWERING WITH TOPIC UNITS, 2019
LIN ET AL.: "Bridging textual and tabular data for cross-domain text-to-sql semantic parsing", ARXIV: 2012.12627, 2020
LIU ET AL.: "Tapex: Table pre-training via learning a neural sql executor", ARXIV: 2107.07653, 2021
LYU ET AL.: "Hybrid ranking network for text-to-sql", ARXIV: 2008.04759, 2020
MA ET AL.: "Mention extraction and linking for sql query generation", ARXIV: 2012.10074, 2020
SCHOLAK ET AL.: "Picard: Parsing incrementally for constrained auto-regressive decoding from language models", ARXIV: 2109.05093, 2021
WANG ET AL.: "Rat-sql: Relation-aware schema encoding and linking for text-to-sql parsers", ARXIV: 1911.04942, 2019
XIE ET AL.: "Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models", ARXIV: 2201.05966, 2022
YE ET AL.: "Rng-kbqa: Generation augmented iterative ranking for knowledge base question answering", ARXIV:2109.08678, 2021
YIH ET AL.: "Semantic parsing via staged query graph generation: Question answering with knowledge base", PROCEEDINGS OF THE JOINT CONFERENCE OF THE 53RD ANNUAL MEETING OF THE ACL AND THE 7TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING OF THE AFNLP, 2015
YU ET AL.: "Spider: A large-scale human labeled dataset for complex and cross-domain semantic parsing and text-to-sql task", ARXIV: 1809.08887, 2018
ZHANG ET AL.: "Editing-based sql query generation for cross-domain context-dependent questions", ARXIV: 1909.00786, 2019
ZHONG ET AL.: "Seq2sql: Generating structured queries from natural language using reinforcement learning", ARXIV: 1709.00103, 2017

Similar Documents

Publication Publication Date Title
KR102448129B1 (en) Method, apparatus, device, and storage medium for linking entity
Wang et al. K-adapter: Infusing knowledge into pre-trained models with adapters
US10699080B2 (en) Capturing rich response relationships with small-data neural networks
CN110837550B (en) Knowledge graph-based question answering method and device, electronic equipment and storage medium
US11403288B2 (en) Querying a data graph using natural language queries
US11783131B2 (en) Knowledge graph fusion
US10678786B2 (en) Translating search queries on online social networks
JP2022539138A (en) Systems and methods for performing semantic search using a natural language understanding (NLU) framework
US20180357240A1 (en) Key-Value Memory Networks
US20190108282A1 (en) Parsing and Classifying Search Queries on Online Social Networks
US9734252B2 (en) Method and system for analyzing data using a query answering system
US10242320B1 (en) Machine assisted learning of entities
KR101509727B1 (en) Apparatus for creating alignment corpus based on unsupervised alignment and method thereof, and apparatus for performing morphological analysis of non-canonical text using the alignment corpus and method thereof
US12013902B2 (en) Inter-document attention mechanism
WO2016087982A1 (en) Persona-based profiles in question answering system
US20220245353A1 (en) System and method for entity labeling in a natural language understanding (nlu) framework
US20220164546A1 (en) Machine Learning Systems and Methods for Many-Hop Fact Extraction and Claim Verification
Wong et al. A survey of natural language processing implementation for data query systems
US11514258B2 (en) Table header detection using global machine learning features from orthogonal rows and columns
CN115114419A (en) Question and answer processing method and device, electronic equipment and computer readable medium
Liu et al. A split-and-recombine approach for follow-up query analysis
US20240176805A1 (en) Systems and methods for semantic parsing with primitive level enumeration
WO2023250188A1 (en) Systems and methods for semantic parsing with primitive level enumeration
Adel et al. Type-aware convolutional neural networks for slot filling
CN112528600B (en) Text data processing method, related device and computer program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23744284

Country of ref document: EP

Kind code of ref document: A1