WO2021237934A1 - Answer selection method and apparatus, computer device, and computer-readable storage medium - Google Patents

Answer selection method and apparatus, computer device, and computer-readable storage medium

Info

Publication number
WO2021237934A1
WO2021237934A1 · PCT/CN2020/105901 · CN2020105901W
Authority
WO
WIPO (PCT)
Prior art keywords
sentence
candidate
text
sentences
similarity
Prior art date
Application number
PCT/CN2020/105901
Other languages
English (en)
French (fr)
Inventor
蒋宏达
徐国强
Original Assignee
深圳壹账通智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司
Publication of WO2021237934A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval of unstructured textual data
    • G06F 16/33: Querying
    • G06F 16/332: Query formulation
    • G06F 16/3329: Natural language query formulation or dialogue systems
    • G06F 16/35: Clustering; Classification
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06F 40/00: Handling natural language data
    • G06F 40/10: Text processing
    • G06F 40/194: Calculation of difference between files
    • G06F 40/20: Natural language analysis
    • G06F 40/205: Parsing
    • G06F 40/211: Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G06F 40/30: Semantic analysis
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent

Definitions

  • This application relates to the field of artificial intelligence technology, in particular to an answer selection method, device, computer equipment, and computer-readable storage medium.
  • the first aspect of this application provides an answer selection method, and the answer selection method includes:
  • For each candidate sentence, sort the similarities from high to low, and determine the text sentences corresponding to the top preset first number of similarities as similar sentences of the candidate sentence;
  • the candidate identifier corresponding to the target candidate sentence is determined as the answer.
  • a second aspect of the present application provides an answer selection device, the answer selection device includes:
  • the obtaining module is used to obtain the reading text, the question stem, and multiple candidate options corresponding to the question stem;
  • a calculation module configured to combine the stem and each candidate option into a candidate sentence, and calculate the similarity between each candidate sentence and each text sentence in the read text;
  • the first determining module is configured to, for each candidate sentence, sort the similarities from high to low, and determine the text sentences corresponding to the top preset first number of similarities as similar sentences of the candidate sentence;
  • the input module is used to combine candidate sentences and similar sentences of the candidate sentences into candidate sentence pairs, and input each candidate sentence pair into the implication relationship classification model to obtain the implication relationship probability value;
  • a sorting module configured to sort the implication relationship probability values from high to low, and determine the candidate sentences corresponding to the top preset second number of implication relationship probability values as the target candidate sentences;
  • the second determining module is used to determine the candidate identifier corresponding to the target candidate sentence as the answer.
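  • As a concrete illustration of how these modules could fit together, the following minimal Python sketch strings the six steps into one function. It is not the patented implementation: the function names, the data layout, and the aggregation of several sentence-pair probabilities into one value per option (the max here) are assumptions, and score_similarity and score_entailment stand in for the BERT-based text binary classification model and the implication relationship classification model described further below.

```python
# Hypothetical end-to-end sketch of the answer selection pipeline.
from typing import Callable, Dict, List, Tuple


def select_answers(
    stem: str,
    options: Dict[str, str],                 # candidate identifier -> option text
    text_sentences: List[str],               # sentences of the reading text
    score_similarity: Callable[[str, str], float],
    score_entailment: Callable[[str, str], float],
    first_number: int = 2,                   # preset first number of similar sentences
    second_number: int = 1,                  # preset second number of answers
) -> List[str]:
    """Return the candidate identifiers selected as answers."""
    scored_options: List[Tuple[str, float]] = []
    for identifier, option in options.items():
        # 1. Combine the stem and the candidate option into a candidate sentence.
        candidate_sentence = f"{stem} {option}"

        # 2. Similarity between the candidate sentence and every text sentence.
        similarities = [
            (sentence, score_similarity(candidate_sentence, sentence))
            for sentence in text_sentences
        ]

        # 3. Keep the top first_number most similar sentences.
        similarities.sort(key=lambda pair: pair[1], reverse=True)
        similar_sentences = [sentence for sentence, _ in similarities[:first_number]]

        # 4. Implication relationship probability over the candidate sentence pairs.
        probability = max(
            score_entailment(sentence, candidate_sentence)
            for sentence in similar_sentences
        )
        scored_options.append((identifier, probability))

    # 5. Keep the top second_number candidate sentences as target candidate sentences.
    scored_options.sort(key=lambda pair: pair[1], reverse=True)

    # 6. The candidate identifiers of the target candidate sentences are the answers.
    return [identifier for identifier, _ in scored_options[:second_number]]
```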
  • a third aspect of the present application provides a computer device that includes a processor, and the processor is configured to execute computer-readable instructions stored in a memory to implement the following steps:
  • For each candidate sentence, sort the similarities from high to low, and determine the text sentences corresponding to the top preset first number of similarities as similar sentences of the candidate sentence;
  • the candidate identifier corresponding to the target candidate sentence is determined as the answer.
  • a fourth aspect of the present application provides a computer-readable storage medium having computer-readable instructions stored on the computer-readable storage medium, and when the computer-readable instructions are executed by a processor, the following steps are implemented:
  • For each candidate sentence, sort the similarities from high to low, and determine the text sentences corresponding to the top preset first number of similarities as similar sentences of the candidate sentence;
  • the candidate identifier corresponding to the target candidate sentence is determined as the answer.
  • This application combines the stem and each candidate option into a candidate sentence, and calculates the similarity between each candidate sentence and each text sentence in the reading text; for each candidate sentence, the similarities are sorted from high to low, and the text sentences corresponding to the top preset first number of similarities are determined as similar sentences of the candidate sentence. This improves the processing efficiency over the entire reading text and the candidate options. Inputting the candidate sentence pairs, each composed of a candidate sentence and its similar sentences, into the implication relationship classification model improves the accuracy of the calculated implication relationship probability value of each candidate sentence, thereby improving the efficiency and accuracy of answer selection.
  • Fig. 1 is a flowchart of an answer selection method provided by an embodiment of the present application.
  • Fig. 2 is a structural diagram of an answer selection device provided by an embodiment of the present application.
  • Fig. 3 is a schematic diagram of a computer device provided by an embodiment of the present application.
  • the answer selection method of this application is applied to one or more computer devices.
  • the computer device is a device that can automatically perform numerical calculation and/or information processing in accordance with pre-set or stored instructions.
  • Its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
  • This application can be used in many general-purpose or special-purpose computer system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multi-processor systems, microprocessor-based systems, set-top boxes, programmable consumer electronic devices, network PCs, minicomputers, mainframe computers, and distributed computing environments that include any of the above systems or devices.
  • This application may be described in the general context of computer-executable instructions executed by a computer, such as a program module.
  • program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • This application can also be practiced in distributed computing environments. In these distributed computing environments, tasks are performed by remote processing devices connected through a communication network.
  • program modules can be located in local and remote computer storage media including storage devices.
  • the computer device may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the computer device can interact with the user through a keyboard, a mouse, a remote control, a touch panel, or a voice control device.
  • Fig. 1 is a flowchart of an answer selection method provided in Embodiment 1 of the present application.
  • the answer selection method is applied to computer equipment, and is used to select the correct answer in multiple-choice questions of reading comprehension.
  • the answer selection method specifically includes the following steps. According to different requirements, the order of the steps in the flowchart can be changed, and some can be omitted.
  • For example, for a TOEFL matching question, the reading text, the question stem, and 7 candidate options corresponding to the stem are obtained; the stem can be a word, phrase, or sentence, and 2 or 3 of the 7 candidate options are correct answers to the stem. As another example, for an English reading comprehension question, 4 candidate options may be obtained, exactly 1 of which is the correct answer to the stem.
  • a character recognition device may be used to recognize the reading text, the question stem, and multiple candidate options corresponding to the question stem from the paper test questions.
  • the reading text, the question stem, and multiple candidate options corresponding to the question stem may also be read from a storage medium.
  • the combining of the stem and each candidate option into a candidate sentence includes: concatenating the stem and the candidate option. For example, if the stem is "foods" and a candidate option is "that will help lower blood sugar", concatenating them yields the candidate sentence "foods that will help lower blood sugar".
  • the calculating of the similarity between each candidate sentence and each text sentence in the reading text includes: calculating, by a trained BERT-based text binary classification model, the similarity between the candidate sentence and each text sentence in the reading text.
  • the text binary classification model includes a BERT layer and a SOFTMAX classification layer.
  • the text binary classification model is a neural network based on artificial intelligence.
  • For example, the candidate sentence is "foods that will help lower blood sugar", and two sentences in the reading text are "Lemons are rich in Vitamin C and their acidity helps to lower other foods' glycemic indexes. Oat and rice bran crackers make healthy snacks. Complement with organic nut butter or cheese. Other foods that stabilize blood sugar are cheese, egg yolks, berries and brewer's yeast" (hereinafter the first sentence) and "Low hemoglobin, high blood pressure, high levels of bad cholesterol and abnormal blood sugar levels are a few factors that influence blood health. Your diet can go a long way in promoting healthy blood, and most foods that are good for the blood also promote healthy weight and general well-being" (hereinafter the second sentence).
  • the text binary classification model calculates that the similarity between the candidate sentence and the first sentence is 0.942, and that the similarity between the candidate sentence and the second sentence is 0.034.
  • the first sentence, which has the higher similarity, is determined as a similar sentence of the candidate sentence.
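  • The scoring step just illustrated could be realized, for instance, with the Hugging Face transformers library, as in the minimal Python sketch below. This is not the patented implementation: the checkpoint path is a placeholder for a fine-tuned pair classifier, and treating class 1 as the "similar" class is an assumption.

```python
# Sketch: scoring (candidate sentence, text sentence) pairs with a fine-tuned
# BERT binary classifier (BERT encoder followed by a softmax over two classes).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_PATH = "path/to/finetuned-bert-pair-classifier"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH, num_labels=2)
model.eval()


def pair_similarity(candidate_sentence: str, text_sentence: str) -> float:
    """Probability of the 'similar' class for a sentence pair."""
    inputs = tokenizer(
        candidate_sentence,
        text_sentence,
        return_tensors="pt",
        truncation=True,
        max_length=512,  # the sentence-pair representation is capped at 512 tokens
    )
    with torch.no_grad():
        logits = model(**inputs).logits            # shape (1, 2)
    probabilities = torch.softmax(logits, dim=-1)
    return probabilities[0, 1].item()              # class 1 assumed to mean "similar"


candidate = "foods that will help lower blood sugar"
first_sentence = "Other foods that stabilize blood sugar are cheese, egg yolks, berries and brewer's yeast"
second_sentence = "Your diet can go a long way in promoting healthy blood"
print(pair_similarity(candidate, first_sentence))   # expected to be high, on the order of 0.94
print(pair_similarity(candidate, second_sentence))  # expected to be low, on the order of 0.03
```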
  • the training process of the text binary classification model includes:
  • obtaining candidate sentences and reading texts from the MSMARCO data set;
  • combining the candidate sentence with each text sentence in the reading text into sentence pairs, and setting a label for each sentence pair;
  • encoding the sentence pairs through the BERT layer to obtain sentence pair vectors;
  • calculating the sentence pair vectors through the forward propagation algorithm of the SOFTMAX classification layer to obtain the similarity of each sentence pair;
  • optimizing the parameters of the BERT layer and the SOFTMAX classification layer with a back propagation algorithm according to the similarities and the labels of the sentence pairs, to obtain the text binary classification model.
  • the sentence pair vector may be a 512-dimensional vector.
  • the classification loss function in the SOFTMAX classification layer adopts the cross-entropy loss function.
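  • As a rough illustration of this training loop, the following PyTorch sketch fine-tunes a BERT sequence-pair classifier with cross-entropy loss and backpropagation. The MS MARCO loading step, the two example pairs, and the hyperparameters are placeholders, and BertForSequenceClassification is used as a convenient stand-in for a separate BERT layer plus SOFTMAX classification layer.

```python
# Sketch: training the BERT-based text binary classification model.
# Labels: 1 for a relevant (similar) sentence pair, 0 otherwise.
import torch
from torch.utils.data import DataLoader
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# In practice these pairs would be built from the MS MARCO data set.
train_pairs = [
    ("foods that will help lower blood sugar",
     "Other foods that stabilize blood sugar are cheese, egg yolks, berries and brewer's yeast", 1),
    ("foods that will help lower blood sugar",
     "Your diet can go a long way in promoting healthy blood", 0),
]


def collate(batch):
    first = [a for a, _, _ in batch]
    second = [b for _, b, _ in batch]
    encoded = tokenizer(first, second, padding=True, truncation=True,
                        max_length=512, return_tensors="pt")
    encoded["labels"] = torch.tensor([label for _, _, label in batch])
    return encoded


loader = DataLoader(train_pairs, batch_size=2, shuffle=True, collate_fn=collate)
model.train()
for epoch in range(3):
    for batch in loader:
        optimizer.zero_grad()
        outputs = model(**batch)   # forward pass; loss is cross-entropy over 2 classes
        outputs.loss.backward()    # backpropagation through classifier and BERT layers
        optimizer.step()
```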
  • In another embodiment, the calculating, by the trained BERT-based text binary classification model, of the similarity between the candidate sentence and each text sentence in the reading text includes:
  • obtaining a vector representation of the stem and a vector representation of the candidate option, where a vector representation is a vector or a sequence of vectors;
  • adding the vector representation of the stem and the vector representation of the candidate option element-wise to obtain a vector representation of the candidate sentence;
  • obtaining a vector representation of each text sentence in the reading text;
  • calculating, by the text binary classification model, the similarity between the candidate sentence and each text sentence in the reading text based on the vector representation of the candidate sentence and the vector representations of the text sentences.
  • In another embodiment, the calculating of the similarity between each candidate sentence and each text sentence in the reading text includes: calling a WORD2VECTOR-based semantic relevance calculation method to calculate the similarity between the candidate sentence and each text sentence in the reading text.
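  • One rough sketch of such a WORD2VECTOR-based alternative is given below, using gensim word vectors, mean pooling, and cosine similarity; the pretrained vector set and the pooling scheme are illustrative assumptions, since the application only names WORD2VECTOR-based semantic relevance in general.

```python
# Sketch: word2vec-based semantic relevance between a candidate sentence and a
# text sentence, using averaged word vectors and cosine similarity.
import numpy as np
import gensim.downloader as api

word_vectors = api.load("word2vec-google-news-300")  # pretrained word2vec vectors


def sentence_vector(sentence: str) -> np.ndarray:
    tokens = [t for t in sentence.lower().split() if t in word_vectors]
    if not tokens:
        return np.zeros(word_vectors.vector_size)
    return np.mean([word_vectors[t] for t in tokens], axis=0)


def word2vec_similarity(candidate_sentence: str, text_sentence: str) -> float:
    a = sentence_vector(candidate_sentence)
    b = sentence_vector(text_sentence)
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.dot(a, b) / denom) if denom else 0.0
```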
  • For each candidate sentence, sort the similarities from high to low, and determine the text sentences corresponding to the top preset first number of similarities as similar sentences of the candidate sentence.
  • For example, the similarities may be sorted from high to low, and the text sentences corresponding to the top two similarities are determined as similar sentences of the candidate sentence.
  • Continuing the example above, the reading text also includes a third sentence, and the similarity between the candidate sentence and the third sentence is 0.882; the first sentence and the third sentence, which correspond to the top two similarities, are determined as similar sentences of the candidate sentence.
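  • This ranking step is straightforward; a possible sketch reusing the worked similarity values above (the sentence labels and variable names are illustrative only):

```python
# Sketch: keep the text sentences with the top preset first number of similarities.
similarities = {
    "first sentence": 0.942,
    "third sentence": 0.882,
    "second sentence": 0.034,
}
first_number = 2
ranked = sorted(similarities.items(), key=lambda item: item[1], reverse=True)
similar_sentences = [sentence for sentence, _ in ranked[:first_number]]
print(similar_sentences)  # ['first sentence', 'third sentence']
```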
  • the implication relation classification model includes a ROBERTA layer and a linear classification layer.
  • the implication relationship model is a neural network based on artificial intelligence.
  • the training process of the implication relationship classification model includes:
  • obtaining a plurality of first sentence pair training samples and a plurality of second sentence pair training samples, where a first sentence pair training sample includes a first sentence and a similar sentence, and a second sentence pair training sample includes a second sentence and a dissimilar sentence;
  • setting a first label for the plurality of first sentence pair training samples, and setting a second label for the plurality of second sentence pair training samples;
  • calculating the plurality of first sentence pair training samples and the plurality of second sentence pair training samples through the forward propagation algorithm of a deep learning network to obtain an output vector of each training sample;
  • a backpropagation algorithm is used to optimize the parameters in the deep learning network according to the labels and output vectors of the training samples to obtain the implication relationship classification model.
  • the first label may be 1, and the second label may be 0.
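  • A hedged sketch of such a training setup follows, using RobertaForSequenceClassification from transformers as a stand-in for a ROBERTA layer followed by a linear classification layer; the example pairs and hyperparameters are assumptions.

```python
# Sketch: training the implication relationship (entailment) classification model.
# First sentence pairs (sentence, similar sentence) get label 1; second sentence
# pairs (sentence, dissimilar sentence) get label 0.
import torch
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

training_pairs = [
    ("Other foods that stabilize blood sugar are cheese, egg yolks and berries",
     "foods that will help lower blood sugar", 1),   # first label
    ("Your diet can go a long way in promoting healthy blood",
     "foods that will help lower blood sugar", 0),   # second label
]

model.train()
for premise, hypothesis, label in training_pairs:
    encoded = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    encoded["labels"] = torch.tensor([label])
    optimizer.zero_grad()
    outputs = model(**encoded)   # forward propagation; cross-entropy loss
    outputs.loss.backward()      # backpropagation updates ROBERTA and the linear layer
    optimizer.step()


def entailment_probability(premise: str, hypothesis: str) -> float:
    """Probability that the premise entails the hypothesis (class 1 assumed)."""
    model.eval()
    with torch.no_grad():
        encoded = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
        probabilities = torch.softmax(model(**encoded).logits, dim=-1)
    return probabilities[0, 1].item()
```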
  • the implied relationship probability values of 7 candidate sentences from high to low are 0.95, 0.92, 0.76, 0.66, 0.23, 0.12, 0.02.
  • the second number is preset to 2, and the candidate sentences corresponding to the implied relationship probability values of 0.95 and 0.92 ranked in the top 2 are determined as target candidate sentences.
  • the implied relationship probability values of the 4 candidate sentences are 0.96, 0.72, 0.56, and 0.16 from high to low.
  • the second number is preset to be 1, and the candidate sentence corresponding to the implied relationship probability value of 0.96 ranked in the top 1 is determined as the target candidate sentence.
  • the implied relationship probability values of the 7 candidate sentences from high to low are 0.95, 0.92, 0.76, 0.66, 0.23, 0.12, 0.02, and the corresponding candidate identifiers are A, C, D, G, B, F, E.
  • the candidate sentences corresponding to the implied relationship probability values of 0.95 and 0.92 ranked in the top 2 are determined as target candidate sentences, and the A and C options corresponding to the two target candidate sentences are determined as answers.
  • the implied relationship probability values of the four candidate sentences are 0.96, 0.72, 0.56, and 0.16 from high to low, and the corresponding candidate identifiers are B, D, A, and C.
  • the candidate sentence corresponding to the implied relationship probability value of 0.96 ranked in the top 1 is determined as the target candidate sentence, and the B option corresponding to the target candidate sentence is determined as the answer.
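  • Expressed as code, this final selection step might look like the short sketch below, with the values taken from the example just given:

```python
# Sketch: pick the candidate identifiers whose implication relationship
# probability values rank within the top preset second number.
probability_by_identifier = {
    "A": 0.95, "C": 0.92, "D": 0.76, "G": 0.66, "B": 0.23, "F": 0.12, "E": 0.02,
}
second_number = 2
answers = sorted(probability_by_identifier,
                 key=probability_by_identifier.get, reverse=True)[:second_number]
print(answers)  # ['A', 'C']
```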
  • The answer selection method of the first embodiment combines the stem and each candidate option into a candidate sentence, and calculates the similarity between each candidate sentence and each text sentence in the reading text; for each candidate sentence, the similarities are sorted from high to low, and the text sentences corresponding to the top preset first number of similarities are determined as similar sentences of the candidate sentence. This improves the processing efficiency over the entire reading text and the candidate options.
  • Inputting the candidate sentence pairs, each composed of a candidate sentence and its similar sentences, into the implication relationship classification model improves the accuracy of the calculated implication relationship probability value of each candidate sentence, thereby improving the efficiency and accuracy of answer selection.
  • In another embodiment, the answer selection method further includes: obtaining summary information of each paragraph in the reading text; calculating a summary similarity between the candidate sentence and the summary information of each paragraph; sorting the summary similarities from high to low, and determining the paragraphs corresponding to the top preset third number of summary similarities as target paragraphs; and calculating the similarity between the candidate sentence and each text sentence in the target paragraphs. By determining target paragraphs and computing similarities only within them, rather than directly against every text sentence in the reading text, the amount of computation is greatly reduced, which improves the efficiency of the similarity calculation and thus of answer selection.
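  • One way this paragraph pre-filtering could be sketched is shown below; the summarize callable is a placeholder (the application does not specify how paragraph summary information is produced), and pair_similarity refers back to the earlier similarity sketch.

```python
# Sketch: restrict similarity computation to target paragraphs selected by
# summary similarity, instead of scoring every sentence of the reading text.
from typing import Callable, List


def similarities_in_target_paragraphs(
    candidate_sentence: str,
    paragraphs: List[List[str]],             # reading text as paragraphs of sentences
    summarize: Callable[[List[str]], str],   # placeholder summarizer
    pair_similarity: Callable[[str, str], float],
    third_number: int = 2,                   # preset third number of target paragraphs
) -> List[float]:
    # Summary similarity between the candidate sentence and each paragraph summary.
    scored = [
        (paragraph, pair_similarity(candidate_sentence, summarize(paragraph)))
        for paragraph in paragraphs
    ]
    scored.sort(key=lambda item: item[1], reverse=True)
    target_paragraphs = [paragraph for paragraph, _ in scored[:third_number]]

    # Only sentences inside the target paragraphs are scored, reducing computation.
    return [
        pair_similarity(candidate_sentence, sentence)
        for paragraph in target_paragraphs
        for sentence in paragraph
    ]
```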
  • Fig. 2 is a structural diagram of the answer selection device provided in the second embodiment of the present application.
  • the answer selection device 20 is applied to computer equipment.
  • the answer selection device 20 is used to select the correct answer in multiple-choice questions of reading comprehension.
  • the answer selection device 20 may include an acquisition module 201, a calculation module 202, a first determination module 203, an input module 204, a sorting module 205, and a second determination module 206.
  • the obtaining module 201 is configured to obtain the reading text, the question stem, and multiple candidate options corresponding to the question stem.
  • For example, for a TOEFL matching question, the reading text, the question stem, and 7 candidate options corresponding to the stem are obtained; the stem can be a word, phrase, or sentence, and 2 or 3 of the 7 candidate options are correct answers to the stem. As another example, for an English reading comprehension question, 4 candidate options may be obtained, exactly 1 of which is the correct answer to the stem.
  • a character recognition device may be used to recognize the reading text, the question stem, and multiple candidate options corresponding to the question stem from the paper test questions.
  • the reading text, the question stem, and multiple candidate options corresponding to the question stem may also be read from a storage medium.
  • the calculation module 202 is configured to combine the stem and each candidate option into a candidate sentence, and calculate the similarity between each candidate sentence and each text sentence in the read text.
  • the combining of the stem and each candidate option into a candidate sentence includes: concatenating the stem and the candidate option. For example, if the stem is "foods" and a candidate option is "that will help lower blood sugar", concatenating them yields the candidate sentence "foods that will help lower blood sugar".
  • the calculating of the similarity between each candidate sentence and each text sentence in the reading text includes: calculating, by a trained BERT-based text binary classification model, the similarity between the candidate sentence and each text sentence in the reading text.
  • the text binary classification model includes a BERT layer and a SOFTMAX classification layer.
  • the text binary classification model is a neural network based on artificial intelligence.
  • For example, the candidate sentence is "foods that will help lower blood sugar", and two sentences in the reading text are "Lemons are rich in Vitamin C and their acidity helps to lower other foods' glycemic indexes. Oat and rice bran crackers make healthy snacks. Complement with organic nut butter or cheese. Other foods that stabilize blood sugar are cheese, egg yolks, berries and brewer's yeast" (hereinafter the first sentence) and "Low hemoglobin, high blood pressure, high levels of bad cholesterol and abnormal blood sugar levels are a few factors that influence blood health. Your diet can go a long way in promoting healthy blood, and most foods that are good for the blood also promote healthy weight and general well-being" (hereinafter the second sentence).
  • the text binary classification model calculates that the similarity between the candidate sentence and the first sentence is 0.942, and that the similarity between the candidate sentence and the second sentence is 0.034.
  • the first sentence, which has the higher similarity, is determined as a similar sentence of the candidate sentence.
  • the training process of the text binary classification model includes:
  • the parameters of the BERT layer and the SOFTMAX classification layer are optimized by using a back propagation algorithm to obtain the text binary classification model.
  • the sentence pair vector may be a 512-dimensional vector.
  • the classification loss function in the SOFTMAX classification layer adopts the cross-entropy loss function.
  • In another embodiment, the calculating, by the trained BERT-based text binary classification model, of the similarity between the candidate sentence and each text sentence in the reading text includes: obtaining a vector representation of the stem and of the candidate option, where a vector representation is a vector or a sequence of vectors; adding the two vector representations element-wise to obtain a vector representation of the candidate sentence; obtaining a vector representation of each text sentence in the reading text; and calculating the similarity between the candidate sentence and each text sentence based on these vector representations.
  • In another embodiment, the calculating of the similarity between each candidate sentence and each text sentence in the reading text includes: calling a WORD2VECTOR-based semantic relevance calculation method.
  • the first determining module 203 is configured to, for each candidate sentence, sort the similarities from high to low, and determine the text sentences corresponding to the top preset first number of similarities as similar sentences of the candidate sentence.
  • For example, the similarities may be sorted from high to low, and the text sentences corresponding to the top two similarities are determined as similar sentences of the candidate sentence.
  • Continuing the example above, the reading text also includes a third sentence, and the similarity between the candidate sentence and the third sentence is 0.882; the first sentence and the third sentence, which correspond to the top two similarities, are determined as similar sentences of the candidate sentence.
  • the input module 204 is configured to combine candidate sentences and similar sentences of the candidate sentences into candidate sentence pairs, and input each candidate sentence pair into the implication relationship classification model to obtain the implication relationship probability value.
  • the implication relation classification model includes a ROBERTA layer and a linear classification layer.
  • the implication relationship model is a neural network based on artificial intelligence.
  • the training process of the implication relationship classification model includes:
  • obtaining a plurality of first sentence pair training samples and a plurality of second sentence pair training samples, where a first sentence pair training sample includes a first sentence and a similar sentence, and a second sentence pair training sample includes a second sentence and a dissimilar sentence; setting a first label for the first sentence pair training samples and a second label for the second sentence pair training samples; and calculating the training samples through the forward propagation algorithm of a deep learning network to obtain an output vector of each training sample;
  • a backpropagation algorithm is used to optimize the parameters in the deep learning network according to the labels and output vectors of the training samples to obtain the implication relationship classification model.
  • the first label may be 1, and the second label may be 0.
  • the sorting module 205 is configured to sort the implication relationship probability values from high to low, and determine the candidate sentences corresponding to the top preset second number of implication relationship probability values as the target candidate sentences.
  • the implied relationship probability values of 7 candidate sentences from high to low are 0.95, 0.92, 0.76, 0.66, 0.23, 0.12, 0.02.
  • the second number is preset to 2, and the candidate sentences corresponding to the implied relationship probability values of 0.95 and 0.92 ranked in the top 2 are determined as target candidate sentences.
  • the implied relationship probability values of the 4 candidate sentences are 0.96, 0.72, 0.56, and 0.16 from high to low.
  • the second number is preset to be 1, and the candidate sentence corresponding to the implied relationship probability value of 0.96 ranked in the top 1 is determined as the target candidate sentence.
  • the second determining module 206 is configured to determine the candidate identifier corresponding to the target candidate sentence as the answer.
  • the implied relationship probability values of the 7 candidate sentences from high to low are 0.95, 0.92, 0.76, 0.66, 0.23, 0.12, 0.02, and the corresponding candidate identifiers are A, C, D, G, B, F, E.
  • the candidate sentences corresponding to the implied relationship probability values of 0.95 and 0.92 ranked in the top 2 are determined as target candidate sentences, and the A and C options corresponding to the two target candidate sentences are determined as answers.
  • the implied relationship probability values of the four candidate sentences are 0.96, 0.72, 0.56, and 0.16 from high to low, and the corresponding candidate identifiers are B, D, A, and C.
  • the candidate sentence corresponding to the implied relationship probability value of 0.96 ranked in the top 1 is determined as the target candidate sentence, and the B option corresponding to the target candidate sentence is determined as the answer.
  • The answer selection device 20 of the second embodiment combines the stem and each candidate option into a candidate sentence, and calculates the similarity between each candidate sentence and each text sentence in the reading text; for each candidate sentence, the similarities are sorted from high to low, and the text sentences corresponding to the top preset first number of similarities are determined as similar sentences of the candidate sentence. This improves the processing efficiency over the entire reading text and the candidate options. Inputting the candidate sentence pairs, each composed of a candidate sentence and its similar sentences, into the implication relationship classification model improves the accuracy of the calculated implication relationship probability value of each candidate sentence, thereby improving the efficiency and accuracy of answer selection.
  • In another embodiment, the calculation module is further configured to obtain summary information of each paragraph in the reading text, calculate a summary similarity between the candidate sentence and the summary information of each paragraph, sort the summary similarities from high to low, determine the paragraphs corresponding to the top preset third number of summary similarities as target paragraphs, and calculate the similarity between the candidate sentence and each text sentence in the target paragraphs.
  • This embodiment provides a computer-readable storage medium having computer-readable instructions stored on the computer-readable storage medium.
  • the computer-readable storage medium may be nonvolatile or volatile.
  • For each candidate sentence, sort the similarities from high to low, and determine the text sentences corresponding to the top preset first number of similarities as similar sentences of the candidate sentence;
  • Alternatively, when the computer-readable instructions are executed by the processor, the functions of the modules in the above-mentioned device embodiment are implemented, for example, modules 201-206 in FIG. 2:
  • the obtaining module 201 is configured to obtain the reading text, the question stem, and multiple candidate options corresponding to the question stem;
  • the calculation module 202 is configured to combine the stem and each candidate option into a candidate sentence, and calculate the similarity between each candidate sentence and each text sentence in the read text;
  • the first determining module 203 is configured to, for each candidate sentence, sort the similarities from high to low, and determine the text sentences corresponding to the top preset first number of similarities as similar sentences of the candidate sentence;
  • the input module 204 is configured to combine candidate sentences and similar sentences of the candidate sentences into candidate sentence pairs, and input each candidate sentence pair into the implication relationship classification model to obtain the implication relationship probability value;
  • the sorting module 205 is configured to sort the implied relationship probability values from high to low, and determine the candidate sentences corresponding to the second number of implied relationship probability values preset in the ranking as the target candidate sentences;
  • the second determining module 206 is configured to determine the candidate identifier corresponding to the target candidate sentence as the answer.
  • FIG. 3 is a schematic diagram of the computer equipment provided in the fourth embodiment of the application.
  • the computer device 30 includes a memory 301, a processor 302, and computer-readable instructions 303 stored in the memory 301 and running on the processor 302, such as an answer selection program.
  • when the processor 302 executes the computer-readable instructions 303, the steps in the embodiment of the answer selection method described above are implemented, for example, steps 101-106 shown in Fig. 1:
  • For each candidate sentence, sort the similarities from high to low, and determine the text sentences corresponding to the top preset first number of similarities as similar sentences of the candidate sentence;
  • Alternatively, when the computer-readable instructions are executed by the processor, the functions of the modules in the above-mentioned device embodiment are implemented, for example, modules 201-206 in FIG. 2:
  • the obtaining module 201 is configured to obtain the reading text, the question stem, and multiple candidate options corresponding to the question stem;
  • the calculation module 202 is configured to combine the stem and each candidate option into a candidate sentence, and calculate the similarity between each candidate sentence and each text sentence in the read text;
  • the first determining module 203 is configured to, for each candidate sentence, sort the similarities from high to low, and determine the text sentences corresponding to the top preset first number of similarities as similar sentences of the candidate sentence;
  • the input module 204 is configured to combine candidate sentences and similar sentences of the candidate sentences into candidate sentence pairs, and input each candidate sentence pair into the implication relationship classification model to obtain the implication relationship probability value;
  • the sorting module 205 is configured to sort the implied relationship probability values from high to low, and determine the candidate sentences corresponding to the second number of implied relationship probability values preset in the ranking as the target candidate sentences;
  • the second determining module 206 is configured to determine the candidate identifier corresponding to the target candidate sentence as the answer.
  • the computer-readable instruction 303 may be divided into one or more modules, and the one or more modules are stored in the memory 301 and executed by the processor 302 to complete the method.
  • the one or more modules may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer readable instruction 303 in the computer device 30.
  • the computer-readable instructions 303 can be divided into the acquisition module 201, the calculation module 202, the first determination module 203, the input module 204, the sorting module 205, and the second determination module 206 in FIG. 2. For the specific functions of each module, see Embodiment 2.
  • the computer device 30 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the schematic diagram in Fig. 3 is only an example of the computer device 30 and does not constitute a limitation on the computer device 30; it may include more or fewer components than shown in the figure, combine certain components, or have different components.
  • the computer device 30 may also include input and output devices, network access devices, buses, and so on.
  • the so-called processor 302 may be a central processing unit (Central Processing Unit, CPU), other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor can be a microprocessor or the processor 302 can also be any conventional processor, etc.
  • the processor 302 is the control center of the computer device 30 and uses various interfaces and lines to connect the various parts of the entire computer device 30.
  • the memory 301 can be used to store the computer-readable instructions 303, and the processor 302 implements various functions of the computer device 30 by running or executing the computer-readable instructions or modules stored in the memory 301 and calling data stored in the memory 301.
  • the memory 301 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function, an image playback function, etc.), and the like; the data storage area may store data created according to the use of the computer device 30, and the like.
  • the memory 301 may include a hard disk, a memory, a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), at least one disk storage device, a flash memory device, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or other non-volatile/volatile storage devices.
  • the integrated module of the computer device 30 may be stored in a computer-readable storage medium.
  • the computer-readable storage medium may be non-volatile or volatile. Based on this understanding, all or part of the processes in the methods of the above-mentioned embodiments of this application may also be completed by instructing relevant hardware through computer-readable instructions, which may be stored in a computer-readable storage medium.
  • when executed by a processor, the computer-readable instructions can implement the steps of the foregoing method embodiments.
  • the computer-readable instructions may be in the form of source code, object code, executable file, or some intermediate form.
  • the computer-readable storage medium may include: any entity or device capable of carrying the computer-readable instructions, recording medium, U disk, mobile hard disk, magnetic disk, optical disk, read only memory (ROM), random access memory ( RAM).
  • modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, or in the form of hardware plus software functional modules.
  • the above-mentioned integrated modules implemented in the form of software functional modules may be stored in a computer-readable storage medium.
  • the above-mentioned software function modules are stored in a storage medium and include several instructions to make a computer device (which can be a personal computer, a server, a network device, or the like) or a processor execute part of the answer selection method described in each embodiment of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An answer selection method and apparatus, a computer device, and a computer-readable storage medium. The answer selection method combines the question stem and each candidate option into a candidate sentence, and calculates the similarity between each candidate sentence and each text sentence in the reading text (102); for each candidate sentence, sorts the similarities from high to low, and determines the text sentences corresponding to the top preset first number of similarities as similar sentences of the candidate sentence (103); combines the candidate sentence and its similar sentences into candidate sentence pairs, and inputs each candidate sentence pair into an implication relationship classification model to obtain implication relationship probability values (104); sorts the implication relationship probability values from high to low, and determines the candidate sentences corresponding to the top preset second number of implication relationship probability values as target candidate sentences (105); and determines the candidate identifier corresponding to the target candidate sentence as the answer (106). This application improves the efficiency and accuracy of answer selection.

Description

答案选择方法、装置、计算机设备及计算机可读存储介质
本申请要求于2020年05月29日提交中国专利局,申请号为202010481867.9申请名称为“答案选择方法、装置、计算机设备及计算机可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及人工智能技术领域,具体涉及一种答案选择方法、装置、计算机设备及计算机可读存储介质。
背景技术
如今人工智能已经慢慢开始普及。在科技、金融、教育考试、医疗等领域,都能看见人工智能的影子。
在教育考试领域的人工智能,尤其是自然语言处理在阅读理解上的应用最为突出。发明人意识到,让机器能像人一样去选择答案,这样可以省去部分人工解答校验工作。如何提升答案选择的效率和准确率,成为待解决的问题。
发明内容
鉴于以上内容,有必要提出一种答案选择方法、装置、计算机设备及计算机可读存储介质,其可以在阅读理解的选择题中选择正确答案。
本申请的第一方面提供一种答案选择方法,所述答案选择方法包括:
获取阅读文本、题干和所述题干对应的多个候选选项;
将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;
针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句;
将候选语句和所述候选语句的相似语句组合成候选语句对,并将每个候选语句对输入至蕴含关系分类模型中,得到蕴含关系概率值;
将所述蕴含关系概率值由高至低进行排序,将排序在前预设第二数量的蕴含关系概率值对应候选语句确定为目标候选语句;
将所述目标候选语句对应的候选标识确定为答案。
本申请的第二方面提供一种答案选择装置,所述答案选择装置包括:
获取模块,用于获取阅读文本、题干和所述题干对应的多个候选选项;
计算模块,用于将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;
第一确定模块,用于针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句;
输入模块,用于将候选语句和所述候选语句的相似语句组合成候选语句对,并将每个候选语句对输入至蕴含关系分类模型中,得到蕴含关系概率值;
排序模块,用于将所述蕴含关系概率值由高至低进行排序,将排序在前预设第二数量的蕴含关系概率值对应候选语句确定为目标候选语句;
第二确定模块,用于将所述目标候选语句对应的候选标识确定为答案。
本申请的第三方面提供一种计算机设备,所述计算机设备包括处理器,所述处理器用于执行存储器中存储的计算机可读指令以实现以下步骤:
获取阅读文本、题干和所述题干对应的多个候选选项;
将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;
针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句;
将候选语句和所述候选语句的相似语句组合成候选语句对,并将每个候选语句对输入至蕴含关系分类模型中,得到蕴含关系概率值;
将所述蕴含关系概率值由高至低进行排序,将排序在前预设第二数量的蕴含关系概率值对应候选语句确定为目标候选语句;
将所述目标候选语句对应的候选标识确定为答案。
本申请的第四方面提供一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机可读指令,所述计算机可读指令被处理器执行时实现以下步骤:
获取阅读文本、题干和所述题干对应的多个候选选项;
将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;
针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句;
将候选语句和所述候选语句的相似语句组合成候选语句对,并将每个候选语句对输入至蕴含关系分类模型中,得到蕴含关系概率值;
将所述蕴含关系概率值由高至低进行排序,将排序在前预设第二数量的蕴含关系概率值对应候选语句确定为目标候选语句;
将所述目标候选语句对应的候选标识确定为答案。
本申请将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句。提升了对整个阅读文本和候选选项的处理效率。将由每个候选语句和该候选语句的相似语句组合成的候选语句对输入蕴含关系分类模型,提升了计算每个候选语句的蕴含关系概率值的准确率。从而提升了答案选择的效率和准确率。
附图说明
图1是本申请实施例提供的答案选择方法的流程图。
图2是本申请实施例提供的答案选择装置的结构图。
图3是本申请实施例提供的计算机设备的示意图。
具体实施方式
为了能够更清楚地理解本申请的上述目的、特征和优点,下面结合附图和具体实施例对本申请进行详细描述。需要说明的是,在不冲突的情况下,本申请的实施例及实施例中的特征可以相互组合。
在下面的描述中阐述了很多具体细节以便于充分理解本申请,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中在本申请的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本申请。
优选地,本申请的答案选择方法应用在一个或者多个计算机设备中。所述计算机设备是一种能够按照事先设定或存储的指令,自动进行数值计算和/或信息处理的设备,其硬件包括但不限于微处理器、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程门阵列(Field-Programmable Gate Array,FPGA)、数字处理器(Digital Signal Processor,DSP)、嵌入式设备等。
本申请可用于众多通用或专用的计算机系统环境或配置中。例如:个人计算机、服务器计算机、手持设备或便携式设备、平板型设备、多处理器系统、基于微处理器的系统、置顶盒、可编程的消费电子设备、网络PC、小型计算机、大型计算机、包括以上任何系统或设备的分布式计算环境等等。本申请可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、对象、组件、数据结构等等。也可以在分布式计算环境中实践本申请,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
所述计算机设备可以是桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。所述计算机设备可以与用户通过键盘、鼠标、遥控器、触摸板或声控设备等方式进行人机交互。
实施例一
图1是本申请实施例一提供的答案选择方法的流程图。所述答案选择方法应用于计算机设备,用于在阅读理解的选择题中选择正确答案。
所述答案选择方法具体包括以下步骤,根据不同的需求,该流程图中步骤的顺序可以改变,某些可以省略。
101,获取阅读文本、题干和所述题干对应的多个候选选项。
例如,获取一个托福考试连线题的阅读文本、题干和题干对应的7个候选选项。题干可以是一个词语、短语或语句,7个候选选项中存在题干的2个或3个正确答案。
再如,获取一个英语阅读理解的阅读文本、题干和题干对应的4个候选选项。题干是一个语句,4个候选选项中存在题干的1个正确答案。
在一具体实施例中,可以通过字符识别设备从纸质试题中识别所述阅读文本、所述题干和所述题干对应的多个候选选项。也可以从存储介质读取所述阅读文本、所述题干和所述题干对应的多个候选选项。
102,将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度。
例如,可以将题干和7个候选选项分别组合,得到7个候选语句。再如,可以将题干和4个候选选项分别组合,得到4个候选语句。
在一具体实施例中,所述将所述题干和每个候选选项组合为一个候选语句包括:
通过将所述题干和所述候选选项连接的方式将所述题干和所述候选选项组合为一个候选语句。
例如,题干为“foods”,一个候选选项为“that will help lower blood sugar”,连接所述题干和所述候选选项,得到一个候选语句“foods that will help lower blood sugar”。
在一具体实施例中,所述计算每个候选语句与所述阅读文本中的每个文本语句的相似度包括:
通过训练好的基于BERT的文本二分类模型计算所述候选语句与所述阅读文本中的每个文本语句的相似度,所述文本二分类模型包括BERT层和SOFTMAX分类层。所述文本二分类模型是基于人工智能的神经网络。
例如,候选语句为“foods that will help lower blood sugar”,阅读文本中的两个语句 分别为“Lemons are rich in Vitamin C and their acidity helps to lower other foods’glycemic indexes,Oat and rice bran crackers make health snacks,Complement with organic nut butter or cheese,Other foods that stabilize blood sugar are cheese,egg yolks,berries and brewer’s yeast”(后称第一语句)、“Low hemoglobin,high blood pressure,high levels of bad cholesterol and abnormal blood sugar levels are a few factirs that influence blood health,Your diet can go a long way in promoting healthy blood,and most foods that are good for the blood are promote healthy weight and general well being”(后称第二语句)。所述文本二分类模型计算得到该候选语句与第一语句的相似度为0.942,该候选语句与第二语句的相似度为0.034。将相似度高的第一语句确定为该候选语句的相似语句。
在另一实施例中,所述文本二分类模型的训练过程包括:
获取MSMARCO数据集中的候选语句和阅读文本;
将候选语句与阅读文本中的每个文本语句组合成语句对,为每个语句对设置标签;
通过所述BERT层对语句对进行编码计算,得到语句对向量;
通过所述SOFTMAX分类层的前向传播算法对所述语句对向量进行计算,得到语句对的相似度;
根据所述语句对的相似度和语句对的标签采用反向传播算法优化所述BERT层和所述SOFTMAX分类层的参数,得到所述二分类模型。
所述语句对向量可以是512维的向量。SOFTMAX分类层中的分类损失函数采用交叉熵损失函数。
在另一实施例中,所述通过训练好的基于BERT的文本二分类模型计算所述候选语句与所述阅读文本中的每个文本语句的相似度包括:
获取所述题干的向量表示和所述候选选项的向量表示,向量表示为向量或向量序列;
将所述题干的向量表示和所述候选选项的向量表示进行元素相加,得到所述候选语句的向量表示;
获取所述阅读文本中的每个文本语句的向量表示;
通过所述文本二分类模型基于所述候选语句的向量表示和所述阅读文本中的每个文本语句的向量表示,计算所述候选语句与所述阅读文本中的每个文本语句的相似度。
在另一实施例中,所述计算每个候选语句与所述阅读文本中的每个文本语句的相似度包括:
调用基于WORD2VECTOR语义相关度计算方法计算所述候选语句与所述阅读文本中的每个文本语句的相似度。
103,针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句。
在一具体实施例中,可以将所述相似度由高至低进行排序,将排序在前的两个相似度对应的文本语句确定为所述候选语句的相似语句。
例如,上例中,阅读文本中包括第三语句,该候选语句与第三语句的相似度为0.882,将排序在前的两个相似度对应的第一语句和第三语句确定为该候选语句的相似语句。
104,将候选语句和所述候选语句的相似语句组合成候选语句对,并将每个候选语句对输入至蕴含关系分类模型中,得到蕴含关系概率值。
在一具体实施例中,所述蕴含关系分类模型包括ROBERTA层和线性分类层。所述蕴含关系模型是基于人工智能的神经网络。
在一具体实施例中,所述蕴含关系分类模型的训练过程包括:
获取多个第一语句对训练样本及获取多个第二语句对训练样本,其中,所述第一语句对训练样本包括第一语句及相似语句,所述第二语句对包括第二语句及不相似语句;
为所述多个第一语句对训练样本设置第一标签,及为多个第二语句对训练样本设置第二标签;
通过深度学习网络的前向传播算法对所述多个第一语句对训练样本及所述多个第二 语句对训练样本进行计算,得到每个训练样本的输出向量;
采用反向传播算法根据所述训练样本的标签和输出向量优化所述深度学习网络中的参数,得到所述蕴含关系分类模型。
所述第一标签可以为1,所述第二标签可以为0。
105,将所述蕴含关系概率值由高至低进行排序,将排序在前预设第二数量的蕴含关系概率值对应候选语句确定为目标候选语句。
例如,7个候选语句的蕴含关系概率值从高至低依次为0.95、0.92、0.76、0.66、0.23、0.12、0.02。预设第二数量为2,将排序在前2的0.95、0.92的蕴含关系概率值对应候选语句确定为目标候选语句。
再如,4个候选语句的蕴含关系概率值从高至低依次为0.96、0.72、0.56、0.16。预设第二数量为1,将排序在前1的0.96的蕴含关系概率值对应候选语句确定为目标候选语句。
106,将所述目标候选语句对应的候选标识确定为答案。
如上例,7个候选语句的蕴含关系概率值从高至低依次为0.95、0.92、0.76、0.66、0.23、0.12、0.02,对应的候选标识为A、C、D、G、B、F、E。将排序在前2的0.95、0.92的蕴含关系概率值对应候选语句确定为目标候选语句,将两个目标候选语句对应的A、C选项确定为答案。
再如,4个候选语句的蕴含关系概率值从高至低依次为0.96、0.72、0.56、0.16,对应的候选标识为B、D、A、C。将排序在前1的0.96的蕴含关系概率值对应候选语句确定为目标候选语句,将目标候选语句对应的B选项确定为答案。
实施例一的答案选择方法将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句。提升了对整个阅读文本和候选选项的处理效率。将由每个候选语句和该候选语句的相似语句组合成的候选语句对输入蕴含关系分类模型,提升了计算每个候选语句的蕴含关系概率值的准确率。从而提升了答案选择的效率和准确率。
在另一实施例中,所述答案选择方法还包括:
获取所述阅读文本中的每个段落的摘要信息;
计算所述候选语句与每个段落的摘要信息的摘要相似度;
将所述摘要相似度由高至低进行排序,将排序在前预设第三数量的摘要相似度对应的段落确定为目标段落;
计算所述候选语句与所述目标段落中的每个文本语句的相似度。
通过对目标段落的确定并计算所述候选语句与所述目标段落中的每个文本语句的相似度,不直接计算所述候选语句与所述阅读文本中的每个文本语句的相似度,极大地较少了计算量,提升了相似度的计算效率,从而提升了答案选择的效率。
实施例二
图2是本申请实施例二提供的答案选择装置的结构图。所述答案选择装置20应用于计算机设备。所述答案选择装置20用于在阅读理解的选择题中选择正确答案。
如图2所示,所述答案选择装置20可以包括获取模块201、计算模块202、第一确定模块203、输入模块204、排序模块205、第二确定模块206。
获取模块201,用于获取阅读文本、题干和所述题干对应的多个候选选项。
例如,获取一个托福考试连线题的阅读文本、题干和题干对应的7个候选选项。题干可以是一个词语、短语或语句,7个候选选项中存在题干的2个或3个正确答案。
再如,获取一个英语阅读理解的阅读文本、题干和题干对应的4个候选选项。题干是一个语句,4个候选选项中存在题干的1个正确答案。
在一具体实施例中,可以通过字符识别设备从纸质试题中识别所述阅读文本、所述题干和所述题干对应的多个候选选项。也可以从存储介质读取所述阅读文本、所述题干和所述题干对应的多个候选选项。
计算模块202,用于将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度。
例如,可以将题干和7个候选选项分别组合,得到7个候选语句。再如,可以将题干和4个候选选项分别组合,得到4个候选语句。
在一具体实施例中,所述将所述题干和每个候选选项组合为一个候选语句包括:
通过将所述题干和所述候选选项连接的方式将所述题干和所述候选选项组合为一个候选语句。
例如,题干为“foods”,一个候选选项为“that will help lower blood sugar”,连接所述题干和所述候选选项,得到一个候选语句“foods that will help lower blood sugar”。
在一具体实施例中,所述计算每个候选语句与所述阅读文本中的每个文本语句的相似度包括:
通过训练好的基于BERT的文本二分类模型计算所述候选语句与所述阅读文本中的每个文本语句的相似度,所述文本二分类模型包括BERT层和SOFTMAX分类层。所述文本二分类模型是基于人工智能的神经网络。
例如,候选语句为“foods that will help lower blood sugar”,阅读文本中的两个语句分别为“Lemons are rich in Vitamin C and their acidity helps to lower other foods’glycemic indexes,Oat and rice bran crackers make health snacks,Complement with organic nut butter or cheese,Other foods that stabilize blood sugar are cheese,egg yolks,berries and brewer’s yeast”(后称第一语句)、“Low hemoglobin,high blood pressure,high levels of bad cholesterol and abnormal blood sugar levels are a few factirs that influence blood health,Your diet can go a long way in promoting healthy blood,and most foods that are good for the blood are promote healthy weight and general well being”(后称第二语句)。所述文本二分类模型计算得到该候选语句与第一语句的相似度为0.942,该候选语句与第二语句的相似度为0.034。将相似度高的第一语句确定为该候选语句的相似语句。
在另一实施例中,所述文本二分类模型的训练过程包括:
获取MSMARCO数据集中的候选语句和阅读文本;
将候选语句与阅读文本中的每个文本语句组合成语句对,为每个语句对设置标签;
通过所述BERT层对语句对进行编码计算,得到语句对向量;
通过所述SOFTMAX分类层的前向传播算法对所述语句对向量进行计算,得到语句对的相似度;
根据所述语句对的相似度和语句对的标签采用反向传播算法优化所述BERT层和所述SOFTMAX分类层的参数,得到所述二分类模型。
所述语句对向量可以是512维的向量。SOFTMAX分类层中的分类损失函数采用交叉熵损失函数。
在另一实施例中,所述通过训练好的基于BERT的文本二分类模型计算所述候选语句与所述阅读文本中的每个文本语句的相似度包括:
获取所述题干的向量表示和所述候选选项的向量表示,向量表示为向量或向量序列;
将所述题干的向量表示和所述候选选项的向量表示进行元素相加,得到所述候选语句的向量表示;
获取所述阅读文本中的每个文本语句的向量表示;
通过所述文本二分类模型基于所述候选语句的向量表示和所述阅读文本中的每个文本语句的向量表示,计算所述候选语句与所述阅读文本中的每个文本语句的相似度。
在另一实施例中,所述计算每个候选语句与所述阅读文本中的每个文本语句的相似度包括:
调用基于WORD2VECTOR语义相关度计算方法计算所述候选语句与所述阅读文本中的每个文本语句的相似度。
第一确定模块203,用于针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句。
在一具体实施例中,可以将所述相似度由高至低进行排序,将排序在前的两个相似度对应的文本语句确定为所述候选语句的相似语句。
例如,上例中,阅读文本中包括第三语句,该候选语句与第三语句的相似度为0.882,将排序在前的两个相似度对应的第一语句和第三语句确定为该候选语句的相似语句。
输入模块204,用于将候选语句和所述候选语句的相似语句组合成候选语句对,并将每个候选语句对输入至蕴含关系分类模型中,得到蕴含关系概率值。
在一具体实施例中,所述蕴含关系分类模型包括ROBERTA层和线性分类层。所述蕴含关系模型是基于人工智能的神经网络。
在一具体实施例中,所述蕴含关系分类模型的训练过程包括:
获取多个第一语句对训练样本及获取多个第二语句对训练样本,其中,所述第一语句对训练样本包括第一语句及相似语句,所述第二语句对包括第二语句及不相似语句;
为所述多个第一语句对训练样本设置第一标签,及为多个第二语句对训练样本设置第二标签;
通过深度学习网络的前向传播算法对所述多个第一语句对训练样本及所述多个第二语句对训练样本进行计算,得到每个训练样本的输出向量;
采用反向传播算法根据所述训练样本的标签和输出向量优化所述深度学习网络中的参数,得到所述蕴含关系分类模型。
所述第一标签可以为1,所述第二标签可以为0。
排序模块205,用于将所述蕴含关系概率值由高至低进行排序,将排序在前预设第二数量的蕴含关系概率值对应候选语句确定为目标候选语句。
例如,7个候选语句的蕴含关系概率值从高至低依次为0.95、0.92、0.76、0.66、0.23、0.12、0.02。预设第二数量为2,将排序在前2的0.95、0.92的蕴含关系概率值对应候选语句确定为目标候选语句。
再如,4个候选语句的蕴含关系概率值从高至低依次为0.96、0.72、0.56、0.16。预设第二数量为1,将排序在前1的0.96的蕴含关系概率值对应候选语句确定为目标候选语句。
第二确定模块206,用于将所述目标候选语句对应的候选标识确定为答案。
如上例,7个候选语句的蕴含关系概率值从高至低依次为0.95、0.92、0.76、0.66、0.23、0.12、0.02,对应的候选标识为A、C、D、G、B、F、E。将排序在前2的0.95、0.92的蕴含关系概率值对应候选语句确定为目标候选语句,将两个目标候选语句对应的A、C选项确定为答案。
再如,4个候选语句的蕴含关系概率值从高至低依次为0.96、0.72、0.56、0.16,对应的候选标识为B、D、A、C。将排序在前1的0.96的蕴含关系概率值对应候选语句确定为目标候选语句,将目标候选语句对应的B选项确定为答案。
实施例二的答案选择装置20将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句。提升了对整个阅读文本和候选选项的处理效率。将由每个候选语句和该候选语句的相似语句组合成的候选语句对输入蕴含关系分类模型,提升了计算每个候选语句的蕴含关系概率值的准确率。从而提升了答案选择的效率和准确率。
在另一实施例中,所述计算模块还用于获取所述阅读文本中的每个段落的摘要信息;
计算所述候选语句与每个段落的摘要信息的摘要相似度;
将所述摘要相似度由高至低进行排序,将排序在前预设第三数量的摘要相似度对应的段落确定为目标段落;
计算所述候选语句与所述目标段落中的每个文本语句的相似度。
通过对目标段落的确定并计算所述候选语句与所述目标段落中的每个文本语句的相似度,不直接计算所述候选语句与所述阅读文本中的每个文本语句的相似度,极大地较少了计算量,提升了相似度的计算效率,从而提升了答案选择的效率。
实施例三
本实施例提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机可读指令,所述计算机可读存储介质可以是非易失性,也可以是易失性。该计算机可读指令被处理器执行时实现上述答案选择方法实施例中的步骤,例如图1所示的步骤101-106:
101,获取阅读文本、题干和所述题干对应的多个候选选项;
102,将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;
103,针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句;
104,将候选语句和所述候选语句的相似语句组合成候选语句对,并将每个候选语句对输入至蕴含关系分类模型中,得到蕴含关系概率值;
105,将所述蕴含关系概率值由高至低进行排序,将排序在前预设第二数量的蕴含关系概率值对应候选语句确定为目标候选语句;
106,将所述目标候选语句对应的候选标识确定为答案。
或者,该计算机可读指令被处理器执行时实现上述装置实施例中各模块的功能,例如图2中的模块201-206:
获取模块201,用于获取阅读文本、题干和所述题干对应的多个候选选项;
计算模块202,用于将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;
第一确定模块203,用于针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句;
输入模块204,用于将候选语句和所述候选语句的相似语句组合成候选语句对,并将每个候选语句对输入至蕴含关系分类模型中,得到蕴含关系概率值;
排序模块205,用于将所述蕴含关系概率值由高至低进行排序,将排序在前预设第二数量的蕴含关系概率值对应候选语句确定为目标候选语句;
第二确定模块206,用于将所述目标候选语句对应的候选标识确定为答案。
实施例四
图3为本申请实施例四提供的计算机设备的示意图。所述计算机设备30包括存储器301、处理器302以及存储在所述存储器301中并可在所述处理器302上运行的计算机可读指令303,例如答案选择程序。所述处理器302执行所述计算机可读指令303时实现上述答案选择方法实施例中的步骤,例如图1所示的步骤101-106:
101,获取阅读文本、题干和所述题干对应的多个候选选项;
102,将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;
103,针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句;
104,将候选语句和所述候选语句的相似语句组合成候选语句对,并将每个候选语句对输入至蕴含关系分类模型中,得到蕴含关系概率值;
105,将所述蕴含关系概率值由高至低进行排序,将排序在前预设第二数量的蕴含关 系概率值对应候选语句确定为目标候选语句;
106,将所述目标候选语句对应的候选标识确定为答案。
或者,该计算机可读指令被处理器执行时实现上述装置实施例中各模块的功能,例如图2中的模块201-206:
获取模块201,用于获取阅读文本、题干和所述题干对应的多个候选选项;
计算模块202,用于将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;
第一确定模块203,用于针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句;
输入模块204,用于将候选语句和所述候选语句的相似语句组合成候选语句对,并将每个候选语句对输入至蕴含关系分类模型中,得到蕴含关系概率值;
排序模块205,用于将所述蕴含关系概率值由高至低进行排序,将排序在前预设第二数量的蕴含关系概率值对应候选语句确定为目标候选语句;
第二确定模块206,用于将所述目标候选语句对应的候选标识确定为答案。
示例性的,所述计算机可读指令303可以被分割成一个或多个模块,所述一个或者多个模块被存储在所述存储器301中,并由所述处理器302执行,以完成本方法。所述一个或多个模块可以是能够完成特定功能的一系列计算机程序指令段,该指令段用于描述所述计算机可读指令303在所述计算机设备30中的执行过程。例如,所述计算机可读指令303可以被分割成图2中的获取模块201、计算模块202、第一确定模块203、输入模块204、排序模块205、第二确定模块206,各模块具体功能参见实施例二。
所述计算机设备30可以是桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。本领域技术人员可以理解,所述示意图3仅仅是计算机设备30的示例,并不构成对计算机设备30的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如所述计算机设备30还可以包括输入输出设备、网络接入设备、总线等。
所称处理器302可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器302也可以是任何常规的处理器等,所述处理器302是所述计算机设备30的控制中心,利用各种接口和线路连接整个计算机设备30的各个部分。
所述存储器301可用于存储所述计算机可读指令303,所述处理器302通过运行或执行存储在所述存储器301内的计算机可读指令或模块,以及调用存储在存储器301内的数据,实现所述计算机设备30的各种功能。所述存储器301可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据计算机设备30的使用所创建的数据等。此外,存储器301可以包括硬盘、内存、插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)、至少一个磁盘存储器件、闪存器件、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)或其他非易失性/易失性存储器件。
所述计算机设备30集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读存储介质中。所述计算机可读存储介质可以是非易失性,也可以是易失性。基于这样的理解,本申请实现上述实施例方法中的全部或部分流程,也可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一计算机可读存储介质中,该计算机可读指令在被处理器执行时,可实现上述各个方法实施例的步骤。其中,所述计算机可读指令可以为源代码形式、对象代码 形式、可执行文件或某些中间形式等。所述计算机可读存储介质可以包括:能够携带所述计算机可读指令的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、只读存储器(ROM)、随机存取存储器(RAM)。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用硬件加软件功能模块的形式实现。
上述以软件功能模块的形式实现的集成的模块,可以存储在一个计算机可读存储介质中。上述软件功能模块存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本申请各个实施例所述答案选择方法的部分步骤。
对于本领域技术人员而言,显然本申请不限于上述示范性实施例的细节,而且在不背离本申请的精神或基本特征的情况下,能够以其他的具体形式实现本申请。因此,无论从哪一点来看,均应将实施例看作是示范性的,而且是非限制性的,本申请的范围由所附权利要求而不是上述说明限定,因此旨在将落在权利要求的等同要件的含义和范围内的所有变化涵括在本申请内。不应将权利要求中的任何附关联图标记视为限制所涉及的权利要求。此外,显然“包括”一词不排除其他模块或步骤,单数不排除复数。系统权利要求中陈述的多个模块或装置也可以由一个模块或装置通过软件或者硬件来实现。第一,第二等词语用来表示名称,而并不表示任何特定的顺序。
最后应说明的是,以上实施例仅用以说明本申请的技术方案而非限制,尽管参照较佳实施例对本申请进行了详细说明,本领域的普通技术人员应当理解,可以对本申请的技术方案进行修改或等同替换,而不脱离本申请技术方案的精神和范围。

Claims (20)

  1. 一种答案选择方法,所述答案选择方法包括:
    获取阅读文本、题干和所述题干对应的多个候选选项;
    将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;
    针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句;
    将候选语句和所述候选语句的相似语句组合成候选语句对,并将每个候选语句对输入至蕴含关系分类模型中,得到蕴含关系概率值;
    将所述蕴含关系概率值由高至低进行排序,将排序在前预设第二数量的蕴含关系概率值对应候选语句确定为目标候选语句;
    将所述目标候选语句对应的候选标识确定为答案。
  2. 如权利要求1所述的答案选择方法,其中,所述获取阅读文本、题干和所述题干对应的多个候选选项包括:
    通过字符识别设备从纸质试题中识别所述阅读文本、所述题干和所述题干对应的多个候选选项;或者
    从存储介质读取所述阅读文本、所述题干和所述题干对应的多个候选选项。
  3. 如权利要求1所述的答案选择方法,其中,所述计算每个候选语句与所述阅读文本中的每个文本语句的相似度包括:
    通过训练好的基于BERT的文本二分类模型计算所述候选语句与所述阅读文本中的每个文本语句的相似度,所述文本二分类模型包括BERT层和SOFTMAX分类层。
  4. 如权利要求3所述的答案选择方法,其中,所述文本二分类模型的训练过程包括:
    获取MSMARCO数据集中的候选语句和阅读文本;
    将候选语句与阅读文本中的每个文本语句组合成语句对,为每个语句对设置标签;
    通过所述BERT层对语句对进行编码计算,得到语句对向量;
    通过所述SOFTMAX分类层的前向传播算法对所述语句对向量进行计算,得到语句对的相似度;
    根据所述语句对的相似度和语句对的标签采用反向传播算法优化所述BERT层和所述SOFTMAX分类层的参数,得到所述二分类模型。
  5. 如权利要求3所述的答案选择方法,其中,所述通过训练好的基于BERT的文本二分类模型计算所述候选语句与所述阅读文本中的每个文本语句的相似度包括:
    获取所述题干的向量表示和所述候选选项的向量表示,向量表示为向量或向量序列;
    将所述题干的向量表示和所述候选选项的向量表示进行元素相加,得到所述候选语句的向量表示;
    获取所述阅读文本中的每个文本语句的向量表示;
    通过所述文本二分类模型基于所述候选语句的向量表示和所述阅读文本中的每个文本语句的向量表示,计算所述候选语句与所述阅读文本中的每个文本语句的相似度。
  6. 如权利要求1所述的答案选择方法,其中,所述计算每个候选语句与所述阅读文本中的每个文本语句的相似度包括:
    调用基于WORD2VECTOR语义相关度计算方法计算所述候选语句与所述阅读文本中的每个文本语句的相似度。
  7. 如权利要求1所述的答案选择方法,其中,所述蕴含关系分类模型的训练过程包括:
    获取多个第一语句对训练样本及获取多个第二语句对训练样本,其中,所述第一语句对训练样本包括第一语句及相似语句,所述第二语句对包括第二语句及不相似语句;
    为所述多个第一语句对训练样本设置第一标签,及为多个第二语句对训练样本设置第二标签;
    通过深度学习网络的前向传播算法对所述多个第一语句对训练样本及所述多个第二语句对训练样本进行计算,得到每个训练样本的输出向量;
    采用反向传播算法根据所述训练样本的标签和输出向量优化所述深度学习网络中的参数,得到所述蕴含关系分类模型。
  8. 一种答案选择装置,其中,所述答案选择装置包括:
    获取模块,用于获取阅读文本、题干和所述题干对应的多个候选选项;
    计算模块,用于将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;
    第一确定模块,用于针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句;
    输入模块,用于将候选语句和所述候选语句的相似语句组合成候选语句对,并将每个候选语句对输入至蕴含关系分类模型中,得到蕴含关系概率值;
    排序模块,用于将所述蕴含关系概率值由高至低进行排序,将排序在前预设第二数量的蕴含关系概率值对应候选语句确定为目标候选语句;
    第二确定模块,用于将所述目标候选语句对应的候选标识确定为答案。
  9. 一种计算机设备,其中,所述计算机设备包括处理器,所述处理器用于执行存储器中存储的计算机可读指令以实现以下步骤:
    获取阅读文本、题干和所述题干对应的多个候选选项;
    将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;
    针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句;
    将候选语句和所述候选语句的相似语句组合成候选语句对,并将每个候选语句对输入至蕴含关系分类模型中,得到蕴含关系概率值;
    将所述蕴含关系概率值由高至低进行排序,将排序在前预设第二数量的蕴含关系概率值对应候选语句确定为目标候选语句;
    将所述目标候选语句对应的候选标识确定为答案。
  10. 如权利要求9所述的计算机设备,其中,所述处理器执行所述存储器中存储的计算机可读指令以实现所述获取阅读文本、题干和所述题干对应的多个候选选项时,包括:
    通过字符识别设备从纸质试题中识别所述阅读文本、所述题干和所述题干对应的多个候选选项;或者
    从存储介质读取所述阅读文本、所述题干和所述题干对应的多个候选选项。
  11. 如权利要求9所述的计算机设备,其中,所述处理器执行所述存储器中存储的计算机可读指令以实现所述计算每个候选语句与所述阅读文本中的每个文本语句的相似度时,包括:
    通过训练好的基于BERT的文本二分类模型计算所述候选语句与所述阅读文本中的每个文本语句的相似度,所述文本二分类模型包括BERT层和SOFTMAX分类层。
  12. 如权利要求11所述的计算机设备,其中,所述处理器执行所述存储器中存储的计算机可读指令以实现所述文本二分类模型的训练过程时,包括:
    获取MSMARCO数据集中的候选语句和阅读文本;
    将候选语句与阅读文本中的每个文本语句组合成语句对,为每个语句对设置标签;
    通过所述BERT层对语句对进行编码计算,得到语句对向量;
    通过所述SOFTMAX分类层的前向传播算法对所述语句对向量进行计算,得到语句对的相似度;
    根据所述语句对的相似度和语句对的标签采用反向传播算法优化所述BERT层和所述SOFTMAX分类层的参数,得到所述二分类模型。
  13. 如权利要求11所述的计算机设备,其中,所述处理器执行所述存储器中存储的计算机可读指令以实现所述通过训练好的基于BERT的文本二分类模型计算所述候选语句与所述阅读文本中的每个文本语句的相似度时,包括:
    获取所述题干的向量表示和所述候选选项的向量表示,向量表示为向量或向量序列;
    将所述题干的向量表示和所述候选选项的向量表示进行元素相加,得到所述候选语句的向量表示;
    获取所述阅读文本中的每个文本语句的向量表示;
    通过所述文本二分类模型基于所述候选语句的向量表示和所述阅读文本中的每个文本语句的向量表示,计算所述候选语句与所述阅读文本中的每个文本语句的相似度。
  14. 如权利要求9所述的计算机设备,其中,所述处理器执行所述存储器中存储的计算机可读指令以实现所述计算每个候选语句与所述阅读文本中的每个文本语句的相似度时,包括:
    调用基于WORD2VECTOR语义相关度计算方法计算所述候选语句与所述阅读文本中的每个文本语句的相似度。
  15. 如权利要求9所述的计算机设备,其中,所述处理器执行所述存储器中存储的计算机可读指令以实现所述蕴含关系分类模型的训练过程时,包括:
    获取多个第一语句对训练样本及获取多个第二语句对训练样本,其中,所述第一语句对训练样本包括第一语句及相似语句,所述第二语句对包括第二语句及不相似语句;
    为所述多个第一语句对训练样本设置第一标签,及为多个第二语句对训练样本设置第二标签;
    通过深度学习网络的前向传播算法对所述多个第一语句对训练样本及所述多个第二语句对训练样本进行计算,得到每个训练样本的输出向量;
    采用反向传播算法根据所述训练样本的标签和输出向量优化所述深度学习网络中的参数,得到所述蕴含关系分类模型。
  16. 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机可读指令,其中,所述计算机可读指令被处理器执行时实现以下步骤:
    获取阅读文本、题干和所述题干对应的多个候选选项;
    将所述题干和每个候选选项组合为一个候选语句,计算每个候选语句与所述阅读文本中的每个文本语句的相似度;
    针对每个候选语句,将所述相似度由高至低进行排序,将排序在前预设第一数量的相似度对应的文本语句确定为所述候选语句的相似语句;
    将候选语句和所述候选语句的相似语句组合成候选语句对,并将每个候选语句对输入至蕴含关系分类模型中,得到蕴含关系概率值;
    将所述蕴含关系概率值由高至低进行排序,将排序在前预设第二数量的蕴含关系概率值对应候选语句确定为目标候选语句;
    将所述目标候选语句对应的候选标识确定为答案。
  17. 如权利要求16所述的存储介质,其中,所述计算机可读指令被所述处理器执行还用以实现所述获取阅读文本、题干和所述题干对应的多个候选选项时,包括:
    通过字符识别设备从纸质试题中识别所述阅读文本、所述题干和所述题干对应的多个候选选项;或者
    从存储介质读取所述阅读文本、所述题干和所述题干对应的多个候选选项。
  18. 如权利要求16所述的存储介质,其中,所述计算机可读指令被所述处理器执行以实现所述计算每个候选语句与所述阅读文本中的每个文本语句的相似度包括:
    通过训练好的基于BERT的文本二分类模型计算所述候选语句与所述阅读文本中的每个文本语句的相似度,所述文本二分类模型包括BERT层和SOFTMAX分类层。
  19. 如权利要求18所述的存储介质,其中,所述计算机可读指令被所述处理器执行以实现所述文本二分类模型的训练过程时,包括:
    获取MSMARCO数据集中的候选语句和阅读文本;
    将候选语句与阅读文本中的每个文本语句组合成语句对,为每个语句对设置标签;
    通过所述BERT层对语句对进行编码计算,得到语句对向量;
    通过所述SOFTMAX分类层的前向传播算法对所述语句对向量进行计算,得到语句对的相似度;
    根据所述语句对的相似度和语句对的标签采用反向传播算法优化所述BERT层和所述SOFTMAX分类层的参数,得到所述二分类模型。
  20. 如权利要求18所述的存储介质,其中,所述计算机可读指令被所述处理器执行以实现所述通过训练好的基于BERT的文本二分类模型计算所述候选语句与所述阅读文本中的每个文本语句的相似度时,包括:
    获取所述题干的向量表示和所述候选选项的向量表示,向量表示为向量或向量序列;
    将所述题干的向量表示和所述候选选项的向量表示进行元素相加,得到所述候选语句的向量表示;
    获取所述阅读文本中的每个文本语句的向量表示;
    通过所述文本二分类模型基于所述候选语句的向量表示和所述阅读文本中的每个文本语句的向量表示,计算所述候选语句与所述阅读文本中的每个文本语句的相似度。
PCT/CN2020/105901 2020-05-29 2020-07-30 答案选择方法、装置、计算机设备及计算机可读存储介质 WO2021237934A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010481867.9 2020-05-29
CN202010481867.9A CN111639170A (zh) 2020-05-29 2020-05-29 答案选择方法、装置、计算机设备及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021237934A1 true WO2021237934A1 (zh) 2021-12-02

Family

ID=72330315

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/105901 WO2021237934A1 (zh) 2020-05-29 2020-07-30 答案选择方法、装置、计算机设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN111639170A (zh)
WO (1) WO2021237934A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114219050A (zh) * 2022-02-22 2022-03-22 杭州远传新业科技有限公司 文本相似度模型的训练方法、系统、装置和介质
CN114547274A (zh) * 2022-04-26 2022-05-27 阿里巴巴达摩院(杭州)科技有限公司 多轮问答的方法、装置及设备
CN116050412A (zh) * 2023-03-07 2023-05-02 江西风向标智能科技有限公司 基于数学语义逻辑关系的高中数学题目的分割方法和系统
CN116503215A (zh) * 2023-06-25 2023-07-28 江西联创精密机电有限公司 一种通用训练题目和答案生成方法及系统
CN117056497A (zh) * 2023-10-13 2023-11-14 北京睿企信息科技有限公司 一种基于llm的问答方法、电子设备及存储介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112380873B (zh) * 2020-12-04 2024-04-26 鼎富智能科技有限公司 一种规范文书中被选中项确定方法及装置
CN113239689B (zh) * 2021-07-07 2021-10-08 北京语言大学 面向易混淆词考察的选择题干扰项自动生成方法及装置
CN113282738B (zh) * 2021-07-26 2021-10-08 北京世纪好未来教育科技有限公司 文本选择方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160379120A1 (en) * 2015-06-25 2016-12-29 International Business Machines Corporation Knowledge Canvassing Using a Knowledge Graph and a Question and Answer System
CN108647233A (zh) * 2018-04-02 2018-10-12 北京大学深圳研究生院 一种用于问答系统的答案排序方法
CN108875074A (zh) * 2018-07-09 2018-11-23 北京慧闻科技发展有限公司 基于交叉注意力神经网络的答案选择方法、装置和电子设备
CN110647619A (zh) * 2019-08-01 2020-01-03 中山大学 一种基于问题生成和卷积神经网络的常识问答方法
CN111190997A (zh) * 2018-10-26 2020-05-22 南京大学 一种使用神经网络和机器学习排序算法的问答系统实现方法


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114219050A (zh) * 2022-02-22 2022-03-22 杭州远传新业科技有限公司 文本相似度模型的训练方法、系统、装置和介质
CN114219050B (zh) * 2022-02-22 2022-06-21 杭州远传新业科技股份有限公司 文本相似度模型的训练方法、系统、装置和介质
CN114547274A (zh) * 2022-04-26 2022-05-27 阿里巴巴达摩院(杭州)科技有限公司 多轮问答的方法、装置及设备
CN116050412A (zh) * 2023-03-07 2023-05-02 江西风向标智能科技有限公司 基于数学语义逻辑关系的高中数学题目的分割方法和系统
CN116050412B (zh) * 2023-03-07 2024-01-26 江西风向标智能科技有限公司 基于数学语义逻辑关系的高中数学题目的分割方法和系统
CN116503215A (zh) * 2023-06-25 2023-07-28 江西联创精密机电有限公司 一种通用训练题目和答案生成方法及系统
CN117056497A (zh) * 2023-10-13 2023-11-14 北京睿企信息科技有限公司 一种基于llm的问答方法、电子设备及存储介质
CN117056497B (zh) * 2023-10-13 2024-01-23 北京睿企信息科技有限公司 一种基于llm的问答方法、电子设备及存储介质

Also Published As

Publication number Publication date
CN111639170A (zh) 2020-09-08

Similar Documents

Publication Publication Date Title
WO2021237934A1 (zh) 答案选择方法、装置、计算机设备及计算机可读存储介质
CN111415740B (zh) 问诊信息的处理方法、装置、存储介质及计算机设备
US20200097814A1 (en) Method and system for enabling interactive dialogue session between user and virtual medical assistant
US9959776B1 (en) System and method for automated scoring of texual responses to picture-based items
US20210342212A1 (en) Method and system for identifying root causes
Kirk et al. Machine learning in nutrition research
US20200279147A1 (en) Method and apparatus for intelligently recommending object
US20200211709A1 (en) Method and system to provide medical advice to a user in real time based on medical triage conversation
US11610683B2 (en) Methods and systems for generating a vibrant compatibility plan using artificial intelligence
Ripoll et al. Multi-Lingual Contextual Hate Speech Detection Using Transformer-Based Ensembles
US20230223132A1 (en) Methods and systems for nutritional recommendation using artificial intelligence analysis of immune impacts
US20230253122A1 (en) Systems and methods for generating a genotypic causal model of a disease state
US11783244B2 (en) Methods and systems for holistic medical student and medical residency matching
CN112836027A (zh) 用于确定文本相似度的方法、问答方法及问答系统
Milewska et al. Graphical representation of the relationships between qualitative variables concerning the process of hospitalization in the gynecological ward using correspondence analysis
US11931186B2 (en) Method of system for reversing inflammation in a user
CN112818128B (zh) 一种基于知识图谱增益的机器阅读理解系统
US10699589B2 (en) Systems and methods for determining the validity of an essay examination prompt
US11594316B2 (en) Methods and systems for nutritional recommendation using artificial intelligence analysis of immune impacts
Moscato et al. Biomedical Spanish Language Models for entity recognition and linking at BioASQ DisTEMIST.
Schwartz et al. An automated sql query grading system using an attention-based convolutional neural network
US20220108799A1 (en) System and method for transmitting a severity vector
US20240005231A1 (en) Methods and systems for holistic medical student and medical residency matching
Tebbe et al. Is natural language processing the cheap charlie of analyzing cheap talk? A horse race between classifiers on experimental communication data
Reddy et al. Detecting chronic kidney disease using machine learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20938263

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 130323)

122 Ep: pct application non-entry in european phase

Ref document number: 20938263

Country of ref document: EP

Kind code of ref document: A1