WO2022142121A1 - Summary sentence extraction method, apparatus, server and computer-readable storage medium - Google Patents

Summary sentence extraction method, apparatus, server and computer-readable storage medium

Info

Publication number
WO2022142121A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
semantic vector
sentence
semantic
model
Prior art date
Application number
PCT/CN2021/097421
Other languages
English (en)
French (fr)
Inventor
梁子敬 (LIANG Zijing)
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2022142121A1 publication Critical patent/WO2022142121A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/34 Browsing; Visualisation therefor
    • G06F16/345 Summarisation for human users
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis

Definitions

  • The present application relates to the technical field of semantic parsing, and in particular to a method, apparatus, server and computer-readable storage medium for extracting summary sentences.
  • At present, methods for determining text summaries mainly include extractive and generative approaches.
  • The extractive approach extracts important sentences from the text as summary sentences and composes the summary from them, while the generative approach generates a text summary with natural language processing techniques such as paraphrasing, synonymous substitution and sentence abbreviation.
  • The algorithm commonly used to extract important sentences from text is the TextRank algorithm, but the inventor found that the original TextRank method extracts important sentences based only on the similarity between sentences and the text, and the extracted sentences are redundant and of low accuracy. Therefore, how to improve the accuracy of extracting summary sentences from text is a problem that urgently needs to be solved.
  • Embodiments of the present application provide a method, apparatus, server and computer-readable storage medium for extracting summary sentences, aiming to improve the accuracy of extracting summary sentences from text.
  • In a first aspect, an embodiment of the present application provides a summary sentence extraction method, applied to a server, where the server stores a summary sentence extraction model comprising a semantic recognition model, a semantic fusion model and a sentence classification model.
  • The method includes:
  • determining a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence.
  • In a second aspect, an embodiment of the present application further provides a summary sentence extraction apparatus, applied to a server, where the server stores a summary sentence extraction model comprising a semantic recognition model, a semantic fusion model and a sentence classification model.
  • The summary sentence extraction apparatus includes:
  • an acquisition module, configured to acquire a target text from which a summary is to be extracted;
  • a text splitting module, configured to split the target text into a plurality of target sentences;
  • a control module, configured to invoke the semantic recognition model to process each target sentence to obtain a first semantic vector of each target sentence;
  • the control module being further configured to invoke the semantic fusion model to process the first semantic vector of each target sentence to obtain a semantic vector matrix of the target text;
  • the control module being further configured to perform a linear transformation on the semantic vector matrix to obtain a target semantic vector matrix;
  • the control module being further configured to invoke the sentence classification model to process the target semantic vector matrix to obtain a classification label sequence, the classification label sequence including the classification label of each target sentence;
  • a sentence determination module, configured to determine a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence.
  • In a third aspect, an embodiment of the present application further provides a server including a processor, a memory, and a computer program stored on the memory and executable by the processor, where the computer program, when executed by the processor, implements the steps of the summary sentence extraction method described above.
  • In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the summary sentence extraction method described above.
  • Embodiments of the present application provide a method, apparatus, server and computer-readable storage medium for extracting summary sentences.
  • By splitting a target text into multiple target sentences and processing each target sentence with the semantic recognition model, a semantic vector reflecting sentence-level semantic understanding can be obtained for each sentence.
  • The semantic fusion model then processes these sentence-level semantic vectors to obtain a semantic vector matrix reflecting text-level semantic understanding, and a linear transformation of the semantic vector matrix yields the target semantic vector matrix.
  • Finally, the sentence classification model processes the target semantic vector matrix reflecting text-level semantic understanding to obtain a classification label sequence, and based on the classification label sequence and the first label used to indicate that a target sentence is a summary sentence, the summary sentences of the target text are determined from the multiple target sentences, which greatly improves the accuracy of extracting summary sentences from text.
  • FIG. 1 is a schematic flowchart of a summary sentence extraction method provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a summary sentence extraction model in an embodiment of the present application.
  • FIG. 3 is another schematic structural diagram of a summary sentence extraction model in an embodiment of the present application.
  • FIG. 4 is a schematic block diagram of a summary sentence extraction apparatus provided by an embodiment of the present application.
  • FIG. 5 is a schematic block diagram of sub-modules of the summary sentence extraction apparatus in FIG. 4.
  • FIG. 6 is a schematic structural block diagram of a server provided by an embodiment of the present application.
  • At present, methods for determining text summaries mainly include extractive and generative approaches.
  • The extractive approach extracts important sentences from the text as summary sentences and composes the summary from them, while the generative approach generates a text summary with natural language processing techniques such as paraphrasing, synonymous substitution and sentence abbreviation.
  • The algorithm commonly used to extract important sentences from text is the TextRank algorithm, but the original TextRank method extracts important sentences based only on the similarity between sentences and the text, and the extracted sentences are redundant and of low accuracy. Therefore, how to improve the accuracy of extracting summary sentences from text is a problem that urgently needs to be solved.
  • To solve the above problem, embodiments of the present application provide a method, apparatus, server and computer-readable storage medium for extracting summary sentences.
  • By splitting a target text into multiple target sentences and processing each target sentence with the semantic recognition model, a semantic vector reflecting sentence-level semantic understanding can be obtained.
  • The semantic fusion model then processes these sentence-level semantic vectors to obtain a semantic vector matrix reflecting text-level semantic understanding, and a linear transformation of the semantic vector matrix yields the target semantic vector matrix.
  • Finally, the sentence classification model processes the target semantic vector matrix reflecting text-level semantic understanding.
  • A classification label sequence is obtained, and based on the classification label sequence and the first label used to indicate that a target sentence is a summary sentence, the summary sentences of the target text are determined from multiple target sentences, which greatly improves the accuracy of extracting important sentences from the text.
  • FIG. 1 is a schematic flowchart of a summary sentence extraction method provided by an embodiment of the present application.
  • The summary sentence extraction method can be applied to a server, which may be a single server or a server cluster composed of multiple servers; this is not specifically limited in the embodiments of the present application.
  • The summary sentence extraction method includes steps S101 to S106.
  • Step S101: Acquire a target text from which a summary is to be extracted, and split the target text into a plurality of target sentences.
  • The server may obtain the target text from a database, from an external storage device, or from a summary extraction request sent by a terminal device.
  • The databases include local databases and cloud databases.
  • The external devices include plug-in hard disks, secure digital cards and flash memory cards.
  • The target text may include text that the server can read directly and text that it cannot read directly.
  • Directly readable text includes text in word, txt and wps formats; text that cannot be read directly includes text in pdf, tif and image formats.
  • The target text may be split into a plurality of target sentences as follows: split the target text into a plurality of initial sentences according to the sentence-boundary identifiers in the target text; determine the character count of each initial sentence, and preprocess each initial sentence according to its character count to obtain a plurality of target sentences, where the character count of each target sentence equals a preset character count.
  • The preset character count may be set based on the actual situation and is not specifically limited in the embodiments of the present application.
  • For example, the preset character count is 256 or 512.
  • A sentence-boundary identifier is a symbol that marks the end of a sentence in the grammar.
  • Sentence-boundary identifiers include but are not limited to the period, semicolon, question mark, exclamation mark and line-break symbol.
  • Each initial sentence is preprocessed according to its character count.
  • The plurality of target sentences may be obtained as follows: if the character count of an initial sentence is less than the preset character count, determine the absolute value of the difference between the character count of the initial sentence and the preset character count to obtain a first character count, and pad the tail of the initial sentence with that many preset characters to obtain a target sentence; if the character count of an initial sentence is greater than the preset character count, determine the absolute value of the difference to obtain a second character count, and remove that many characters from the initial sentence in reverse character order to obtain a target sentence; if the character count of an initial sentence equals the preset character count, it is not processed.
  • Step S102: Invoke the semantic recognition model to process each target sentence to obtain a first semantic vector of each target sentence.
  • The server stores a summary sentence extraction model; as shown in FIG. 2, the summary sentence extraction model includes a semantic recognition model, a semantic fusion model and a sentence classification model.
  • The semantic recognition model is connected to the semantic fusion model, and the semantic fusion model is connected to the sentence classification model.
  • The semantic recognition model is a pre-trained Bert model.
  • The semantic fusion model is a pre-trained Long Short-Term Memory (LSTM) model or a GRU model.
  • The sentence classification model is a pre-trained binary classification model.
  • The summary sentence extraction model can be obtained by jointly and iteratively training the Bert model, the LSTM model and the binary classification model.
  • The joint iterative training of the Bert model, the LSTM model (which may also be replaced by a GRU model) and the binary classification model may proceed as follows: acquire a sample data set, where the sample data set includes a plurality of sample data, and each sample data includes a sample text, an annotated classification label sequence, annotated semantic vectors and an annotated semantic vector matrix; select one sample data from the sample data set and split its sample text into a plurality of sample sentences whose character count equals the preset character count; input the sample sentences into the Bert model to obtain the semantic vector of each sample sentence; input the semantic vector of each sample sentence into the LSTM model to obtain the semantic vector matrix of the sample text; linearly transform the semantic vector matrix with the configured weight coefficient matrix and bias term matrix to obtain the target semantic vector matrix; input the target semantic vector matrix into the binary classification model and output a classification label sequence; then, based on the output and annotated classification label sequences, update the model parameters of the binary classification model and update the weight coefficient matrix and the bias term matrix; based on the output and annotated semantic vector matrices, update the model parameters of the LSTM model; based on the output semantic vectors of the sample sentences and the annotated semantic vectors, update the model parameters of the Bert model, thereby updating the model parameters of the entire summary sentence extraction model; after updating the model parameters, continue the joint iterative training until the Bert model, the LSTM model and the binary classification model all converge.
  • The semantic recognition model is invoked to process each target sentence, and a sentence-level semantic vector of each target sentence can be obtained.
  • The semantic vector describes the semantic information of the target sentence, that is, the meaning the target sentence intends to express.
  • Step S103: Invoke the semantic fusion model to process the first semantic vector of each target sentence to obtain a semantic vector matrix of the target text.
  • The summary sentence extraction model may further include a dropout layer.
  • The semantic recognition model is connected to the dropout layer.
  • The dropout layer is connected to the semantic fusion model.
  • The semantic fusion model is connected to the sentence classification model.
  • The summary sentence extraction model can be obtained by jointly and iteratively training the Bert model, the dropout layer, the LSTM model (which may also be replaced by a GRU model) and the binary classification model.
  • The dropout layer prevents overfitting of the model and improves the model's effect.
  • The first semantic vector of each target sentence is input into the dropout layer to obtain a semantic vector sequence; the semantic vector sequence is preprocessed to obtain a target semantic vector sequence, where the length of the target semantic vector sequence equals a preset length; and the semantic fusion model is invoked to process the target semantic vector sequence to obtain the semantic vector matrix of the target text.
  • The semantic vector matrix of the target text includes a second semantic vector of each target sentence; the second semantic vector describes the semantic information of the target sentence within the target text, that is, it jointly considers the meaning the sentence itself intends to express and the meaning expressed between sentences.
  • The semantic vector sequence may be preprocessed to obtain the target semantic vector sequence as follows: if the length of the semantic vector sequence is less than the preset length, pad the semantic vector sequence with zero vectors to obtain the target semantic vector sequence; if the length of the semantic vector sequence is greater than the preset length, truncate the semantic vector sequence, that is, keep the leading semantic vectors up to the preset length, to obtain the target semantic vector sequence.
  • The number of padded zero vectors is determined by the difference between the preset length and the length of the semantic vector sequence.
  • The joint iterative training of the Bert model, the dropout layer, the LSTM model (which may also be replaced by a GRU model) and the binary classification model may proceed as follows: acquire a sample data set, where the sample data set includes a plurality of sample data, and each sample data includes a sample text, an annotated classification label sequence, annotated semantic vectors, an annotated semantic vector sequence and an annotated semantic vector matrix; select one sample data from the sample data set and split its sample text into a plurality of sample sentences whose character count equals the preset character count; input the sample sentences into the Bert model to obtain the semantic vector of each sample sentence; input the semantic vector of each sample sentence into the dropout layer to obtain a semantic vector sequence, and preprocess the semantic vector sequence to obtain a target semantic vector sequence; input the target semantic vector sequence into the LSTM model to obtain the semantic vector matrix of the sample text; linearly transform the semantic vector matrix with the configured weight coefficient matrix and bias term matrix to obtain the target semantic vector matrix; input the target semantic vector matrix into the binary classification model and output a classification label sequence.
  • Based on the output and annotated classification label sequences, update the model parameters of the binary classification model and update the weight coefficient matrix and the bias term matrix; based on the output and annotated semantic vector matrices, update the model parameters of the LSTM model; based on the output and annotated semantic vector sequences, update the model parameters of the dropout layer; and based on the output semantic vectors of the sample sentences and the annotated semantic vectors, update the model parameters of the Bert model, thereby updating the model parameters of the entire summary sentence extraction model.
  • Step S104: Perform a linear transformation on the semantic vector matrix to obtain a target semantic vector matrix.
  • A preset weight coefficient matrix and a preset bias term matrix are obtained; according to the preset weight coefficient matrix and the preset bias term matrix, the semantic vector matrix is linearly transformed to obtain the target semantic vector matrix.
  • The preset weight coefficient matrix and the preset bias term matrix are determined at model convergence during the joint iterative training of the Bert model, the LSTM model (which may also be replaced by a GRU model) and the binary classification model, or during the joint iterative training of the Bert model, the dropout layer, the LSTM model and the binary classification model.
  • For example, if the semantic vector matrix is h, the target semantic vector matrix is H, the preset weight coefficient matrix is W and the preset bias term matrix is B, then the linear transformation of the semantic vector matrix h gives the target semantic vector matrix H = W*h + B.
  • Step S105: Invoke the sentence classification model to process the target semantic vector matrix to obtain a classification label sequence, the classification label sequence including the classification label of each target sentence.
  • The number of classification labels in the classification label sequence is determined by the number of target sentences, the classification labels correspond one-to-one with the target sentences, and a classification label may be the first label or the second label.
  • The first label indicates that the corresponding target sentence is a summary sentence.
  • The second label indicates that the corresponding target sentence is not a summary sentence.
  • For example, if the target text includes N target sentences, the classification label sequence includes N classification labels.
  • The first label and the second label may be set based on actual conditions; for example, the first label is 1 and the second label is 0, or the first label is 1 and the second label is -1.
  • Step S106: According to the classification label sequence and the first label used to indicate that a target sentence is a summary sentence, determine the summary sentence of the target text from the plurality of target sentences.
  • The ranking number of the first label in the classification label sequence is determined, and the target sentence corresponding to that ranking number is selected from the plurality of target sentences as a summary sentence of the target text.
  • The number of summary sentences may be one or more, which is not specifically limited in the embodiments of the present application. For example, if the classification label sequence includes 100 classification labels and the labels with ranking numbers 20, 50, 75 and 90 are first labels, then the target sentences corresponding to the classification labels with ranking numbers 20, 50, 75 and 90 are determined to be summary sentences of the target text.
  • The ranking number of a classification label in the classification label sequence is determined by the position of the corresponding target sentence in the target text.
  • For example, if the target text includes N target sentences, then according to the position of each target sentence in the target text, the position number of the first target sentence is 1, so the classification label of the first target sentence is also ranked 1 in the classification label sequence.
  • Similarly, the position number of the last target sentence is N, so the classification label of the last target sentence is also ranked N in the classification label sequence.
  • In the method provided by the above embodiment, by splitting the target text into multiple target sentences and processing each target sentence with the semantic recognition model, a semantic vector reflecting sentence-level semantic understanding can be obtained.
  • The semantic fusion model then processes these sentence-level semantic vectors to obtain a semantic vector matrix reflecting text-level semantic understanding, and a linear transformation of the semantic vector matrix yields the target semantic vector matrix.
  • Finally, the sentence classification model processes the target semantic vector matrix reflecting text-level semantic understanding to obtain a classification label sequence, and based on the classification label sequence and the first label used to indicate that a target sentence is a summary sentence, the summary sentences of the target text are determined from multiple target sentences, which greatly improves the accuracy of extracting important sentences from the text.
  • FIG. 4 is a schematic block diagram of a summary sentence extraction apparatus provided by an embodiment of the present application.
  • The summary sentence extraction apparatus is applied to a server, and the server stores a summary sentence extraction model.
  • The summary sentence extraction model includes a semantic recognition model, a semantic fusion model and a sentence classification model.
  • The summary sentence extraction apparatus 200 includes an acquisition module 210, a text splitting module 220, a control module 230 and a sentence determination module 240, wherein:
  • the acquisition module 210 is configured to acquire a target text from which a summary is to be extracted;
  • the text splitting module 220 is configured to split the target text into a plurality of target sentences;
  • the control module 230 is configured to invoke the semantic recognition model to process each target sentence to obtain a first semantic vector of each target sentence;
  • the control module 230 is further configured to invoke the semantic fusion model to process the first semantic vector of each target sentence to obtain a semantic vector matrix of the target text;
  • the control module 230 is further configured to perform a linear transformation on the semantic vector matrix to obtain a target semantic vector matrix;
  • the control module 230 is further configured to invoke the sentence classification model to process the target semantic vector matrix to obtain a classification label sequence, the classification label sequence including the classification label of each target sentence;
  • the sentence determination module 240 is configured to determine a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence.
  • The semantic recognition model is a pre-trained Bert model.
  • The semantic fusion model is a pre-trained LSTM model or a GRU model.
  • The sentence classification model is a pre-trained binary classification model.
  • The character count of each target sentence equals the preset character count.
  • The summary sentence extraction model further includes a dropout layer.
  • The control module 230 is further configured to: input the first semantic vector of each target sentence into the dropout layer to obtain a semantic vector sequence, and preprocess the semantic vector sequence to obtain a target semantic vector sequence whose length equals a preset length;
  • the semantic fusion model is then invoked to process the target semantic vector sequence to obtain the semantic vector matrix of the target text.
  • The control module 230 is further configured to:
  • if the length of the semantic vector sequence is less than the preset length, pad the semantic vector sequence with zero vectors to obtain the target semantic vector sequence;
  • if the length of the semantic vector sequence is greater than the preset length, truncate the semantic vector sequence to obtain the target semantic vector sequence.
  • The control module 230 is further configured to: obtain a preset weight coefficient matrix and a preset bias term matrix, and
  • linearly transform the semantic vector matrix according to the preset weight coefficient matrix and the preset bias term matrix to obtain the target semantic vector matrix.
  • The sentence determination module 240 includes:
  • a determination sub-module 241, configured to determine the ranking number of the first label in the classification label sequence; and
  • a sentence selection sub-module 242, configured to select, from the plurality of target sentences, the target sentence corresponding to the ranking number as a summary sentence of the target text.
  • The apparatus provided by the above embodiment may be implemented in the form of a computer program, and the computer program may run on a server as shown in FIG. 6.
  • FIG. 6 is a schematic structural block diagram of a server provided by an embodiment of the present application.
  • The server includes a processor, a memory and a network interface connected through a system bus; the memory stores a summary sentence extraction model that includes a semantic recognition model, a semantic fusion model and a sentence classification model, and the memory may include a storage medium and an internal memory.
  • The storage medium may store an operating system and a computer program.
  • The computer program includes program instructions that, when executed, cause the processor to perform any one of the summary sentence extraction methods.
  • The processor provides computing and control capabilities and supports the operation of the entire server.
  • The internal memory provides an environment for running the computer program in the storage medium; when the computer program is executed by the processor, it causes the processor to perform any one of the summary sentence extraction methods.
  • The network interface is used for network communication, such as sending assigned tasks.
  • FIG. 6 is only a block diagram of the partial structure related to the solution of the present application and does not limit the server to which the solution is applied; a specific server may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently.
  • The processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
  • The processor is configured to run the computer program stored in the memory to implement the following steps:
  • determining a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence.
  • The semantic recognition model is a pre-trained Bert model.
  • The semantic fusion model is a pre-trained LSTM model or a GRU model.
  • The sentence classification model is a pre-trained binary classification model.
  • The character count of each target sentence equals the preset character count.
  • The summary sentence extraction model further includes a dropout layer, and before the processor invokes the semantic fusion model to process the first semantic vector of each target sentence to obtain the semantic vector matrix of the target text, the processor is further configured to implement: inputting the first semantic vector of each target sentence into the dropout layer to obtain a semantic vector sequence, and preprocessing the semantic vector sequence to obtain a target semantic vector sequence whose length equals a preset length.
  • The semantic fusion model is then invoked to process the target semantic vector sequence to obtain the semantic vector matrix of the target text.
  • When preprocessing the semantic vector sequence to obtain the target semantic vector sequence, the processor is configured to implement:
  • if the length of the semantic vector sequence is less than the preset length, padding the semantic vector sequence with zero vectors to obtain the target semantic vector sequence;
  • if the length of the semantic vector sequence is greater than the preset length, truncating the semantic vector sequence to obtain the target semantic vector sequence.
  • When performing the linear transformation on the semantic vector matrix to obtain the target semantic vector matrix, the processor is configured to implement:
  • obtaining a preset weight coefficient matrix and a preset bias term matrix, and linearly transforming the semantic vector matrix according to them to obtain the target semantic vector matrix.
  • When determining the summary sentence of the target text from the plurality of target sentences according to the classification label sequence and the first label used to indicate that a target sentence is a summary sentence, the processor is configured to implement:
  • determining the ranking number of the first label in the classification label sequence, and selecting, from the plurality of target sentences, the target sentence corresponding to the ranking number as a summary sentence of the target text.
  • Through the description of the above implementations, those skilled in the art can clearly understand that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the present application, in essence or in the parts contributing to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes several instructions for causing a server (which may be a personal computer, a server, a network server or the like) to execute the methods described in the various embodiments of the present application or in certain parts of the embodiments.
  • Embodiments of the present application further provide a computer-readable storage medium storing a computer program; the computer program includes program instructions, and when the program instructions are executed, the following steps are implemented:
  • determining a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence.
  • The computer-readable storage medium may be volatile or non-volatile.
  • The computer-readable storage medium may be an internal storage unit of the server described in the foregoing embodiments, such as a hard disk or a memory of the server.
  • The computer-readable storage medium may also be an external storage device of the server, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the server.
  • The computer-readable storage medium may mainly include a program storage area and a data storage area: the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of blockchain nodes, and the like.
  • The blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms.
  • A blockchain is essentially a decentralized database: a chain of data blocks generated in association using cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • The blockchain may include an underlying blockchain platform, a platform product service layer, an application service layer, and so on.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

A summary sentence extraction method, apparatus, server and computer-readable storage medium. The method comprises: acquiring a target text, and splitting the target text into a plurality of target sentences (S101); invoking the semantic recognition model to process each target sentence to obtain a first semantic vector of each target sentence (S102); invoking the semantic fusion model to process the first semantic vector of each target sentence to obtain a semantic vector matrix of the target text (S103); performing a linear transformation on the semantic vector matrix to obtain a target semantic vector matrix (S104); invoking the sentence classification model to process the target semantic vector matrix to obtain a classification label sequence, the classification label sequence comprising a classification label of each target sentence (S105); and determining a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence (S106). The method improves the accuracy of summary sentence extraction. The present application also relates to the field of blockchain: the above computer-readable storage medium can store data created according to the use of blockchain nodes.

Description

Summary sentence extraction method, apparatus, server and computer-readable storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on December 31, 2020, with application number 202011640996.4 and the invention title "Summary sentence extraction method, apparatus, server and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of semantic parsing, and in particular to a summary sentence extraction method, apparatus, server and computer-readable storage medium.
Background
At present, methods for determining text summaries mainly include extractive and generative approaches. The extractive approach extracts important sentences from the text as summary sentences and composes the summary from them, while the generative approach generates a text summary with natural language processing techniques such as paraphrasing, synonymous substitution and sentence abbreviation. The algorithm commonly used to extract important sentences from text is the TextRank algorithm, but the inventor found that the original TextRank method extracts important sentences based only on the similarity between sentences and the text, and the extracted sentences are redundant and of low accuracy. Therefore, how to improve the accuracy of extracting summary sentences from text is a problem that urgently needs to be solved.
Summary
Embodiments of the present application provide a summary sentence extraction method, apparatus, server and computer-readable storage medium, aiming to improve the accuracy of extracting summary sentences from text.
In a first aspect, an embodiment of the present application provides a summary sentence extraction method, applied to a server, wherein the server stores a summary sentence extraction model, the summary sentence extraction model comprises a semantic recognition model, a semantic fusion model and a sentence classification model, and the method comprises:
acquiring a target text from which a summary is to be extracted, and splitting the target text into a plurality of target sentences;
invoking the semantic recognition model to process each target sentence to obtain a first semantic vector of each target sentence;
invoking the semantic fusion model to process the first semantic vector of each target sentence to obtain a semantic vector matrix of the target text;
performing a linear transformation on the semantic vector matrix to obtain a target semantic vector matrix;
invoking the sentence classification model to process the target semantic vector matrix to obtain a classification label sequence, the classification label sequence comprising a classification label of each target sentence;
determining a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence.
In a second aspect, an embodiment of the present application further provides a summary sentence extraction apparatus, applied to a server, wherein the server stores a summary sentence extraction model, the summary sentence extraction model comprises a semantic recognition model, a semantic fusion model and a sentence classification model, and the summary sentence extraction apparatus comprises:
an acquisition module, configured to acquire a target text from which a summary is to be extracted;
a text splitting module, configured to split the target text into a plurality of target sentences;
a control module, configured to invoke the semantic recognition model to process each target sentence to obtain a first semantic vector of each target sentence;
the control module being further configured to invoke the semantic fusion model to process the first semantic vector of each target sentence to obtain a semantic vector matrix of the target text;
the control module being further configured to perform a linear transformation on the semantic vector matrix to obtain a target semantic vector matrix;
the control module being further configured to invoke the sentence classification model to process the target semantic vector matrix to obtain a classification label sequence, the classification label sequence comprising a classification label of each target sentence;
a sentence determination module, configured to determine a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence.
In a third aspect, an embodiment of the present application further provides a server, the server comprising a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein when the computer program is executed by the processor, the steps of the summary sentence extraction method described above are implemented.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the steps of the summary sentence extraction method described above are implemented.
Embodiments of the present application provide a summary sentence extraction method, apparatus, server and computer-readable storage medium. By splitting a target text into a plurality of target sentences and processing each target sentence with a semantic recognition model, a semantic vector reflecting sentence-level semantic understanding can be obtained; the semantic fusion model then processes these sentence-level semantic vectors to obtain a semantic vector matrix reflecting text-level semantic understanding, and a linear transformation of this matrix yields the target semantic vector matrix; finally, the sentence classification model processes the target semantic vector matrix reflecting text-level semantic understanding to obtain a classification label sequence, and based on the classification label sequence and the first label used to indicate that a target sentence is a summary sentence, the summary sentences of the target text are determined from the plurality of target sentences, which greatly improves the accuracy of extracting summary sentences from text.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present application more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present application, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a summary sentence extraction method provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a summary sentence extraction model in an embodiment of the present application;
FIG. 3 is another schematic structural diagram of a summary sentence extraction model in an embodiment of the present application;
FIG. 4 is a schematic block diagram of a summary sentence extraction apparatus provided by an embodiment of the present application;
FIG. 5 is a schematic block diagram of sub-modules of the summary sentence extraction apparatus in FIG. 4;
FIG. 6 is a schematic structural block diagram of a server provided by an embodiment of the present application.
The realization of the objectives, the functional features and the advantages of the present application will be further described with reference to the embodiments and the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The flowcharts shown in the drawings are merely illustrative: they need not include all contents and operations/steps, nor must they be executed in the described order. For example, some operations/steps may be decomposed, combined or partially merged, so the actual execution order may change according to the actual situation.
At present, methods for determining text summaries mainly include extractive and generative approaches. The extractive approach extracts important sentences from the text as summary sentences and composes the summary from them, while the generative approach generates a text summary with natural language processing techniques such as paraphrasing, synonymous substitution and sentence abbreviation. The algorithm commonly used to extract important sentences from text is the TextRank algorithm, but the original TextRank method extracts important sentences based only on the similarity between sentences and the text, and the extracted sentences are redundant and of low accuracy. Therefore, how to improve the accuracy of extracting summary sentences from text is a problem that urgently needs to be solved.
To solve the above problem, embodiments of the present application provide a summary sentence extraction method, apparatus, server and computer-readable storage medium. By splitting a target text into a plurality of target sentences and processing each target sentence with a semantic recognition model, a semantic vector reflecting sentence-level semantic understanding can be obtained; the semantic fusion model then processes these sentence-level semantic vectors to obtain a semantic vector matrix reflecting text-level semantic understanding, and a linear transformation of this matrix yields the target semantic vector matrix; finally, the sentence classification model processes the target semantic vector matrix reflecting text-level semantic understanding to obtain a classification label sequence, and based on the classification label sequence and the first label used to indicate that a target sentence is a summary sentence, the summary sentences of the target text are determined from the plurality of target sentences, which greatly improves the accuracy of extracting important sentences from text.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The embodiments described below and the features in the embodiments may be combined with one another without conflict.
Please refer to FIG. 1, which is a schematic flowchart of a summary sentence extraction method provided by an embodiment of the present application. The summary sentence extraction method can be applied to a server, which may be a single server or a server cluster composed of multiple servers; this is not specifically limited in the embodiments of the present application.
As shown in FIG. 1, the summary sentence extraction method includes steps S101 to S106.
Step S101: Acquire a target text from which a summary is to be extracted, and split the target text into a plurality of target sentences.
The server may obtain the target text from a database, from an external storage device, or from a summary extraction request sent by a terminal device. The databases include local databases and cloud databases, and the external devices include plug-in hard disks, secure digital cards, flash memory cards and the like. The target text may include text that the server can read directly and text that it cannot read directly: directly readable text includes text in word, txt and wps formats, while text that cannot be read directly includes text in pdf, tif and image formats.
In an embodiment, the target text may be split into a plurality of target sentences as follows: split the target text into a plurality of initial sentences according to the sentence-boundary identifiers in the target text; determine the character count of each initial sentence, and preprocess each initial sentence according to its character count to obtain a plurality of target sentences, where the character count of each target sentence equals a preset character count. The preset character count may be set based on the actual situation and is not specifically limited in the embodiments of the present application; for example, the preset character count is 256 or 512. A sentence-boundary identifier is a symbol that marks the end of a sentence in the grammar, and includes but is not limited to the period, semicolon, question mark, exclamation mark and line-break symbol.
In an embodiment, preprocessing each initial sentence according to its character count to obtain a plurality of target sentences may proceed as follows: if the character count of an initial sentence is less than the preset character count, determine the absolute value of the difference between the character count of the initial sentence and the preset character count to obtain a first character count, and pad the tail of the initial sentence with that many preset characters to obtain a target sentence; if the character count of an initial sentence is greater than the preset character count, determine the absolute value of the difference between the character count of the initial sentence and the preset character count to obtain a second character count, and remove that many characters from the initial sentence in reverse character order to obtain a target sentence; if the character count of an initial sentence equals the preset character count, no processing is performed.
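A minimal Python sketch of this splitting and length-normalization step is given below. The regular expression, the padding character PAD_CHAR and the 256-character preset are illustrative assumptions of the sketch, not values prescribed by the present application.

```python
import re

PRESET_LEN = 256      # preset character count (the text gives 256 or 512 as examples)
PAD_CHAR = "\u3000"   # assumed padding character; the patent leaves it unspecified

def split_and_normalize(text: str) -> list[str]:
    # Split on sentence-boundary identifiers: period, semicolon,
    # question mark, exclamation mark and line breaks.
    parts = re.split(r"(?<=[。；？！.;?!])|\n", text)
    sentences = [p.strip() for p in parts if p and p.strip()]

    normalized = []
    for s in sentences:
        if len(s) < PRESET_LEN:        # pad the tail up to the preset count
            s = s + PAD_CHAR * (PRESET_LEN - len(s))
        elif len(s) > PRESET_LEN:      # drop trailing characters in reverse order
            s = s[:PRESET_LEN]
        normalized.append(s)           # equal length: left unchanged
    return normalized
```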
Step S102: Invoke the semantic recognition model to process each target sentence to obtain a first semantic vector of each target sentence.
The server stores a summary sentence extraction model. As shown in FIG. 2, the summary sentence extraction model includes a semantic recognition model, a semantic fusion model and a sentence classification model; the semantic recognition model is connected to the semantic fusion model, and the semantic fusion model is connected to the sentence classification model. The semantic recognition model is a pre-trained Bert model, the semantic fusion model is a pre-trained Long Short-Term Memory (LSTM) model or a GRU model, and the sentence classification model is a pre-trained binary classification model. The summary sentence extraction model can be obtained by jointly and iteratively training the Bert model, the LSTM model and the binary classification model.
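Read end to end, the FIG. 2 pipeline might be sketched as the following PyTorch module. This is only one possible reading under stated assumptions: the bert-base-chinese checkpoint, the use of the [CLS] hidden state as the first semantic vector, and all layer sizes are choices of this sketch rather than of the patent.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class SummaryExtractor(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")  # semantic recognition model
        self.fusion = nn.LSTM(768, hidden, batch_first=True,
                              bidirectional=True)                   # semantic fusion model
        self.transform = nn.Linear(2 * hidden, 2 * hidden)          # linear transformation (W, B)
        self.classifier = nn.Linear(2 * hidden, 1)                  # sentence classification model

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # input_ids: (num_sentences, sentence_len), one row per target sentence
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        first_vecs = out.last_hidden_state[:, 0, :]   # first semantic vectors
        h, _ = self.fusion(first_vecs.unsqueeze(0))   # semantic vector matrix h
        H = self.transform(h)                         # target semantic vector matrix
        return self.classifier(H).squeeze(-1)         # one logit per target sentence
```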
Exemplarily, the Bert model, the LSTM model (which may also be replaced by a GRU model) and the binary classification model may be jointly and iteratively trained as follows: acquire a sample data set, where the sample data set includes a plurality of sample data, and each sample data includes a sample text, an annotated classification label sequence, annotated semantic vectors and an annotated semantic vector matrix; select one sample data from the sample data set, and split its sample text into a plurality of sample sentences whose character count equals the preset character count; input the plurality of sample sentences into the Bert model to obtain the semantic vector of each sample sentence; input the semantic vector of each sample sentence into the LSTM model to obtain the semantic vector matrix of the sample text; linearly transform the semantic vector matrix based on the configured weight coefficient matrix and bias term matrix to obtain the target semantic vector matrix; input the target semantic vector matrix into the binary classification model and output a classification label sequence; then update the model parameters of the binary classification model based on the output classification label sequence and the annotated classification label sequence, update the weight coefficient matrix and the bias term matrix, update the model parameters of the LSTM model based on the output semantic vector matrix and the annotated semantic vector matrix, and update the model parameters of the Bert model based on the output semantic vectors of the sample sentences and the annotated semantic vectors, thereby updating the model parameters of the entire summary sentence extraction model; after the model parameters are updated, continue to jointly and iteratively train the Bert model, the LSTM model and the binary classification model until the Bert model, the LSTM model and the binary classification model all converge, yielding the summary sentence extraction model.
After the target text is split into a plurality of target sentences, the semantic recognition model is invoked to process each target sentence, and a sentence-level semantic vector of each target sentence can be obtained. The semantic vector describes the semantic information of the target sentence, that is, the meaning the target sentence intends to express.
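A minimal sketch of producing the first semantic vectors with the HuggingFace transformers library follows. The checkpoint name and the [CLS] pooling are assumptions; the patent only requires that a pre-trained Bert model yield one semantic vector per target sentence.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")

def sentence_vectors(sentences: list[str]) -> torch.Tensor:
    enc = tokenizer(sentences, padding=True, truncation=True,
                    max_length=512, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc)
    # One first semantic vector per target sentence: the [CLS] hidden state.
    return out.last_hidden_state[:, 0, :]   # shape: (num_sentences, 768)
```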
Step S103: Invoke the semantic fusion model to process the first semantic vector of each target sentence to obtain a semantic vector matrix of the target text.
Exemplarily, the summary sentence extraction model may further include a dropout layer. As shown in FIG. 3, the semantic recognition model is connected to the dropout layer, the dropout layer is connected to the semantic fusion model, and the semantic fusion model is connected to the sentence classification model. The summary sentence extraction model can be obtained by jointly and iteratively training the Bert model, the dropout layer, the LSTM model (which may also be replaced by a GRU model) and the binary classification model. The dropout layer prevents overfitting of the model and improves the model's effect.
In an embodiment, after the first semantic vector of each target sentence is obtained, the first semantic vector of each target sentence is input into the dropout layer to obtain a semantic vector sequence; the semantic vector sequence is preprocessed to obtain a target semantic vector sequence, where the length of the target semantic vector sequence equals a preset length; and the semantic fusion model is invoked to process the target semantic vector sequence to obtain the semantic vector matrix of the target text. The semantic vector matrix of the target text includes a second semantic vector of each target sentence, and the second semantic vector describes the semantic information of the target sentence within the target text, that is, it jointly considers the meaning the sentence itself intends to express and the meaning expressed between sentences.
Exemplarily, the semantic vector sequence may be preprocessed to obtain the target semantic vector sequence as follows: if the length of the semantic vector sequence is less than the preset length, pad the semantic vector sequence with zero vectors to obtain the target semantic vector sequence; if the length of the semantic vector sequence is greater than the preset length, truncate the semantic vector sequence, that is, keep the leading semantic vectors up to the preset length, to obtain the target semantic vector sequence. The number of padded zero vectors is determined by the difference between the preset length and the length of the semantic vector sequence.
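A minimal sketch of this padding/truncation on a stack of first semantic vectors, assuming PyTorch tensors and an assumed preset length of 64:

```python
import torch

PRESET_SEQ_LEN = 64   # assumed preset length of the target semantic vector sequence

def fix_sequence_length(vectors: torch.Tensor) -> torch.Tensor:
    # vectors: (num_sentences, dim), one row per sentence in document order
    n, dim = vectors.shape
    if n < PRESET_SEQ_LEN:                          # pad with zero vectors
        pad = torch.zeros(PRESET_SEQ_LEN - n, dim)
        return torch.cat([vectors, pad], dim=0)
    return vectors[:PRESET_SEQ_LEN]                 # keep the leading vectors
```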
Exemplarily, the Bert model, the dropout layer, the LSTM model (which may also be replaced by a GRU model) and the binary classification model may be jointly and iteratively trained as follows: acquire a sample data set, where the sample data set includes a plurality of sample data, and each sample data includes a sample text, an annotated classification label sequence, annotated semantic vectors, an annotated semantic vector sequence and an annotated semantic vector matrix; select one sample data from the sample data set, and split its sample text into a plurality of sample sentences whose character count equals the preset character count; input the plurality of sample sentences into the Bert model to obtain the semantic vector of each sample sentence; input the semantic vector of each sample sentence into the dropout layer to obtain a semantic vector sequence, and preprocess the semantic vector sequence to obtain a target semantic vector sequence; input the target semantic vector sequence into the LSTM model to obtain the semantic vector matrix of the sample text; linearly transform the semantic vector matrix based on the configured weight coefficient matrix and bias term matrix to obtain the target semantic vector matrix; input the target semantic vector matrix into the binary classification model and output a classification label sequence.
Based on the output classification label sequence and the annotated classification label sequence, update the model parameters of the binary classification model and update the weight coefficient matrix and the bias term matrix; based on the output semantic vector matrix and the annotated semantic vector matrix, update the model parameters of the LSTM model; based on the output semantic vector sequence and the annotated semantic vector sequence, update the model parameters of the dropout layer; and based on the output semantic vectors of the sample sentences and the annotated semantic vectors, update the model parameters of the Bert model, thereby updating the model parameters of the entire summary sentence extraction model. After the model parameters are updated, continue to jointly and iteratively train the Bert model, the dropout layer, the LSTM model and the binary classification model until all of them converge, yielding the summary sentence extraction model.
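The text states which outputs supervise which components but does not fix concrete loss functions. The compact training step below is therefore only one possible reading, assuming binary cross-entropy on the classification label sequence and mean squared error against the annotated semantic vector matrix; the stand-in encoder and every dimension are made up for illustration.

```python
import torch
import torch.nn as nn

encoder = nn.Linear(768, 768)          # stand-in for the Bert model (illustrative only)
dropout = nn.Dropout(p=0.1)            # the dropout layer
lstm = nn.LSTM(768, 256, batch_first=True, bidirectional=True)
W = nn.Linear(512, 512)                # weight coefficient matrix and bias term matrix
classifier = nn.Linear(512, 1)         # binary classification model

params = (list(encoder.parameters()) + list(lstm.parameters())
          + list(W.parameters()) + list(classifier.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

def train_step(sent_vecs, gold_labels, gold_matrix):
    # sent_vecs: (1, L, 768); gold_labels: (1, L) floats in {0, 1}; gold_matrix: (1, L, 512)
    seq = dropout(encoder(sent_vecs))    # semantic vector sequence
    h, _ = lstm(seq)                     # semantic vector matrix h
    H = W(h)                             # target semantic vector matrix H = W*h + B
    logits = classifier(H).squeeze(-1)   # one label score per sample sentence
    loss = (nn.functional.binary_cross_entropy_with_logits(logits, gold_labels)
            + nn.functional.mse_loss(h, gold_matrix))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

loss = train_step(torch.randn(1, 64, 768),
                  torch.randint(0, 2, (1, 64)).float(),
                  torch.randn(1, 64, 512))
```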
Step S104: Perform a linear transformation on the semantic vector matrix to obtain a target semantic vector matrix.
Exemplarily, a preset weight coefficient matrix and a preset bias term matrix are obtained, and the semantic vector matrix is linearly transformed according to them to obtain the target semantic vector matrix. The preset weight coefficient matrix and the preset bias term matrix are determined at model convergence during the joint iterative training of the Bert model, the LSTM model (which may also be replaced by a GRU model) and the binary classification model, or during the joint iterative training of the Bert model, the dropout layer, the LSTM model and the binary classification model. For example, if the semantic vector matrix is h, the target semantic vector matrix is H, the preset weight coefficient matrix is W and the preset bias term matrix is B, then the linear transformation of the semantic vector matrix h gives the target semantic vector matrix H = W*h + B.
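This step is a single affine map. In PyTorch it might look as follows, with the weight and bias of an nn.Linear playing the roles of W and B; the 512-dimensional size is an assumption:

```python
import torch
import torch.nn as nn

h = torch.randn(64, 512)         # semantic vector matrix h from the fusion model
transform = nn.Linear(512, 512)  # W and B, fixed at inference after joint training
H = transform(h)                 # target semantic vector matrix H = W*h + B
```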
Step S105: Invoke the sentence classification model to process the target semantic vector matrix to obtain a classification label sequence, the classification label sequence including the classification label of each target sentence.
The number of classification labels in the classification label sequence is determined by the number of target sentences, and the classification labels in the sequence correspond one-to-one with the target sentences. A classification label may be a first label or a second label: the first label indicates that the corresponding target sentence is a summary sentence, and the second label indicates that the corresponding target sentence is not a summary sentence. For example, if the target text includes N target sentences, the classification label sequence includes N classification labels. The first label and the second label may be set based on the actual situation; for example, the first label is 1 and the second label is 0, or the first label is 1 and the second label is -1.
Step S106: Determine the summary sentence of the target text from the plurality of target sentences according to the classification label sequence and the first label used to indicate that a target sentence is a summary sentence.
Exemplarily, the ranking number of the first label in the classification label sequence is determined, and the target sentence corresponding to that ranking number is selected from the plurality of target sentences as a summary sentence of the target text. The number of summary sentences may be one or more, which is not specifically limited in the embodiments of the present application. For example, if the classification label sequence includes 100 classification labels and the labels with ranking numbers 20, 50, 75 and 90 are first labels, then the target sentences corresponding to the classification labels with ranking numbers 20, 50, 75 and 90 are determined to be summary sentences of the target text.
The ranking number of a classification label in the classification label sequence is determined by the position of the corresponding target sentence in the target text. For example, if the target text includes N target sentences, then according to the positions of the target sentences in the target text, the position number of the first target sentence is 1, so its classification label is also ranked 1 in the classification label sequence; similarly, the position number of the last target sentence is N, so its classification label is also ranked N in the classification label sequence.
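A minimal sketch of steps S105-S106, assuming the classifier emits one logit per target sentence and adopting the first label = 1, second label = 0 convention that the text gives as one example:

```python
import torch

def pick_summary(sentences: list[str], logits: torch.Tensor) -> list[str]:
    # Threshold the logits into the classification label sequence.
    labels = (torch.sigmoid(logits) > 0.5).long().tolist()
    # Ranking numbers follow sentence positions, so label i maps to sentence i.
    return [s for s, lab in zip(sentences, labels) if lab == 1]
```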
In the summary sentence extraction method provided by the above embodiment, by splitting the target text into multiple target sentences and processing each target sentence with the semantic recognition model, a semantic vector reflecting sentence-level semantic understanding can be obtained; the semantic fusion model then processes these sentence-level semantic vectors to obtain a semantic vector matrix reflecting text-level semantic understanding, and a linear transformation of the semantic vector matrix yields the target semantic vector matrix; finally, the sentence classification model processes the target semantic vector matrix reflecting text-level semantic understanding to obtain a classification label sequence, and based on the classification label sequence and the first label used to indicate that a target sentence is a summary sentence, the summary sentences of the target text are determined from multiple target sentences, which greatly improves the accuracy of extracting important sentences from the text.
Please refer to FIG. 4, which is a schematic block diagram of a summary sentence extraction apparatus provided by an embodiment of the present application.
The summary sentence extraction apparatus is applied to a server that stores a summary sentence extraction model comprising a semantic recognition model, a semantic fusion model and a sentence classification model. As shown in FIG. 4, the summary sentence extraction apparatus 200 includes an acquisition module 210, a text splitting module 220, a control module 230 and a sentence determination module 240, wherein:
the acquisition module 210 is configured to acquire a target text from which a summary is to be extracted;
the text splitting module 220 is configured to split the target text into a plurality of target sentences;
the control module 230 is configured to invoke the semantic recognition model to process each target sentence to obtain a first semantic vector of each target sentence;
the control module 230 is further configured to invoke the semantic fusion model to process the first semantic vector of each target sentence to obtain a semantic vector matrix of the target text;
the control module 230 is further configured to perform a linear transformation on the semantic vector matrix to obtain a target semantic vector matrix;
the control module 230 is further configured to invoke the sentence classification model to process the target semantic vector matrix to obtain a classification label sequence, the classification label sequence including the classification label of each target sentence;
the sentence determination module 240 is configured to determine a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence.
In an embodiment, the semantic recognition model is a pre-trained Bert model, the semantic fusion model is a pre-trained LSTM model or a GRU model, and the sentence classification model is a pre-trained binary classification model.
In an embodiment, the character count of each target sentence equals the preset character count.
In an embodiment, the summary sentence extraction model further includes a dropout layer, and the control module 230 is further configured to:
input the first semantic vector of each target sentence into the dropout layer to obtain a semantic vector sequence;
preprocess the semantic vector sequence to obtain a target semantic vector sequence, where the length of the target semantic vector sequence equals a preset length;
invoke the semantic fusion model to process the target semantic vector sequence to obtain the semantic vector matrix of the target text.
In an embodiment, the control module 230 is further configured to:
if the length of the semantic vector sequence is less than the preset length, pad the semantic vector sequence with zero vectors to obtain the target semantic vector sequence;
if the length of the semantic vector sequence is greater than the preset length, truncate the semantic vector sequence to obtain the target semantic vector sequence.
In an embodiment, the control module 230 is further configured to:
obtain a preset weight coefficient matrix and a preset bias term matrix;
linearly transform the semantic vector matrix according to the preset weight coefficient matrix and the preset bias term matrix to obtain the target semantic vector matrix.
In an embodiment, as shown in FIG. 5, the sentence determination module 240 includes:
a determination sub-module 241, configured to determine the ranking number of the first label in the classification label sequence;
a sentence selection sub-module 242, configured to select, from the plurality of target sentences, the target sentence corresponding to the ranking number as a summary sentence of the target text.
It should be noted that those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the apparatus and of the modules and units described above may refer to the corresponding processes in the foregoing embodiments of the summary sentence extraction method, and are not repeated here.
The apparatus provided by the above embodiment may be implemented in the form of a computer program, and the computer program may run on a server as shown in FIG. 6.
Please refer to FIG. 6, which is a schematic structural block diagram of a server provided by an embodiment of the present application.
As shown in FIG. 6, the server includes a processor, a memory and a network interface connected through a system bus; the memory stores a summary sentence extraction model comprising a semantic recognition model, a semantic fusion model and a sentence classification model, and the memory may include a storage medium and an internal memory.
The storage medium may store an operating system and a computer program. The computer program includes program instructions that, when executed, cause the processor to perform any one of the summary sentence extraction methods.
The processor provides computing and control capabilities and supports the operation of the entire server.
The internal memory provides an environment for running the computer program in the storage medium; when the computer program is executed by the processor, it causes the processor to perform any one of the summary sentence extraction methods.
The network interface is used for network communication, such as sending assigned tasks. Those skilled in the art will understand that the structure shown in FIG. 6 is merely a block diagram of the partial structure related to the solution of the present application and does not limit the server to which the solution is applied; a specific server may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently.
It should be understood that the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In an embodiment, the processor is configured to run the computer program stored in the memory to implement the following steps:
acquiring a target text from which a summary is to be extracted, and splitting the target text into a plurality of target sentences;
invoking the semantic recognition model to process each target sentence to obtain a first semantic vector of each target sentence;
invoking the semantic fusion model to process the first semantic vector of each target sentence to obtain a semantic vector matrix of the target text;
performing a linear transformation on the semantic vector matrix to obtain a target semantic vector matrix;
invoking the sentence classification model to process the target semantic vector matrix to obtain a classification label sequence, the classification label sequence including the classification label of each target sentence;
determining a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence.
In an embodiment, the semantic recognition model is a pre-trained Bert model, the semantic fusion model is a pre-trained LSTM model or a GRU model, and the sentence classification model is a pre-trained binary classification model.
In an embodiment, the character count of each target sentence equals the preset character count.
In an embodiment, the summary sentence extraction model further includes a dropout layer, and before invoking the semantic fusion model to process the first semantic vector of each target sentence to obtain the semantic vector matrix of the target text, the processor is further configured to implement:
inputting the first semantic vector of each target sentence into the dropout layer to obtain a semantic vector sequence;
preprocessing the semantic vector sequence to obtain a target semantic vector sequence, where the length of the target semantic vector sequence equals a preset length;
invoking the semantic fusion model to process the target semantic vector sequence to obtain the semantic vector matrix of the target text.
In an embodiment, when preprocessing the semantic vector sequence to obtain the target semantic vector sequence, the processor is configured to implement:
if the length of the semantic vector sequence is less than the preset length, padding the semantic vector sequence with zero vectors to obtain the target semantic vector sequence;
if the length of the semantic vector sequence is greater than the preset length, truncating the semantic vector sequence to obtain the target semantic vector sequence.
In an embodiment, when performing the linear transformation on the semantic vector matrix to obtain the target semantic vector matrix, the processor is configured to implement:
obtaining a preset weight coefficient matrix and a preset bias term matrix;
linearly transforming the semantic vector matrix according to the preset weight coefficient matrix and the preset bias term matrix to obtain the target semantic vector matrix.
In an embodiment, when determining the summary sentence of the target text from the plurality of target sentences according to the classification label sequence and the first label used to indicate that a target sentence is a summary sentence, the processor is configured to implement:
determining the ranking number of the first label in the classification label sequence;
selecting, from the plurality of target sentences, the target sentence corresponding to the ranking number as a summary sentence of the target text.
It should be noted that those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the server described above may refer to the corresponding processes in the foregoing embodiments of the summary sentence extraction method, and is not repeated here.
Through the description of the above implementations, those skilled in the art can clearly understand that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the present application, in essence or in the parts contributing to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes several instructions for causing a server (which may be a personal computer, a server, a network server or the like) to execute the methods described in the various embodiments of the present application or in certain parts of the embodiments.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program; the computer program includes program instructions, and when the program instructions are executed, the following steps are implemented:
acquiring a target text from which a summary is to be extracted, and splitting the target text into a plurality of target sentences;
invoking the semantic recognition model in a summary sentence extraction model to process each target sentence to obtain a first semantic vector of each target sentence;
invoking the semantic fusion model in the summary sentence extraction model to process the first semantic vector of each target sentence to obtain a semantic vector matrix of the target text;
performing a linear transformation on the semantic vector matrix to obtain a target semantic vector matrix;
invoking the sentence classification model in the summary sentence extraction model to process the target semantic vector matrix to obtain a classification label sequence, the classification label sequence including the classification label of each target sentence;
determining a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence.
It should be noted that those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the computer-readable storage medium described above may refer to the corresponding embodiments of the foregoing summary sentence extraction method.
The computer-readable storage medium may be volatile or non-volatile. The computer-readable storage medium may be an internal storage unit of the server described in the foregoing embodiments, such as a hard disk or a memory of the server. The computer-readable storage medium may also be an external storage device of the server, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the server.
Further, the computer-readable storage medium may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of blockchain nodes, and the like.
The blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association using cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. A blockchain may include an underlying blockchain platform, a platform product service layer, an application service layer, and so on.
It should be understood that the terms used in the specification of the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. As used in the specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" used in the specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items. It should be noted that, in this document, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or system that includes the element.
The serial numbers of the above embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments. The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed in the present application, and these modifications or replacements shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

  1. A summary sentence extraction method, applied to a server, wherein the server stores a summary sentence extraction model, the summary sentence extraction model comprises a semantic recognition model, a semantic fusion model and a sentence classification model, and the method comprises:
    acquiring a target text from which a summary is to be extracted, and splitting the target text into a plurality of target sentences;
    invoking the semantic recognition model to process each of the target sentences to obtain a first semantic vector of each of the target sentences;
    invoking the semantic fusion model to process the first semantic vector of each of the target sentences to obtain a semantic vector matrix of the target text;
    performing a linear transformation on the semantic vector matrix to obtain a target semantic vector matrix;
    invoking the sentence classification model to process the target semantic vector matrix to obtain a classification label sequence, the classification label sequence comprising a classification label of each of the target sentences;
    determining a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence.
  2. The summary sentence extraction method according to claim 1, wherein the semantic recognition model is a pre-trained Bert model, the semantic fusion model is a pre-trained LSTM model or a GRU model, and the sentence classification model is a pre-trained binary classification model.
  3. The summary sentence extraction method according to claim 1, wherein the character count of each target sentence equals a preset character count.
  4. The summary sentence extraction method according to claim 1, wherein the summary sentence extraction model further comprises a dropout layer, and before invoking the semantic fusion model to process the first semantic vector of each target sentence to obtain the semantic vector matrix of the target text, the method further comprises:
    inputting the first semantic vector of each target sentence into the dropout layer to obtain a semantic vector sequence;
    preprocessing the semantic vector sequence to obtain a target semantic vector sequence, wherein the length of the target semantic vector sequence equals a preset length;
    wherein invoking the semantic fusion model to process the first semantic vector of each target sentence to obtain the semantic vector matrix of the target text comprises:
    invoking the semantic fusion model to process the target semantic vector sequence to obtain the semantic vector matrix of the target text.
  5. The summary sentence extraction method according to claim 4, wherein preprocessing the semantic vector sequence to obtain the target semantic vector sequence comprises:
    if the length of the semantic vector sequence is less than the preset length, padding the semantic vector sequence with zero vectors to obtain the target semantic vector sequence;
    if the length of the semantic vector sequence is greater than the preset length, truncating the semantic vector sequence to obtain the target semantic vector sequence.
  6. The summary sentence extraction method according to any one of claims 1-5, wherein performing the linear transformation on the semantic vector matrix to obtain the target semantic vector matrix comprises:
    obtaining a preset weight coefficient matrix and a preset bias term matrix;
    linearly transforming the semantic vector matrix according to the preset weight coefficient matrix and the preset bias term matrix to obtain the target semantic vector matrix.
  7. The summary sentence extraction method according to any one of claims 1-5, wherein determining the summary sentence of the target text from the plurality of target sentences according to the classification label sequence and the first label used to indicate that a target sentence is a summary sentence comprises:
    determining the ranking number of the first label in the classification label sequence;
    selecting, from the plurality of target sentences, the target sentence corresponding to the ranking number as the summary sentence of the target text.
  8. A summary sentence extraction apparatus, applied to a server, wherein the server stores a summary sentence extraction model, the summary sentence extraction model comprises a semantic recognition model, a semantic fusion model and a sentence classification model, and the summary sentence extraction apparatus comprises:
    an acquisition module, configured to acquire a target text from which a summary is to be extracted;
    a text splitting module, configured to split the target text into a plurality of target sentences;
    a control module, configured to invoke the semantic recognition model to process each of the target sentences to obtain a first semantic vector of each of the target sentences;
    the control module being further configured to invoke the semantic fusion model to process the first semantic vector of each of the target sentences to obtain a semantic vector matrix of the target text;
    the control module being further configured to perform a linear transformation on the semantic vector matrix to obtain a target semantic vector matrix;
    the control module being further configured to invoke the sentence classification model to process the target semantic vector matrix to obtain a classification label sequence, the classification label sequence comprising a classification label of each of the target sentences;
    a sentence determination module, configured to determine a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence.
  9. A server, wherein the server comprises a processor, a memory, and a computer program stored on the memory and executable by the processor, the server stores a summary sentence extraction model, the summary sentence extraction model comprises a semantic recognition model, a semantic fusion model and a sentence classification model, and when the computer program is executed by the processor, the following steps are implemented:
    acquiring a target text from which a summary is to be extracted, and splitting the target text into a plurality of target sentences;
    invoking the semantic recognition model to process each of the target sentences to obtain a first semantic vector of each of the target sentences;
    invoking the semantic fusion model to process the first semantic vector of each of the target sentences to obtain a semantic vector matrix of the target text;
    performing a linear transformation on the semantic vector matrix to obtain a target semantic vector matrix;
    invoking the sentence classification model to process the target semantic vector matrix to obtain a classification label sequence, the classification label sequence comprising a classification label of each of the target sentences;
    determining a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence.
  10. The server according to claim 9, wherein the semantic recognition model is a pre-trained Bert model, the semantic fusion model is a pre-trained LSTM model or a GRU model, and the sentence classification model is a pre-trained binary classification model.
  11. The server according to claim 9, wherein the character count of each target sentence equals a preset character count.
  12. The server according to claim 9, wherein the summary sentence extraction model further comprises a dropout layer, and before invoking the semantic fusion model to process the first semantic vector of each target sentence to obtain the semantic vector matrix of the target text, the processor is further configured to implement:
    inputting the first semantic vector of each target sentence into the dropout layer to obtain a semantic vector sequence;
    preprocessing the semantic vector sequence to obtain a target semantic vector sequence, wherein the length of the target semantic vector sequence equals a preset length;
    invoking the semantic fusion model to process the target semantic vector sequence to obtain the semantic vector matrix of the target text.
  13. The server according to claim 12, wherein when preprocessing the semantic vector sequence to obtain the target semantic vector sequence, the processor is configured to implement:
    if the length of the semantic vector sequence is less than the preset length, padding the semantic vector sequence with zero vectors to obtain the target semantic vector sequence;
    if the length of the semantic vector sequence is greater than the preset length, truncating the semantic vector sequence to obtain the target semantic vector sequence.
  14. The server according to any one of claims 9-13, wherein when performing the linear transformation on the semantic vector matrix to obtain the target semantic vector matrix, the processor is configured to implement:
    obtaining a preset weight coefficient matrix and a preset bias term matrix;
    linearly transforming the semantic vector matrix according to the preset weight coefficient matrix and the preset bias term matrix to obtain the target semantic vector matrix.
  15. The server according to any one of claims 9-13, wherein when determining the summary sentence of the target text from the plurality of target sentences according to the classification label sequence and the first label used to indicate that a target sentence is a summary sentence, the processor is configured to implement:
    determining the ranking number of the first label in the classification label sequence;
    selecting, from the plurality of target sentences, the target sentence corresponding to the ranking number as the summary sentence of the target text.
  16. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the following steps are implemented:
    acquiring a target text from which a summary is to be extracted, and splitting the target text into a plurality of target sentences;
    invoking the semantic recognition model in a summary sentence extraction model to process each of the target sentences to obtain a first semantic vector of each of the target sentences;
    invoking the semantic fusion model in the summary sentence extraction model to process the first semantic vector of each of the target sentences to obtain a semantic vector matrix of the target text;
    performing a linear transformation on the semantic vector matrix to obtain a target semantic vector matrix;
    invoking the sentence classification model in the summary sentence extraction model to process the target semantic vector matrix to obtain a classification label sequence, the classification label sequence comprising a classification label of each of the target sentences;
    determining a summary sentence of the target text from the plurality of target sentences according to the classification label sequence and a first label used to indicate that a target sentence is a summary sentence.
  17. The computer-readable storage medium according to claim 16, wherein the summary sentence extraction model further comprises a dropout layer, and before invoking the semantic fusion model to process the first semantic vector of each target sentence to obtain the semantic vector matrix of the target text, the processor is further configured to implement:
    inputting the first semantic vector of each target sentence into the dropout layer to obtain a semantic vector sequence;
    preprocessing the semantic vector sequence to obtain a target semantic vector sequence, wherein the length of the target semantic vector sequence equals a preset length;
    invoking the semantic fusion model to process the target semantic vector sequence to obtain the semantic vector matrix of the target text.
  18. The computer-readable storage medium according to claim 17, wherein when preprocessing the semantic vector sequence to obtain the target semantic vector sequence, the processor is configured to implement:
    if the length of the semantic vector sequence is less than the preset length, padding the semantic vector sequence with zero vectors to obtain the target semantic vector sequence;
    if the length of the semantic vector sequence is greater than the preset length, truncating the semantic vector sequence to obtain the target semantic vector sequence.
  19. The computer-readable storage medium according to any one of claims 16-17, wherein when performing the linear transformation on the semantic vector matrix to obtain the target semantic vector matrix, the processor is configured to implement:
    obtaining a preset weight coefficient matrix and a preset bias term matrix;
    linearly transforming the semantic vector matrix according to the preset weight coefficient matrix and the preset bias term matrix to obtain the target semantic vector matrix.
  20. The computer-readable storage medium according to any one of claims 16-17, wherein when determining the summary sentence of the target text from the plurality of target sentences according to the classification label sequence and the first label used to indicate that a target sentence is a summary sentence, the processor is configured to implement:
    determining the ranking number of the first label in the classification label sequence;
    selecting, from the plurality of target sentences, the target sentence corresponding to the ranking number as the summary sentence of the target text.
PCT/CN2021/097421 2020-12-31 2021-05-31 Summary sentence extraction method, apparatus, server and computer-readable storage medium WO2022142121A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011640996.4 2020-12-31
CN202011640996.4A CN112732899A (zh) Summary sentence extraction method, apparatus, server and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2022142121A1 true WO2022142121A1 (zh) 2022-07-07

Family

ID=75609094

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/097421 WO2022142121A1 (zh) Summary sentence extraction method, apparatus, server and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112732899A (zh)
WO (1) WO2022142121A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116701625A (zh) * 2023-05-29 2023-09-05 中国南方电网有限责任公司 Power dispatching statement processing method, apparatus, device and medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112732899A (zh) 2020-12-31 2021-04-30 平安科技(深圳)有限公司 Summary sentence extraction method, apparatus, server and computer-readable storage medium
CN112906385B (zh) * 2021-05-06 2021-08-13 平安科技(深圳)有限公司 Text summary generation method, computer device and storage medium
CN113239668B (zh) * 2021-05-31 2023-06-23 平安科技(深圳)有限公司 Intelligent keyword extraction method, apparatus, computer device and storage medium
CN114386390B (zh) * 2021-11-25 2022-12-06 马上消费金融股份有限公司 Data processing method, apparatus, computer device and storage medium
CN114969313B (zh) * 2022-06-07 2023-05-09 四川大学 Summary extraction method, apparatus, computer device and computer-readable storage medium
CN114741499B (zh) * 2022-06-08 2022-09-06 杭州费尔斯通科技有限公司 Text summary generation method and system based on a sentence semantic model

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170213130A1 (en) * 2016-01-21 2017-07-27 Ebay Inc. Snippet extractor: recurrent neural networks for text summarization at industry scale
CN110348016A (zh) * 2019-07-15 2019-10-18 昆明理工大学 Text summary generation method based on a sentence-association attention mechanism
CN110532554A (zh) * 2019-08-26 2019-12-03 南京信息职业技术学院 Chinese summary generation method, system and storage medium
CN111581374A (zh) * 2020-05-09 2020-08-25 联想(北京)有限公司 Text summary acquisition method, apparatus and electronic device
CN112732899A (zh) * 2020-12-31 2021-04-30 平安科技(深圳)有限公司 Summary sentence extraction method, apparatus, server and computer-readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11836181B2 (en) * 2019-05-22 2023-12-05 SalesTing, Inc. Content summarization leveraging systems and processes for key moment identification and extraction
CN110781290A (zh) * 2019-10-10 2020-02-11 南京摄星智能科技有限公司 Structured text summary extraction method for long documents
CN111639174B (zh) * 2020-05-15 2023-12-22 民生科技有限责任公司 Text summary generation system, method, apparatus and computer-readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170213130A1 (en) * 2016-01-21 2017-07-27 Ebay Inc. Snippet extractor: recurrent neural networks for text summarization at industry scale
CN110348016A (zh) * 2019-07-15 2019-10-18 昆明理工大学 Text summary generation method based on a sentence-association attention mechanism
CN110532554A (zh) * 2019-08-26 2019-12-03 南京信息职业技术学院 Chinese summary generation method, system and storage medium
CN111581374A (zh) * 2020-05-09 2020-08-25 联想(北京)有限公司 Text summary acquisition method, apparatus and electronic device
CN112732899A (zh) * 2020-12-31 2021-04-30 平安科技(深圳)有限公司 Summary sentence extraction method, apparatus, server and computer-readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116701625A (zh) * 2023-05-29 2023-09-05 中国南方电网有限责任公司 Power dispatching statement processing method, apparatus, device and medium
CN116701625B (zh) * 2023-05-29 2024-05-10 中国南方电网有限责任公司 Power dispatching statement processing method, apparatus, device and medium

Also Published As

Publication number Publication date
CN112732899A (zh) 2021-04-30

Similar Documents

Publication Publication Date Title
WO2022142121A1 (zh) Summary sentence extraction method, apparatus, server and computer-readable storage medium
US11636264B2 (en) Stylistic text rewriting for a target author
US20210027025A1 (en) Multi-turn dialogue response generation with template generation
CN111159220B (zh) 用于输出结构化查询语句的方法和装置
CN111026858B (zh) 基于项目推荐模型的项目信息处理方法及装置
US20190121868A1 (en) Data clustering
CN111026319B (zh) 一种智能文本处理方法、装置、电子设备及存储介质
WO2021174864A1 (zh) 基于少量训练样本的信息抽取方法及装置
CN111026320B (zh) 多模态智能文本处理方法、装置、电子设备及存储介质
WO2022174496A1 (zh) 基于生成模型的数据标注方法、装置、设备及存储介质
WO2023045184A1 (zh) 一种文本类别识别方法、装置、计算机设备及介质
KR20210048425A (ko) 데이터 매핑을 위한 방법, 장치 및 시스템
CN111142728B (zh) 车载环境智能文本处理方法、装置、电子设备及存储介质
JP6095487B2 (ja) 質問応答装置、及び質問応答方法
US11048887B1 (en) Cross-language models based on transfer learning
CN114175007A (zh) 用于数据匹配的主动学习
WO2020097326A1 (en) Systems and methods for content filtering of publications
CN114691716A (zh) Sql语句转换方法、装置、设备及计算机可读存储介质
US11550777B2 (en) Determining metadata of a dataset
CN111125154B (zh) 用于输出结构化查询语句的方法和装置
JP6979899B2 (ja) 生成装置、学習装置、生成方法、学習方法、生成プログラム、及び学習プログラム
JP2020035427A (ja) 情報を更新するための方法と装置
US11868737B2 (en) Method and server for processing text sequence for machine processing task
US11954102B1 (en) Structured query language query execution using natural language and related techniques
WO2023084761A1 (ja) 情報処理装置、情報処理方法及び情報処理プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21912882

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21912882

Country of ref document: EP

Kind code of ref document: A1