CN115481619A - Interactive argument pair extraction method, related device and storage medium - Google Patents

Interactive argument pair extraction method, related device and storage medium

Info

Publication number
CN115481619A
Authority
CN
China
Prior art keywords
text
argument
sample
chapter
sentence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211133233.XA
Other languages
Chinese (zh)
Inventor
徐睿峰
鲍建竹
孙婧伊
杨敏
梁斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology filed Critical Shenzhen Graduate School Harbin Institute of Technology
Priority to CN202211133233.XA
Publication of CN115481619A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/205 Parsing
    • G06F 40/226 Validation
    • G06F 40/216 Parsing using statistical methods
    • G06F 40/30 Semantic analysis
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Machine Translation (AREA)

Abstract

The application discloses an interactive argument pair extraction method, a related device and a storage medium. The interactive argument pair extraction method includes: acquiring two chapters from which interactive argument pairs are to be extracted; selecting the chapter on which argument extraction is performed in the first stage as the first chapter, and taking the other chapter as the second chapter on which argument extraction is performed in the second stage, where either of the two chapters may be selected as the first chapter, or the two chapters may each be selected as the first chapter in turn; performing argument extraction based on the first chapter to obtain a plurality of first arguments; and taking the first arguments respectively as query arguments, and performing argument extraction based on each query argument and the second chapter to obtain the second argument that forms an interactive argument pair with that query argument. Through this scheme, the accuracy of interactive argument pair extraction can be improved.

Description

Interactive argument pair extraction method, related device and storage medium
Technical Field
The application relates to the technical field of machine reading comprehension, and in particular to an interactive argument pair extraction method, a related device and a storage medium.
Background
Generally, a chapter contains text paragraphs (each often consisting of one to many sentences) that express independent points of view on a problem; for convenience of description, such a paragraph is usually referred to as an argument in the field of machine reading comprehension. Based on this, arguments discussing the same problem in two different chapters may constitute an interactive argument pair.
Interactive argument pair extraction is a new task in the field of argument mining. If the two chapters are, for example, the review of a paper and the author's rebuttal, the interactive argument pairs discussing the same problem in the two chapters can be identified through interactive argument pair extraction. Research shows that existing interactive argument pair extraction often suffers from inaccurate extraction and similar problems. In view of this, how to improve the accuracy of interactive argument pair extraction has become an urgent problem to be solved.
Disclosure of Invention
The technical problem mainly solved by the application is to provide an interactive argument pair extraction method, a related device and a storage medium capable of improving the accuracy of interactive argument pair extraction.
In order to solve the above technical problem, a first aspect of the present application provides an interactive argument pair extraction method, including: acquiring two chapters from which interactive argument pairs are to be extracted; selecting the chapter on which argument extraction is performed in the first stage as the first chapter, and taking the other chapter as the second chapter on which argument extraction is performed in the second stage, where either of the two chapters may be selected as the first chapter, or the two chapters may each be selected as the first chapter in turn; performing argument extraction based on the first chapter to obtain a plurality of first arguments; and taking the first arguments respectively as query arguments, and performing argument extraction based on each query argument and the second chapter to obtain the second argument that forms an interactive argument pair with that query argument.
In order to solve the above technical problem, a second aspect of the present application provides an interactive argument pair extraction device, including a display screen, a memory and a processor, where the display screen and the memory are respectively coupled to the processor, the memory stores program instructions, and the processor is configured to execute the program instructions to implement the interactive argument pair extraction method of the first aspect, so as to extract interactive argument pairs between two chapters. The display screen is configured to provide a display interface, and the display interface includes: a first area and a second area for displaying the two chapters respectively, and a third area for displaying the interactive argument pairs.
In order to solve the above technical problem, a third aspect of the present application provides a computer-readable storage medium storing program instructions executable by a processor, where the program instructions are configured to implement the interactive argument pair extraction method of the first aspect.
According to the above scheme, two chapters from which interactive argument pairs are to be extracted are acquired; the chapter on which argument extraction is performed in the first stage is selected as the first chapter, and the other chapter serves as the second chapter on which argument extraction is performed in the second stage, where either of the two chapters may be selected as the first chapter, or the two chapters may each be selected as the first chapter in turn; argument extraction is performed based on the first chapter to obtain a plurality of first arguments; and the first arguments are respectively taken as query arguments, with argument extraction performed based on each query argument and the second chapter to obtain the second argument that forms an interactive argument pair with that query argument. On the one hand, since the first arguments in the first chapter are extracted first and the second arguments are then extracted from the second chapter based on the first arguments, that is, two stages of machine reading comprehension are adopted to realize interactive argument pair extraction, the argument structure can be outlined at the argument level and the interactive relationship between the two chapters can be modeled, which helps improve the accuracy of interactive argument pair extraction. On the other hand, since the second stage performs argument extraction with a first argument as the query argument together with the second chapter, the interactive argument pairs can be extracted by combining argument-level information with overall chapter information, which also helps improve the accuracy of interactive argument pair extraction. Therefore, the accuracy of interactive argument pair extraction can be improved.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of an interactive argument pair extraction method of the present application;
FIG. 2 is a schematic interface diagram of an embodiment of an interactive argument pair extraction method of the present application;
FIG. 3 is a process diagram of an embodiment of the interactive argument pair extraction method of the present application;
FIG. 4 is a schematic flowchart of an embodiment of step S13 or step S14 in FIG. 1;
FIG. 5 is a block diagram of an embodiment of an argument extraction model;
FIG. 6 is a schematic flow chart diagram of an embodiment of training the argument extraction model;
FIG. 7 is a block diagram of an embodiment of an interactive argument pair extraction device of the present application;
FIG. 8 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation rather than limitation, specific details such as particular system structures, interfaces and techniques are set forth in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely an association relationship describing an associated object, and means that there may be three relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of an interactive argument pair extraction method of the present application. Specifically, the method may include the steps of:
step S11: and acquiring two chapters of the interactive point pairs to be extracted.
In one implementation scenario, in a peer review scenario, the two chapters may be the review of a paper and the author's rebuttal, respectively; or, in a current-affairs commentary scenario, the two chapters may be commentary articles published by two authors on a certain current event. Other scenarios may be deduced by analogy and are not exemplified here.
In an implementation scenario, please refer to fig. 2, which is a schematic interface diagram of an embodiment of the interactive argument pair extraction method of the present application. As shown in fig. 2, a display interface may be provided, and the display interface may include a first area, a second area and a third area. The first area and the second area may display the two chapters respectively, and the third area may display the interactive argument pairs extracted from the two chapters. Of course, the layout shown in fig. 2 is only one embodiment of the display interface and is not limiting; for example, the display interface may instead include three areas arranged in parallel for displaying the two chapters and the extracted interactive argument pairs respectively. In addition, the first area and the second area may each be provided with an upload button; after the upload button corresponding to the first area is clicked, a file selection prompt may pop up to allow the user to select the chapter uploaded to the first area. Alternatively, the user may be allowed to drag chapters into the first area or the second area to upload them. The manner of uploading chapters is not limited here.
In one implementation scenario, to facilitate distinguishing the two chapters, the two chapters may be referred to as chapter A and chapter B, respectively. Chapter A may be represented as a sentence sequence A = (s^a_1, s^a_2, …, s^a_{n_a}) and chapter B as a sentence sequence B = (s^b_1, s^b_2, …, s^b_{n_b}), where s^i_j represents the j-th sentence text in chapter i, n_a represents the total number of sentence texts contained in chapter A, and n_b represents the total number of sentence texts contained in chapter B. The purpose of the disclosed embodiment is to extract the set of interactive argument pairs of chapter A and chapter B, P = {(α^a_i, α^b_i)}, i = 1, …, |P|, where α^a_i represents the i-th argument in chapter A and α^b_i represents the i-th argument in chapter B; together they constitute the i-th interactive argument pair, and |P| represents the total number of interactive argument pairs.
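To make the above notation concrete, the following is a minimal sketch of the data involved; the type names and field names are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass
from typing import List

# A chapter is modeled as an ordered sequence of sentence texts,
# mirroring A = (s^a_1, ..., s^a_{n_a}) and B = (s^b_1, ..., s^b_{n_b}).
Chapter = List[str]

@dataclass
class InteractiveArgumentPair:
    """One interactive argument pair (alpha^a_i, alpha^b_i)."""
    argument_a: str  # the i-th argument extracted from chapter A
    argument_b: str  # the i-th argument extracted from chapter B

def total_pairs(pair_set: List[InteractiveArgumentPair]) -> int:
    # |P|: the total number of interactive argument pairs
    return len(pair_set)
```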
Step S12: the chapters for which the point extraction is performed in the first stage are selected as the first chapters, and the other chapter is selected as the second chapter for which the point extraction is performed in the second stage.
In the disclosed embodiment, either of the two chapters may be selected as the first chapter. As described above, to facilitate distinguishing the two chapters, they may be referred to as chapter A and chapter B, respectively. Chapter A may be selected as the first chapter and chapter B as the second chapter; in this case, argument extraction may first be performed on chapter A to extract the arguments in chapter A as the first arguments, and argument extraction may then be performed on chapter B in combination with the first arguments to extract the second arguments in chapter B that form interactive argument pairs with the first arguments. Of course, chapter B may also be selected as the first chapter and chapter A as the second chapter; in this case, argument extraction may first be performed on chapter B to extract the arguments in chapter B as the first arguments, and argument extraction may then be performed on chapter A in combination with the first arguments to extract the second arguments in chapter A that form interactive argument pairs with the first arguments. For convenience of description, the first way may be referred to as extracting interactive argument pairs in the A → B direction, and the second way as extracting interactive argument pairs in the B → A direction. In practical applications, when either of the two chapters is selected as the first chapter, the two ways may be implemented alternatively, which is not limited here.
In the embodiment of the disclosure, different from the foregoing manner, the two chapters may also each be selected as the first chapter in turn. If the two chapters are referred to as chapter A and chapter B, chapter A may be selected as the first chapter and chapter B as the second chapter; in this case, argument extraction may first be performed on chapter A to extract the arguments in chapter A as the first arguments, and argument extraction may then be performed on chapter B in combination with the first arguments to extract the second arguments in chapter B that form interactive argument pairs with the first arguments. Meanwhile, chapter B may be selected as the first chapter and chapter A as the second chapter; in this case, argument extraction may first be performed on chapter B to extract the arguments in chapter B as the first arguments, and argument extraction may then be performed on chapter A in combination with the first arguments to extract the second arguments in chapter A that form interactive argument pairs with the first arguments. That is, in practical applications, extraction of interactive argument pairs in the A → B direction and extraction of interactive argument pairs in the B → A direction may be implemented simultaneously.
Step S13: and performing argument extraction based on the first piece of discourse to obtain a plurality of first arguments.
In one implementation scenario, in order to unify the model framework while improving the extraction effect, argument extraction may be performed by an argument extraction model in both the first stage and the second stage. The input of the argument extraction model may include a query text and a chapter text. When argument extraction is performed in the first stage to extract the first arguments in the first chapter, the query text may be set to a preset text, the chapter text is the first chapter, and the preset text is used to indicate that the argument extraction currently being performed is in the first stage. Illustratively, the argument extraction of the first stage may be called Argument Mining (AM), and the argument extraction of the second stage may be called Argument Pair Extraction (APE); the preset text may then take the special symbol "[AM]". Of course, the above preset text is merely one possible implementation in practical applications and does not limit its specific form; for example, the preset text may also adopt the special symbol "[first stage]". For the specific process of argument extraction and the network structure of the argument extraction model, reference may be made to the disclosed embodiments below, which are not described here. In this way, argument extraction is performed in both the first stage and the second stage through a unified model framework, which enhances the recognition capability of the model and improves the overall extraction performance.
In a specific implementation scenario, under a unified model framework, when argument extraction is performed in the second stage, the query text at each extraction may be one of the first arguments and the chapter text may be the second chapter; after the second-stage argument extraction has been performed for each first argument, the second arguments forming interactive argument pairs with the first arguments can be extracted.
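Taken together, the query-text convention of the two stages can be sketched as follows; the function name is hypothetical, and only the "[AM]" marker is taken from the example above:

```python
from typing import Optional

def build_query_text(stage: str, query_argument: Optional[str] = None) -> str:
    """Return the query text fed to the argument extraction model.

    AM stage: a preset text marks that extraction is in the first stage.
    APE stage: the query text is one first argument from the first chapter.
    """
    if stage == "AM":
        return "[AM]"  # the preset text given as an example above
    if stage == "APE":
        if query_argument is None:
            raise ValueError("APE stage requires a first argument as the query")
        return query_argument
    raise ValueError(f"unknown stage: {stage}")
```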
In a specific implementation scenario, under a unified model framework, the argument extraction model may be obtained through training with a first training process and a second training process based on sample chapters. The sample chapters are labeled with sample arguments, paired sample chapters are also labeled with correspondences between sample arguments, and two sample arguments with a correspondence form an interactive argument pair. Illustratively, paired sample chapters may be denoted as sample chapter A and sample chapter B; sample chapter A is labeled with sample arguments, sample chapter B is also labeled with sample arguments, and sample chapter A and sample chapter B are further labeled with correspondences between sample arguments. For example, it may be labeled that there is a correspondence between sample argument 1 in sample chapter A and sample argument 1 in sample chapter B, meaning that the two constitute an interactive argument pair. Other cases may be deduced by analogy and are not exemplified one by one here. On this basis, in the first training process, a character sequence composed of the preset text and a sample chapter is used as the sample text input during training of the argument extraction model, and the sample arguments labeled in the sample chapter are used as the supervision text during training, so that the sample text and the supervision text form a group of sample data; this improves the model performance of the argument extraction model when performing first-stage argument extraction with no argument known. For example, in the first training process, the character sequence composed of the preset text "[AM]" and sample chapter A may be used as the sample text input during training, and a sample argument labeled in sample chapter A (e.g., the aforementioned sample argument 1) may be used as the supervision text; other cases may be deduced by analogy and are not exemplified here. In the second training process, a character sequence composed of a sample argument labeled in the first sample chapter of the paired sample chapters and the second sample chapter of the paired sample chapters is used as the sample text input during training of the argument extraction model, and the sample argument in the second sample chapter that has a correspondence with the reference argument is used as the supervision text, where the reference argument is the sample argument in the sample text; the sample text and the supervision text again serve as a group of sample data. This improves the model performance of the argument extraction model when, in the second stage with one argument known, extracting from a chapter the other argument that forms an interactive argument pair with the known argument.
For example, in the second training process, sample argument 1 labeled in sample chapter A may be composed with sample chapter B into a character sequence used as the sample text input during training of the argument extraction model; since sample argument 1 of sample chapter B has a correspondence with sample argument 1 in sample chapter A, sample argument 1 in sample chapter B may be used as the supervision text during training. For the training process of the argument extraction model, reference may be made to the disclosed embodiments below, which are not repeated here. In this way, the argument extraction of the first stage and that of the second stage can be unified into the same framework, so that the two stages are jointly optimized in the same model during the subsequent model training stage, enhancing the recognition capability of the model and improving the overall extraction performance.
In another implementation scenario, different from the foregoing manner, if unification of the model framework is not required and the efficiency of argument extraction is to be improved, a first extraction model for performing first-stage argument extraction may be trained in advance, and similarly a second extraction model for performing second-stage argument extraction may be trained in advance. In this case, argument extraction may be performed on the first chapter based on the first extraction model to obtain the plurality of first arguments, and when argument extraction continues on the second chapter after the plurality of first arguments are extracted, it may be performed based on the second extraction model. The network structures of the first extraction model and the second extraction model may each refer to the aforementioned argument extraction model and are not limited here; likewise, the specific process of argument extraction and the training processes of the two extraction models may each be configured with reference to the architecture and training manner of the argument extraction model and are not repeated here.
Step S14: and respectively taking the first arguments as query arguments, and performing argument extraction based on the query arguments and the second chapters to obtain second arguments forming interactive argument pairs with the query arguments.
In an implementation scenario, as described above, each first argument extracted in step S13 may be taken as a query argument, and the query argument and the second chapter are processed as a whole for argument extraction, so that argument-level information and overall chapter information are combined to extract the second argument in the second chapter that forms an interactive argument pair with the query argument. In this way, for each first argument, the second argument forming an interactive argument pair with it can be extracted.
In one implementation scenario, referring to fig. 3, which is a process diagram of an embodiment of the interactive argument pair extraction method of the present application, take extracting interactive argument pairs in the A → B direction as an example: chapter A is taken as the first chapter and chapter B as the second chapter. In the first-stage argument extraction (i.e., the AM stage), a plurality of arguments in chapter A (e.g., argument 1, argument 2, …, argument n) can be extracted. On this basis, in the second-stage argument extraction (i.e., the APE stage), these arguments are respectively taken as queries and argument extraction is performed in combination with chapter B: the argument in chapter B that constitutes interactive argument pair 1 with argument 1 is extracted, the argument in chapter B that constitutes interactive argument pair 2 with argument 2 is extracted, and so on. The specific process of extracting interactive argument pairs in the B → A direction can be deduced by analogy and is not described here.
In one implementation scenario, when extraction of interactive argument pairs in the A → B direction and extraction in the B → A direction are executed simultaneously, the set of interactive argument pairs extracted in the A → B direction and the set extracted in the B → A direction can be output simultaneously for the user. Referring to fig. 2, the first area and the second area in fig. 2 may be used to display the two chapters respectively, and the display interface may further be provided with a "start" button; when the user clicks the "start" button, the steps of the process in the embodiment of the disclosure are performed to extract the interactive argument pairs in the chapters, which are then displayed in the third area.
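The overall two-stage flow of fig. 3 can be summarized by the sketch below, assuming a routine extract_arguments(query_text, chapter) that wraps the argument extraction model; the routine name and signature are illustrative assumptions:

```python
from typing import Callable, List, Tuple

def extract_interactive_pairs(
    chapter_a: List[str],
    chapter_b: List[str],
    extract_arguments: Callable[[str, List[str]], List[str]],
) -> List[Tuple[str, str]]:
    """Extract interactive argument pairs in the A -> B direction.

    Stage 1 (AM): extract the first arguments from the first chapter.
    Stage 2 (APE): for each first argument, used as the query, extract the
    second argument(s) pairing with it from the second chapter.
    """
    pairs = []
    first_arguments = extract_arguments("[AM]", chapter_a)  # stage 1
    for query in first_arguments:                           # stage 2
        for second_argument in extract_arguments(query, chapter_b):
            pairs.append((query, second_argument))
    return pairs

# Running the pipeline in both directions, as in the embodiment where both
# chapters are successively selected as the first chapter:
# pairs_ab = extract_interactive_pairs(chapter_a, chapter_b, extract_arguments)
# pairs_ba = extract_interactive_pairs(chapter_b, chapter_a, extract_arguments)
```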
According to the above scheme, two chapters from which interactive argument pairs are to be extracted are acquired; the chapter on which argument extraction is performed in the first stage is selected as the first chapter, and the other chapter serves as the second chapter on which argument extraction is performed in the second stage, where either of the two chapters may be selected as the first chapter, or the two chapters may each be selected as the first chapter in turn; argument extraction is performed based on the first chapter to obtain a plurality of first arguments; and the first arguments are respectively taken as query arguments, with argument extraction performed based on each query argument and the second chapter to obtain the second argument forming an interactive argument pair with that query argument. On the one hand, since the first arguments in the first chapter are extracted first and the second arguments are then extracted from the second chapter based on the first arguments, that is, two stages of machine reading comprehension are adopted to realize interactive argument pair extraction, the argument structure can be outlined at the argument level and the interactive relationship between the two chapters can be modeled, which helps improve the accuracy of interactive argument pair extraction. On the other hand, since the second stage performs argument extraction with a first argument as the query argument together with the second chapter, the interactive argument pairs can be extracted by combining argument-level information with overall chapter information, which also helps improve the accuracy. Therefore, the accuracy of interactive argument pair extraction can be improved.
Referring to fig. 4, fig. 4 is a schematic flowchart of an embodiment of step S13 or step S14 in fig. 1, that is, a flowchart of an embodiment of extracting the first arguments and the second arguments based on a unified framework. Specifically, the method may include the following steps:
step S41: and extracting semantic feature representation of each character in the query text and the chapter text based on the query text and the chapter text.
It should be noted that when the first arguments are extracted (i.e., in the AM stage), the query text is the preset text; the specific meaning of the preset text may refer to the foregoing related description and is not repeated here, and the argument text finally extracted in this case is a first argument. Different from the AM stage, when the second arguments are extracted (i.e., in the APE stage), the query text is a first argument, the chapter text is the second chapter, and the argument text finally extracted is the second argument forming an interactive argument pair with the query text. In addition, for convenience of description, the following examples in the embodiments of the present disclosure take the chapter text in the AM stage to be chapter A and the chapter text in the APE stage to be chapter B; other cases can be deduced by analogy and are not described here.
In one implementation scenario, the query text and the chapter text may be spliced to form a character sequence, and semantic extraction is performed based on the character sequence to obtain semantic feature representations of each character in the character sequence. It should be noted that the semantic feature representation of the character may include semantic information of the character itself, such as the meaning of the character itself. In addition, in the embodiments disclosed in the present application, unless otherwise specified, the "feature representation" such as "semantic feature representation", "context feature representation", and the like may be expressed by using a vector, and the dimension of the vector may be set to 128, 256, and the like, which is not limited herein.
In a specific implementation scenario, for convenience of description, the query text in the AM stage may be denoted as q_am; the query text and the chapter text in the AM stage may then be composed into the following character sequence:

[s] q_am [/s] s^a_1 s^a_2 … s^a_{n_a} [/s] …… (1)

In the above formula (1), [s] and [/s] are used to distinguish the query text from the chapter text, where the former marks the beginning of the text and the latter marks the end of a text. In addition, the related meanings of s^a_j and n_a may refer to the related descriptions in the foregoing disclosed embodiments and are not repeated here.
In another specific implementation scenario, for convenience of description, the query text in the APE stage may be denoted as α^a_k, i.e., the k-th first argument extracted from chapter A in the first stage (i.e., the AM stage); the query text and the chapter text in the APE stage may then be composed into the following character sequence:

[s] α^a_k [/s] s^b_1 s^b_2 … s^b_{n_b} [/s] …… (2)

In the above formula (2), the related meanings of α^a_k, s^b_j and n_b may refer to the related descriptions in the foregoing disclosed embodiments and are not repeated here.
In a specific implementation scenario, the semantic feature representation of each character may be obtained by encoding the character sequence composed of the query text and the chapter text with a Longformer. It should be noted that Longformer is a model that can process long texts efficiently; it improves the self-attention mechanism of the conventional Transformer model so that each token pays local attention only to the tokens within a fixed-size window around it, and it further adds task-specific global attention on top of the local attention. The specific process of extracting the semantic feature representation of each character in the character sequence based on Longformer may refer to the technical details of Longformer and is not described here. In this manner, since Longformer can effectively encode the semantic information of very long sequences, the input character sequence does not need any truncation, the semantic information of the article is retained to the maximum extent, and the accuracy of subsequent argument extraction is improved.
In another specific implementation scenario, instead of using Longformer to extract the semantic feature representations, a larger-scale pre-trained model with more parameters, such as BERT (Bidirectional Encoder Representations from Transformers), may also be used to extract the semantic feature representation of each character in the character sequence, which is not limited here.
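As a hedged illustration of this encoding step, the sketch below uses the Hugging Face transformers implementation of Longformer; the checkpoint name and the use of the model's own </s> separator in place of the generic [s]/[/s] markers are assumptions:

```python
from typing import List

import torch
from transformers import LongformerModel, LongformerTokenizerFast

tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
encoder = LongformerModel.from_pretrained("allenai/longformer-base-4096")

def encode_characters(query_text: str, chapter_sentences: List[str]) -> torch.Tensor:
    # Splice the query text and the chapter text into one character sequence,
    # roughly "<s> query </s> s_1 s_2 ... s_n </s>" after tokenization; no
    # truncation is applied, relying on Longformer's long-input capacity.
    sequence = query_text + " </s> " + " ".join(chapter_sentences)
    inputs = tokenizer(sequence, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    # One semantic feature representation per (sub-word) token.
    return outputs.last_hidden_state.squeeze(0)
```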
Step S42: and extracting the context characteristic representation of each sentence text in the query text and the discourse text based on the semantic characteristic representation of each character.
Specifically, the semantic feature representation of a sentence text can be obtained by fusing the semantic feature representations of the characters in the sentence text. On this basis, context features can be further extracted based on the semantic feature representation of each sentence text in the query text and the chapter text, so as to obtain the context feature representation of each sentence text. It should be noted that the context feature representation of a sentence text may include not only the semantic information of the sentence text itself but also the semantic information of adjacent sentence texts, so that the semantics of the sentence text can be expressed more accurately. In the above manner, the context feature representation of each sentence text is extracted through the steps of feature fusion and context feature extraction, which helps improve the accuracy of the context feature representation.
In an implementation scenario, for each sentence text, the semantic feature representations of the characters in the sentence text may be average-pooled to implement the fusion, so as to obtain the semantic feature representation of the sentence text.
In one implementation scenario, the semantic feature representation of each sentence text may be input into a long short-term memory network to obtain the context feature representation of each sentence text. It should be noted that the long short-term memory network may include, but is not limited to, a bidirectional long short-term memory network, which is not limited here. For convenience of description, the extracted context feature representation H may be written as:
H = (h_1, h_2, …, h_n) …… (3)

In the above formula (3), h_i represents the context feature representation of the i-th sentence text.
In one implementation scenario, referring to FIG. 5, which is a block diagram of an embodiment of the argument extraction model: as shown in fig. 5, the character sequence composed of the query text and the chapter text may be input into the Longformer to extract the semantic feature representation of each character in the character sequence; the semantic feature representations of the characters contained in each sentence text are then fused to obtain the semantic feature representation of that sentence text, and the semantic feature representation of each sentence text may be input into the long short-term memory network to extract the context feature representation of each sentence text.
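A minimal sketch of the fusion and context-extraction steps of fig. 5, assuming sentence spans are known in token coordinates; the class name, dimensions and span format are illustrative assumptions:

```python
from typing import List, Tuple

import torch
import torch.nn as nn

class SentenceContextEncoder(nn.Module):
    def __init__(self, char_dim: int = 768, hidden_dim: int = 256):
        super().__init__()
        # Bidirectional LSTM over the sequence of sentence representations.
        self.bilstm = nn.LSTM(char_dim, hidden_dim, batch_first=True,
                              bidirectional=True)

    def forward(self, char_feats: torch.Tensor,
                sentence_spans: List[Tuple[int, int]]) -> torch.Tensor:
        # char_feats: (num_chars, char_dim) semantic feature representations.
        # sentence_spans: [(start, end), ...] character span of each sentence.
        sent_feats = torch.stack([
            char_feats[s:e].mean(dim=0)       # average pooling per sentence
            for s, e in sentence_spans
        ]).unsqueeze(0)                        # (1, n_sentences, char_dim)
        context, _ = self.bilstm(sent_feats)   # H = (h_1, ..., h_n)
        return context.squeeze(0)              # (n_sentences, 2 * hidden_dim)
```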
Step S43: and predicting the discourse text in the discourse text based on the context feature representation of each sentence text.
In one implementation scenario, a first prediction may be performed based on the context feature representation of each sentence text to obtain the sentence texts suspected of being the beginning sentence of an argument text in the chapter text as first texts, and a second prediction may be performed to obtain the sentence texts suspected of being the ending sentence of an argument text as second texts. On this basis, a third prediction can be performed based on the context feature representation of a first text and that of a second text to obtain a probability value that an argument text is formed with the first text as the beginning sentence and the second text as the ending sentence, so that in response to the probability value being not lower than a preset threshold, the argument text is extracted from the chapter text based on the first text and the second text. It should be noted that the preset threshold may be set according to the actual situation and is not limited here; for example, it may be set to 0.8, 0.9, etc. In the above manner, by predicting the beginning sentence and the ending sentence separately and then predicting the probability that they constitute an argument text, the complexity of argument prediction is reduced.
In one implementation scenario, in addition to the Longformer and the long short-term memory network, the argument extraction model may further include a first classifier used to determine whether each sentence text in the chapter text can serve as a beginning sentence; that is, the first classifier is essentially a binary classifier. Specifically, the context feature representation of each sentence text may be input into the first classifier, which outputs the probability value that the sentence text can serve as a beginning sentence; on this basis, the sentence texts whose probability values are higher than a preset threshold (e.g., 0.8, 0.9, etc.) may be taken as the first texts. Further, the first classifier may include, but is not limited to, a convolutional layer, a fully connected layer, etc., which is not limited here.
Similarly, the argument extraction model may further include a second classifier used to determine whether each sentence text in the chapter text can serve as an ending sentence; that is, the second classifier is also essentially a binary classifier. Specifically, the context feature representation of each sentence text may be input into the second classifier, which outputs the probability value that the sentence text can serve as an ending sentence; on this basis, the sentence texts whose probability values are higher than a preset threshold (e.g., 0.8, 0.9, etc.) may be taken as the second texts. The second classifier may likewise include, but is not limited to, a convolutional layer, a fully connected layer, etc.
In addition, the argument extraction model may further include a third classifier used to determine whether an argument text can be constructed with any first text as the beginning sentence and any second text as the ending sentence; that is, the third classifier is essentially a binary classifier as well. Specifically, the context feature representations of both the first text serving as the beginning sentence and the second text serving as the ending sentence may be input into the third classifier, which then outputs the probability value that an argument text is composed with the first text as the beginning sentence and the second text as the ending sentence. The third classifier may also include, but is not limited to, a convolutional layer, a fully connected layer, etc., which is not limited here.
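The three classifiers can be sketched as follows with single linear layers (the disclosure permits convolutional or fully connected layers); the 0.8 threshold is just one of the examples given, and the constraint that the ending sentence may not precede the beginning sentence is an added assumption:

```python
from typing import List, Tuple

import torch
import torch.nn as nn

class ArgumentBoundaryScorer(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        self.start_clf = nn.Linear(dim, 1)     # first classifier: beginning sentence?
        self.end_clf = nn.Linear(dim, 1)       # second classifier: ending sentence?
        self.pair_clf = nn.Linear(2 * dim, 1)  # third classifier: (start, end) span?

    def forward(self, H: torch.Tensor,
                threshold: float = 0.8) -> List[Tuple[int, int]]:
        # H: (n_sentences, dim) context feature representations.
        p_start = torch.sigmoid(self.start_clf(H)).squeeze(-1)
        p_end = torch.sigmoid(self.end_clf(H)).squeeze(-1)
        spans = []
        for i in torch.nonzero(p_start >= threshold).flatten().tolist():
            for j in torch.nonzero(p_end >= threshold).flatten().tolist():
                if j < i:
                    continue  # the ending sentence cannot precede the beginning
                p_span = torch.sigmoid(self.pair_clf(torch.cat([H[i], H[j]])))
                if p_span.item() >= threshold:
                    spans.append((i, j))  # sentences i..j form one argument text
        return spans
```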
In another implementation scenario, unlike the aforementioned manner of extracting the argument text by predicting the beginning sentence and the ending sentence, the context feature representation of each sentence text can be fed to a structured sequence prediction model such as a CRF (Conditional Random Field) to obtain each argument text in the chapter text. The prediction process of the argument text can refer to the technical details of structured sequence prediction models such as the CRF and is not described here.
According to the above scheme, the semantic feature representation of each character in the query text and the chapter text is extracted based on the query text and the chapter text, the context feature representation of each sentence text is extracted based on the semantic feature representations of the characters, and the argument text in the chapter text is predicted based on the context feature representation of each sentence text. When the first arguments are extracted, the query text is the preset text, the chapter text is the first chapter, and the argument text is a first argument; when the second arguments are extracted, the query text is a first argument, the chapter text is the second chapter, and the argument text is the second argument forming an interactive argument pair with the query text. In this way, the argument extraction of the first stage and that of the second stage share the same framework, so that the two stages can be jointly optimized in the same model during the subsequent model training stage, enhancing the recognition capability of the model and improving the overall extraction performance.
Referring to FIG. 6, FIG. 6 is a schematic flow chart of an embodiment of training the argument extraction model. Specifically, the following steps may be included:
step S61: based on the supervised text, a first case is found for a sample sentence as a start sentence, and a second case is found for a sample sentence as an end sentence.
In an implementation scenario, for the first training process, taking the sample data obtained from sample chapter A in the foregoing disclosed embodiment as an example, the sample text in the sample data may be the character sequence composed of sample chapter A and the preset text (e.g., "[AM]"), and the supervision text may be the sample arguments labeled in sample chapter A. On this basis, it can be determined, based on the supervision text, which sample sentences of the sample chapter in the sample text serve as the beginning sentences of the sample arguments, so as to obtain the first case, and which sample sentences serve as the ending sentences, so as to obtain the second case. Other cases can be deduced by analogy and are not exemplified here. For the convenience of the subsequent loss measurement, the first case may include a value sequence containing a first value for each sample sentence in the sample chapter: if the sample sentence serves as the beginning sentence of a sample argument, its first value may be 1; otherwise, its first value may be 0. Similarly, the second case may include a value sequence containing a second value for each sample sentence: if the sample sentence serves as the ending sentence of a sample argument, its second value may be 1; otherwise, its second value may be 0.
In another implementation scenario, for the second training process, taking the sample data obtained from sample chapter A and sample chapter B in the foregoing disclosed embodiment as an example, the sample text may be the character sequence composed of sample argument A-1 in sample chapter A and sample chapter B, and the supervision text may be sample argument B-1, which has a correspondence with sample argument A-1. On this basis, it can be determined, based on the supervision text, which sample sentences of sample chapter B in the sample text serve as the beginning sentences of the sample argument, so as to obtain the first case, and which serve as the ending sentences, so as to obtain the second case. Other cases can be deduced by analogy. Again, for the convenience of the subsequent loss measurement, the first case may include a value sequence of first values for the sample sentences in sample chapter B: 1 if the sample sentence serves as the beginning sentence of sample argument B-1, and 0 otherwise; and the second case may include a value sequence of second values: 1 if the sample sentence serves as the ending sentence of sample argument B-1, and 0 otherwise.
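The first case and the second case thus reduce to two 0/1 value sequences per sample chapter; a minimal sketch, assuming sample arguments are given as inclusive sentence-index spans:

```python
from typing import List, Tuple

def build_boundary_labels(num_sentences: int,
                          argument_spans: List[Tuple[int, int]]):
    """Return (first_case, second_case) 0/1 sequences for one sample chapter.

    argument_spans holds the (start_idx, end_idx) sentence indices of each
    labeled sample argument, inclusive on both ends.
    """
    first_case = [0] * num_sentences   # 1 where a sentence begins an argument
    second_case = [0] * num_sentences  # 1 where a sentence ends an argument
    for start, end in argument_spans:
        first_case[start] = 1
        second_case[end] = 1
    return first_case, second_case

# e.g. a 6-sentence chapter whose only argument spans sentences 2..4:
# build_boundary_labels(6, [(2, 4)]) -> ([0,0,1,0,0,0], [0,0,0,0,1,0])
```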
Step S62: and performing argument extraction on the sample text based on an argument extraction model to obtain a first prediction probability of taking a sample sentence of a sample chapter in the sample text as a starting sentence, a second prediction probability of taking the sample sentence as an ending sentence, and a third prediction probability of forming the sample argument by taking the first sample text as the starting sentence and taking the second sample text as the ending sentence.
In the embodiment of the present disclosure, a sample sentence with a first prediction probability satisfying a first condition is used as a first sample text, and a sample sentence with a second prediction probability satisfying a second condition is used as a second sample text. For example, the first condition may include that the first prediction probability is greater than a preset threshold, the second condition may include that the second prediction probability is greater than a preset threshold, and specific meaning of the preset threshold may refer to the foregoing disclosed embodiment, which is not described herein again. In addition, the specific process of performing the argument extraction based on the argument extraction model can refer to the foregoing disclosed embodiments, and is not further described herein.
Step S63: and measuring to obtain a first prediction loss based on the first prediction probability and the first condition, measuring to obtain a second prediction loss based on the second prediction probability and the second condition, and measuring to obtain a third prediction loss based on the third prediction probability and the supervision text.
Specifically, the first prediction probability and the first case may be subjected to loss measurement based on a loss function such as binary cross entropy to obtain the first prediction loss, and the second prediction probability and the second case may likewise be measured to obtain the second prediction loss. In addition, as described above, it can be known from the supervision text whether any sample sentence and another sample sentence in the sample chapter contained in the sample text can serve as the beginning sentence and the ending sentence of a sample argument; if so, the pair can be labeled 1, and otherwise 0. Based on this, loss measurement can be performed on the third prediction probability and this labeling information with a loss function such as binary cross entropy to obtain the third prediction loss. The specific manner of the loss measurement may refer to the technical details of loss functions such as binary cross entropy and is not described here.
Step S64: and adjusting network parameters of the point extraction model based on the first prediction loss, the second prediction loss and the third prediction loss.
Specifically, the first prediction loss, the second prediction loss and the third prediction loss may be weighted to obtain the model loss of the argument extraction model, and the network parameters of the argument extraction model may then be adjusted based on the model loss. It should be noted that the weighting coefficients of the three prediction losses may be the same or different; for example, the weighting coefficient of the first prediction loss, of the second prediction loss, or of the third prediction loss may be set larger, which is not limited here. The specific process of adjusting the network parameters may refer to the technical details of optimization manners such as gradient descent and is not described here. In addition, during training, by minimizing the model loss, the following three items can be satisfied after the argument extraction model performs argument extraction on the sample text: (1) if a sample sentence of the sample chapter in the sample text is a beginning sentence labeled by the supervision text, the predicted first prediction probability of that sample sentence serving as a beginning sentence is as close to 1 as possible, and otherwise as close to 0 as possible; (2) if a sample sentence is an ending sentence labeled by the supervision text, the predicted second prediction probability of that sample sentence serving as an ending sentence is as close to 1 as possible, and otherwise as close to 0 as possible; (3) if one sample sentence as the beginning sentence and another sample sentence as the ending sentence constitute a sample argument labeled by the supervision text, the predicted third prediction probability that the two sample sentences constitute the sample argument is as close to 1 as possible, and otherwise as close to 0 as possible.
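Steps S62 to S64 can be sketched together as one training step; the batch keys and the equal default weighting coefficients are illustrative assumptions, and the model is assumed to return the three prediction probabilities directly:

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, batch, w1=1.0, w2=1.0, w3=1.0):
    # batch carries the sample text plus the supervision targets built in S61:
    # 0/1 beginning-sentence labels, 0/1 ending-sentence labels, and 0/1
    # span (pair) labels derived from the supervision text.
    p_start, p_end, p_span = model(batch["sample_text"])  # step S62

    # Step S63: binary cross entropy against the first case, the second case,
    # and the span labeling information, respectively.
    loss_start = F.binary_cross_entropy(p_start, batch["start_labels"])
    loss_end = F.binary_cross_entropy(p_end, batch["end_labels"])
    loss_span = F.binary_cross_entropy(p_span, batch["span_labels"])

    # Step S64: weighted model loss, then a gradient-descent update.
    model_loss = w1 * loss_start + w2 * loss_end + w3 * loss_span
    optimizer.zero_grad()
    model_loss.backward()
    optimizer.step()
    return model_loss.item()
```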
According to the above scheme, the first training process and the second training process follow the same training procedure, so that the AM stage and the APE stage can be jointly optimized in the same model, enhancing the recognition capability of the model and helping improve the overall extraction performance.
Referring to fig. 7, fig. 7 is a block diagram of an embodiment of an interactive argument pair extraction device 70 of the present application. The interactive argument pair extraction device 70 comprises a memory 71, a processor 72 and a display screen 73; the memory 71 and the display screen 73 are respectively coupled to the processor 72, the memory 71 stores program instructions, and the processor 72 is configured to execute the program instructions to implement the steps of any of the above embodiments of the interactive argument pair extraction method, so as to extract interactive argument pairs between two chapters. The display screen 73 is configured to provide a display interface, and the display interface includes: a first area and a second area for displaying different chapters respectively, and a third area for displaying the interactive argument pairs. Specifically, reference may be made to fig. 2 and the related description in the foregoing disclosed embodiments, which are not repeated here. In particular, the interactive argument pair extraction device 70 may include, but is not limited to: desktop computers, notebook computers, servers, mobile phones, tablet computers, etc., which are not limited here.
Specifically, the processor 72 is configured to control itself, the memory 71 and the display screen 73 to implement the steps of any of the above embodiments of the interactive argument pair extraction method. The processor 72 may also be referred to as a CPU (Central Processing Unit). The processor 72 may be an integrated circuit chip having signal processing capabilities. The processor 72 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 72 may be jointly implemented by a plurality of integrated circuit chips.
According to the above scheme, on the one hand, the first arguments in the first chapter are extracted first, and the second arguments are then extracted from the second chapter based on the first arguments; that is, interactive argument pair extraction is realized through two-stage machine reading comprehension, which helps outline the argument structure at the argument level and model the interaction between the two chapters, thereby improving the accuracy of interactive argument pair extraction. On the other hand, since the second stage performs the argument extraction with the first argument as the query argument together with the second chapter, interactive argument pairs can be extracted by combining argument-level information with the overall information of the chapters, which also improves the accuracy. Therefore, the accuracy of interactive argument pair extraction can be improved.
Referring to fig. 8, fig. 8 is a block diagram illustrating an embodiment of a computer-readable storage medium 80 according to the present application. The computer-readable storage medium 80 stores program instructions 81 executable by a processor, and the program instructions 81 are configured to implement the steps of any of the above embodiments of the interactive argument pair extraction method.
According to the above scheme, on the one hand, the first arguments in the first chapter are extracted first, and the second arguments are then extracted from the second chapter based on the first arguments; that is, interactive argument pair extraction is realized through two-stage machine reading comprehension, which helps outline the argument structure at the argument level and model the interaction between the two chapters, thereby improving the accuracy of interactive argument pair extraction. On the other hand, since in the second stage the argument extraction is performed with the first argument as the query argument together with the second chapter, interactive argument pairs can be extracted by combining argument-level information with the overall information of the chapters, which also improves the accuracy. Therefore, the accuracy of interactive argument pair extraction can be improved.
In some embodiments, the functions or modules of the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments; for their specific implementation, reference may be made to the descriptions of the above method embodiments, which, for brevity, are not repeated here.

The foregoing descriptions of the various embodiments tend to emphasize the differences between them; for the same or similar parts, the embodiments may be referred to one another, and for brevity, the details are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Where the technical solution of the present application involves personal information, a product applying this technical solution shall clearly state the personal information processing rules and obtain the individual's separate consent before processing the personal information. Where the technical solution involves sensitive personal information, a product applying it shall additionally obtain the individual's separate consent before processing such information and satisfy the requirement of "express consent". For example, at a personal information collection device such as a camera, a clear and prominent sign may be set up to indicate that the personal information collection range is being entered and that personal information will be collected; if an individual voluntarily enters the collection range, this is regarded as consent to the collection of his or her personal information. Alternatively, on a device that processes personal information, where the personal information processing rules are communicated through prominent signs or notices, personal authorization may be obtained through pop-up messages or by asking the individual to upload his or her personal information. The personal information processing rules may include information such as the personal information processor, the purpose of processing, the processing method, and the types of personal information processed.

Claims (10)

1. An interactive argument pair extraction method, comprising:
acquiring two chapters from which interactive argument pairs are to be extracted;
selecting, from the two chapters, the chapter on which the argument extraction is performed in a first stage as a first chapter, and taking the other chapter as a second chapter on which the argument extraction is performed in a second stage; wherein either one of the two chapters is selected as the first chapter, or the two chapters are successively selected as the first chapter in turn;
performing the argument extraction based on the first chapter to obtain a number of first arguments;
and respectively taking the first arguments as query arguments, and performing the argument extraction based on the query arguments and the second chapter to obtain second arguments forming the interactive argument pairs with the query arguments.
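A minimal sketch of the two-stage procedure of claim 1, assuming a single-stage extractor extract_arguments(query_text, chapter_text) such as the argument extraction model of claim 2; the function name and the PRESET_QUERY marker are hypothetical:

```python
from typing import Callable, List, Tuple

# Hypothetical marker standing in for the "preset text" of claim 2,
# indicating that the current extraction is in the first stage.
PRESET_QUERY = "[AM]"

def extract_interactive_pairs(
    first_chapter: str,
    second_chapter: str,
    extract_arguments: Callable[[str, str], List[str]],
) -> List[Tuple[str, str]]:
    """Two-stage extraction: first arguments from the first chapter, then,
    with each first argument as the query argument, second arguments from
    the second chapter."""
    pairs: List[Tuple[str, str]] = []
    # Stage one: argument extraction on the first chapter.
    first_arguments = extract_arguments(PRESET_QUERY, first_chapter)
    # Stage two: each first argument serves as the query argument.
    for query_argument in first_arguments:
        for second_argument in extract_arguments(query_argument, second_chapter):
            pairs.append((query_argument, second_argument))
    return pairs
```

Calling the function a second time with the two chapters swapped corresponds to selecting the two chapters successively as the first chapter.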
2. The method of claim 1, wherein the argument extraction is performed by an argument extraction model, the input of which comprises query text and chapter text;
when the first argument is extracted, the query text is a preset text and the chapter text is the first chapter, the preset text being used to indicate that the currently executed argument extraction is in the first stage; when the second argument is extracted, the query text is the first argument and the chapter text is the second chapter.
3. The method of claim 1, wherein said performing the argument extraction based on the first chapter to obtain a number of first arguments, or said performing the argument extraction based on the query arguments and the second chapter to obtain a second argument forming the interactive argument pair with the query arguments, comprises:
extracting semantic feature representations of each character in the query text and the chapter text based on the query text and the chapter text;
extracting context feature representations of each sentence text in the query text and the chapter text based on the semantic feature representations of the characters;
and predicting the argument text in the chapter text based on the context feature representations of the sentence texts;
wherein, when the second argument is extracted, the query text is the first argument, the chapter text is the second chapter, and the argument text is the second argument forming the interactive argument pair with the query text.
4. The method of claim 3, wherein the semantic feature representation of each character is obtained by performing Longformer encoding on a character sequence composed of the query text and the chapter text.
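A sketch of the encoding step of claim 4 using the Hugging Face Longformer implementation; the checkpoint name is an assumption, and for an English checkpoint the representations are per subword token rather than strictly per character:

```python
import torch
from transformers import LongformerModel, LongformerTokenizerFast

tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
encoder = LongformerModel.from_pretrained("allenai/longformer-base-4096")

def encode_characters(query_text: str, chapter_text: str) -> torch.Tensor:
    # Encode the sequence composed of the query text and the chapter text.
    inputs = tokenizer(query_text, chapter_text, return_tensors="pt",
                       truncation=True, max_length=4096)
    with torch.no_grad():
        outputs = encoder(**inputs)
    # One semantic feature vector per position of the combined sequence.
    return outputs.last_hidden_state.squeeze(0)
```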
5. The method of claim 3, wherein said extracting the context feature representation of each sentence text in the query text and the chapter text based on the semantic feature representation of each character comprises:
performing feature fusion based on the semantic feature representations of the characters in each sentence text to obtain a semantic feature representation of the sentence text;
and performing context feature extraction based on the semantic feature representations of the sentence texts in the query text and the chapter text to obtain the context feature representation of each sentence text.
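The two steps of claim 5 might be sketched as follows, with mean pooling as one possible feature fusion and a bidirectional LSTM as one possible context extractor; both choices, and all names, are assumptions:

```python
import torch
import torch.nn as nn

class SentenceContextEncoder(nn.Module):
    """Fuses character-level vectors into sentence vectors, then models
    cross-sentence context over the whole query text plus chapter text."""

    def __init__(self, hidden: int = 768):
        super().__init__()
        # One possible context extractor; the claim does not fix the architecture.
        self.context = nn.LSTM(hidden, hidden // 2,
                               bidirectional=True, batch_first=True)

    def forward(self, char_feats: torch.Tensor, sent_spans: list) -> torch.Tensor:
        # Feature fusion: mean-pool the characters belonging to each sentence text.
        sent_feats = torch.stack(
            [char_feats[start:end].mean(dim=0) for start, end in sent_spans])
        # Context feature extraction over the sentence sequence.
        ctx, _ = self.context(sent_feats.unsqueeze(0))
        return ctx.squeeze(0)  # (num_sentences, hidden)
```

Here sent_spans lists the (start, end) character offsets of each sentence text in the combined sequence.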
6. The method of claim 3, wherein said predicting the argument text in the chapter text based on the context feature representation of each sentence text comprises:
performing a first prediction based on the context feature representation of each sentence text to obtain, as a first text, a sentence text in the chapter text suspected of being a beginning sentence of the argument text, and performing a second prediction based on the context feature representation of each sentence text to obtain, as a second text, a sentence text in the chapter text suspected of being an ending sentence of the argument text;
performing a third prediction based on the context feature representation of the first text and the context feature representation of the second text to obtain a probability value that the first text as the beginning sentence and the second text as the ending sentence constitute the argument text;
and in response to the probability value being not lower than a preset threshold, extracting the argument text from the chapter text based on the first text and the second text.
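A sketch of the first, second, and third predictions of claim 6; the sigmoid scoring heads, the shared threshold, and the simple concatenation for pair scoring are assumptions for illustration:

```python
import torch
import torch.nn as nn

class SpanPredictor(nn.Module):
    """First, second, and third predictions over sentence context features."""

    def __init__(self, hidden: int = 768, threshold: float = 0.5):
        super().__init__()
        self.start_head = nn.Linear(hidden, 1)     # first prediction
        self.end_head = nn.Linear(hidden, 1)       # second prediction
        self.pair_head = nn.Linear(2 * hidden, 1)  # third prediction
        self.threshold = threshold

    def forward(self, ctx: torch.Tensor) -> list:
        start_p = torch.sigmoid(self.start_head(ctx)).squeeze(-1)
        end_p = torch.sigmoid(self.end_head(ctx)).squeeze(-1)
        spans = []
        # Pair each suspected beginning sentence with each later suspected
        # ending sentence and keep pairs whose probability reaches the threshold.
        for i in (start_p > self.threshold).nonzero().flatten().tolist():
            for j in (end_p > self.threshold).nonzero().flatten().tolist():
                if j < i:
                    continue
                p = torch.sigmoid(self.pair_head(torch.cat([ctx[i], ctx[j]])))
                if p.item() >= self.threshold:
                    spans.append((i, j, p.item()))
        return spans
```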
7. The method of claim 1, wherein the argument extraction is performed by an argument extraction model, the argument extraction model being trained through a first training process and a second training process based on sample chapters; wherein the sample chapters are labeled with sample arguments, pairs of sample chapters are further labeled with correspondences between their sample arguments, and two sample arguments having such a correspondence constitute an interactive argument pair;
in the first training process, a character sequence composed of a preset text and a sample chapter is used as the sample text input for training the argument extraction model, and the sample arguments labeled in the sample chapter are used as the supervised text for training; in the second training process, a character sequence composed of a reference argument and the second sample chapter of a pair of sample chapters is used as the sample text input for training, and the sample argument in the second sample chapter having the correspondence with the reference argument is used as the supervised text, the reference argument being the sample argument in the sample text, namely a sample argument labeled in the first sample chapter of the pair.
8. The method of claim 7, wherein the first training process or the second training process comprises:
obtaining, based on the supervised text, a first case of whether each sample sentence serves as a beginning sentence and a second case of whether each sample sentence serves as an ending sentence;
performing the argument extraction on the sample text based on the argument extraction model to obtain a first prediction probability of each sample sentence of the sample chapter in the sample text serving as a beginning sentence, a second prediction probability of each sample sentence serving as an ending sentence, and a third prediction probability of a first sample text serving as the beginning sentence and a second sample text serving as the ending sentence together constituting the sample argument; wherein a sample sentence whose first prediction probability satisfies a first condition is taken as the first sample text, and a sample sentence whose second prediction probability satisfies a second condition is taken as the second sample text;
measuring a first prediction loss based on the first prediction probability and the first case, measuring a second prediction loss based on the second prediction probability and the second case, and measuring a third prediction loss based on the third prediction probability and the supervised text;
adjusting network parameters of the argument extraction model based on the first prediction loss, the second prediction loss, and the third prediction loss.
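To illustrate claim 8, the first case and the second case might be materialized as 0/1 label vectors derived from the supervised text, roughly as follows; the span-based labeling interface and the model interface in the trailing comment are assumptions:

```python
import torch

def build_case_labels(num_sents: int, argument_spans: list) -> tuple:
    """Derive the first case (which sample sentences are beginning sentences)
    and the second case (which are ending sentences) from the supervised text,
    where argument_spans holds the (start, end) sentence indices of each
    labeled sample argument."""
    start_labels = torch.zeros(num_sents)
    end_labels = torch.zeros(num_sents)
    for start_idx, end_idx in argument_spans:
        start_labels[start_idx] = 1.0  # first case: a labeled beginning sentence
        end_labels[end_idx] = 1.0      # second case: a labeled ending sentence
    return start_labels, end_labels

# Assuming the model returns the prediction probabilities of claim 8, one
# training step could then combine the three losses as sketched earlier:
#   start_p, end_p = model.predict_boundaries(sample_text)
#   l1 = F.binary_cross_entropy(start_p, start_labels)
#   l2 = F.binary_cross_entropy(end_p, end_labels)
#   l3 = ...  # third prediction loss over candidate (beginning, ending) pairs
#   (l1 + l2 + l3).backward(); optimizer.step(); optimizer.zero_grad()
```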
9. An interactive argument pair extraction device, comprising a display screen, a memory and a processor, wherein the display screen and the memory are respectively coupled to the processor, the memory stores program instructions, and the processor is configured to execute the program instructions to implement the interactive argument pair extraction method of any one of claims 1 to 8 so as to extract interactive argument pairs between two chapters; the display screen is configured to provide a display interface, and the display interface comprises: a first area and a second area for respectively displaying different chapters, and a third area for displaying the interactive argument pairs.
10. A computer-readable storage medium having stored thereon program instructions executable by a processor, the program instructions being configured to implement the interactive argument pair extraction method of any one of claims 1 to 8.

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination