CN115146073A - Test question knowledge point marking method for cross-space semantic knowledge injection and application - Google Patents


Publication number: CN115146073A
Authority: CN (China)
Prior art keywords: semantic, knowledge, knowledge point, feature, interaction
Legal status: Granted
Application number: CN202210797599.0A
Other languages: Chinese (zh)
Other versions: CN115146073B (en)
Inventors: 刘海, 张昭理, 石佛波, 朱俊艳, 宋云霄, 李家豪, 刘婷婷, 杨兵
Current Assignee: Hubei University; Central China Normal University
Original Assignee: Hubei University; Central China Normal University
Application filed by Hubei University and Central China Normal University
Priority to CN202210797599.0A
Publication of CN115146073A; application granted; publication of CN115146073B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval of unstructured textual data
    • G06F16/36: Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367: Ontology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/20: Natural language analysis
    • G06F40/279: Recognition of textual entities
    • G06F40/289: Phrasal analysis, e.g. finite state techniques or chunking
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/30: Semantic analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/20: Education


Abstract

The application discloses a test question knowledge point labeling method with cross-space semantic knowledge injection, and its application. The method comprises the following steps: constructing a knowledge point set; labeling knowledge points in test question samples and constructing triple samples; converting the entities and relations in the triple samples into word vectors; and inputting the word vectors of the triple samples into an automatic knowledge point annotator for training. The automatic knowledge point annotator comprises a semantic extraction module, a feature interaction module, and a feature extraction and prediction module. The semantic extraction module obtains semantic features of the triple samples in different semantic spaces; the feature interaction module obtains, from these semantic features, a first interaction feature between the head entity and the relation and a second interaction feature between the relation and the tail entity after feature interaction; and the feature extraction and prediction module extracts features from the first and second interaction features and labels the knowledge points. The invention can improve the accuracy of knowledge point labeling.

Description

Test question knowledge point marking method for cross-space semantic knowledge injection and application
Technical Field
The application relates to the technical field of text processing, in particular to a test question knowledge point marking method for cross-space semantic knowledge injection and application.
Background
In recent years, knowledge graphs have played an important role in many artificial intelligence tasks, such as word similarity calculation, word sense disambiguation, entity disambiguation, semantic parsing, text classification, topic indexing, document summarization, document ranking, information extraction, and question answering. However, in the field of automatic text labeling, two major challenges remain: data sparsity and computational inefficiency. Existing knowledge construction and application methods usually store relational facts as one-hot representations of entities and relations, which cannot carry rich semantic information: in essence, every entity or relation is mapped to an index. Such a representation can be stored very efficiently, but it embeds no semantic aspects of entities and relations; for example, it cannot capture the similarity of "banana" and "watermelon" as fruits. In addition, these works rely on complex, specially designed features extracted from external information sources or from the network structure of the knowledge graph. As the number of real-world entities grows, such approaches often suffer from computational inefficiency and a lack of scalability.
Disclosure of Invention
Aiming at at least one defect or improvement need in the prior art, the invention provides a test question knowledge point labeling method with cross-space semantic knowledge injection, and its application, so as to improve the accuracy of knowledge point labeling.
To achieve the above object, according to a first aspect of the present invention, there is provided a method for labeling test question knowledge points injected by cross-space semantic knowledge, comprising:
constructing a knowledge point set, wherein each knowledge point in the knowledge point set comprises a plurality of attribute information;
marking knowledge points in a test question sample, respectively taking the test question and the knowledge points as head and tail entities, and taking the relation between the test question and the knowledge points as the relation between the head and tail entities to construct a triple sample;
converting the test questions, the relations and the knowledge points in the triple samples into word vectors by using the attribute information of the knowledge points in the knowledge point set;
the method comprises the steps of inputting word vectors of triple samples into a knowledge point automatic labeling device for training, wherein the knowledge point automatic labeling device comprises a semantic extraction module, a feature interaction module and a feature extraction and prediction module, the semantic extraction module is used for obtaining semantic features of the triple samples in different semantic spaces, the feature interaction module is used for obtaining first interaction features among head entities and relations and second interaction features among tail entities after feature interaction according to the semantic features, and the feature extraction and prediction module is used for extracting features according to the first interaction features and the second interaction features and labeling knowledge points.
Furthermore, each knowledge point in the knowledge point set comprises knowledge point definition description attribute information, learning stage attribute information to which the knowledge point belongs, difficulty level attribute information of the knowledge point, occurrence frequency attribute information of the knowledge point, type attribute information of the knowledge point and examination frequency attribute information of the knowledge point.
Further, constructing the triple samples comprises:
constructing triple positive samples from the test question samples, the knowledge points labeled in the test question samples, and their corresponding relations; randomly replacing the head or tail entities in the triple positive samples to generate triple negative samples; removing the pseudo-negative triples; and forming the triple sample set from the triple positive samples and the retained triple negative samples.
Further, converting the test questions, relations and knowledge points in the triple samples into word vectors comprises:
performing word segmentation on the test questions, relations and knowledge points in the triple samples and converting each token into a word vector, where the conversion formula is

E_i = E_token + E_segment + E_position

where E_i is the word vector of the i-th token, E_token is the initial embedding of the token, E_segment indicates whether the token belongs to an entity or a relation, and E_position is the position information of the token.
Further, the semantic extraction module comprises multiple stacked semantic understanding blocks, each of which includes a multi-semantic attention layer, a normalization layer and a source information cue layer; the multi-semantic attention layer outputs the semantic features of each input word vector in different semantic spaces, the normalization layer normalizes the output of the multi-semantic attention layer, and the source information cue layer combines the output of the multi-semantic attention layer with the output of the normalization layer to form its own output.
Further, the feature interaction module comprises a feature reshaping module and an interaction module; the feature reshaping module randomly permutes the input semantic features from the different semantic spaces to obtain reshaped semantic features, and the interaction module outputs the first interaction feature and the second interaction feature from the reshaped semantic features.
Further, the formulas for computing the first and second interaction features from the reshaped semantic features are:

E_hr = vec([h_i ; r_i] * w_hr)

E_rt = vec([r_i ; t_i] * w_rt)

where h_i, t_i and r_i denote the head entity vector, tail entity vector and relation vector in the i-th semantic space after feature reshaping, * denotes the convolution operation, w_hr is the convolution kernel parameter for the head entity-relation interaction, w_rt is the convolution kernel parameter for the relation-tail entity interaction, vec(·) is the vectorization operation, E_hr is the first interaction feature between the head entity and the relation, and E_rt is the second interaction feature between the relation and the tail entity.
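As a rough pure-Python sketch of this convolutional interaction, two vectors can be concatenated, a small 1-D kernel slid over the concatenation, and the resulting feature map flattened. The concatenation layout and kernel values are illustrative assumptions, not taken from the patent:

```python
def interact(a, b, kernel):
    """Concatenate two vectors, slide a 1-D convolution kernel over the
    concatenation, and flatten (vec) the resulting feature map -- mirroring
    E_hr = vec([h ; r] * w_hr) and E_rt = vec([r ; t] * w_rt)."""
    x = list(a) + list(b)                     # [a ; b] concatenation
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]   # "valid" 1-D convolution

h, r, t = [1.0, 0.0, 2.0], [0.0, 1.0, 1.0], [2.0, 1.0, 0.0]
w_hr = [1.0, -1.0]           # illustrative head-relation kernel parameters
w_rt = [0.5, 0.5]            # illustrative relation-tail kernel parameters
E_hr = interact(h, r, w_hr)  # first interaction feature
E_rt = interact(r, t, w_rt)  # second interaction feature
```

In practice each interaction has its own learned kernel; here the two kernels are fixed toy values.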
According to a second aspect of the present invention, there is also provided a test question knowledge point labeling system for cross-space semantic knowledge injection, comprising:
the knowledge point set building module is used for building a knowledge point set, and each knowledge point in the knowledge point set comprises a plurality of attribute information;
the triple sample construction module is used for marking knowledge points in the test question sample, respectively taking the test questions and the knowledge points as head and tail entities, and taking the relation between the test questions and the knowledge points as the relation between the head and tail entities to construct a triple sample;
the word vector conversion module is used for converting the test questions and the relation in the triple samples into word vectors and converting the knowledge points in the triple samples into the word vectors by using the attribute information of the knowledge points in the knowledge point set;
the knowledge point automatic marker comprises a semantic extraction module, a feature interaction module and a feature extraction and prediction module, wherein the semantic extraction module is used for acquiring semantic features of the triple samples in different semantic spaces, the feature interaction module is used for acquiring first interaction features among head entities and relations after feature interaction, second interaction features among relations and tail entities according to the semantic features, and the feature extraction and prediction module is used for extracting features according to the first interaction features and the second interaction features and marking the knowledge points.
According to a third aspect of the present invention, there is also provided an electronic device comprising at least one processor, and at least one memory module, wherein the memory module stores a computer program that, when executed by the processor, causes the processor to perform the steps of any of the methods described above.
According to a fourth aspect of the present invention, there is also provided a storage medium storing a computer program executable by a processor, the computer program, when run on the processor, causing the processor to perform the steps of any of the methods described above.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
(1) The invention provides a cross-space knowledge representation method, which takes test questions and knowledge point representation as entity representation, decouples the entity representation into independent vectors in a plurality of semantic spaces, and extracts multi-level implicit knowledge in different semantic spaces, thereby improving the expression capability of a model and the accuracy of knowledge point marking.
(2) The semantic knowledge and the test question knowledge are fused to carry out deep understanding on the dependency relationship between the test questions and the knowledge points, and the problem of data sparsity in the text labeling problem is effectively solved.
(3) A relation perception information aggregation mechanism is introduced into the multi-space and multi-scale convolution of the knowledge graph, so that different information is aggregated to the test questions and each component of the knowledge points and applied to intelligent labeling of the text test questions.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required to be used in the embodiments will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a cross-space semantic knowledge injection test question knowledge point labeling method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a method for labeling test question knowledge points for cross-space semantic knowledge injection according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the conversion of triples into word vectors according to an embodiment of the present invention;
fig. 4 is a network diagram of an automatic knowledge point annotator according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The terms "first," "second," and the like in the description and claims of the present application and in the foregoing drawings are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to the listed steps or modules but may alternatively include other steps or modules not listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1 and fig. 2, a method for labeling test question knowledge points by injecting cross-space semantic knowledge according to an embodiment of the present invention includes steps 1 to 4.
Step 1, a knowledge point set is constructed, and each knowledge point in the knowledge point set comprises a plurality of attribute information.
Specifically, the Ministry of Education's primary school mathematics examination syllabus and curriculum-standard textbooks are obtained, together with primary school mathematics test questions; by sorting and summarizing these, a primary school mathematics knowledge point set is obtained. The knowledge points in the set are numbered, e.g.: knowledge point 1, knowledge point 2, ..., knowledge point n.
Attribute information is marked for each knowledge point in the set of knowledge points.
Furthermore, each knowledge point in the knowledge point set comprises knowledge point definition description attribute information, learning stage attribute information to which the knowledge point belongs, difficulty level attribute information of the knowledge point, occurrence frequency attribute information of the knowledge point, type attribute information of the knowledge point and examination frequency attribute information of the knowledge point.
Each knowledge point in the knowledge point information base serves as a main attribute node, and the attribute values of its attribute information are defined as follows:
the definition of the knowledge point, describing its semantic information;
the learning stage to which the knowledge point belongs (primary school grades 1-6);
the difficulty of the knowledge point ("easy", "medium", "difficult");
the occurrence frequency of the knowledge point ("low", "medium", "high");
the type of the knowledge point ("algebra", "geometry");
the examination frequency of the knowledge point ("low", "medium", "high").
integrating the above attribute information of each knowledge point in the knowledge point set, such as { ' common score of the score ', ' process of converting several different denominators into scores (formulas) of the same denominators which are equal to the original scores (formulas), which are called common score, ' fifth grade ', ' easy ', ' high ', ' algebraic and ' high ', ' and making into knowledge point attribute information supplement of the primary school mathematics trial question knowledge point information base.
Step 2: knowledge points are labeled in the test question samples; the test question and the knowledge point are taken as head and tail entities respectively, and the relation between the test question and the knowledge point as the relation between the head and tail entities, to construct the triple samples.
Test question and knowledge point information is extracted, the knowledge points corresponding to each test question are labeled, and a triple sample set of primary school mathematics test questions and their corresponding knowledge points is constructed.
Further, constructing the triple samples comprises: constructing triple positive samples from the test question samples, the knowledge points labeled in them, and the corresponding relations; randomly replacing the head or tail entities of the positive samples to generate triple negative samples; removing pseudo-negative triples; and forming the triple sample set from the positive samples and the retained negative samples.
Generating negative triples (h^-, r, t^-) ∈ S^- by randomly replacing head and tail entities is likely to occasionally produce true triples, which would strongly affect the training process. Therefore, the pseudo-negative triple samples must be removed, i.e., the false negatives in the negative triple sample set are filtered out using the positive triple sample set.

The negative triple set S^- is constructed as

S^- = ∪_{(h,r,t)∈S} {(h^-, r, t)} ∪ {(h, r^-, t)} ∪ {(h, r, t^-)}

and provides a large number of negative samples for model training.
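The head/tail corruption with pseudo-negative filtering can be sketched as follows (entity names, the sampling policy and the 50/50 head-vs-tail choice are illustrative; relation corruption from the formula above is omitted for brevity):

```python
import random

def build_negative_samples(positives, entities, n_per_pos=1, seed=0):
    """For each positive triple (h, r, t), corrupt its head or tail with a
    random entity; drop candidates that are themselves true triples
    (pseudo-negatives), as required before training."""
    rng = random.Random(seed)
    positive_set = set(positives)
    negatives = []
    for h, r, t in positives:
        made = 0
        while made < n_per_pos:
            e = rng.choice(entities)
            corrupt_head = rng.random() < 0.5
            cand = (e, r, t) if corrupt_head else (h, r, e)
            if cand in positive_set:   # pseudo-negative: a real fact, remove it
                continue
            negatives.append(cand)
            made += 1
    return negatives
```

The filter `cand in positive_set` is exactly the "remove false negatives using the positive sample set" step.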
Step 3: the test questions and relations in the triple samples are converted into word vectors, and the knowledge points in the triple samples are converted into word vectors using the attribute information of the knowledge points in the knowledge point set.
The serialized text information of the knowledge point information base and the test question-knowledge point knowledge base constructed in steps 1 and 2 is used as the input of the trained word vector converter module; its output is word vector representations of the test questions and knowledge points carrying shallow semantics.
For the triples obtained in step 2, the knowledge point entities carry no auxiliary information to help the machine understand what each knowledge point involves; the knowledge point set from step 1 can therefore be combined, using the attribute information of each knowledge point, to help the machine understand it better. As shown in fig. 3, taking the test question as the head entity and the knowledge point as the tail entity, converting the test questions, relations and knowledge points in the triple samples into word vectors using the attribute information of the knowledge points means: for the test questions and relations in the triple samples, several tokens are obtained through word segmentation and used as input tokens for obtaining the embedded information of the test questions and relations; for the knowledge point tail entity, the entity and its corresponding attribute information are segmented together, yielding tokens that cover the knowledge point and its attributes, which are used as input tokens for obtaining the knowledge point embedded information.
At the input layer, the input E_i of the i-th token comprises segment information, position information, and the token's initial embedding, fused as

E_i = E_token + E_segment + E_position

where E_i can be regarded as a position-aware embedded representation of the i-th token, E_token is the initial embedding of the token, E_segment is the segment embedding distinguishing whether the token belongs to an entity or a relation, and E_position is the position information of the token.
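The embedding fusion above can be sketched in a few lines; the three inputs are assumed to be equal-length numeric vectors:

```python
def token_input_embedding(e_token, e_segment, e_position):
    """E_i = E_token + E_segment + E_position: element-wise sum of a token's
    initial embedding, its segment embedding (entity vs. relation) and its
    position embedding."""
    return [tok + seg + pos
            for tok, seg, pos in zip(e_token, e_segment, e_position)]
```

In a real model each of the three vectors would be looked up from a learned embedding table; here they are passed in directly.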
Step 4: the word vectors of the triple samples are input into the automatic knowledge point annotator for training. The automatic knowledge point annotator comprises a semantic extraction module, a feature interaction module and a feature extraction and prediction module; the semantic extraction module obtains semantic features of the triple samples in different semantic spaces, the feature interaction module obtains, from the semantic features, a first interaction feature between the head entity and the relation and a second interaction feature between the relation and the tail entity after feature interaction, and the feature extraction and prediction module extracts features from the first and second interaction features and labels the knowledge points.
The automatic knowledge point annotator is trained by learning the content constructed in steps 1 and 2; the vectors output in step 3 serve as its initial input.
The main function of the semantic extraction module is to extract knowledge of the test questions and knowledge points in multiple semantic spaces and, through the source information cue, to prevent meta-information from being forgotten during learning, thereby understanding the latent multi-space semantics of the test questions and knowledge points.
The feature interaction module takes the entity output containing multi-space semantics, obtained after the stacked semantic understanding blocks, as the input of the feature reshaping module.
The feature extraction and prediction module comprises a knowledge point feature extraction module and a prediction module. The basic structure of the knowledge point feature extraction module is two fully connected layers, which extract the subordination information between the test questions and the knowledge points after interaction.
Furthermore, the semantic extraction module comprises multiple stacked semantic understanding blocks. Each semantic understanding block includes a Multi-Semantic Attention Layer, a normalization layer (Layer Normalization) and a source information cue layer; the multi-semantic attention layer outputs the semantic features of each input word vector in different semantic spaces, the normalization layer normalizes the output of the multi-semantic attention layer, and the source information cue layer cues both the initial information and the information obtained after passing through the semantic extraction module.
Further, the feature interaction module comprises a feature reshaping module and an interaction module; the feature reshaping module randomly permutes the input semantic features from the different semantic spaces to obtain reshaped semantic features, and the interaction module outputs the first interaction feature and the second interaction feature from the reshaped semantic features.
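A small sketch of the reshaping step, assuming "randomize and shuffle" means a seeded random permutation of the per-semantic-space feature vectors (an interpretation, not a detail stated in the patent):

```python
import random

def reshape_features(semantic_feats, seed=0):
    """Randomly permute the list of per-semantic-space feature vectors and
    return the reshaped features together with the permutation used."""
    rng = random.Random(seed)
    perm = list(range(len(semantic_feats)))
    rng.shuffle(perm)
    return [semantic_feats[i] for i in perm], perm
```

The permutation is returned so a caller could undo or log the reordering; the vectors themselves are untouched, only their order across semantic spaces changes.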
Further, the method further comprises step 5: the test question to be labeled is combined with each candidate knowledge point to form different test question-knowledge point triples, which are used as input to automatically label the primary school mathematics test question; all knowledge points possibly contained in the test question are identified by judging whether the score of each triple is greater than a threshold f, and finally the knowledge points contained in the test question are output.
Step 5.1: the test question and knowledge point information is digitized; the semantic descriptions of the test question and the knowledge points are encoded and knowledge is extracted, yielding a test question vector containing the question's shallow semantic information and knowledge point vectors containing each knowledge point's shallow semantic information; the vectors representing the test question and knowledge point information are stored as numerical matrices;
step 5.2: test question vector t of the test question a obtained in the step 5.1 a Knowledge point vector k of knowledge point b b Constructing a triad (t) composed of test question a, inclusion relation r and knowledge point b a 、r、k b ) The specific formula is as follows;
input triplet (t) a 、r、k b ) Represents the head entity t in the triplet a Is a vectorized representation of the textual description of the question, the tail entity k b Is a vectorized representation of the knowledge points to be detected, with the trained parameters W 0 The semantic extraction module performs interactive operation in a semantic extraction network to extract deep semantic information in test questions and knowledge points, and the specific process is as follows:
the token, position and segment embedding information mode of the fusion entity is as follows: e i =E token + E segment +E position
E_i, which fuses this multivariate information, is input into the multi-semantic attention layer, and multi-semantic-space feature extraction is performed on the interaction matrix of the test question and the knowledge point, so that the representations learned for different parts of an entity can express different meanings. Specifically, for an entity E_i = [h_{i,1}, h_{i,2}, ..., h_{i,K}], its semantic representation is assumed to be jointly determined by K independent semantic spaces, one of which is computed as

h_{i,k} = σ(w^(k) x_i)

where h_{i,k} is the semantic representation of the entity in the k-th semantic space, σ is a nonlinear activation function, W_0 = {w^(1), w^(2), ..., w^(K)}, and w^(k) is the parameter of the k-th semantic space. To make h_{i,k} depict the k-th semantic aspect of entity E_i, the feature x_i is first projected into different latent spaces so that different semantics are extracted from each component of the node feature; the initial embeddings h_{i,1}, ..., h_{i,K} are obtained from x_i through K different projection matrices, where x_i is the feature of entity E_i.
After feature interaction, a relation-aware head entity representation E_hr and a relation-aware tail entity embedding E_rt are obtained; the relation-aware head entity and tail entity embeddings are used as the input of the knowledge point feature extraction module, whose calculation formula is:
cos(E_hr, E_rt) = (max(0, E_hr · W_1 + b_1) W_2 + b_2) E_rt
The knowledge point feature extraction module can be subdivided into two layers: the first layer is a linear activation layer and the second layer is a nonlinear activation layer whose activation function is ReLU. W_1 is the parameter of the linear activation layer, W_2 is the parameter of the nonlinear activation layer, b = [b_1, b_2] is the model bias, and cos(·) is the function for calculating cosine similarity. The knowledge point feature extraction module stores the features of the interaction between the knowledge points and the test questions.
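The two-layer extractor formula above can be sketched numerically as follows; the max(0, ·) is the ReLU layer, the second matrix multiplication is the linear layer, and the result interacts with the tail-entity embedding by a dot product. All parameter shapes here are illustrative assumptions:

```python
import numpy as np

def knowledge_point_score(e_hr, e_rt, W1, b1, W2, b2):
    """Sketch of cos(E_hr, E_rt) = (max(0, E_hr·W1 + b1)·W2 + b2)·E_rt:
    a ReLU layer followed by a linear layer, combined with the
    tail-entity embedding by a dot product."""
    hidden = np.maximum(0.0, e_hr @ W1 + b1)   # nonlinear (ReLU) layer
    feat = hidden @ W2 + b2                    # linear layer
    return float(feat @ e_rt)                  # interaction with tail entity

rng = np.random.default_rng(0)
d = 8
e_hr, e_rt = rng.normal(size=d), rng.normal(size=d)
W1, b1 = rng.normal(size=(d, d)), np.zeros(d)
W2, b2 = rng.normal(size=(d, d)), np.zeros(d)
score = knowledge_point_score(e_hr, e_rt, W1, b1, W2, b2)
```

In this reading, the "annotation knowledge" lives entirely in W_1, W_2, b_1 and b_2, which is consistent with step 3.3 of the training description.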
Step 5.3: the triple (t_a, r, k_b) obtained in step 5.2 is used as the input of the automatic test question knowledge point annotator; the deep semantics and knowledge point features of test question a and knowledge point b are obtained through the semantic extraction module, and the score f_τ1 of the test question a-knowledge point b triple is obtained through the dynamic scoring module;
Step 5.4: if f_τ1 is greater than the threshold value μ = 0.8, the triple is considered to belong to the test question-knowledge point knowledge map, namely test question a is considered to contain knowledge point b;
and step 5.5: for the triple (t_a, r, k_b), the tail entity k_b is replaced by k_c to obtain the triple (t_a, r, k_c), which is used to judge whether test question a contains knowledge point c; whether this triple belongs to the test question-knowledge point knowledge map can be judged according to steps 5.1, 5.2, 5.3 and 5.4, namely whether the test question a to be labeled contains knowledge point c is judged;
step 5.6: step 5.5 is repeated to traverse all knowledge points in the knowledge point information base, find all primary school mathematics knowledge points contained in test question a, and update the information relationship between the test questions and the corresponding knowledge points in the primary school mathematics test question-knowledge point knowledge base;
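Steps 5.3 to 5.6 amount to a loop over candidate knowledge points with a score threshold. The sketch below assumes a generic `score_fn` standing in for the trained annotator (semantic extraction plus dynamic scoring); the cosine-similarity toy scorer and the vectors are illustrative only:

```python
import numpy as np

def label_question(question_vec, knowledge_points, score_fn, mu=0.8):
    """Traverse all knowledge points (steps 5.3-5.6): form a triple with each
    candidate point, score it, and keep the points whose score exceeds mu."""
    labels = []
    for name, kp_vec in knowledge_points.items():
        if score_fn(question_vec, kp_vec) > mu:
            labels.append(name)
    return labels

# toy scorer: cosine similarity between question and knowledge point vectors
def toy_score(t, k):
    return float(np.dot(t, k) / (np.linalg.norm(t) * np.linalg.norm(k)))

q = np.array([1.0, 0.0])
kps = {"fractions": np.array([0.9, 0.1]), "geometry": np.array([0.0, 1.0])}
found = label_question(q, kps, toy_score, mu=0.8)
```

With these toy vectors only "fractions" clears the μ = 0.8 threshold, so `found` contains that single knowledge point.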
the structure of the knowledge point automatic annotator is shown in fig. 4, and the training process of the knowledge point automatic annotator is specifically described below.
Step 1: multi-space semantic learning, namely embedding test questions and knowledge points into multiple semantic spaces to extract knowledge information. The input of multi-space semantic learning is the embedded representation of the triple (t_a, r, k_b); the head entity t_a is a vectorized representation of the textual description of the test question and the tail entity k_b is a vectorized representation of the knowledge point to be detected. The semantic extraction module performs interactive operations in the semantic extraction network to extract the deep semantic information in the test question and the knowledge point, and the specific process is as follows:
step 1.1: multi-Semantic Attention layer, the primary purpose of which is to extract inputThe interactive information of the test question and the knowledge points in different semantic subspaces is found through an attention mechanism, the most probable knowledge points contained in the test question are found, and the input is Embedding (t) a 、r、k b )=[x 1 ,x 2 ,...,x K ]Output is [ h ] i,1 ,h i,2 ,...h i,K ]: the concrete formula is as follows;
h i,K =σ(w (k) ·x i )
Figure BDA0003736307510000111
wherein h is i,1 ~h i,K Is a semantic representation of the entity in a different semantic space, σ is a non-linear activation function, W 0 ={w (1) ,w (2) ,...,w (k) },w (k) For the parameter in the Kth semantic space, for h i,K Depicting entity E i The semantics of the kth aspect of (1), first, feature x i Projected into different potential spaces to extract different semantics from each component of the node feature. Initialization embedding
Figure BDA0003736307510000112
From x i Obtained by K different projection matrices, x i Is entity E i The characteristics of (1).
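The projection into K semantic spaces described in step 1.1 can be sketched as applying K projection matrices followed by a non-linearity; tanh as the activation σ and the matrix shapes are assumptions for illustration:

```python
import numpy as np

def multi_semantic(x, projections):
    """Project feature x into K latent spaces and apply a non-linearity,
    h_{i,k} = sigma(w^{(k)} · x), giving one representation per
    semantic space (sketch; sigma = tanh here)."""
    return [np.tanh(W @ x) for W in projections]

rng = np.random.default_rng(1)
d, K = 6, 4
x = rng.normal(size=d)
W0 = [rng.normal(size=(d, d)) for _ in range(K)]   # W_0 = {w^(1), ..., w^(K)}
h = multi_semantic(x, W0)   # K per-space representations of one entity
```

Each element of `h` plays the role of one h_{i,k}, so downstream layers can treat the K representations independently.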
Step 1.2: Layer Normalization layer, which normalizes the input data to prevent its magnitude from growing ever larger. The input is [h_{i,1}, h_{i,2}, ..., h_{i,K}] and the output is LN([h_{i,1}, h_{i,2}, ..., h_{i,K}]).
Step 1.3: and the source information prompt layer is used for carrying out prompt operation on the initial information and the information after passing through the semantic extraction module, so that gradient explosion is avoided during training. The input is LN ([ h ] i,1 ,h i,2 ,...h i,K ]) The output is:
Figure RE-GDA0003833468460000113
step 1.4: steps 1.1 to 1.3 are collectively referred to as a semantic understanding block, and l semantic understanding blocks are stacked; taking one semantic space as an example, the output of layer l is obtained by applying the block to the output of layer l-1:
H^(l) = SemanticBlock(H^(l-1))
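Steps 1.1 to 1.4 can be sketched as a stack of blocks, each doing an attention-style projection, layer normalization, and a combination of the projection output with its normalized version (the source information cue, per claim 5). The tanh activation and weight shapes are assumptions:

```python
import numpy as np

def layer_norm(h, eps=1e-5):
    """Normalize over the feature dimension (the Layer Normalization layer)."""
    mu, var = h.mean(-1, keepdims=True), h.var(-1, keepdims=True)
    return (h - mu) / np.sqrt(var + eps)

def semantic_block(h, W):
    """One semantic understanding block: attention-style projection, then
    the source-information cue combines the projection output with its
    layer-normalized version, keeping the source information."""
    a = np.tanh(h @ W)          # multi-semantic attention (sketch)
    return a + layer_norm(a)    # source information cue layer

def stacked_blocks(h, weights):
    """Stack l semantic understanding blocks (step 1.4)."""
    for W in weights:
        h = semantic_block(h, W)
    return h

rng = np.random.default_rng(2)
d, layers = 5, 3
h0 = rng.normal(size=d)
Ws = [rng.normal(size=(d, d)) for _ in range(layers)]
h_out = stacked_blocks(h0, Ws)
```

The additive combination acts like a residual path, which is one standard way such a "cue" connection keeps gradients well behaved in deep stacks.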
According to the scheme, the output obtained by deep semantic learning through the semantic understanding network is used as the input of the test question-knowledge point semantic feature interaction module to learn the deep semantic representation between the knowledge point and the question. The training steps of the test question-knowledge point semantic feature interaction module are as follows:
step 2: the semantic feature interaction module comprises a feature remodeling part and a feature interaction part, more interaction information is obtained through the feature remodeling, and the dependency relationship between the knowledge points and the test questions is judged through the semantic feature interaction.
Step 2.1: feature reshaping. Feature reshaping randomly scrambles the embeddings of entities and relations; the initialization embedding and feature reshaping process of the relation vector r_i is the same as that of the entity vectors. The purpose of feature reshaping is to create more feature interaction between the test question and the knowledge point; one specific way to randomly scramble the entity and relation embeddings is to randomly exchange the positions of the representations of the K semantic spaces.
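The position-exchange reading of feature reshaping can be sketched as a random permutation of the K per-space representations; the shapes are toy assumptions:

```python
import numpy as np

def feature_reshape(reps, rng):
    """Randomly exchange the positions of the K per-space representations
    (the feature-reshaping step), so that the later convolutional
    interaction mixes features across semantic spaces."""
    reps = list(reps)
    perm = rng.permutation(len(reps))
    return [reps[i] for i in perm]

rng = np.random.default_rng(3)
K, d = 4, 3
head = [np.full(d, float(k)) for k in range(K)]   # K space representations
head_shuffled = feature_reshape(head, rng)
```

Note the permutation reorders the representations without altering their contents, so no information is lost, only recombined.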
step 2.2, feature interaction process: after feature reshaping, the head entity vector h_i, the tail entity vector t_i and the relation vector r_i in the i-th semantic space are used as the input of feature interaction. The formulas for the head entity-relation and tail entity-relation feature interaction are as follows:
E_hr = vec(conv({h_i, r_i}; w_hr))
E_rt = vec(conv({r_i, t_i}; w_rt))
where conv({·}) represents a convolution operation, w_hr is the convolution kernel parameter for the head entity-relation interaction, w_rt is the convolution kernel for the tail entity-relation interaction, vec(·) represents a vectorization operation, and E_hr and E_rt are the interaction features corresponding to the i-th space.
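One minimal reading of conv({·}) followed by vec(·) is to stack the two vectors into a two-row "image", convolve each row with a shared 1-D kernel, and flatten; the valid-mode convolution and the kernel size are assumptions for illustration:

```python
import numpy as np

def interact(a, b, kernel):
    """Sketch of E = vec(conv({a, b}; w)): stack two vectors, run a 1-D
    valid convolution over each row with a shared kernel, and vectorize."""
    stacked = np.stack([a, b])                       # shape (2, d)
    rows = [np.convolve(row, kernel, mode="valid") for row in stacked]
    return np.concatenate(rows)                      # vec(·)

rng = np.random.default_rng(4)
d, ksize = 6, 3
h_i, r_i, t_i = (rng.normal(size=d) for _ in range(3))
w_hr = rng.normal(size=ksize)   # head-relation kernel
w_rt = rng.normal(size=ksize)   # relation-tail kernel
E_hr = interact(h_i, r_i, w_hr)
E_rt = interact(r_i, t_i, w_rt)
```

With d = 6 and a kernel of size 3 in valid mode, each row yields 4 values, so E_hr and E_rt are 8-dimensional here.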
According to the scheme, the output obtained by the semantic feature interaction module of the test question-knowledge point is used as the input of the knowledge point feature extraction module, and the relevant features of interaction of the knowledge points and the questions are learned and stored. The training steps of the knowledge point feature extraction module are as follows:
and step 3: the knowledge point feature extraction module. Its basic structure is a two-layer fully connected network, which extracts the subordinate information of the test questions and knowledge points after their interaction;
step 3.1, a linear activation layer σ_1(W_2 C + b_1) extracts the linear features of the knowledge points;
step 3.2, a nonlinear activation layer σ_2(W_1 × C + b_2) extracts the nonlinear features of the knowledge points;
step 3.3, the knowledge point feature extractor is therefore of the form σ_2(W_1 × σ_1(W_2 C + b_1) + b_2); the knowledge point annotation knowledge of the primary school mathematics test questions can be considered to be stored in the parameters W_1, W_2, b_1 and b_2 of the knowledge point feature storage module;
where σ_1(·) is a linear activation function and σ_2(·) denotes a nonlinear activation function; the nonlinear activation function used may be a sigmoid function or a ReLU function, and when a ReLU function is used:
ReLU(x) = max(0, x)
The trained knowledge point automatic annotator comprises the semantic extraction module, the test question-knowledge point semantic feature interaction module and the knowledge point feature extraction module; a Margin-based method is set as the training strategy, and the whole knowledge point automatic annotator is trained by minimizing the margin loss.
The Margin-based approach defines the following loss function as the training target:
L(Θ) = Σ_{(h,r,t)∈S} Σ_{(h⁻,r,t⁻)∈S⁻} max(0, γ + f(h⁻, r, t⁻) − f(h, r, t))
where Θ represents all parameters of the knowledge map-based automatic labeling model for primary school mathematics test question knowledge points, max(x, y) returns the higher of x and y, γ is the margin, S is the triple set of the test question-knowledge point knowledge base, and S⁻ is the negatively sampled triple set of test questions and knowledge points; to save computation cost, S⁻ is constructed by randomly replacing head and tail entities to generate negative triples.
Generating negative triples (s⁻, r, o⁻) ∈ S⁻ by randomly replacing head and tail entities in this way may occasionally produce a true triple, which would greatly disturb the training process. Therefore, the negative triple set is constructed as follows:
S⁻ = ({(h⁻, r, t) | h⁻ ∈ E} ∪ {(h, r, t⁻) | t⁻ ∈ E}) \ S
that is, any corrupted triple that already exists in the knowledge base S is filtered out.
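The filtered negative sampling described above can be sketched as follows; the entity pool, sample counts and 50/50 head-versus-tail choice are assumptions for illustration:

```python
import random

def negative_samples(positive, entities, n, seed=0):
    """Build the negative triple set S^- by randomly replacing the head or
    tail entity of a true triple, while filtering out any corrupted triple
    that happens to exist in the knowledge base (a 'pseudo negative')."""
    pos = set(positive)
    rng = random.Random(seed)
    negatives = set()
    while len(negatives) < n:
        h, r, t = rng.choice(positive)
        if rng.random() < 0.5:
            h = rng.choice(entities)     # replace head entity
        else:
            t = rng.choice(entities)     # replace tail entity
        if (h, r, t) not in pos:         # drop pseudo negatives
            negatives.add((h, r, t))
    return negatives

S = [("q1", "contains", "kp1"), ("q2", "contains", "kp2")]
ents = ["q1", "q2", "kp1", "kp2", "kp3"]
S_neg = negative_samples(S, ents, n=3)
```

The membership check against the positive set is what removes the pseudo negatives mentioned in claim 3.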
The semantic extraction network parameter W_0 in the modules, the parameters w_hr and w_rt of the test question-knowledge point semantic feature interaction module, and the parameters W_1, W_2 and b of the knowledge point feature extraction module are updated by stochastic gradient descent; during the gradient descent, each parameter is moved against its gradient with learning rate α:
W_0 ← W_0 − α ∂L/∂W_0
w_hr ← w_hr − α ∂L/∂w_hr
w_rt ← w_rt − α ∂L/∂w_rt
W_1 ← W_1 − α ∂L/∂W_1, W_2 ← W_2 − α ∂L/∂W_2
and, layer by layer in back-propagation form,
W_i^(l) ← W_i^(l) − α δ^(l) (a^(l−1))ᵀ
b^(l) ← b^(l) − α δ^(l)
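The margin objective and a single gradient step can be sketched as follows; the hinge form max(0, γ − f(pos) + f(neg)) assumes f is a plausibility score (higher is better), and the pairing of positives with negatives is an assumption:

```python
import numpy as np

def margin_loss(pos_scores, neg_scores, gamma=1.0):
    """Margin-based training target: sum over positive/negative pairs of
    max(0, gamma - f(pos) + f(neg)). Sketch of the patent's loss under
    the assumption that higher scores mean more plausible triples."""
    loss = 0.0
    for p in pos_scores:
        for n in neg_scores:
            loss += max(0.0, gamma - p + n)
    return loss

def sgd_step(W, grad, alpha=0.01):
    """One stochastic-gradient-descent update, W <- W - alpha * dL/dW."""
    return W - alpha * grad

loss_val = margin_loss([2.0], [0.5, 1.5], gamma=1.0)
W = np.array([1.0, -1.0])
W_new = sgd_step(W, np.array([0.5, 0.5]), alpha=0.1)
```

With the toy scores above, only the (2.0, 1.5) pair violates the margin, contributing 0.5 to the loss; the other pair is already separated by more than γ.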
Step 4: the dynamic scoring prediction module. The final scoring function combines the two interaction features through an activation function:
f_τ = f(vec(conv({h_i, r_i}; w_hr)) · vec(conv({r_i, t_i}; w_rt)))
where conv({·}) represents a convolution operation, w_hr is the convolution kernel parameter for the head entity-relation interaction, w_rt is the convolution kernel for the tail entity-relation interaction, vec(·) represents a vectorization operation, f(·) represents an activation function, h_i represents the head entity vector, r_i represents the relation vector, and t_i represents the tail entity vector.
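A composition consistent with the components named in step 4 (convolution, vectorization, activation) can be sketched as below; because the exact scoring formula is an image in the source, the way the two branches are combined here is an assumption:

```python
import numpy as np

def dynamic_score(h, r, t, w_hr, w_rt, activation=np.tanh):
    """Dynamic scoring sketch: run the head-relation and relation-tail
    convolutions, vectorize, pass through an activation f(·), and combine
    the two branches with a dot product."""
    def conv_vec(a, b, w):
        stacked = np.stack([a, b])   # two-row 'image' of the pair
        return np.concatenate(
            [np.convolve(row, w, mode="valid") for row in stacked])
    e_hr = activation(conv_vec(h, r, w_hr))   # head-relation branch
    e_rt = activation(conv_vec(r, t, w_rt))   # relation-tail branch
    return float(e_hr @ e_rt)

rng = np.random.default_rng(5)
d = 6
h, r, t = (rng.normal(size=d) for _ in range(3))
score = dynamic_score(h, r, t, rng.normal(size=3), rng.normal(size=3))
```

With tanh as f(·), each branch lies in [-1, 1]^8 here, so the score is bounded; in practice a sigmoid on the final value would map it to the (0, 1) range compared against the threshold μ.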
The invention provides a cross-space semantic knowledge injection test question knowledge point marking system, which comprises:
the knowledge point set building module is used for building a knowledge point set, and each knowledge point in the knowledge point set comprises a plurality of attribute information;
the triple sample construction module is used for marking knowledge points in the test question sample, taking the test question and the knowledge points as head and tail entities respectively, taking the relation between the test question and the knowledge points as the relation between the head and tail entities, and constructing the triple sample;
the word vector conversion module is used for converting the test questions and the relation in the triple samples into word vectors and converting the knowledge points in the triple samples into the word vectors by using the attribute information of the knowledge points in the knowledge point set;
the knowledge point automatic marker comprises a semantic extraction module, a feature interaction module and a feature extraction and prediction module, wherein the semantic extraction module is used for acquiring semantic features of the triple samples in different semantic spaces, the feature interaction module is used for acquiring, according to the semantic features, first interaction features between the head entities and the relations and second interaction features between the relations and the tail entities after feature interaction, and the feature extraction and prediction module is used for extracting features according to the first interaction features and the second interaction features and labeling the knowledge points.
The system and the method are implemented on the same principle, and details are not described herein again.
The embodiment further provides an electronic device, which includes at least one processor and at least one memory, where the memory stores a computer program, and when the computer program is executed by the processor, the processor executes any one of the steps of the cross-space semantic knowledge injection test question knowledge point labeling method, where the specific steps refer to method embodiments and are not described herein again; in this embodiment, the types of the processor and the memory are not particularly limited, for example: the processor may be a microprocessor, digital information processor, on-chip programmable logic system, or the like; the memory may be volatile memory, non-volatile memory, a combination thereof, or the like.
The present application further provides a storage medium storing a computer program executable by a processor, the computer program, when executed on the processor, causing the processor to perform the steps of any of the above-described cross-space semantic knowledge injection question knowledge point labeling methods. The computer-readable storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, DVDs, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
It should be noted that for simplicity of description, the above-mentioned embodiments of the method are described as a series of acts, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed system may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some service interfaces, indirect coupling or communication connection of systems or modules, and may be electrical or in other forms.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a memory, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method described in the embodiments of the present application. And the aforementioned memory comprises: various media capable of storing program codes, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program, which is stored in a computer-readable memory, and the memory may include: flash disks, read-Only memories (ROMs), random Access Memories (RAMs), magnetic or optical disks, and the like.
The above description is merely an exemplary embodiment of the present disclosure, and the scope of the present disclosure is not limited thereto. That is, all equivalent changes and modifications made in accordance with the teachings of the present disclosure are intended to be included within the scope of the present disclosure. Embodiments of the present disclosure will be readily apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
It will be understood by those skilled in the art that the foregoing is only an exemplary embodiment of the present invention, and is not intended to limit the invention to the particular forms disclosed, since various modifications, substitutions and improvements within the spirit and scope of the invention are possible and within the scope of the appended claims.

Claims (10)

1. A cross-space semantic knowledge injection test question knowledge point marking method is characterized by comprising the following steps:
constructing a knowledge point set, wherein each knowledge point in the knowledge point set comprises a plurality of attribute information;
marking knowledge points in a test question sample, respectively taking the test question and the knowledge points as head and tail entities, and taking the relation between the test question and the knowledge points as the relation between the head and tail entities to construct a triple sample;
converting the test questions, the relations and the knowledge points in the triple samples into word vectors by using the attribute information of the knowledge points in the knowledge point set;
the method comprises the steps of inputting the word vectors of the triple samples into a knowledge point automatic labeling device for training, wherein the knowledge point automatic labeling device comprises a semantic extraction module, a feature interaction module and a feature extraction and prediction module; the semantic extraction module is used for obtaining semantic features of the triple samples in different semantic spaces, the feature interaction module is used for obtaining, according to the semantic features, first interaction features between the head entities and the relations and second interaction features between the relations and the tail entities after feature interaction, and the feature extraction and prediction module is used for extracting features according to the first interaction features and the second interaction features and labeling knowledge points.
2. The cross-space semantic knowledge injection test question knowledge point labeling method of claim 1, wherein each knowledge point in the knowledge point set comprises knowledge point definition description attribute information, learning stage attribute information to which the knowledge point belongs, difficulty degree attribute information of the knowledge point, occurrence frequency attribute information of the knowledge point, type attribute information of the knowledge point, and test frequency attribute information of the knowledge point.
3. The method for labeling test question knowledge points for cross-space semantic knowledge injection of claim 1, wherein the constructing a triple sample comprises:
and constructing a triple positive sample by using the test question sample, the knowledge points marked in the test question sample and the corresponding relation of the knowledge points, randomly replacing head and tail entities in the triple positive sample to generate a triple negative sample, removing the pseudo triple negative sample, and forming a triple sample set by the triple positive sample and the retained triple negative sample.
4. The method for labeling test questions knowledge points for cross-space semantic knowledge injection of claim 1, wherein the converting the test questions, the relations and the knowledge points in the triple samples into word vectors comprises:
performing word segmentation processing on the test questions, the relations and the knowledge points in the triple sample, and converting each word segment into a word vector, wherein the calculation formula of the conversion is
E_i = E_token + E_segment + E_position
wherein E_i is the word vector of the i-th word segment, E_token indicates the initial embedding of the word segment, E_segment indicates whether the word segment is an entity or a relation, and E_position indicates the position information of the word segment.
5. The cross-space semantic knowledge injection test question knowledge point labeling method according to claim 1, wherein the semantic extraction module comprises a plurality of stacked semantic understanding blocks, each semantic understanding block comprises a multi-semantic-attention layer, a normalization layer and a source information cue layer, the multi-semantic-attention layer is used for outputting semantic features of each input word vector in different semantic spaces, the normalization layer is used for normalizing output data of the multi-semantic-attention layer, and the source information cue layer is used for combining an output of the multi-semantic-attention layer and an output of the normalization layer as the output of the source information cue layer.
6. The cross-space semantic knowledge injection test question knowledge point labeling method according to claim 1, wherein the feature interaction module comprises a feature reshaping module and an interaction module, the feature reshaping module is used for randomly scrambling the input semantic features of different semantic spaces to obtain reshaped semantic features; and the interaction module is used for outputting the first interaction feature and the second interaction feature according to the reshaped semantic features.
7. The cross-space semantic knowledge injection test question knowledge point labeling method according to claim 6, wherein the calculation formula for outputting the first interactive feature and the second interactive feature according to the reshaped semantic features is as follows:
E_hr = vec(conv({h_i, r_i}; w_hr))
E_rt = vec(conv({r_i, t_i}; w_rt))
wherein h_i, t_i and r_i are respectively the head entity vector, the tail entity vector and the relation vector in the i-th semantic space after feature reshaping, conv({·}) represents a convolution operation, w_hr is the convolution kernel parameter for the head entity-relation interaction, w_rt is the convolution kernel parameter for the tail entity-relation interaction, vec(·) represents a vectorization operation, E_hr is the first interaction feature between the head entity and the relation, and E_rt is the second interaction feature between the relation and the tail entity.
8. A cross-space semantic knowledge injection question knowledge point labeling system, comprising:
the knowledge point set building module is used for building a knowledge point set, and each knowledge point in the knowledge point set comprises a plurality of attribute information;
the triple sample construction module is used for marking knowledge points in the test question sample, taking the test question and the knowledge points as head and tail entities respectively, taking the relation between the test question and the knowledge points as the relation between the head and tail entities, and constructing the triple sample;
the word vector conversion module is used for converting the test questions and the relation in the triple samples into word vectors and converting the knowledge points in the triple samples into the word vectors by using the attribute information of the knowledge points in the knowledge point set;
the knowledge point automatic marker comprises a semantic extraction module, a feature interaction module and a feature extraction and prediction module, wherein the semantic extraction module is used for acquiring semantic features of the triple samples in different semantic spaces, the feature interaction module is used for acquiring first interaction features among head entities and relations after feature interaction, second interaction features among relations and tail entities according to the semantic features, and the feature extraction and prediction module is used for extracting features according to the first interaction features and the second interaction features and marking the knowledge points.
9. An electronic device, comprising at least one processor and at least one memory module, wherein the memory module stores a computer program that, when executed by the processor, causes the processor to perform the steps of the method of any one of claims 1 to 7.
10. A storage medium, characterized in that it stores a computer program which, when run on a processor, causes the processor to perform the steps of the method according to any one of claims 1 to 7.
CN202210797599.0A 2022-07-08 2022-07-08 Test question knowledge point marking method for cross-space semantic knowledge injection and application Active CN115146073B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210797599.0A CN115146073B (en) 2022-07-08 2022-07-08 Test question knowledge point marking method for cross-space semantic knowledge injection and application

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210797599.0A CN115146073B (en) 2022-07-08 2022-07-08 Test question knowledge point marking method for cross-space semantic knowledge injection and application

Publications (2)

Publication Number Publication Date
CN115146073A true CN115146073A (en) 2022-10-04
CN115146073B CN115146073B (en) 2024-06-21

Family

ID=83412741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210797599.0A Active CN115146073B (en) 2022-07-08 2022-07-08 Test question knowledge point marking method for cross-space semantic knowledge injection and application

Country Status (1)

Country Link
CN (1) CN115146073B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115906867A (en) * 2022-11-30 2023-04-04 华中师范大学 Test question feature extraction and knowledge point labeling method based on hidden knowledge space mapping

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210070213A * 2019-12-04 2021-06-14 Samsung Electronics Co., Ltd. Voice user interface
EP3913543A2 (en) * 2020-12-21 2021-11-24 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for training multivariate relationship generation model, electronic device and medium
CN113919366A (en) * 2021-09-06 2022-01-11 国网河北省电力有限公司电力科学研究院 Semantic matching method and device for power transformer knowledge question answering
CN114154637A (en) * 2021-11-05 2022-03-08 华中师范大学 Knowledge point automatic labeling modeling method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210070213A * 2019-12-04 2021-06-14 Samsung Electronics Co., Ltd. Voice user interface
EP3913543A2 (en) * 2020-12-21 2021-11-24 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for training multivariate relationship generation model, electronic device and medium
CN113919366A (en) * 2021-09-06 2022-01-11 国网河北省电力有限公司电力科学研究院 Semantic matching method and device for power transformer knowledge question answering
CN114154637A (en) * 2021-11-05 2022-03-08 华中师范大学 Knowledge point automatic labeling modeling method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HE Bin; LI Xinyu; CHEN Beilei; XIA Meng; ZENG Zhizhong: "Test question knowledge point labeling model based on deep mining of attribute relations", Journal of Nanjing University of Information Science & Technology (Natural Science Edition), no. 06, 28 November 2019 (2019-11-28) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115906867A (en) * 2022-11-30 2023-04-04 华中师范大学 Test question feature extraction and knowledge point labeling method based on hidden knowledge space mapping
CN115906867B (en) * 2022-11-30 2023-10-31 华中师范大学 Test question feature extraction and knowledge point labeling method based on hidden knowledge space mapping

Also Published As

Publication number Publication date
CN115146073B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
CN111949787B (en) Automatic question-answering method, device, equipment and storage medium based on knowledge graph
EP3227836B1 (en) Active machine learning
CN109416705B (en) Utilizing information available in a corpus for data parsing and prediction
CN111159385B (en) Template-free general intelligent question-answering method based on dynamic knowledge graph
US11288324B2 (en) Chart question answering
CN112070138B (en) Construction method of multi-label mixed classification model, news classification method and system
US20200004765A1 (en) Unstructured data parsing for structured information
CN111191275A (en) Sensitive data identification method, system and device
CN109522412B (en) Text emotion analysis method, device and medium
CN112819023A (en) Sample set acquisition method and device, computer equipment and storage medium
CN112949476B (en) Text relation detection method, device and storage medium based on graph convolution neural network
CN112988963A (en) User intention prediction method, device, equipment and medium based on multi-process node
CN112101042A (en) Text emotion recognition method and device, terminal device and storage medium
CN112966117A (en) Entity linking method
CN114491079A (en) Knowledge graph construction and query method, device, equipment and medium
CN115146073A (en) Test question knowledge point marking method for cross-space semantic knowledge injection and application
CN113360654B (en) Text classification method, apparatus, electronic device and readable storage medium
CN113553326A (en) Spreadsheet data processing method, device, computer equipment and storage medium
CN107783958B (en) Target statement identification method and device
KR102406961B1 (en) A method of learning data characteristics and method of identifying fake information through self-supervised learning
CN115563278A (en) Question classification processing method and device for sentence text
CN114036289A (en) Intention identification method, device, equipment and medium
Reich et al. Visually grounded VQA by lattice-based retrieval
Sulzmann et al. Rule Stacking: An approach for compressing an ensemble of rule sets into a single classifier
CN113128231A (en) Data quality inspection method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant