CN115700828A - Table element identification method and device, computer equipment and storage medium - Google Patents

Table element identification method and device, computer equipment and storage medium

Publication number: CN115700828A
Application number: CN202110875581.3A
Authority: CN (China)
Prior art keywords: graph, neural network, sample, node, intermediate processing
Other languages: Chinese (zh)
Inventor: 罗光圣
Assignee (current and original): Shanghai Eisoo Information Technology Co., Ltd.
Legal status: Pending
Landscapes: Image Analysis (AREA)

Abstract

The invention discloses a table element identification method and device, a computer device, and a storage medium. The method comprises the following steps: constructing an intermediate processing graph from the cells of a table to be processed; extracting feature information of at least one node in the intermediate processing graph, the feature information comprising at least text feature information and position feature information; and processing the feature information with a pre-trained graph neural network to determine the table element corresponding to each cell. By retaining the original semantic features of the table during table element identification, embodiments of the invention can improve the accuracy of table element identification.

Description

Table element identification method and device, computer equipment and storage medium
Technical Field
The embodiments of the invention relate to the technical field of computer applications, and in particular to a table element identification method and device, a computer device, and a storage medium.
Background
Tables are a common information-bearing object in documents of all kinds and an important way of organizing and displaying data. However, with the explosive growth in the number of documents, efficiently finding tables in documents and identifying their structure information and content information has become an urgent problem, and table structure identification in particular is an important research topic in the industry.
Table structure detection locates the region of a page in which a table appears and, within that region, identifies the table's logical structure and content; the identified logical structure may include the table's rows, columns, hierarchy, and so on. At present, common table logical-structure recognition is mainly implemented with Optical Character Recognition (OCR) technology, for example spreadsheet detection based on convolutional neural networks, invoice and receipt table detection based on graph neural networks, table detection combined with corner positioning, and table detection based on region convolutional neural networks.
Disclosure of Invention
The invention provides a table element identification method and device, a computer device, and a storage medium, which identify data table elements while retaining the semantic features and structural features of the data table, improving the accuracy of table element identification and facilitating subsequent detection and processing of the table data.
In a first aspect, an embodiment of the present invention provides a table element identification method, where the method includes:
constructing an intermediate processing graph according to the cells of the table to be processed;
extracting feature information of at least one node in the intermediate processing graph, wherein the feature information at least comprises text feature information and position feature information;
and processing the feature information according to a pre-trained graph neural network to determine the table element of each cell in the table to be processed.
In a second aspect, an embodiment of the present invention further provides a table element identification apparatus, where the apparatus includes:
the graph construction module is used for constructing an intermediate processing graph according to the cell information of the table to be processed;
the feature extraction module is used for extracting feature information of at least one node in the intermediate processing graph, wherein the feature information at least comprises text feature information and position feature information;
and the element determining module is used for processing the feature information according to a pre-trained graph neural network to determine the table element of each cell in the table to be processed.
In a third aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
a memory for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the table element identification method according to any embodiment of the present invention.
In a fourth aspect, embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the table element identification method according to any one of the embodiments of the present invention.
According to the embodiments of the invention, an intermediate processing graph is constructed from the cells of the table to be processed, feature information of each node in the intermediate processing graph is extracted, and the feature information is input into a pre-trained graph neural network for processing to determine the table element corresponding to each cell, so that table elements are identified while the semantic and structural features of the data table are retained.
Drawings
Fig. 1 is a flowchart of a table element identification method according to an embodiment of the present invention;
FIG. 2 is an exemplary diagram of an intermediate processing graph structure according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a graph neural network according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a table structure according to an embodiment of the present invention;
FIG. 5 is a flow chart of the training process of the graph neural network according to an embodiment of the present invention;
fig. 6 is a flowchart of a table element identification method according to a second embodiment of the present invention;
FIG. 7 is an exemplary diagram of a small sample learning provided by the second embodiment of the present invention;
FIG. 8 is an exemplary diagram of a graph neural network provided in accordance with the second embodiment of the present invention;
fig. 9 is a schematic structural diagram of a table element identification apparatus according to a third embodiment of the present invention;
fig. 10 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention;
fig. 11 is a schematic structural diagram of a chip according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only a part of the structures related to the present invention, not all of the structures, are shown in the drawings, and furthermore, embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
Example one
Fig. 1 is a flowchart of a table element identification method according to an embodiment of the present invention. This embodiment is applicable to identifying table elements in large numbers of documents. The method may be executed by a table element identification apparatus, which may be implemented in hardware and/or software. Referring to fig. 1, the table element identification method according to this embodiment of the invention may specifically include the following steps:
and step 110, constructing an intermediate processing graph according to the cells of the table to be processed.
The table to be processed may be a table appearing in a document file; the document may be in any of various formats and may contain other data elements such as pictures, text, audio, and video. For example, the table to be processed may be a data table in a Word document. A cell is a component of the table to be processed, namely the intersection of a row and a column. The intermediate processing graph may be a node graph generated by converting the table to be processed, in which each cell is a node; the intermediate processing graph may be represented by an adjacency matrix or an adjacency list.
In the embodiment of the invention, the intermediate processing graph can be constructed by taking the cells of the table to be processed as nodes. Fig. 2 is an exemplary diagram of an intermediate processing graph according to an embodiment of the present invention; referring to fig. 2, the cells of the table to be processed may serve as the nodes of the intermediate processing graph, and the adjacency relationships between the cells as its edges.
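For illustration only, a minimal Python sketch of this construction follows; the cell layout as (row, column, text) triples and the adjacency rule (neighbouring row in the same column, or neighbouring column in the same row) are assumptions, not details taken from the patent:

```python
import numpy as np

# Hypothetical cell description: (row index, column index, text).
cells = [
    (0, 0, "Name"), (0, 1, "Age"),
    (1, 0, "Alice"), (1, 1, "30"),
]

def build_intermediate_graph(cells):
    """Nodes are cells; an edge links two cells that are adjacent
    (neighbouring row in the same column, or neighbouring column
    in the same row). Returns the adjacency matrix."""
    n = len(cells)
    adj = np.zeros((n, n), dtype=int)
    for i, (ri, ci, _) in enumerate(cells):
        for j, (rj, cj, _) in enumerate(cells):
            same_col_neighbour = ci == cj and abs(ri - rj) == 1
            same_row_neighbour = ri == rj and abs(ci - cj) == 1
            if same_col_neighbour or same_row_neighbour:
                adj[i, j] = 1
    return adj

adjacency = build_intermediate_graph(cells)
print(adjacency)
```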
And 120, extracting feature information of at least one node in the intermediate processing graph, wherein the feature information at least comprises text feature information and position feature information.
A node is a component of the intermediate processing graph and specifically represents a cell of the table; a node may store the attribute information of its cell, for example text, color, and font. Feature information is information reflecting the features of a cell and may include the cell's attribute information, such as its text or position; feature information may also be implicit features obtained through a neural network or through data statistics. In the embodiment of the present invention, the feature information of a node in the intermediate processing graph is the feature information of the corresponding cell in the table to be processed.
Specifically, the intermediate processing graph may be processed to extract the feature information of each node, where the feature information may include the text feature information and the position feature information of the cell corresponding to the node; further, the feature information of each node may be extracted by means of a neural network.
And step 130, processing the characteristic information according to a pre-trained graph neural network to determine a table element corresponding to each cell.
In essence, each node of the intermediate processing graph processed by the Graph Neural Network (GNN) is associated with a label item, and the label of a node, that is, the table element of that node, can be predicted. Fig. 3 is a schematic structural diagram of a graph neural network according to an embodiment of the present invention: for a graph G(V, A), the adjacency matrix A representing the edges is transformed by a graph operator, an embedding operation is performed on the nodes V, graph residuals are determined from the respective computation results, and the outputs used for classification are generated from the residual results. Referring to fig. 3, the formula used in the GNN is:

x^(k+1) = ρ(A · x^(k) · θ^(k))

where A is the connection matrix, θ^(k) is a trainable parameter, ρ(·) is the ReLU activation function, and x^(k) is the data of node X at layer k of the intermediate processing graph. Table elements are information representing the structure of a data table and may include elements such as columns, metadata, headers, and data. Fig. 4 is a schematic diagram of a table structure provided in an embodiment of the present invention; referring to fig. 4, a data table may be composed of elements such as columns A, metadata M, headers H, and data D.
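As an illustrative sketch of the propagation rule above (the dimensions and the self-loop-only adjacency used for the demonstration are assumptions):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def gnn_layer(adjacency, x, theta):
    """One propagation step x^(k+1) = rho(A x^(k) theta^(k)).
    adjacency: (n, n) connection matrix A
    x:         (n, d_in) node data of layer k
    theta:     (d_in, d_out) trainable parameter
    """
    return relu(adjacency @ x @ theta)

rng = np.random.default_rng(0)
n, d_in, d_out = 4, 8, 16
A = np.eye(n)                        # self-loops only, as a stand-in
x = rng.normal(size=(n, d_in))
theta = rng.normal(size=(d_in, d_out))
print(gnn_layer(A, x, theta).shape)  # (4, 16)
```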
In the embodiment of the invention, the extracted feature information can be input into a pre-trained graph neural network, which classifies each cell based on the feature information to determine the table element to which each cell belongs. The graph neural network may be generated by training with sample tables labeled with table elements.
According to the embodiment of the invention, an intermediate processing graph is constructed from the cell information of the table to be processed, the intermediate processing graph is processed to obtain the feature information of each node, and the table element corresponding to each cell is determined from the feature information by a pre-trained graph neural network. Data table elements are thereby identified while the semantic and structural features of the data table are retained, which improves the accuracy of table element identification and facilitates subsequent detection and processing of the table data.
Further, fig. 5 is a training flowchart of the graph neural network according to an embodiment of the present invention. Referring to fig. 5, on the basis of the above embodiment of the invention, the training process of the graph neural network includes the following steps:
step 210, constructing a corresponding sample node graph for each sample table in the training sample set, wherein the training sample set is generated through small sample learning.
The training sample set may consist of document files containing data tables; the documents may be in various formats, and each cell of a data table in a training sample may be labeled with its corresponding table element, for use in training and validation. Small-sample learning (Few-Shot Learning) is a machine learning problem, specified by experience E, task T, and performance measure P, in which E contains only a limited number of examples with supervision information for the target T; that is, the machine learning model is trained with only a small number of labeled samples.
Specifically, a training sample set may be generated by a small sample learning method, a plurality of sample node maps are constructed for sample tables included in the training sample set, each sample table in the training sample set may generate a respective corresponding sample node map, nodes in the sample node map may be composed of cells of the sample table, and edges in the sample node map may be composed of adjacent relationships between cells of the sample table.
And step 220, extracting characteristic information and label information of table nodes in each sample node graph to form a characteristic vector and a label vector.
The table nodes may be nodes in each sample node graph, the table nodes may represent cells in a training table in a training sample set, the feature vector may be a vector composed of feature information of the table nodes in the sample node graph, and the feature information of each table node may correspond to one value in the feature vector. The label information may be a table element of a cell corresponding to a table node, the label vector may be a vector composed of label information of table nodes in the sample node map, and the label information of each table node may correspond to one value in the label vector.
In the embodiment of the present invention, the sample node maps may be processed respectively, the feature information and the label information of the sample nodes included in each sample node map are extracted, and the extracted feature information and the label information may be combined into a feature vector and a label vector, respectively.
And step 230, inputting each feature vector and each label vector into the graph neural network for iterative training until the loss function of the graph neural network meets a preset condition.
The loss function measures the training effect of the graph neural network, and the preset condition controls its training process; setting the preset condition accurately can improve training efficiency while ensuring the training effect. The preset condition may include a critical loss value or a maximum number of iterations: for example, training is determined to be complete when the loss function reaches the critical value, or when the number of training iterations exceeds the maximum number of iterations.
In the embodiment of the invention, the feature vectors and the label vectors can be input into a preset graph neural network for training. After each round of training, the current loss function of the graph neural network is obtained and compared with the preset condition; if the condition is met, the graph neural network is determined to be trained, and if not, training continues with the feature vectors and the label vectors.
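By way of illustration only, the preset-condition check can be sketched as follows; the `model.train_step` interface is a hypothetical placeholder, not an API from the patent:

```python
def train_graph_network(model, feature_vectors, label_vectors,
                        loss_threshold=1e-3, max_iterations=1000):
    """Iterate until the preset condition is met: the loss falls
    below a critical value, or the maximum number of iterations
    is reached. `model.train_step` is a hypothetical stand-in
    that runs one training pass and returns the current loss."""
    for _ in range(max_iterations):
        loss = model.train_step(feature_vectors, label_vectors)
        if loss <= loss_threshold:
            break
    return model
```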
Further, on the basis of the above embodiment of the invention, the loss function is

L = −Σ_T log P(Y | T)

where Y represents the output of the graph neural network, T represents the training sample set, and log is the logarithmic function.

In an embodiment of the present invention, the loss function used in training the graph neural network may take the above form: the sum of the logarithms of the probabilities output on the validation set after each round of training can be used as the loss function for measuring the training effect of the graph neural network.
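A sketch of this loss as a negative sum of log-probabilities (the per-node probability layout is an assumption):

```python
import numpy as np

def graph_nll_loss(predicted_probs, true_labels):
    """L = -sum_T log P(Y | T): sum the log-probabilities that the
    network assigns to the true table-element label of each node,
    over the training sample set, and negate."""
    node_probs = predicted_probs[np.arange(len(true_labels)), true_labels]
    return -np.sum(np.log(node_probs + 1e-12))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
print(graph_nll_loss(probs, labels))
```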
Further, on the basis of the above embodiment of the invention, the connection formula used by the multilayer perceptron in the graph neural network includes at least one of the following:

Ã^(k)_{i,j} = MLP(abs(x^(k)_i − x^(k)_j))

wherein x^(k)_i represents the vector set, comprising the feature vector and the label vector, of the i-th sample node graph, x^(k)_j represents the vector set, comprising the feature vector and the label vector, of the j-th sample node graph, abs represents a vector distance function, k represents the k-th training process, and Ã^(k)_{i,j} represents the distance between the adjacency matrices of the i-th sample node graph and the j-th sample node graph;
and a normalized form

A^(k)_{i,j} = softmax_j(Ã^(k)_{i,j})

wherein Ã^(k)_{i,j} likewise represents the distance between the adjacency matrices of the i-th sample node graph and the j-th sample node graph.
In particular, the connection layer in the graph neural network can be a multilayer perceptron (MLP), and the connection formula used by the multilayer perceptron is

Ã^(k)_{i,j} = MLP(abs(x^(k)_i − x^(k)_j))

wherein x^(k)_i represents the vector set, comprising the feature vector and the label vector, of the i-th sample node graph, x^(k)_j represents the vector set, comprising the feature vector and the label vector, of the j-th sample node graph, abs represents a vector distance function, k represents the k-th training process, and Ã^(k)_{i,j} represents the distance between the adjacency matrices of the i-th sample node graph and the j-th sample node graph;
or the normalized form

A^(k)_{i,j} = softmax_j(Ã^(k)_{i,j})

wherein Ã^(k)_{i,j} likewise represents the distance between the adjacency matrices of the i-th sample node graph and the j-th sample node graph.
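A sketch of the first connection formula, Ã^(k)_{i,j} = MLP(abs(x_i − x_j)); the two-layer MLP depth and the widths used here are assumptions:

```python
import numpy as np

def mlp(v, w1, b1, w2, b2):
    """A small two-layer perceptron producing a scalar edge weight."""
    h = np.maximum(v @ w1 + b1, 0.0)
    return h @ w2 + b2

def edge_weights(x, w1, b1, w2, b2):
    """A_tilde[i, j] = MLP(abs(x[i] - x[j]))."""
    n = x.shape[0]
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            a[i, j] = mlp(np.abs(x[i] - x[j]), w1, b1, w2, b2)
    return a

rng = np.random.default_rng(1)
x = rng.normal(size=(3, 5))
w1, b1 = rng.normal(size=(5, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8,)), 0.0
print(edge_weights(x, w1, b1, w2, b2))
```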
Example two
Fig. 6 is a flowchart of a table element identification method provided by the second embodiment of the present invention, which further elaborates the foregoing embodiment. Referring to fig. 6, the method provided by the second embodiment of the present invention specifically includes the following steps:
and 310, constructing a corresponding sample node graph for each sample table in the training sample set, wherein the training sample set is generated through small sample learning.
And step 320, extracting characteristic information and label information of table nodes in each sample node graph to form a characteristic vector and a label vector.
And step 330, inputting the feature vectors and the label vectors into the graph neural network for iterative training until the loss function of the graph neural network meets a preset condition.
And 340, acquiring the position coordinates and the adjacent relation of each cell of the table to be processed.
The position coordinates may be the relative position coordinates between the cells of the table to be processed (for example, relative coordinates determined by taking any one cell of the table as the coordinate origin), and the adjacency relation reflects whether different cells are adjacent.
In the embodiment of the invention, the position coordinates and the adjacency relations of the cells in the table to be processed can be obtained by character matching or image recognition.
And 350, constructing an adjacency matrix as an intermediate processing graph according to the position coordinates and the adjacent relation.
In the embodiment of the invention, the nodes can be determined from the position coordinates and the edges between nodes from the adjacency relations; a connected graph can then be constructed from the nodes and their edges and stored in the form of an adjacency matrix to serve as the intermediate processing graph.
And step 360, extracting the relative positions of the nodes in the intermediate processing graph as position characteristic information.
Specifically, the intermediate processing graph may be processed, for example by image recognition or a neural network, to obtain the relative position of each node. The relative positions reflect the positional relationships between different nodes and may specifically be distances or vectors; the obtained relative positions may be used as the position feature information of the intermediate processing graph.
Step 370, process the intermediate processing map using a convolutional neural network to obtain image feature information.
Specifically, a table image corresponding to the intermediate processing graph may be acquired, features may be extracted from the table image using a convolutional neural network, and the extracted features may be used as the image feature information.
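One way to realize this step is sketched below with PyTorch; this is a generic small CNN, not the patent's specific architecture, and the channel counts and feature dimension are assumptions:

```python
import torch
import torch.nn as nn

class TableImageEncoder(nn.Module):
    """Maps a table image to a fixed-length image-feature vector."""
    def __init__(self, feature_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, feature_dim)

    def forward(self, image):
        h = self.conv(image).flatten(1)
        return self.proj(h)

encoder = TableImageEncoder()
dummy = torch.randn(1, 3, 128, 128)  # stand-in table image
print(encoder(dummy).shape)          # torch.Size([1, 64])
```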
And 380, processing the node texts of the intermediate processing graph according to the embedded operation to acquire text characteristic information.
The embedding operation may represent the node text of each node in the intermediate processing graph numerically, with the representation learned and updated by means of a neural network. For example, "machine learning" might be represented as [1, 2, 3], "deep learning" as [1, 2, 4], and "sunset bloom" as [9, 9, 6]; the distance between the numeric vectors corresponding to different texts can then represent the degree of similarity between those texts.
In the embodiment of the invention, the node text can be represented numerically, features can be extracted from the numeric vectors, and the extracted features can be used as the text feature information corresponding to the intermediate processing graph.
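A toy sketch of the embedding step, as a learnable lookup table averaged over tokens; the vocabulary, dimensions, and mean pooling are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
vocab = {"machine": 0, "learning": 1, "deep": 2, "header": 3}
embedding_table = rng.normal(size=(len(vocab), 4))  # trainable in practice

def embed_cell_text(text):
    """Numerically represent a node text as the mean of its token
    embeddings; unknown tokens are skipped."""
    ids = [vocab[t] for t in text.lower().split() if t in vocab]
    if not ids:
        return np.zeros(embedding_table.shape[1])
    return embedding_table[ids].mean(axis=0)

print(embed_cell_text("Deep learning"))
```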
And 390, inputting the characteristic information corresponding to each cell into an input layer of the neural network.
In the embodiment of the invention, the feature information of each cell (which may include text feature information, image feature information, and position feature information) can be input into the input layer of the graph neural network so that the graph neural network classifies the cells.
And 3100, obtaining a classification vector output by an output layer of the graph neural network.
The classification vector is the result output by the output layer of the graph neural network and may be composed of one or more vector values; each vector value corresponds to a node of the intermediate processing graph, that is, to a cell of the table to be processed.
Specifically, the information output by the output layer of the graph network can be extracted and assembled into a classification vector.
Step 3110, finding the table element corresponding to each classification value in the classification vector, and using it as the table element of the corresponding cell.
A classification value is a single value of the classification vector and corresponds to one cell; it is used to classify the cell and determine the table element category to which the cell belongs.
In the embodiment of the present invention, the classification values in the classification vector may be extracted, each classification value corresponding to one cell; each classification value may be compared with the classification threshold corresponding to each table element, and the table element category of the corresponding cell is determined according to the value of the classification value.
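The lookup from classification values to table elements can be sketched as an argmax over per-class scores; the class order {column, metadata, header, data} is an assumption matching the elements of Fig. 4:

```python
import numpy as np

TABLE_ELEMENTS = ["column", "metadata", "header", "data"]  # A, M, H, D

def cells_to_elements(classification_vectors):
    """Each row holds one cell's per-element scores; the highest
    score decides the table element of that cell."""
    indices = np.argmax(classification_vectors, axis=1)
    return [TABLE_ELEMENTS[i] for i in indices]

scores = np.array([[0.1, 0.1, 0.7, 0.1],
                   [0.0, 0.1, 0.2, 0.7]])
print(cells_to_elements(scores))  # ['header', 'data']
```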
According to the embodiment of the invention, a sample node graph is generated for each table in a training sample set produced by small sample learning; the feature information and the label information of the sample node graphs are extracted as feature vectors and label vectors, and the graph neural network is trained on these vectors. The position coordinates and adjacency relations of the table to be processed are then obtained and used to generate the intermediate processing graph, whose position feature information, image feature information, and text feature information are extracted and input into the graph neural network to obtain a classification vector, from which the table element of each cell is determined. Data table elements are thereby identified while the semantic and structural features of the data table are retained, which improves the accuracy of table element identification and facilitates subsequent detection and processing of the table data.
Further, on the basis of the above embodiment of the present invention, the generating the training sample set through small sample learning includes:
dividing an original training sample set into a support set and a query set, wherein a sample table included in the support set is marked with a table element label; determining form element labels of the query set according to the support set by using a small sample learning network trained in advance; and using the query set and the support set with the table element labels as a training sample set.
The original training sample set may be a data set in which table elements are labeled on partial data, and only a small part of sample tables in the data set are labeled by table element labels. The support set may include sample tables with labels, the query set may include unlabeled sample tables, and the table element labels may be used to identify the table elements to which the cells belong, which may be specifically labeled data consisting of numbers, letters, and special symbols.
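For illustration, the support/query split described above can be sketched as follows; the labeled/unlabeled test and the dictionary layout of a sample table are assumptions:

```python
def split_support_query(sample_tables):
    """Support set: sample tables already carrying table-element
    labels; query set: the unlabeled remainder."""
    support = [t for t in sample_tables if t.get("labels") is not None]
    query = [t for t in sample_tables if t.get("labels") is None]
    return support, query

tables = [{"cells": ["..."], "labels": ["header"]},
          {"cells": ["..."], "labels": None}]
support_set, query_set = split_support_query(tables)
print(len(support_set), len(query_set))  # 1 1
```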
In the embodiment of the present invention, part of the sample tables in the original training sample set may be labeled; the labeled sample tables are used as the support set and the unlabeled sample tables as the query set. Through a small sample learning network, the query set can be classified on the basis of the support set: the table element labels of the sample tables in the query set are determined, the sample tables in the query set are labeled with the determined labels, and the labeled query set together with the support set is used as the training sample set. The small sample learning network is a network that processes a data set in which only a small number of samples are labeled and can generate labels for all data in the set. Fig. 7 is an exemplary diagram of small sample learning provided by the second embodiment of the present invention. Referring to fig. 7, the small sample learning network may be a classifier: feature extraction may be performed on the small-sample table data set, for example with a Long Short-Term Memory (LSTM) model, and the classifier, which may comprise a fully connected layer and a Support Vector Machine (SVM), classifies the query set data on the basis of the support set data using a similarity metric to determine the label of each query sample, so that the graph neural network for table element identification achieves a better training effect. The LSTM model may be formed by the following formulas:
f_t = σ(W_f · [h_{t−1}, x_t] + b_f)

i_t = σ(W_i · [h_{t−1}, x_t] + b_i)

C̃_t = tanh(W_C · [h_{t−1}, x_t] + b_C)

C_t = f_t * C_{t−1} + i_t * C̃_t

o_t = σ(W_o · [h_{t−1}, x_t] + b_o)

h_t = o_t * tanh(C_t)

From top to bottom these are the forget gate, the input gate, the temporary (candidate) state, the final cell state, the output gate, and the output of the LSTM model, respectively.
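The six LSTM equations above, transcribed directly into a numpy step function; the weight shapes are the standard ones, assumed rather than taken from the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following the equations above.
    W: dict of (hidden+input, hidden) matrices for f, i, C, o
    b: dict of (hidden,) biases for f, i, C, o
    """
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(z @ W["f"] + b["f"])       # forget gate
    i_t = sigmoid(z @ W["i"] + b["i"])       # input gate
    c_tilde = np.tanh(z @ W["C"] + b["C"])   # temporary state
    c_t = f_t * c_prev + i_t * c_tilde       # final cell state
    o_t = sigmoid(z @ W["o"] + b["o"])       # output gate
    h_t = o_t * np.tanh(c_t)                 # output
    return h_t, c_t

rng = np.random.default_rng(3)
hid, inp = 4, 3
W = {k: rng.normal(size=(hid + inp, hid)) for k in "fiCo"}
b = {k: np.zeros(hid) for k in "fiCo"}
h, c = lstm_step(rng.normal(size=inp), np.zeros(hid), np.zeros(hid), W, b)
print(h.shape, c.shape)  # (4,) (4,)
```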
Further, on the basis of the above embodiment of the invention, the small sample learning network includes at least one of the following: a prototype network model, a relation network model, a model-agnostic meta-learning model, and a metric learning model.
In an exemplary implementation, fig. 8 is an exemplary diagram of a graph neural network provided by the second embodiment of the present invention. Referring to fig. 8, a file such as an Excel table or a picture of an Excel table may be processed, and the method outputs, for each cell node, the cell, its text, and the corresponding key-value. The network model used in the table element identification method of this embodiment is composed of three parts: the graph nodes of the table cells together with the edge representation corresponding to each node, the text representation of the cells, and the picture features of the table. The first step acquires the position of each cell to represent a node of the graph, and an SVM classifier then groups nodes of the same category into edges: the edge value is 1 if two nodes are in the same row or column, and 0 otherwise. The second step acquires the text features of the cells; the GNN network is constructed from the text features and the position features. The third part acquires the picture features of the Excel table or the table image, so that each node of the graph contains an image feature, a position feature, and a text feature. The training process adopts small-sample pre-training followed by fine-tuning, and the graph neural network is optimized with the back-propagation (BP) algorithm. On the basis of this graph neural network, the table element identification method provided by the embodiment of the invention may comprise the following steps (a combined sketch of steps 1-3 follows the list):
1. Construct a graph from the cell nodes of a table, initializing each node as

x^(0)_i = (φ(x_i), h(l_i))   (1)

wherein φ(x_i) is the feature of a cell and h(l_i) is its label vector.
2. Connect the nodes of different layers of the GNN network with a multilayer perceptron (MLP), as in formulas (2) and (3):

Ã^(k)_{i,j} = MLP(abs(x^(k)_i − x^(k)_j))   (2)

A^(k)_{i,j} = softmax_j(Ã^(k)_{i,j})   (3)

The adjacency matrix between nodes is A^(k).
3. The update between nodes and the loss function are as in formulas (4) and (5).

Node update function:

x^(k+1) = ρ(A^(k) · x^(k) · θ^(k))   (4)

wherein θ^(k) is the matrix that needs to be learned.

Loss function:

L = −Σ_T log P(Y | T)   (5)
4. The small sample learning objective function is as in formula (6):

min_θ Σ_i [ L(f_θ(A_i, X_i)) + γ · L_r(A_i, X_i) ]   (6)

wherein γ is the balance parameter of the loss function and L_r is the graph loss computed in the training procedure below.
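A compact sketch combining steps 1-3: initialize each node as the concatenation of its cell feature and label one-hot, then run one MLP-edge and propagation round. The dimensions, the softmax normalization, and the toy edge_mlp are assumptions for illustration:

```python
import numpy as np

def one_hot(label, num_classes):
    v = np.zeros(num_classes)
    if label is not None:          # query cells carry no label yet
        v[label] = 1.0
    return v

def init_nodes(cell_features, cell_labels, num_classes):
    """Step 1: x_i^(0) = (phi(x_i), h(l_i))."""
    return np.stack([np.concatenate([f, one_hot(l, num_classes)])
                     for f, l in zip(cell_features, cell_labels)])

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def propagate(x, theta, edge_mlp):
    """Steps 2-3: learned adjacency from MLP(abs(x_i - x_j)),
    row-normalized, then x^(k+1) = rho(A x theta)."""
    n = x.shape[0]
    a = np.array([[edge_mlp(np.abs(x[i] - x[j])) for j in range(n)]
                  for i in range(n)])
    a = softmax(a, axis=1)
    return np.maximum(a @ x @ theta, 0.0)

rng = np.random.default_rng(4)
feats = [rng.normal(size=5) for _ in range(3)]
labels = [0, 2, None]
x0 = init_nodes(feats, labels, num_classes=4)
theta = rng.normal(size=(x0.shape[1], x0.shape[1]))
x1 = propagate(x0, theta, edge_mlp=lambda v: float(v.sum()))
print(x1.shape)
```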
The embodiment of the invention introduces small sample learning to process the training sample set. Fusing the GNN with small sample learning and applying it to table header positioning and identification can improve the accuracy of the graph neural network when few samples are available. The work of the invention provides a solution to the table element identification problem, and the solution can also be applied to scenarios such as table picture identification, positioning of table header keywords (Keys), obtaining the Value corresponding to a Key, and extracting table header position features. The small sample learning process in the embodiment of the invention may comprise the following steps:
Input: distribution of the graphs ε, number of layers L, step size α, balance parameter γ of the loss function.
Step 1: randomly initialize the parameters θ of the small sample learning network.
Step 2: while not done do
Step 3: for each batch of graphs G_i, sample the connection matrix A_i and the feature matrix X_i of the graph;
Step 4: for all G_i do
Step 5: sample a support set S_i and a query set Q_i;
Step 6: compute the embedding f_θ(A_i, X_i) and the loss function L_r(A_i, X_i);
Step 7: compute the feature representation of each layer and the prototype similarity parameters;
Step 8: compute the relationship graph, sample the support set, and compute the prototypes of the graph;
Step 9: compute the matched query set and evaluate the loss;
Step 10: end for
Step 11: update θ with a gradient step of size α on the accumulated loss;
Step 12: end while
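An episodic-training skeleton matching the step listing above; the prototype computation and nearest-prototype matching are simplified assumptions in the style of prototype networks, not the patent's exact procedure:

```python
import numpy as np

def prototypes(embeddings, labels, num_classes):
    """Class prototype = mean embedding of the support nodes
    carrying that table-element label."""
    return np.stack([embeddings[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def episode_loss(support_emb, support_lab, query_emb, query_lab,
                 num_classes):
    """Match each query node to the nearest prototype and take the
    negative log-probability of its true class."""
    protos = prototypes(support_emb, support_lab, num_classes)
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    logp = -d - np.log(np.exp(-d).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(query_lab)), query_lab].mean()

rng = np.random.default_rng(5)
s_emb, q_emb = rng.normal(size=(8, 6)), rng.normal(size=(4, 6))
s_lab = np.array([0, 0, 0, 0, 1, 1, 1, 1])
q_lab = np.array([0, 1, 0, 1])
print(episode_loss(s_emb, s_lab, q_emb, q_lab, num_classes=2))
```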
EXAMPLE III
Fig. 9 is a schematic structural diagram of a table element identification apparatus provided by the third embodiment of the present invention. The apparatus can execute the table element identification method provided by any embodiment of the present invention and has the corresponding functional modules and beneficial effects. The apparatus can be implemented by software and/or hardware and specifically comprises: a graph construction module 401, a feature extraction module 402, and an element determination module 403.
The graph construction module 401 is configured to construct an intermediate processing graph according to the cell information of the table to be processed.
The feature extraction module 402 is configured to extract feature information of at least one node in the intermediate processing graph, where the feature information at least comprises text feature information and position feature information.
The element determination module 403 is configured to process the feature information according to a pre-trained graph neural network to determine the table element of each cell in the table to be processed.
According to the embodiment of the invention, the graph construction module constructs the intermediate processing graph from the cell information of the table to be processed, the feature extraction module processes the intermediate processing graph to obtain the feature information of each node, and the element determination module determines the table element corresponding to each cell from the feature information and the pre-trained graph neural network. Data table elements are thereby identified while the semantic and structural features of the data table are retained, which improves the accuracy of table element identification and facilitates subsequent detection and processing of the table data.
Further, on the basis of the above embodiment of the invention, the apparatus further includes:
the model training module is used for constructing a corresponding sample node graph for each sample table in a training sample set, wherein the training sample set is generated through small sample learning; extracting characteristic information and label information of table nodes in each sample node graph to form characteristic vectors and label vectors; and inputting each feature vector and each label vector into the graph neural network for iterative training until a loss function of the graph neural network meets a preset condition.
Further, on the basis of the above embodiment of the invention, the loss function of the model training module in the apparatus is

L = −Σ_T log P(Y | T)

where Y represents the output of the graph neural network, T represents the training sample set, and log is the logarithmic function.
Further, on the basis of the above embodiment of the invention, the connection formula used by the multilayer perceptron in the graph neural network in the apparatus includes at least one of the following:

Ã^(k)_{i,j} = MLP(abs(x^(k)_i − x^(k)_j))

wherein x^(k)_i represents the vector set, comprising the feature vector and the label vector, of the i-th sample node graph, x^(k)_j represents the vector set, comprising the feature vector and the label vector, of the j-th sample node graph, abs represents a vector distance function, k represents the k-th training process, and Ã^(k)_{i,j} represents the distance between the adjacency matrices of the i-th sample node graph and the j-th sample node graph;
and the normalized form

A^(k)_{i,j} = softmax_j(Ã^(k)_{i,j})

wherein Ã^(k)_{i,j} likewise represents the distance between the adjacency matrices of the i-th sample node graph and the j-th sample node graph.
Further, on the basis of the embodiment of the present invention, the model training module further includes a sample training unit, configured to divide an original training sample set into a support set and a query set, wherein the sample tables included in the support set are labeled with table element labels; determine the table element labels of the query set according to the support set by using a pre-trained small sample learning network; and use the query set and the support set carrying the table element labels as the training sample set.
Further, on the basis of the above embodiment of the present invention, the small sample learning network in the sample training unit includes at least one of: a prototype network model, a relation network model, a model-agnostic meta-learning model, and a metric learning model.
Further, on the basis of the above embodiment of the present invention, the graph building module 401 includes:
and the parameter extraction unit is used for acquiring the position coordinates and the adjacent relation of each cell of the table to be processed.
And the construction execution unit is used for constructing an adjacency matrix as the intermediate processing graph according to the position coordinates and the adjacent relation.
Further, on the basis of the above embodiment of the present invention, the feature extraction module 402 includes:
and the position characteristic unit is used for extracting the relative position of the node in the intermediate processing graph as position characteristic information.
An image feature unit to process the intermediate processing map using a convolutional neural network to obtain image feature information.
And the text characteristic unit is used for processing the node text of the intermediate processing graph according to the embedding operation to acquire text characteristic information.
Further, on the basis of the above embodiment of the invention, the table element in the element determination module 403 includes at least one of: a column element, metadata, a header element, and a data element.
Further, on the basis of the above embodiment of the present invention, the element determining module 403 includes:
and the characteristic input unit is used for inputting the characteristic information corresponding to each cell into an input layer of the graph neural network.
And the classification acquisition unit is used for acquiring the classification vector output by the output layer of the graph neural network.
And the element determining unit is used for searching the table elements corresponding to the classification values in the classification vector and taking the table elements as the table elements of the cells.
Example four
Fig. 10 is a schematic structural diagram of a computer device according to the fourth embodiment of the present invention; it shows a block diagram of a computer device 312 suitable for implementing embodiments of the present invention. The computer device 312 shown in fig. 10 is only an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present invention.
As shown in fig. 10, computer device 312 is in the form of a general purpose computing device. The components of computer device 312 may include, but are not limited to: one or more processors 316, a storage device 328, and a bus 318 that couples the various system components including the storage device 328 and the processors 316.
Bus 318 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Computer device 312 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 312 and includes both volatile and nonvolatile media, removable and non-removable media.
Storage 328 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 330 and/or cache memory 332. The computer device 312 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 334 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in fig. 10 and commonly referred to as a "hard drive"). Although not shown in fig. 10, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk (e.g., a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 318 by one or more data media interfaces. Storage 328 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
Program 336 having a set (at least one) of program modules 326 may be stored, for example, in storage 328, such program modules 326 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which or some combination of which may comprise an implementation of a network environment. Program modules 326 generally carry out the functions and/or methodologies of embodiments of the invention as described herein.
The computer device 312 may also communicate with one or more external devices 314 (e.g., a keyboard, a pointing device, a camera, a display 324, etc.), with one or more devices that enable a user to interact with the computer device 312, and/or with any device (e.g., a network card, a modem, etc.) that enables the computer device 312 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interfaces 322. Also, the computer device 312 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), etc.) and/or a public network, such as the Internet, via the network adapter 320. As shown, the network adapter 320 communicates with the other modules of the computer device 312 via the bus 318. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the computer device 312, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID (redundant array of independent disks) systems, tape drives, and data backup storage systems, to name a few.
Processor 316 executes various functional applications and data processing, such as implementing the table element identification methods provided by the above-described embodiments of the present invention, by executing programs stored in storage 328.
EXAMPLE five
Fig. 11 is a schematic structural diagram of a chip according to a fifth embodiment of the present invention, where the chip 900 includes one or more processors 901 and an interface circuit 902. Optionally, chip 900 may also contain bus 903. Wherein:
the processor 901 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be implemented by integrated logic circuits of hardware or instructions in the form of software in the processor 901. The processor 901 described above may be one or more of a general purpose processor, a Digital Signal Processor (DSP), an application specific integrated circuit ((ASIC), a field programmable gate array ((FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, MCU, MPU, CPU, or co-processor.
The interface circuit 902 may be used for sending or receiving data, instructions or information, and the processor 901 may perform processing by using the data, instructions or other information received by the interface circuit 902, and may send out processing completion information through the interface circuit 902.
Optionally, the chip further comprises a memory, which may include read only memory and random access memory, and provides operating instructions and data to the processor. The portion of memory may also include non-volatile random access memory (NVRAM).
Optionally, the memory stores executable software modules or data structures, and the processor may perform corresponding operations by calling the operation instructions stored in the memory (the operation instructions may be stored in an operating system).
Optionally, the chip may be used in the table element identification apparatus according to the embodiments of the present application. Optionally, the interface circuit 902 may be used to output the execution result of the processor 901. For the table element identification method provided by one or more embodiments of the present application, reference may be made to the foregoing embodiments; details are not repeated here.
It should be noted that the functions corresponding to the processor 901 and the interface circuit 902 may be implemented by hardware design, may also be implemented by software design, and may also be implemented by a combination of software and hardware, which is not limited herein.
EXAMPLE six
The sixth embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processing apparatus, it implements the table element identification method according to the embodiments of the present invention. The computer-readable medium described above may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable signal medium, by contrast, may include a propagated data signal with computer-readable program code embodied therein, for example in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients, servers may communicate using any currently known or future developed network Protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the Internet (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the computer device; or may exist separately without being assembled into the computer device.
The computer readable medium carries one or more programs which, when executed by the computing device, cause the computing device to: constructing an intermediate processing graph according to the cells of the table to be processed; extracting feature information of at least one node in the intermediate processing graph, wherein the feature information at least comprises text feature information and position feature information; and processing the characteristic information according to a pre-trained graph neural network to determine a table element corresponding to each cell.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or any combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), application Specific Integrated Circuits (ASICs), application Specific Standard Products (ASSPs), systems on a chip (SOCs), complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It is to be noted that the foregoing is merely illustrative of the preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will appreciate that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and may encompass other equivalent embodiments without departing from its spirit; the scope of the present invention is determined by the appended claims.

Claims (13)

1. A method for identifying a table element, the method comprising:
constructing an intermediate processing graph according to the cells of the table to be processed;
extracting feature information of at least one node in the intermediate processing graph, wherein the feature information at least comprises text feature information and position feature information;
and processing the feature information according to a pre-trained graph neural network to determine a table element corresponding to each cell.
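Taken together, the three steps of claim 1 map onto a short pipeline: build the cell graph, featurize its nodes, and let the trained graph neural network label every node. The sketch below is illustrative only; `build_adjacency`, `featurize`, and `gnn_forward` are hypothetical stand-ins for components the filing does not name, and are passed in as arguments so the sketch stays self-contained.

```python
import numpy as np

def identify_table_elements(cells, build_adjacency, featurize, gnn_forward, element_names):
    """Minimal sketch of the claim-1 pipeline (all helper names are assumptions).

    cells:           list of {'text': str, 'bbox': (x0, y0, x1, y1)} dicts
    build_adjacency: callable, cells -> (N, N) adjacency matrix (the graph)
    featurize:       callable, cell -> 1-D feature vector (text + position)
    gnn_forward:     pre-trained model, (features, adjacency) -> (N,) class ids
    """
    adjacency = build_adjacency(cells)                    # intermediate processing graph
    features = np.stack([featurize(c) for c in cells])    # one node per cell
    class_ids = gnn_forward(features, adjacency)          # classify every node
    return [element_names[int(k)] for k in class_ids]     # table element per cell
```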
2. The method of claim 1, wherein the training process of the graph neural network comprises:
constructing a corresponding sample node graph for each sample table in a training sample set, wherein the training sample set is generated through small sample learning;
extracting feature information and label information of the table nodes in each sample node graph to form feature vectors and label vectors;
and inputting each feature vector and each label vector into the graph neural network for iterative training until a loss function of the graph neural network meets a preset condition.
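Claim 2's training procedure reads as an ordinary supervised loop over (feature vector, label vector) pairs, one sample node graph per sample table, iterated until the loss satisfies a preset condition. A minimal PyTorch sketch under that reading follows; the model interface and the stopping threshold are assumptions, not details from the filing.

```python
import torch

def train_graph_network(model, samples, lr=1e-3, loss_threshold=1e-2, max_epochs=200):
    """samples: list of (features, adjacency, labels) triples, one per sample table.
    The 'preset condition' of the claim is modelled here as a loss threshold."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for features, adjacency, labels in samples:
            optimizer.zero_grad()
            logits = model(features, adjacency)   # (num_nodes, num_classes)
            loss = criterion(logits, labels)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss / len(samples) < loss_threshold:  # preset condition met
            break
    return model
```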
3. The method of claim 2, wherein the loss function is

L = -\sum_{t \in T} \log Y_t

wherein Y represents the output of the graph neural network, T represents the training sample set, and log is a logarithmic function.
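The formula of claim 3 is filed as an image; the variable glossary (network output Y, training sample set T, a logarithm) is consistent with a standard negative log-likelihood, which is the reading reconstructed above and sketched here. Treat the exact form as an assumption rather than the filed formula.

```python
import torch

def claim3_loss(true_class_probs):
    """L = -sum over t in T of log Y_t, where true_class_probs[t] is the
    network's predicted probability for sample t's true label.
    This negative-log-likelihood reading is an assumption."""
    return -torch.log(true_class_probs).sum()
```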
4. The method of claim 2, wherein the connection formula used by the multi-layer perceptron in the graph neural network comprises at least one of:

\tilde{A}^{(k)}_{i,j} = \mathrm{MLP}\left(\mathrm{abs}\left(x^{(k)}_i - x^{(k)}_j\right)\right)

wherein x^{(k)}_i represents a vector set comprising the feature vector and the label vector of the ith sample node graph, x^{(k)}_j represents a vector set comprising the feature vector and the label vector of the jth sample node graph, abs represents a vector distance function, k represents the kth training round, and \tilde{A}^{(k)}_{i,j} represents the distance between the adjacency matrices of the ith sample node graph and the jth sample node graph; and

a second connection formula, filed as an image and not reproduced here, wherein \tilde{A}^{(k)}_{i,j} likewise represents the distance between the adjacency matrices of the ith sample node graph and the jth sample node graph.
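The first connection formula, reconstructed above as \tilde{A}^{(k)}_{i,j} = MLP(abs(x^{(k)}_i - x^{(k)}_j)), matches the learned-adjacency construction common in few-shot graph neural networks: the element-wise absolute difference between two node vectors is scored by a small multi-layer perceptron. A sketch under that assumption; the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class EdgeMLP(nn.Module):
    """Learned adjacency: A~[i, j] = MLP(abs(x_i - x_j)).
    Hidden size and depth are assumptions; the filing does not specify them."""
    def __init__(self, node_dim, hidden_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(node_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x):                               # x: (N, node_dim)
        diff = (x.unsqueeze(1) - x.unsqueeze(0)).abs()  # (N, N, node_dim) pairwise abs-difference
        return self.mlp(diff).squeeze(-1)               # (N, N) edge scores
```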
5. The method of claim 2, wherein the generating the set of training samples via small sample learning comprises:
dividing an original training sample set into a support set and a query set, wherein a sample table included in the support set is marked with a table element label;
determining the table element labels of the query set according to the support set by using a small sample learning network trained in advance;
and using the query set and the support set with the table element labels as a training sample set.
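Concretely, the small-sample step of claim 5 splits the data into a small labelled support set, lets a pre-trained few-shot network infer labels for the unlabelled query set, and merges both into the final training set. A sketch, where `few_shot_net` is a hypothetical callable that labels one query table from the support set:

```python
import random

def build_training_set(tables, n_support, few_shot_net, seed=0):
    """tables: list of sample tables; the first n_support keep their labels.
    few_shot_net(support, table) -> predicted table-element labels (assumed API)."""
    rng = random.Random(seed)
    shuffled = rng.sample(tables, len(tables))
    support, query = shuffled[:n_support], shuffled[n_support:]
    labelled_query = [(t, few_shot_net(support, t)) for t in query]
    return support, labelled_query  # together they form the training sample set
```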
6. The method of claim 5, wherein the small sample learning network comprises at least one of: a prototype network model, a relation network model, a model-agnostic meta-learning model, and a metric learning model.
7. The method of claim 1, wherein constructing the intermediate processing graph according to the cells of the table to be processed comprises:
acquiring position coordinates and adjacency relations of the cells of the table to be processed;
and constructing an adjacency matrix as the intermediate processing graph according to the position coordinates and the adjacency relations.
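One natural realization of claim 7 treats two cells as adjacent when their bounding boxes touch along a shared row or column border, and records the result in a binary adjacency matrix. The tolerance for small gaps between drawn borders is an assumption, not part of the filing.

```python
import numpy as np

def build_adjacency(cells, tol=2.0):
    """cells: list of {'bbox': (x0, y0, x1, y1)}; returns an (N, N) 0/1 matrix."""
    n = len(cells)
    adjacency = np.zeros((n, n))
    for i in range(n):
        ax0, ay0, ax1, ay1 = cells[i]['bbox']
        for j in range(i + 1, n):
            bx0, by0, bx1, by1 = cells[j]['bbox']
            x_overlap = min(ax1, bx1) - max(ax0, bx0) > 0   # share a column span
            y_overlap = min(ay1, by1) - max(ay0, by0) > 0   # share a row span
            x_touch = abs(ax1 - bx0) <= tol or abs(bx1 - ax0) <= tol
            y_touch = abs(ay1 - by0) <= tol or abs(by1 - ay0) <= tol
            # side-by-side cells must overlap vertically; stacked cells horizontally
            if (x_touch and y_overlap) or (y_touch and x_overlap):
                adjacency[i, j] = adjacency[j, i] = 1.0
    return adjacency
```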
8. The method of claim 1, wherein extracting feature information of at least one node in the intermediate processing graph comprises at least one of:
extracting the relative position of the node in the intermediate processing graph as position characteristic information;
processing the intermediate processing graph using a convolutional neural network to obtain image feature information;
and processing the node texts of the intermediate processing graph according to the embedding operation to acquire text characteristic information.
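The position and text branches of claim 8 can be made concrete as below: the relative position is the bounding box normalized by the page size, and the text feature comes from any embedding callable (`embed_text` is an assumed interface, e.g. averaged word vectors). The image branch, a convolutional network over the cell crop, is omitted for brevity.

```python
import numpy as np

def node_features(cell, page_width, page_height, embed_text):
    """Concatenate relative-position features with an embedded text feature.
    embed_text: callable, str -> 1-D numpy vector (an assumed interface)."""
    x0, y0, x1, y1 = cell['bbox']
    position = np.array([
        x0 / page_width, y0 / page_height,                  # relative top-left corner
        (x1 - x0) / page_width, (y1 - y0) / page_height,    # relative width/height
    ])
    text = embed_text(cell['text'])
    return np.concatenate([position, text])
```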
9. The method of claim 1, wherein the table element comprises at least one of: a column element, metadata, a title element, and a data element.
10. The method of claim 1, wherein the processing the feature information according to a pre-trained graph neural network to determine the table element corresponding to each cell of the table to be processed comprises:
inputting the feature information corresponding to each cell into an input layer of the graph neural network;
obtaining a classification vector output by an output layer of the graph neural network;
and looking up the table element corresponding to each classification value in the classification vector, and taking the table elements so found as the table elements of the respective cells.
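Claims 9 and 10 together describe inference: run the trained network, take the classification vector from the output layer, and map each classification value back to one of the four element types. A sketch; the argmax decoding and the model interface are assumptions.

```python
import torch

ELEMENT_NAMES = ["column element", "metadata", "title element", "data element"]

def classify_cells(model, features, adjacency):
    """features: (num_cells, feat_dim) tensor; adjacency: (num_cells, num_cells).
    Returns one table-element name per cell."""
    with torch.no_grad():
        logits = model(features, adjacency)   # output layer of the graph network
        class_ids = logits.argmax(dim=-1)     # the classification vector
    return [ELEMENT_NAMES[int(k)] for k in class_ids]
```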
11. A table element identification apparatus, the apparatus comprising:
the graph construction module is used for constructing an intermediate processing graph according to the cell information of the table to be processed;
the feature extraction module is used for extracting feature information of at least one node in the intermediate processing graph, wherein the feature information at least comprises text feature information and position feature information;
and the element determining module is used for processing the feature information according to a pre-trained graph neural network to determine the table element of each cell in the table to be processed.
12. A computer device, characterized in that the computer device comprises:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the table element identification method of any one of claims 1-10.
13. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the table element identification method according to any one of claims 1 to 10.
CN202110875581.3A 2021-07-30 2021-07-30 Table element identification method and device, computer equipment and storage medium Pending CN115700828A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110875581.3A CN115700828A (en) 2021-07-30 2021-07-30 Table element identification method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110875581.3A CN115700828A (en) 2021-07-30 2021-07-30 Table element identification method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115700828A (en) 2023-02-07

Family

ID=85120806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110875581.3A Pending CN115700828A (en) 2021-07-30 2021-07-30 Table element identification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115700828A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118115819A (en) * 2024-04-24 2024-05-31 深圳格隆汇信息科技有限公司 Deep learning-based chart image data identification method and system


Similar Documents

Publication Publication Date Title
CN111274811B (en) Address text similarity determining method and address searching method
WO2021212749A1 (en) Method and apparatus for labelling named entity, computer device, and storage medium
CN111241304B (en) Answer generation method based on deep learning, electronic device and readable storage medium
CN108427738B (en) Rapid image retrieval method based on deep learning
WO2017075939A1 (en) Method and device for recognizing image contents
CN109189767B (en) Data processing method and device, electronic equipment and storage medium
CN110750965B (en) English text sequence labeling method, english text sequence labeling system and computer equipment
CN112819023B (en) Sample set acquisition method, device, computer equipment and storage medium
CN110033018B (en) Graph similarity judging method and device and computer readable storage medium
CN110633366B (en) Short text classification method, device and storage medium
CN110363049B (en) Method and device for detecting, identifying and determining categories of graphic elements
WO2020224106A1 (en) Text classification method and system based on neural network, and computer device
CN110795527B (en) Candidate entity ordering method, training method and related device
CN113688631B (en) Nested named entity identification method, system, computer and storage medium
WO2021208727A1 (en) Text error detection method and apparatus based on artificial intelligence, and computer device
CN111753863A (en) Image classification method and device, electronic equipment and storage medium
CN114358203A (en) Training method and device for image description sentence generation module and electronic equipment
CN113255714A (en) Image clustering method and device, electronic equipment and computer readable storage medium
CN113806582B (en) Image retrieval method, image retrieval device, electronic equipment and storage medium
CN113486178B (en) Text recognition model training method, text recognition method, device and medium
CN115482418B (en) Semi-supervised model training method, system and application based on pseudo-negative labels
CN116089648B (en) File management system and method based on artificial intelligence
CN116304307A (en) Graph-text cross-modal retrieval network training method, application method and electronic equipment
CN112214595A (en) Category determination method, device, equipment and medium
CN110020638B (en) Facial expression recognition method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination