WO2023077995A1 - Information extraction method and apparatus, and device, medium and product - Google Patents


Info

Publication number
WO2023077995A1
WO2023077995A1 · PCT/CN2022/121551
Authority
WO
WIPO (PCT)
Prior art keywords
edge, node, nodes, text, classification
Prior art date
Application number
PCT/CN2022/121551
Other languages
French (fr)
Chinese (zh)
Inventor
范湉湉 (Fan Tiantian)
黄灿 (Huang Can)
王长虎 (Wang Changhu)
Original Assignee
北京有竹居网络技术有限公司 (Beijing Youzhuju Network Technology Co., Ltd.)
Application filed by 北京有竹居网络技术有限公司 (Beijing Youzhuju Network Technology Co., Ltd.)
Publication of WO2023077995A1 publication Critical patent/WO2023077995A1/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/24 — Classification techniques
    • G06F 18/243 — Classification techniques relating to the number of classes
    • G06F 18/2431 — Multiple classes
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks

Definitions

  • the present application relates to the field of computer technology, and in particular to an information extraction method, apparatus, device, computer-readable storage medium and computer program product.
  • the purpose of the present disclosure is to provide an information extraction method, apparatus, device, computer-readable storage medium and computer program product capable of accurately extracting information from images with complex typesetting and no fixed format.
  • the present disclosure provides an information extraction method, the method comprising:
  • each text line in the text area serves as a node of the graph network model
  • the present disclosure provides an information extraction device, the device comprising:
  • a detection module configured to perform text detection on the image to obtain a text area in the image, the text area including a plurality of text lines;
  • a building module configured to construct a graph network model according to the text region, where each text line in the text region acts as a node of the graph network model;
  • a classification module configured to classify nodes in the graph network model through a node classification model, and classify edges between nodes in the graph network model through an edge classification model;
  • An obtaining module configured to obtain at least one key-value pair in the image according to the classification results of the nodes and the classification results of the edges.
  • the present disclosure provides an electronic device, including: a storage device, on which a computer program is stored; and a processing device, configured to execute the computer program in the storage device, so as to implement the steps of the method described in the first aspect of the present disclosure.
  • the present disclosure provides a computer-readable medium on which a computer program is stored, and when the program is executed by a processing device, the steps of the method described in the first aspect of the present disclosure are implemented.
  • the present disclosure provides a computer program product containing instructions, which, when run on a device, causes the device to execute the steps of the method described in the first aspect above.
  • the present disclosure has at least the following advantages:
  • the electronic device performs text detection on the image to obtain a text area including multiple text lines, and then uses each text line in the text area as a node to construct a graph network model. It classifies the nodes in the graph network model through the node classification model and the edges in the graph network model through the edge classification model, and then obtains the key-value pairs in the image according to the node classification results and the edge classification results.
  • the electronic device not only classifies the nodes in the graph network model but also classifies the edges, so that the features of the text lines themselves and the features between associated text lines can be considered comprehensively, enabling accurate extraction of information with complex typesetting and no fixed format from the image.
  • FIG. 1 is a schematic flow diagram of an information extraction method provided in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a text bounding box of an image provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a graph neural network model provided in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of node embedding of a graph neural network model provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an information extraction device provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • "first" and "second" in the embodiments of the present application are used for description purposes only, and cannot be understood as indicating or implying relative importance or implicitly indicating the quantity of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of these features.
  • OCR optical character recognition
  • OCR refers to the process in which electronic devices determine the shape of printed characters by detecting dark and bright patterns, and then use character recognition methods to translate the shape into computer text.
  • OCR can optically convert the text of the printed font into a black and white dot matrix image file, and then convert the text in the image into a text format through recognition software.
  • in text recognition, especially for images with complex typesetting and no fixed format, a variety of information may appear in different typesetting regions of the recognized text, making it difficult to classify the text accurately.
  • for an image in which a certain portion of the text is long, it may not be possible to combine multiple related lines of text, and it is difficult to merge the related text accurately.
  • An electronic device refers to a device capable of data processing, such as a server or a terminal.
  • the terminal includes, but is not limited to, a smart phone, a tablet computer, a notebook computer, a personal digital assistant (personal digital assistant, PDA) or a smart wearable device.
  • the server may be a cloud server, for example, a central server in a central cloud computing cluster, or an edge server in an edge cloud computing cluster.
  • the server may also be a server in a local data center.
  • An on-premises data center refers to a data center directly controlled by the user.
  • the electronic device performs text detection on the image to obtain a text area including multiple text lines, and establishes a graph network model with each text line as a node of the graph network model. It then classifies the nodes in the graph network model through the node classification model, classifies the edges between the nodes through the edge classification model, and obtains at least one key-value pair in the image according to the classification results of the nodes and the classification results of the edges, so that information with no fixed format can be accurately extracted from the image.
  • the result of edge classification can provide a reference for the result of node classification, so that the node classification model can obtain more accurate node classification results.
  • the result of node classification can provide a reference for the result of edge classification, so that the edge classification model can obtain more accurate edge classification results.
  • the electronic device integrates node classification and edge classification, comprehensively considers the characteristics of the text line itself and the characteristics between the associated text lines, and realizes the accurate extraction of information with complex typesetting and no fixed format in the image.
  • the following uses an electronic device as a terminal as an example, as shown in FIG. 1 , to introduce the information extraction method provided by the embodiment of the present disclosure.
  • Step 102 the terminal performs text detection on the image to obtain a text area in the image.
  • an image refers to an image including a text area, and the text area in the image includes multiple text lines.
  • the terminal can perform text detection on images in various ways, for example, the terminal can perform text detection through OCR technology.
  • OCR includes text detection and text recognition. Text detection is used to find and segment text areas in pictures, and text recognition is used to convert text characters into computer text.
  • the terminal can discover the text area in the picture through OCR technology and segment it in the form of text bounding boxes (bounding box, bbox). As shown in FIG. 2, the text area 202 in the image includes multiple text bounding boxes 204-1, 204-2, etc., and each text bounding box corresponds to one text line.
  • the terminal recognizes the text in the text bounding box in the text area, and obtains the text information of the text line corresponding to each text bounding box.
  • the text information recognized and acquired by the terminal can be displayed in the image text bounding box.
  • Step 104 The terminal builds a graph network model according to the text area.
  • the graph network (GN) model refers to the model established according to the graph structure.
  • the graph includes two basic features of nodes and edges, where each node has the characteristic information of the node, and each node in the graph has the structural information of the node, that is, the edge information.
  • the terminal may construct a graph neural network (graph neural network, GNN) model according to the text region.
  • the graph neural network model is a neural network model established based on the correspondence between nodes and edges in a graph. It comprehensively considers the feature information of each node in the graph and the structural (edge) information of the node to realize accurate extraction of information.
  • the terminal can use each text line in the text area as a node to build a graph network model.
  • the terminal may determine the edges in the graph network model according to the positional relationship between the text lines, as shown in FIG. 3.
  • the terminal can determine the edges between nodes according to visible-circle visibility.
  • visible-circle visibility refers to establishing the edges in the graph network model according to circles whose diameters are the candidate edges.
  • the edges determined based on visible-circle visibility satisfy the condition that the circles generated by taking all edges in the graph as diameters do not intersect.
  • edges established through visible-circle visibility can avoid connections between non-adjacent text lines, reduce the impact on subsequent model recognition, reduce the difficulty of model learning, and improve the accuracy of the model.
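  • the visible-circle rule described above can be sketched as follows. This is a minimal illustration, assuming one common reading of the rule (an edge survives only if the circle whose diameter is that edge contains no other node center, i.e. the Gabriel-graph criterion); the node coordinates are invented for the example and are not from the source.

```python
from itertools import combinations
import math

def visible_circle_edges(centers):
    """Keep edge (u, v) only if the circle whose diameter is the
    segment u-v contains no other node center (Gabriel-graph style),
    one way to realise the visible-circle rule in the text."""
    edges = []
    for (i, p), (j, q) in combinations(list(enumerate(centers)), 2):
        mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
        r = math.dist(p, q) / 2
        # the edge survives if every other center lies outside the circle
        if all(math.dist(mid, c) >= r - 1e-9
               for k, c in enumerate(centers) if k not in (i, j)):
            edges.append((i, j))
    return edges

# three collinear text-line centers: the long edge (0, 2) is blocked
# by the middle node, so only adjacent lines get connected
print(visible_circle_edges([(0, 0), (10, 0), (20, 0)]))  # → [(0, 1), (1, 2)]
```

  • this matches the stated property that non-adjacent text lines are not connected, since any in-between line center falls inside the circle of the long edge.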
  • Step 106 The terminal extracts the features of the nodes.
  • the terminal can extract the features of each node in the image in multiple ways, for example, it can use up-sampling or down-sampling, or a combination of up-sampling and down-sampling.
  • upsampling refers to a technique that converts an image to a higher resolution.
  • upsampling methods can specifically include interpolation, deconvolution, and unpooling. Interpolation uses mathematical formulas to compute the missing pixels from the surrounding pixels.
  • deconvolution is the inverse process of convolution and can be understood as a special forward convolution: the input image is first enlarged by zero-padding at a certain ratio, and the convolution kernel is then rotated before a forward convolution is performed.
  • unpooling is the inverse operation of pooling and can specifically include max-unpooling and average-unpooling, where max-unpooling requires recording the position of the maximum value during pooling. Downsampling refers to obtaining a new sequence by taking one sample every few samples of the original sequence.
  • in order to avoid obtaining too many or too few features from the image, the terminal can first down-sample and then up-sample the features in the image, so that image feature samples with a relatively uniform number of samples can be obtained.
  • the terminal can use UNet to extract features from the entire image.
  • the terminal can further determine the features of the corresponding nodes in the image.
  • the features of the node can include features containing various information about the node, such as image features containing the color, font, and font size of the text in the node, text features containing the text content of the node, and position features containing the coordinates of the node in the image.
  • the terminal can use region-of-interest pooling (ROI pooling) or ROI align to process the image features of the entire image to obtain the image features corresponding to each node.
  • the image feature of the node may be any one or more of the color, font and font size of the text in the text area corresponding to the node.
  • the terminal may use the language model to extract the text feature of the text line corresponding to the node.
  • a language model (language model, LM) refers to a probability model established for a certain language, which can establish a probability distribution describing the occurrence of a given word sequence in a language.
  • the terminal can use language models such as bidirectional long short-term memory (Bi-LSTM) or bidirectional encoder representations from transformers (BERT) to extract the text features of the text lines in the image.
  • Bi-LSTM is generated by combining the forward long short term memory (LSTM) with the backward LSTM.
  • LSTM can learn long-term dependent information, so it can have high recognition accuracy.
  • LSTM can capture long-distance dependencies, taking into account the order of words even when they are far apart in a sentence.
  • Bi-LSTM can not only learn the information from front to back, but also from back to front, so it can better capture the bidirectional semantic dependence.
  • BERT is a pre-trained language representation model.
  • the model no longer uses the traditional one-way language model or two one-way language models for pre-training, but uses a new masked language model (masked language model, MLM), so as to be able to generate deep bidirectional language representation.
  • the terminal can obtain the text content surrounded by the text bounding box corresponding to each node, so as to obtain the text features of the node.
  • the terminal can determine the position feature of a node according to the node's location. Specifically, the terminal may determine the position feature of the node according to the location information (for example, coordinate information) of the text bounding box. The terminal may also perform embedding processing on the location information of the text bounding box to obtain the position feature of the node. Here, embedding processing refers to converting the node's position information into a low-dimensional real-valued vector through calculation, which can fuse multiple features into a continuous, computable vector. In this embodiment, the terminal may express the obtained location information of the node's text bounding box as a low-dimensional computable real-valued vector through embedding processing, so as to obtain the position feature of the node.
  • the terminal can extract node features such as image features, text features, and location features of each node in the image.
  • the node features include various information about the node, and the multiple node features corresponding to each node together constitute the node features input to the graph neural network.
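  • a minimal sketch of how the per-node input described above could be assembled. The feature dimensions (64 image, 128 text) are illustrative assumptions, and the simple bbox normalization stands in for the learned embedding step; the real model may compute these differently.

```python
import numpy as np

def position_feature(bbox, img_w, img_h):
    """Toy position feature: bounding-box coordinates normalised by the
    image size (a stand-in for the embedding step in the text)."""
    x, y, w, h = bbox
    return np.array([x / img_w, y / img_h, w / img_w, h / img_h])

def node_feature(img_feat, txt_feat, bbox, img_w, img_h):
    # the node's input vector is the concatenation of its image,
    # text, and position features
    return np.concatenate([img_feat, txt_feat,
                           position_feature(bbox, img_w, img_h)])

v = node_feature(np.zeros(64), np.zeros(128), (50, 20, 200, 30), 1000, 800)
print(v.shape)  # (196,)
```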
  • Step 108 The terminal extracts features of edges between nodes.
  • the terminal can obtain the edge features in the graph network model according to the relative position and relative width and height between the text lines with the edge connection relationship.
  • the edge connection relationship is the edge determined in step 104 .
  • the relative position between the text lines may be the relative position of the text bounding boxes corresponding to the text lines
  • the relative width and height between the text lines may be the relative width and height of the text bounding boxes corresponding to the text lines.
  • the coordinates of the center of text bounding box A corresponding to text line A are (xA, yA), with width wA and height hA; the coordinates of the center of text bounding box B corresponding to text line B are (xB, yB), with width wB and height hB.
  • the relative position between text line A and text line B can then be (xB - xA, yB - yA), the relative width wA/wB, and the relative height hA/hB.
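  • the edge-feature formulas above can be written directly as code. This is a direct transcription of the stated formulas; the box values in the example are invented for illustration.

```python
def edge_feature(box_a, box_b):
    """Edge feature between two text lines, per the formulas in the
    text: relative centre offset (xB - xA, yB - yA), width ratio
    wA / wB, and height ratio hA / hB.  Boxes are (cx, cy, w, h)."""
    (xa, ya, wa, ha), (xb, yb, wb, hb) = box_a, box_b
    return (xb - xa, yb - ya, wa / wb, ha / hb)

# two vertically stacked text lines of identical size
print(edge_feature((100, 40, 200, 20), (100, 70, 200, 20)))
# → (0, 30, 1.0, 1.0)
```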
  • the terminal can obtain node features in the graph neural network including image features, text features, and position features, and edge features in the graph neural network including relative positions between text lines and relative width and height.
  • Step 110 The terminal aggregates the characteristics of the neighbor nodes of the node according to the characteristics of the edges, and obtains the embedding representation of the node.
  • the neighbor nodes of a node refer to nodes that have an edge association relationship with the node, and two nodes on the same edge are neighbor nodes.
  • the terminal can obtain the node features of a node and the edge features of the edges containing that node, and then obtain the node features of the other node on each such edge, that is, the node's neighbor nodes.
  • through the graph neural network, the terminal aggregates the node's own features, the corresponding edge features, and the features of the neighbor nodes to jointly obtain the embedded representation of the node, as shown in FIG. 4.
  • the terminal can use a spatial graph convolutional network (GCN) to aggregate the features of the node's neighbor nodes according to the features of the edges, and obtain the embedded representation of the node.
  • graph convolutional networks (GCN), graph recurrent networks (GRN), graph attention networks (GAT), and graph autoencoders (GAE) all belong to the family of graph neural networks.
  • the graph convolutional network is used as an example for introduction here.
  • GCN can be applied in the non-Euclidean space where the neighbor nodes are not fixed.
  • the convolution of a node by the graph convolutional network is in effect a weighted summation over the node and the neighbor nodes associated with it by edges, so that the node's own features, edge features, and neighbor node features can be aggregated to obtain the node's embedded representation.
  • Graph convolutional networks are mainly divided into spatial-domain-based graph convolutional networks and frequency-domain-based graph convolutional networks.
  • the graph convolutional network based on the spatial domain can directly convolve the nodes in the image, while the graph convolutional network based on the frequency domain needs to perform Fourier transform first, and then perform convolution.
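  • the spatial-domain aggregation described above can be sketched in a few lines. This is a generic single GCN layer (degree-normalised weighted sum over a node and its neighbors), not the patent's exact model: the patent's aggregation also mixes in edge features, which are omitted here, and the adjacency, features, and identity weights are illustrative.

```python
import numpy as np

def gcn_layer(adj, x, w):
    """One spatial-domain graph-convolution step: each node's new
    embedding is a degree-normalised weighted sum over itself and its
    neighbours, i.e. relu(D^-1 (A + I) X W)."""
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    d_inv = 1.0 / a_hat.sum(axis=1, keepdims=True)     # 1 / degree
    return np.maximum(d_inv * (a_hat @ x) @ w, 0.0)    # ReLU

# 3 nodes on a path graph 0-1-2, 2-dim features, identity weights
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
x = np.array([[1, 0], [0, 1], [1, 1]], float)
out = gcn_layer(adj, x, np.eye(2))
print(out[0])  # node 0 averages itself and node 1 → [0.5 0.5]
```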
  • Step 112 The terminal classifies the nodes in the graph network model through the node classification model.
  • the node classification model refers to a model that can classify nodes.
  • the input of the node classification model can be the node embeddings in the graph neural network, and the output is the type of each node, such as key, value, and others.
  • the type corresponding to a node can be set by the user. Taking the product label shown in FIG. 2 as an example, a product attribute can be set as a key, the product feature corresponding to that attribute as a value, and the rest of the content as others.
  • the node classification model may be an end-to-end model, for example a multilayer perceptron (MLP) model.
  • MLP is also called artificial neural network (ANN), which includes an input layer, an output layer, and at least one hidden layer.
  • the node classification model can also be another trained multi-class model, such as a k-nearest neighbors, decision tree, naive Bayes, random forest, or gradient boosting model.
  • the types of nodes can also be two types, such as key and value, so a binary classification model can also be used to classify the nodes in the graph network model.
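  • a toy forward pass for the MLP node classifier described above. All weights here are hand-picked illustrative assumptions (a real model would be trained, jointly with the GNN); only the structure (one hidden ReLU layer, softmax over the node types) follows the text.

```python
import numpy as np

def mlp_classify(embedding, w1, b1, w2, b2,
                 labels=("key", "value", "others")):
    """Toy MLP node classifier: hidden layer with ReLU, then a softmax
    over the node types (key / value / others) described in the text."""
    h = np.maximum(embedding @ w1 + b1, 0.0)
    logits = h @ w2 + b2
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return labels[int(np.argmax(probs))]

# illustrative fixed weights: first embedding axis votes "key",
# second votes "value"
w1, b1 = np.eye(2), np.zeros(2)
w2, b2 = np.array([[5., 0., 0.], [0., 5., 0.]]), np.zeros(3)
print(mlp_classify(np.array([1., 0.]), w1, b1, w2, b2))  # → key
```

  • a binary key/value classifier would simply shrink the output layer to two labels.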
  • Step 114 The terminal classifies the edges between the nodes in the graph network model through the edge classification model.
  • the edge classification model refers to a model that can classify edges. The input of the edge classification model can be the concatenation of two node embeddings in the graph neural network, and the output is the type of the edge, such as the key-value edges, key-key edges, value-value edges, and others corresponding to step 112.
  • the edge between a commodity attribute and the commodity feature corresponding to the attribute is a key-value edge
  • the edge between two commodity attributes is a key-key edge
  • the edge between two commodity features is a value-value edge, etc.
  • the edge classification model may also be an end-to-end model, and the terminal obtains an edge classification model capable of classifying edges between nodes by training the MLP model.
  • the node classification model and the edge classification model can be jointly trained as input and output of each other.
  • the node classification model can be verified through the edge classification model.
  • for edge A, the node classification model judges that its two nodes are key and value respectively, and the edge classification model can verify whether edge A is a key-value edge.
  • for edge B, the node classification model judges that both of its nodes are keys, and the edge classification model can verify whether edge B is a key-key edge.
  • for edge C, the node classification model judges that both of its nodes are values, and the edge classification model can verify whether edge C is a value-value edge, and so on.
  • the edge classification model can also be verified through the node classification model. For example, when the edge classification model judges that edge D is a key-value edge, the node classification model can be used to judge whether the two nodes of edge D are key and value; when it judges that edge E is a key-key edge, the node classification model can judge whether both nodes of edge E are keys; when it judges that edge F is a value-value edge, the node classification model can judge whether both nodes of edge F are values.
  • Step 116 The terminal obtains at least one key-value pair in the image according to the classification results of the nodes and the classification results of the edges.
  • according to the classification results of the nodes, the terminal can identify two adjacent nodes classified as key and value respectively, and then verify through the edge classification results whether the edge formed by the two nodes is a key-value edge, thereby obtaining a key-value pair.
  • the terminal can also determine the key-value edge according to the edge classification result, and then judge whether the two nodes of the edge are key and value according to the node classification result.
  • in this way the terminal determines a key-value pair in the image, so that at least one key-value pair in the image can be obtained.
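  • the cross-check between the two classifiers described in this step can be sketched as plain dictionary bookkeeping. The label names and the example node/edge results are illustrative assumptions; only the rule (accept an edge as a key-value pair when the edge label and both node labels agree) comes from the text.

```python
def extract_key_value_pairs(node_labels, edge_labels):
    """Combine both classifiers' outputs: an edge yields a key-value
    pair only when the edge classifier says 'key-value' AND the node
    classifier labels its two endpoints key and value."""
    pairs = []
    for (u, v), etype in edge_labels.items():
        if etype != "key-value":
            continue
        if {node_labels[u], node_labels[v]} == {"key", "value"}:
            # orient each pair as (key node, value node)
            pairs.append((u, v) if node_labels[u] == "key" else (v, u))
    return pairs

node_labels = {0: "key", 1: "value", 2: "key", 3: "others"}
edge_labels = {(0, 1): "key-value", (0, 2): "key-key", (2, 3): "key-value"}
print(extract_key_value_pairs(node_labels, edge_labels))  # → [(0, 1)]
```

  • note that edge (2, 3) is rejected: the edge classifier calls it key-value, but the node classifier disagrees, which is exactly the mutual verification the text describes.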
  • the present disclosure provides an information extraction method.
  • the terminal performs text detection on the image to obtain a text area including multiple text lines, and then uses each text line in the text area as a node to construct a graph network model. It classifies the nodes in the graph network model through the node classification model and the edges in the graph network model through the edge classification model, and then obtains the key-value pairs in the image according to the node classification results and the edge classification results.
  • the terminal not only classifies the nodes in the graph network model but also classifies the edges, and can comprehensively consider the features of the text lines themselves and the features of the associated text lines, so as to accurately extract information with complex typesetting and no fixed format from the image.
  • Fig. 5 is a schematic diagram of an information extraction device according to an exemplary disclosed embodiment. As shown in Fig. 5, the information extraction device 500 includes:
  • a detection module 502 configured to perform text detection on the image, and obtain a text area in the image, where the text area includes a plurality of text lines;
  • a construction module 504 configured to construct a graph network model according to the text region, where each text line in the text region acts as a node of the graph network model;
  • a classification module 506 configured to classify nodes in the graph network model through a node classification model, and classify edges between nodes in the graph network model through an edge classification model;
  • the obtaining module 508 is configured to obtain at least one key-value pair in the image according to the classification results of the nodes and the classification results of the edges.
  • the device further includes an extraction module 510, and the extraction module 510 may be used to:
  • the classification module 506 can be used for:
  • the embedded representations of the two nodes corresponding to the edges are spliced, and according to the spliced embedded representations, the edges between the nodes in the graph network model are classified by an edge classification model.
  • the classification module 506 may be used to:
  • the edges between the nodes in the graph network model are classified by using an edge classification model.
  • the classification result of the node includes one of the following labels: key, value, or others
  • the classification result of the edge includes one of the following labels: key-value edge, value-value edge, key-key edge, or others.
  • when the classification result of the node is a key, the classification result of the edge includes a key-value edge or a key-key edge; when the classification result of the node is a value, the classification result of the edge includes a key-value edge or a value-value edge.
  • the features of the nodes include at least one of the image features, text features, and position features of the nodes, and the edge features include at least one of the relative position and the relative width and height between the text lines.
  • the node classification model and the edge classification model are end-to-end models.
  • FIG. 6 it shows a schematic structural diagram of an electronic device 600 suitable for implementing an embodiment of the present disclosure.
  • the terminal equipment in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (such as car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 6 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • an electronic device 600 may include a processing device (such as a central processing unit or a graphics processing unit) 601, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603.
  • RAM 603 In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored.
  • the processing device 601 , ROM 602 and RAM 603 are connected to each other through a bus 604 .
  • An input/output (I/O) interface 605 is also connected to the bus 604 .
  • the following devices can be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 608 including, for example, a magnetic tape and a hard disk; and a communication device 609.
  • the communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While FIG. 6 shows electronic device 600 having various means, it should be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602.
  • When the computer program is executed by the processing device 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • the client and the server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device: performs text detection on an image to obtain a text area in the image, the text area including a plurality of text lines; constructs a graph network model according to the text area, each text line in the text area being a node of the graph network model; classifies the nodes in the graph network model through a node classification model, and classifies the edges between the nodes in the graph network model through an edge classification model; and obtains at least one key-value pair in the image according to the classification results of the nodes and the classification results of the edges.
  • Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as "C" or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, connected via the Internet using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the modules involved in the embodiments described in the present disclosure may be implemented by software or by hardware. In some cases, the name of a module does not constitute a limitation on the module itself.
  • For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • Example 1 provides an information extraction method, the method comprising: performing text detection on an image to obtain a text area in the image, the text area including a plurality of text lines; constructing a graph network model according to the text area, each text line in the text area being a node of the graph network model; classifying the nodes in the graph network model through a node classification model, and classifying the edges between the nodes in the graph network model through an edge classification model; and obtaining at least one key-value pair in the image according to the classification results of the nodes and the classification results of the edges.
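The method of Example 1 amounts to a four-step pipeline. The sketch below is an illustrative reconstruction, not the claimed implementation; every helper passed in (detect_text_lines, build_graph, classify_nodes, classify_edges, extract_key_value_pairs) is a hypothetical placeholder for the corresponding component.

```python
# Illustrative sketch of the Example 1 pipeline. All helper callables are
# hypothetical placeholders injected by the caller; they stand in for the
# text detector, graph builder, the two classifiers, and the extractor.

def extract_information(image, detect_text_lines, build_graph,
                        classify_nodes, classify_edges,
                        extract_key_value_pairs):
    text_lines = detect_text_lines(image)    # step 1: text detection
    graph = build_graph(text_lines)          # step 2: one node per text line
    node_labels = classify_nodes(graph)      # step 3a: node classification
    edge_labels = classify_edges(graph)      # step 3b: edge classification
    # step 4: combine both label sets into key-value pairs
    return extract_key_value_pairs(graph, node_labels, edge_labels)
```

Because each stage is injected, the sketch only fixes the order of the claimed steps, not any particular model.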
  • Example 2 provides the method of Example 1, the method further comprising: extracting the features of the nodes and the features of the edges; and aggregating, according to the features of the edges, the features of the neighbor nodes of each node to obtain an embedded representation of the node. Classifying the nodes in the graph network model through the node classification model and classifying the edges between the nodes in the graph network model through the edge classification model includes: classifying the nodes in the graph network model through the node classification model according to the embedded representations of the nodes; and splicing the embedded representations of the two nodes corresponding to each edge, and classifying the edges between the nodes in the graph network model through the edge classification model according to the spliced embedded representations.
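The aggregation-and-splicing scheme of Example 2 can be sketched numerically. The single weighted-sum aggregation step and the fixed linear classifier below are simple stand-ins, assumed for illustration; a real system would use trained graph-network layers and classification models.

```python
import numpy as np

# Assumed sketch of Example 2: aggregate neighbor features along edges to
# form node embeddings, classify nodes from their embeddings, and build
# edge-classifier inputs by concatenating the embeddings of each edge's
# two endpoint nodes.

def node_embeddings(node_feats, edges, edge_feats):
    # node_feats: (N, D); edges: list of (i, j); edge_feats: scalar weight per edge
    emb = node_feats.copy()
    for (i, j), w in zip(edges, edge_feats):
        # symmetric neighbor aggregation, weighted by the edge feature
        emb[i] += w * node_feats[j]
        emb[j] += w * node_feats[i]
    return emb

def classify(emb, W):
    # a fixed linear map standing in for the trained node classification model
    return (emb @ W).argmax(axis=1)

def edge_inputs(emb, edges):
    # splice (concatenate) the embeddings of the two nodes of each edge
    return np.stack([np.concatenate([emb[i], emb[j]]) for i, j in edges])
```

The point of the sketch is the data flow: edge features influence node embeddings, and node embeddings in turn feed the edge classifier, matching the mutual-reference idea described later in the disclosure.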
  • Example 3 provides the method of Example 1, wherein classifying the edges between the nodes in the graph network model through the edge classification model includes: classifying the edges between the nodes in the graph network model through the edge classification model according to the classification results of the nodes.
  • Example 4 provides the method of any one of Examples 1 to 3, wherein the classification result of a node includes one of the following labels: key, value, or other; and the classification result of an edge includes one of the following labels: key-value edge, value-value edge, key-key edge, or other.
  • Example 5 provides the method of Example 4, wherein when the classification result of a node is a key, the classification result of its edges includes a key-value edge or a key-key edge; and when the classification result of a node is a value, the classification result of its edges includes a key-value edge or a value-value edge.
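Under the label semantics of Examples 4 and 5, key-value pairs fall out of the two label sets: key-value edges attach values to keys, and value-value edges chain multi-line values together. The pairing logic below is an assumed reconstruction for illustration, not the claimed algorithm.

```python
# Assumed sketch: assemble key-value pairs from node labels and edge labels.
# texts: node_id -> text; node_labels: node_id -> "key" / "value" / "other";
# edge_labels: {(i, j): "key-value" | "value-value" | "key-key" | "other"}.

def key_value_pairs(texts, node_labels, edge_labels):
    # 1. Chain value nodes joined by value-value edges (tiny union-find).
    parent = {i: i for i in texts}
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i
    for (i, j), lab in edge_labels.items():
        if lab == "value-value":
            parent[find(j)] = find(i)
    # 2. Attach each value chain to its key via a key-value edge.
    pairs = {}
    for (i, j), lab in edge_labels.items():
        if lab != "key-value":
            continue
        k, v = (i, j) if node_labels[i] == "key" else (j, i)
        root = find(v)
        members = sorted(m for m in texts if find(m) == root)
        pairs[texts[k]] = " ".join(texts[m] for m in members)
    return pairs
```

This illustrates why classifying edges as well as nodes matters: node labels alone cannot say which value belongs to which key, or that two value lines form one field.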
  • Example 6 provides the method of any one of Examples 1 to 5, wherein the features of a node include at least one of the image features, text features, and position features of the node, and the features of an edge include at least one of the relative position, relative width, and relative height between the text lines.
  • Example 7 provides the method of any one of Examples 1 to 5, wherein the node classification model and the edge classification model are end-to-end models.
  • Example 8 provides an information extraction apparatus, the apparatus comprising: a detection module, configured to perform text detection on an image to obtain a text area in the image, the text area including a plurality of text lines; a construction module, configured to construct a graph network model according to the text area, each text line in the text area being a node of the graph network model; a classification module, configured to classify the nodes in the graph network model through a node classification model and classify the edges between the nodes in the graph network model through an edge classification model; and an acquisition module, configured to obtain at least one key-value pair in the image according to the classification results of the nodes and the classification results of the edges.
  • Example 9 provides the apparatus of Example 8, the apparatus further comprising an extraction module configured to: extract the features of the nodes and the features of the edges; and aggregate, according to the features of the edges, the features of the neighbor nodes of each node to obtain an embedded representation of the node. The classification module may be configured to: classify the nodes in the graph network model through the node classification model according to the embedded representations of the nodes; and splice the embedded representations of the two nodes corresponding to each edge, and classify the edges between the nodes in the graph network model through the edge classification model according to the spliced embedded representations.
  • Example 10 provides the apparatus of Example 8, wherein the classification module may be configured to classify the edges between the nodes in the graph network model through the edge classification model according to the classification results of the nodes.
  • Example 11 provides the apparatus of any one of Examples 8 to 10, wherein the classification result of a node includes one of the following labels: key, value, or other; and the classification result of an edge includes one of the following labels: key-value edge, value-value edge, key-key edge, or other.
  • Example 12 provides the apparatus of Example 11, wherein when the classification result of a node is a key, the classification result of its edges includes a key-value edge or a key-key edge; and when the classification result of a node is a value, the classification result of its edges includes a key-value edge or a value-value edge.
  • Example 13 provides the apparatus of any one of Examples 8 to 12, wherein the features of a node include at least one of the image features, text features, and position features of the node, and the features of an edge include at least one of the relative position, relative width, and relative height between the text lines.
  • Example 14 provides the apparatus of any one of Examples 8 to 12, wherein the node classification model and the edge classification model are end-to-end models.

Abstract

Provided in the present application are an information extraction method and apparatus, and a device, a medium and a product. The method comprises: an electronic device performing text detection on an image to obtain a text area that comprises a plurality of text lines; constructing a graph network model by taking each text line in the text area as a node; classifying the nodes in the graph network model by means of a node classification model; classifying the edges in the graph network model by means of an edge classification model; and then obtaining the key-value pairs in the image according to the node classification results and the edge classification results. In this way, the features of a text line itself and the features of associated text lines can be comprehensively taken into consideration, thereby realizing accurate information extraction.

Description

Information extraction method, device, equipment, medium and product
This application claims priority to Chinese patent application No. 202111300845.9, entitled "Information extraction method, device, equipment, medium and product" and filed with the China Patent Office on November 04, 2021, the entire contents of which are incorporated herein by reference.
Technical Field

The present application relates to the field of computer technology, and in particular to an information extraction method, apparatus, device, computer-readable storage medium, and computer program product.
Background

With the advent of the information age, a large amount of data, especially data in the form of images, has been generated on the Internet. For example, a large number of commodity images are generated in e-commerce applications, and a large number of ticket images are generated in mobile banking applications. These images usually contain rich information: commodity images may include commodity parameter information, and ticket images may include user identity information.

The above information is crucial for commodity recommendation or identity verification. However, the layout of the information in these images is usually complex and has no fixed format. Relying on manual entry takes a great deal of time and incurs considerable labor cost.

How to extract information with complex layouts and no fixed format from images has therefore become a major concern in the industry.
Summary

The purpose of the present disclosure is to provide an information extraction method, apparatus, device, computer-readable storage medium, and computer program product capable of accurately extracting information from images with complex layouts and no fixed format.
In a first aspect, the present disclosure provides an information extraction method, the method comprising:

performing text detection on an image to obtain a text area in the image, the text area including a plurality of text lines;

constructing a graph network model according to the text area, each text line in the text area being a node of the graph network model;

classifying the nodes in the graph network model through a node classification model, and classifying the edges between the nodes in the graph network model through an edge classification model; and

obtaining at least one key-value pair in the image according to the classification results of the nodes and the classification results of the edges.
In a second aspect, the present disclosure provides an information extraction apparatus, the apparatus comprising:

a detection module, configured to perform text detection on an image to obtain a text area in the image, the text area including a plurality of text lines;

a construction module, configured to construct a graph network model according to the text area, each text line in the text area being a node of the graph network model;

a classification module, configured to classify the nodes in the graph network model through a node classification model, and classify the edges between the nodes in the graph network model through an edge classification model; and

an acquisition module, configured to obtain at least one key-value pair in the image according to the classification results of the nodes and the classification results of the edges.
In a third aspect, the present disclosure provides an electronic device, including: a storage device on which a computer program is stored; and a processing device, configured to execute the computer program in the storage device to implement the steps of the method described in the first aspect of the present disclosure.

In a fourth aspect, the present disclosure provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processing device, implements the steps of the method described in the first aspect of the present disclosure.

In a fifth aspect, the present disclosure provides a computer program product containing instructions that, when run on a device, cause the device to execute the steps of the method described in the first aspect above.
As can be seen from the above technical solutions, the present disclosure has at least the following advantages:

In the above technical solutions, the electronic device performs text detection on an image to obtain a text area including a plurality of text lines, constructs a graph network model with each text line in the text area as a node, classifies the nodes in the graph network model through a node classification model and the edges through an edge classification model, and then obtains the key-value pairs in the image according to the node classification results and the edge classification results. Because the electronic device classifies not only the nodes of the graph network model but also its edges, it can jointly consider the features of each text line itself and the features of associated text lines, and can therefore accurately extract information with complex layouts and no fixed format from images.

Other features and advantages of the present disclosure will be described in detail in the detailed description that follows.
Description of the Drawings

In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required in the embodiments are briefly introduced below.

FIG. 1 is a schematic flowchart of an information extraction method provided by an embodiment of the present application;

FIG. 2 is a schematic diagram of text bounding boxes in an image provided by an embodiment of the present application;

FIG. 3 is a schematic diagram of a graph neural network model provided by an embodiment of the present application;

FIG. 4 is a schematic diagram of node embedding in a graph neural network model provided by an embodiment of the present application;

FIG. 5 is a schematic structural diagram of an information extraction apparatus provided by an embodiment of the present disclosure;

FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description

The terms "first" and "second" in the embodiments of the present application are used for descriptive purposes only, and should not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features.

First, some technical terms involved in the embodiments of the present application are introduced.

For extracting text information from images, optical character recognition (OCR) technology is usually used. OCR refers to the process in which an electronic device determines the shapes of printed characters by detecting patterns of dark and bright, and then translates those shapes into computer text using character recognition methods. For printed characters, OCR can optically convert the text of a printed font into a black-and-white dot-matrix image file, and then convert the text in the image into a text format through recognition software. However, in text recognition, especially for images with complex layouts and no fixed format, the recognized text may mix together information from different layout regions, making it difficult to separate unrelated text accurately. Moreover, for an image in which a passage of text spans many lines, it may be impossible to group the related lines together, making it difficult to combine related text accurately.
In view of this, the present application provides an accurate information extraction method, which is applied to an electronic device. An electronic device refers to a device with data processing capability, such as a server or a terminal. The terminal includes, but is not limited to, a smartphone, a tablet computer, a notebook computer, a personal digital assistant (PDA), or a smart wearable device. The server may be a cloud server, for example, a central server in a central cloud computing cluster or an edge server in an edge cloud computing cluster. Of course, the server may also be a server in a local data center, that is, a data center directly controlled by the user.

Specifically, the electronic device performs text detection on an image to obtain a text area including a plurality of text lines, establishes a graph network model with each text line as a node of the model, classifies the nodes in the graph network model through a node classification model and the edges between the nodes through an edge classification model, and then obtains at least one key-value pair in the image according to the classification results of the nodes and the classification results of the edges. In this way, information with a complex layout and no fixed format can be accurately extracted from the image.

On the one hand, the edge classification results can provide a reference for the node classification results, allowing the node classification model to obtain more accurate node classifications. On the other hand, the node classification results can provide a reference for the edge classification results, allowing the edge classification model to obtain more accurate edge classifications. In this way, the electronic device combines node classification and edge classification, jointly considering the features of each text line itself and the features between associated text lines, to accurately extract information with a complex layout and no fixed format from the image.
In order to make the technical solution of the present disclosure clearer and easier to understand, the information extraction method provided by the embodiments of the present disclosure is introduced below, taking a terminal as an example of the electronic device, as shown in FIG. 1.

Step 102: The terminal performs text detection on an image to obtain a text area in the image.

In this embodiment, the image refers to an image that includes a text area, and the text area in the image includes a plurality of text lines. The terminal can perform text detection on the image in various ways, for example through OCR technology. Typically, OCR includes text detection and text recognition: text detection is used to find and segment the text areas in a picture, and text recognition is used to convert the text characters into computer text.

Specifically, the terminal can discover the text area in the picture through OCR technology and segment it in the form of text bounding boxes (bbox), as shown in FIG. 2, in which the text area 202 in the image includes multiple text bounding boxes 204-1, 204-2, etc., each corresponding to one text line.

The terminal recognizes the text inside each bounding box in the text area and obtains the text information of the text line corresponding to each bounding box. In some possible implementations, the recognized text information can be displayed in the corresponding bounding box in the image.
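The detection output described above, one bounding box per text line carrying its geometry and recognized text, can be modeled with a small data structure. The grouping heuristic below (merging word-level boxes whose vertical extents overlap into line boxes) is an assumption added for illustration; real OCR engines use their own line-segmentation logic.

```python
from dataclasses import dataclass

# Illustrative model of a text bounding box (bbox) and a simple, assumed
# rule for grouping word boxes into text lines by vertical overlap.

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float
    text: str

def group_into_lines(word_boxes, overlap=0.5):
    """Merge word boxes whose vertical extents overlap into line strings."""
    lines = []
    for b in sorted(word_boxes, key=lambda b: (b.y, b.x)):
        for line in lines:
            last = line[-1]
            top = max(last.y, b.y)
            bottom = min(last.y + last.h, b.y + b.h)
            # join if the vertical overlap is large relative to box height
            if bottom - top >= overlap * min(last.h, b.h):
                line.append(b)
                break
        else:
            lines.append([b])
    return [" ".join(w.text for w in line) for line in lines]
```

Each resulting line string corresponds to one bounding box 204-x in FIG. 2, and hence to one node of the graph built in step 104.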
Step 104: The terminal constructs a graph network model according to the text area.

A graph network (GN) model is a model established according to a graph structure. A graph can be used to represent various types of structures or systems and can be described by nodes (N) and edges (E), for example, G = (N, E). A graph has two basic elements, nodes and edges, where each node carries both its own feature information and its structural information, that is, its edge information.

In this embodiment, the terminal may construct a graph neural network (GNN) model according to the text area. A graph neural network model is a neural network model built on the correspondence between nodes and edges in a graph; it jointly considers the feature information of each node itself and the structural (edge) information of the node, enabling accurate extraction of the information in the graph.

The terminal can build the graph network model by taking each text line in the text area as a node. In some possible implementations, the terminal can determine the edges of the graph network model according to the positional relationships between the text lines, as shown in FIG. 3. Specifically, the terminal can determine the edges between nodes according to viewing-circle visibility, that is, edges are established according to the diameters of visible circles, such that the circles generated by taking all edges of the graph as diameters do not intersect. Edges established through viewing-circle visibility avoid connecting non-adjacent text lines, which reduces the impact on subsequent model recognition, reduces the difficulty of model learning, and improves the accuracy of the model.
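The viewing-circle rule closely resembles the classical Gabriel-graph construction: keep an edge (p, q) only if no third node lies inside the circle whose diameter is the segment pq. Treating the rule this way is an assumption, since the text does not spell out the exact geometric test; the sketch below applies the Gabriel criterion to node center points.

```python
# Assumed reconstruction of the viewing-circle edge rule as a Gabriel-graph
# check on node center points: edge (i, j) survives only if no other node
# falls inside the circle whose diameter is the segment from point i to j.

def visibility_edges(points):
    edges = []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            cx, cy = (x1 + x2) / 2, (y1 + y2) / 2       # circle center
            r2 = ((x1 - x2) ** 2 + (y1 - y2) ** 2) / 4  # squared radius
            if all((px - cx) ** 2 + (py - cy) ** 2 > r2 + 1e-9
                   for k, (px, py) in enumerate(points) if k not in (i, j)):
                edges.append((i, j))
    return edges
```

With this test, an edge between two far-apart lines is dropped as soon as a third line sits between them, which is exactly the "no non-adjacent connections" effect described above.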
Step 106: the terminal extracts node features.
The terminal may extract the features of each node in the image in several ways, for example by upsampling, by downsampling, or by a combination of the two. In deep learning, upsampling refers to techniques that raise an image to a higher resolution; it includes interpolation, deconvolution, and unpooling. Interpolation computes the missing pixels from the surrounding pixels using a mathematical formula, without generating new pixel information. Deconvolution is the inverse process of convolution and can be understood as a special forward convolution: the input image is first enlarged by zero-padding at a certain ratio, and the rotated convolution kernel is then applied as a forward convolution. Unpooling is the inverse operation of pooling and includes max-unpooling and average-unpooling; max-unpooling requires recording the positions of the maxima during pooling. Downsampling refers to the new sequence obtained by sampling the original sequence once every few samples.
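A minimal one-dimensional illustration of the two directions of resampling described above, with nearest-neighbour repetition standing in for the interpolation methods mentioned (the function names are ours):

```python
def upsample_nearest(seq, factor):
    """Nearest-neighbour interpolation: repeat each sample `factor` times."""
    return [v for v in seq for _ in range(factor)]

def downsample(seq, step):
    """Keep one sample out of every `step` samples of the original sequence."""
    return seq[::step]

signal = [1, 2, 3, 4, 5, 6]
low = downsample(signal, 2)        # one sample kept per pair
high = upsample_nearest(low, 2)    # restored to the original length
```

Real feature extractors perform the analogous 2-D operations on feature maps with learned kernels rather than fixed rules.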
In this embodiment, in order to avoid extracting too many or too few features from the image, the terminal may first downsample and then upsample the image features, which yields image feature samples of fairly uniform density; for example, the terminal may use UNet to extract features from the whole image.
On the basis of the whole-image features, the terminal can further determine the features of the corresponding nodes in the image. A node's features may include features carrying various information about the node, such as image features carrying the colour, font, and font size of the node's text, text features carrying the node's textual content, and position features carrying the node's coordinates in the image.
In some possible implementations, the terminal may apply region-of-interest pooling (ROI pooling) or ROI align to the whole-image features to obtain the image features corresponding to each node. A node's image features may be any one or more of the colour, font, and font size of the text in the text region corresponding to that node.
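A simplified, dependency-free sketch of what ROI pooling does, assuming an integer box and simple bin boundaries (production implementations such as torchvision's `roi_pool` operate on batched tensors with more careful bin handling; the function name is ours):

```python
def roi_max_pool(feature_map, box, out_h, out_w):
    """Sketch of ROI pooling: crop the box from a 2-D feature map and
    max-pool the crop onto a fixed (out_h, out_w) grid."""
    x0, y0, x1, y1 = box
    crop = [row[x0:x1] for row in feature_map[y0:y1]]
    h, w = len(crop), len(crop[0])
    pooled = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # the bin of input cells feeding this output cell
            ys = range(i * h // out_h, max((i + 1) * h // out_h, i * h // out_h + 1))
            xs = range(j * w // out_w, max((j + 1) * w // out_w, j * w // out_w + 1))
            row.append(max(crop[y][x] for y in ys for x in xs))
        pooled.append(row)
    return pooled

fmap = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
pooled = roi_max_pool(fmap, (0, 0, 4, 4), 2, 2)   # 2x2 summary of the region
```

Whatever the region's size, the output has the fixed shape (out_h, out_w), which is why every node yields a feature of the same dimensionality.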
For a node's text features, the terminal may use a language model to extract the text features of the text line corresponding to the node. A language model (LM) is a probability model built for a language; it describes the probability distribution of a given word sequence occurring in that language. In this embodiment, the terminal may extract the text features of each node in the image with a language model such as a bidirectional long short-term memory network (Bi-LSTM) or bidirectional encoder representations from transformers (BERT), thereby obtaining the text features corresponding to each node. A Bi-LSTM is formed by combining a forward long short-term memory network (LSTM) with a backward LSTM. An LSTM can learn long-range dependencies and can therefore achieve high recognition accuracy. When recognizing textual information, one could simply combine the recognized characters into the corresponding sentence, but this ignores the order of the words within the sentence and yields low recognition accuracy, whereas an LSTM can capture long-range dependencies and account for word order over long sentences. Furthermore, a Bi-LSTM learns information not only from front to back but also from back to front, so it captures bidirectional semantic dependencies better. BERT is a pretrained language-representation model; instead of the traditional unidirectional language model, or a shallow concatenation of two unidirectional language models, it is pretrained with a masked language model (MLM) and can therefore produce deep bidirectional language representations. Through the above methods, the terminal can obtain the text content enclosed by each node's text bounding box and thereby the node's text features.
The terminal may determine a node's position features from the node's location. Specifically, the terminal may determine the position features from the location information (for example, coordinate information) of the text bounding box. The terminal may also apply embedding to the location information of the text bounding box to obtain the node's position features. Embedding here means converting the node's position information, by computation, into a low-dimensional real-valued vector, and it can fuse multiple features into one continuous, computable vector. In this embodiment, the terminal may represent the obtained location information of the node's text bounding box as a low-dimensional computable real-valued vector through embedding, thereby obtaining the node's position features.
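A toy stand-in for the position embedding described above (a real system would typically use a learned embedding layer; here we simply normalise the box into a small real-valued vector, and the function name is ours):

```python
def position_embedding(box, img_w, img_h):
    """Map a text bounding box (x0, y0, x1, y1) to a low-dimensional
    real-valued vector: normalised centre, width and height."""
    x0, y0, x1, y1 = box
    cx = (x0 + x1) / 2 / img_w   # centre x, in [0, 1]
    cy = (y0 + y1) / 2 / img_h   # centre y, in [0, 1]
    w = (x1 - x0) / img_w        # width relative to the image
    h = (y1 - y0) / img_h        # height relative to the image
    return [cx, cy, w, h]

vec = position_embedding((100, 40, 300, 80), img_w=1000, img_h=800)
```

The point is only that coordinates of any magnitude are turned into a fixed-length, comparable vector, which is what lets position be fused with the image and text features.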
In this way, the terminal can extract node features such as the image features, text features, and position features of each node in the image. The node features carry the node's various information, and the several features of each node together constitute the node features fed into the graph neural network for that node.
Step 108: the terminal extracts the features of the edges between nodes.
For the edge features, the terminal may obtain the edge features of the graph network model from the relative positions and relative widths and heights of text lines connected by an edge, where the edge connections are those determined in step 104. Specifically, the relative position between two text lines may be the relative position of their text bounding boxes, and their relative width and height may be the relative width and height of those boxes. For example, if the bounding box A of text line A has centre (xA, yA), width wA, and height hA, and the bounding box B of text line B has centre (xB, yB), width wB, and height hB, then the relative position of text lines A and B may be (xB-xA, yB-yA), the relative width wA/wB, and the relative height hA/hB.
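The computation in this paragraph can be written down directly (the dictionary keys and function name are ours):

```python
def edge_features(box_a, box_b):
    """Edge features between two text lines, following the example above:
    relative centre offset, relative width, relative height."""
    (xa, ya, wa, ha), (xb, yb, wb, hb) = box_a, box_b   # (centre x, centre y, width, height)
    return {
        "rel_pos": (xb - xa, yb - ya),   # (xB - xA, yB - yA)
        "rel_w": wa / wb,                # wA / wB
        "rel_h": ha / hb,                # hA / hB
    }

# line B sits 40 px below line A and is half as wide
feats = edge_features((10, 20, 100, 30), (10, 60, 50, 30))
```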
In this way, the terminal can obtain the node features of the graph neural network, comprising the image features, text features, and position features, and the edge features, comprising the relative positions and relative widths and heights of the text lines.
Step 110: the terminal aggregates the features of each node's neighbour nodes according to the edge features to obtain the node's embedding.
A node's neighbour nodes are the nodes that share an edge with it; the two nodes on the same edge are each other's neighbours. For any node in the image, the terminal can obtain the node's own features and the features of the edges incident to it, and then obtain the features of the node at the other end of each edge, i.e., the node's neighbours, thereby gathering the information relevant to the node.
Specifically, through the graph neural network, the terminal uses the node's own features and the features of its incident edges to aggregate the features of the node's neighbours according to those edge features, jointly producing an embedding of the node that incorporates its neighbour features and edge features, as shown in FIG. 4.
In some possible implementations, the terminal may use a spatial graph convolutional network (GCN) to aggregate the features of a node's neighbours according to the edge features and obtain the node's embedding. Graph convolutional networks, graph recurrent networks (GRN), graph attention networks (GAT), and graph autoencoders (GAE) all belong to the family of graph neural networks; this embodiment takes the graph convolutional network as an example. Whereas a traditional CNN mainly operates in Euclidean space, where each point's neighbourhood is fixed, a GCN can operate in non-Euclidean space, where the neighbourhood is not fixed. Convolving a node with a graph convolutional network is in fact a weighted sum over the node and its edge-connected neighbours, so the node's own features, the edge features, and the neighbour features can be aggregated to obtain the node's embedding. Graph convolutional networks fall mainly into spatial-domain and frequency-domain variants: spatial GCNs convolve the nodes of the graph directly, while frequency-domain GCNs first apply a Fourier transform and then convolve.
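A bare-bones sketch of one spatial graph-convolution step, assuming uniform weights (a trained GCN layer would apply a learned weight matrix and a nonlinearity, and could also weight neighbours by the edge features; the function name is ours):

```python
def gcn_layer(features, edges):
    """One spatial graph-convolution step without learned weights:
    each node's new representation is the mean of its own feature
    vector and those of its edge-connected neighbours."""
    n = len(features)
    neighbours = {i: {i} for i in range(n)}   # self-loop: include the node itself
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    dim = len(features[0])
    out = []
    for i in range(n):
        group = neighbours[i]
        out.append([sum(features[j][d] for j in group) / len(group)
                    for d in range(dim)])
    return out

feats = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
agg = gcn_layer(feats, edges=[(0, 1), (1, 2)])
```

Node 1, which neighbours both other nodes, ends up averaging all three feature vectors, while the end nodes average only themselves and node 1.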
Step 112: the terminal classifies the nodes of the graph network model with a node classification model.
A node classification model is a model capable of classifying nodes. Its input may be a node embedding from the graph neural network, and its output is the node's type, for example key, value, or other. The types of the nodes can be set by the user; for the product label shown in FIG. 2, the product attributes can be set as keys, the product characteristics corresponding to those attributes as values, and the remaining content as other.
In some possible implementations, the node classification model is an end-to-end model, for example a multilayer perceptron (MLP) model. An MLP, also called an artificial neural network (ANN), comprises an input layer, an output layer, and at least one hidden layer, and is commonly used for classification problems.
The node classification model may also be another trained multi-class model, for example k-nearest neighbours, decision trees, naive Bayes, random forest, or gradient boosting. In some possible implementations, there may be only two node types, for example key and value, in which case a binary classification model can be used to classify the nodes of the graph network model.
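The forward pass of a tiny MLP node classifier can be sketched as follows (the toy weights are hand-picked for illustration; an actual model learns them during training, and the function names are ours):

```python
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(mat, vec):
    return [sum(w * x for w, x in zip(row, vec)) for row in mat]

def mlp_classify(embedding, w_hidden, w_out, labels):
    """One hidden layer with ReLU, then an output layer;
    the arg-max output index picks the label."""
    hidden = relu(matvec(w_hidden, embedding))
    scores = matvec(w_out, hidden)
    return labels[scores.index(max(scores))]

W1 = [[1.0, 0.0], [0.0, 1.0]]                 # 2 -> 2 hidden layer
W2 = [[2.0, 0.0], [0.0, 2.0], [0.5, 0.5]]     # 2 -> 3 output layer
label = mlp_classify([3.0, 1.0], W1, W2, ["key", "value", "other"])
```

The same forward structure serves the edge classifier in step 114, with the concatenation of two node embeddings as input.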
Step 114: the terminal classifies the edges between the nodes of the graph network model with an edge classification model.
An edge classification model is a model capable of classifying edges. Its input may be the concatenation of the embeddings of an edge's two nodes from the graph neural network, and its output is the edge's type, for example, corresponding to step 112: key-value edge, key-key edge, value-value edge, or other. The edge between a product attribute and its corresponding product characteristic is a key-value edge, the edge between two product attributes is a key-key edge, the edge between two product characteristics is a value-value edge, and so on.
Likewise, the edge classification model may be an end-to-end model; by training an MLP model, the terminal obtains an edge classification model capable of classifying the edges between nodes.
In some possible implementations, the node classification model and the edge classification model can be trained jointly, serving as each other's input and output. The edge classification model can be used to verify the node classification model: for example, for edge A, if the node classification model judges its two nodes to be a key and a value respectively, the edge classification model can verify whether edge A is a key-value edge; for edge B, if the node classification model judges both nodes to be keys, the edge classification model can verify whether edge B is a key-key edge; for edge C, if the node classification model judges both nodes to be values, the edge classification model can verify whether edge C is a value-value edge; and so on.
Likewise, the node classification model can be used to verify the edge classification model: for example, if the edge classification model judges edge D to be a key-value edge, the node classification model can check whether edge D's two nodes are a key and a value respectively; if it judges edge E to be a key-key edge, the node classification model can check whether both of edge E's nodes are keys; and if it judges edge F to be a value-value edge, the node classification model can check whether both of edge F's nodes are values.
Step 116: the terminal obtains at least one key-value pair in the image according to the node classification results and the edge classification results.
In some possible implementations, the terminal may, according to the node classification results, identify two adjacent nodes that are respectively a key and a value as a candidate key-value edge, and then use the edge classification result to verify whether the edge formed by the two nodes is a key-value pair. Alternatively, the terminal may first identify a key-value edge from the edge classification results and then use the node classification results to check whether the edge's two nodes are a key and a value respectively. When the edge classification model judges an edge to be a key-value edge and the node classification model judges its two nodes to be a key and a value respectively, the terminal determines that a key-value pair in the image has been obtained; in this way, at least one key-value pair in the image can be obtained.
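The cross-check described in this step reduces to a simple filter once both classifiers have produced labels (the label strings "kv", "key", "value", "other" and the function name are our illustrative choices):

```python
def extract_key_value_pairs(node_labels, edge_labels):
    """Keep an edge as a key-value pair only when the edge classifier says
    'kv' AND the node classifier marks its endpoints as key and value."""
    pairs = []
    for (a, b), edge_type in edge_labels.items():
        endpoint_types = {node_labels[a], node_labels[b]}
        if edge_type == "kv" and endpoint_types == {"key", "value"}:
            # order each pair as (key node, value node)
            key, value = (a, b) if node_labels[a] == "key" else (b, a)
            pairs.append((key, value))
    return pairs

nodes = {0: "key", 1: "value", 2: "key", 3: "other"}
edges = {(0, 1): "kv", (1, 2): "kv", (2, 3): "other"}
pairs = extract_key_value_pairs(nodes, edges)
```

Requiring agreement from both models is what filters out edges that either classifier mislabels on its own.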
Based on the above description, the present disclosure provides an information extraction method. The terminal performs text detection on an image to obtain a text region comprising multiple text lines, builds a graph network model with each text line in the text region as a node, classifies the nodes of the graph network model with a node classification model and the edges with an edge classification model, and then obtains the key-value pairs in the image from the node classification results and the edge classification results. Because the terminal classifies not only the nodes of the graph network model but also its edges, it can jointly consider the features of each text line itself and the features of the associated text lines, and can therefore accurately extract information from images whose layout is complex and has no fixed format.
FIG. 5 is a schematic diagram of an information extraction apparatus according to an exemplary disclosed embodiment. As shown in FIG. 5, the information extraction apparatus 500 includes:
a detection module 502, configured to perform text detection on an image and obtain a text region in the image, the text region comprising multiple text lines;
a construction module 504, configured to construct a graph network model from the text region, each text line in the text region being a node of the graph network model;
a classification module 506, configured to classify the nodes of the graph network model with a node classification model and to classify the edges between the nodes of the graph network model with an edge classification model; and
an obtaining module 508, configured to obtain at least one key-value pair in the image according to the node classification results and the edge classification results.
In a possible implementation, the apparatus further includes an extraction module 510, which may be configured to:
extract the features of the nodes and the features of the edges; and
aggregate the features of each node's neighbour nodes according to the edge features to obtain the node's embedding.
The classification module 506 may be configured to:
classify the nodes of the graph network model with the node classification model according to the node embeddings; and
concatenate the embeddings of the two nodes of each edge and, according to the concatenated embeddings, classify the edges between the nodes of the graph network model with the edge classification model.
In a possible implementation, the classification module 506 may be configured to:
classify the edges between the nodes of the graph network model with the edge classification model according to the node classification results.
In a possible implementation, the node classification result is one of the following labels: key, value, or other; and the edge classification result is one of the following labels: key-value edge, value-value edge, key-key edge, or other.
In a possible implementation, when a node is classified as a key, the classification results of its edges include key-value edges or key-key edges; when a node is classified as a value, the classification results of its edges include key-value edges or value-value edges.
In a possible implementation, the node features include at least one of the node's image features, text features, and position features, and the edge features include at least one of the relative position and the relative width and height between the text lines.
In a possible implementation, the node classification model and the edge classification model are end-to-end models.
The functions of the above modules have been described in detail in the method steps of the previous embodiment and are not repeated here.
Referring now to FIG. 6, a schematic structural diagram of an electronic device 600 suitable for implementing an embodiment of the present disclosure is shown. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 6 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device 600 may include a processing apparatus (for example, a central processing unit or a graphics processing unit) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing apparatus 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following apparatuses may be connected to the I/O interface 605: input apparatuses 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output apparatuses 607 including, for example, a liquid crystal display (LCD), speaker, and vibrator; storage apparatuses 608 including, for example, a magnetic tape and a hard disk; and a communication apparatus 609. The communication apparatus 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 6 shows the electronic device 600 with various apparatuses, it should be understood that not all of the illustrated apparatuses are required to be implemented or provided; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded from a network and installed through the communication apparatus 609, or installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fibre, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to: a wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication (for example, a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be contained in the above electronic device, or may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: perform text detection on an image to obtain a text region in the image, the text region comprising multiple text lines; construct a graph network model from the text region, each text line in the text region being a node of the graph network model; classify the nodes of the graph network model with a node classification model, and classify the edges between the nodes of the graph network model with an edge classification model; and obtain at least one key-value pair in the image according to the node classification results and the edge classification results. Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a module does not, in some cases, constitute a limitation on the module itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of machine-readable storage media include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, Example 1 provides an information extraction method, the method comprising: performing text detection on an image to obtain a text region in the image, the text region including a plurality of text lines; constructing a graph network model according to the text region, with each text line in the text region serving as a node of the graph network model; classifying the nodes in the graph network model by a node classification model, and classifying the edges between the nodes in the graph network model by an edge classification model; and obtaining at least one key-value pair in the image according to the classification results for the nodes and the classification results for the edges.
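The four steps of Example 1 can be sketched end to end. The following is a minimal, illustrative Python sketch, not the patented implementation: the `TextLine` type, the label names, and the fully connected graph construction are assumptions, and the node and edge labels below stand in for the outputs of trained classification models.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class TextLine:                      # one detected text line (hypothetical type)
    text: str
    box: tuple                       # (x, y, w, h) from the text detector

def build_graph(lines):
    """Each text line becomes one node; here every pair of nodes is
    connected, and pruning is left to the edge classifier."""
    nodes = list(range(len(lines)))
    edges = list(combinations(nodes, 2))
    return nodes, edges

def extract_key_value_pairs(lines, node_labels, edge_labels):
    """Read out key-value pairs from the node labels (key/value/other)
    and edge labels (key-value/value-value/key-key/other)."""
    pairs = []
    for (i, j), label in edge_labels.items():
        if label != "key-value":
            continue
        # orient the pair so the key node comes first
        a, b = (i, j) if node_labels[i] == "key" else (j, i)
        if node_labels[a] == "key" and node_labels[b] == "value":
            pairs.append((lines[a].text, lines[b].text))
    return pairs

lines = [TextLine("Name:", (10, 10, 60, 20)),
         TextLine("Alice", (80, 10, 50, 20)),
         TextLine("Invoice", (10, 40, 70, 20))]
nodes, edges = build_graph(lines)
node_labels = {0: "key", 1: "value", 2: "other"}   # stand-in for the node classifier
edge_labels = {e: ("key-value" if e == (0, 1) else "other") for e in edges}
print(extract_key_value_pairs(lines, node_labels, edge_labels))
```

On this toy input the read-out yields the single pair ("Name:", "Alice"); the "other" node and edges contribute nothing, which is the point of the joint node/edge labelling.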
According to one or more embodiments of the present disclosure, Example 2 provides the method of Example 1, the method further comprising: extracting features of the nodes, and extracting features of the edges; and aggregating the features of a node's neighbor nodes according to the features of the edges to obtain an embedded representation of the node. Classifying the nodes in the graph network model by the node classification model, and classifying the edges between the nodes in the graph network model by the edge classification model, comprises: classifying the nodes in the graph network model by the node classification model according to the embedded representations of the nodes; and concatenating the embedded representations of the two nodes corresponding to an edge, and classifying the edges between the nodes in the graph network model by the edge classification model according to the concatenated embedded representations.
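One concrete reading of Example 2's aggregation and concatenation steps is the pure-Python sketch below. The shapes and the scalar edge weighting are assumptions, not taken from the disclosure; a real model would use learned transformations.

```python
def node_embeddings(node_feats, edges, edge_weights):
    """One round of aggregation: each node's embedding is its own feature
    vector concatenated with the weighted mean of its neighbours'
    features, where the weight is a scalar derived from the edge features."""
    n, dim = len(node_feats), len(node_feats[0])
    sums = [[0.0] * dim for _ in range(n)]
    counts = [0] * n
    for (i, j), w in zip(edges, edge_weights):
        for d in range(dim):
            sums[i][d] += w * node_feats[j][d]   # message j -> i
            sums[j][d] += w * node_feats[i][d]   # message i -> j
        counts[i] += 1
        counts[j] += 1
    return [node_feats[i] +
            [s / counts[i] if counts[i] else 0.0 for s in sums[i]]
            for i in range(n)]

def edge_representation(embeddings, edge):
    """Concatenate the embeddings of the edge's two endpoints; this is
    the vector an edge classification model would score."""
    i, j = edge
    return embeddings[i] + embeddings[j]

emb = node_embeddings([[1.0], [3.0]], [(0, 1)], [0.5])
print(emb)                               # [[1.0, 1.5], [3.0, 0.5]]
print(edge_representation(emb, (0, 1)))  # [1.0, 1.5, 3.0, 0.5]
```

The concatenation keeps the two endpoints' information separate rather than summing it, which lets the edge classifier distinguish, for instance, a key-to-value edge from a value-to-value edge.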
According to one or more embodiments of the present disclosure, Example 3 provides the method of Example 1, wherein classifying the edges between the nodes in the graph network model by the edge classification model comprises:
classifying the edges between the nodes in the graph network model by the edge classification model according to the classification results for the nodes.
According to one or more embodiments of the present disclosure, Example 4 provides the method of any one of Examples 1 to 3, wherein the classification result for a node is one of the following labels: key, value, or other; and the classification result for an edge is one of the following labels: key-value edge, value-value edge, key-key edge, or other.
According to one or more embodiments of the present disclosure, Example 5 provides the method of Example 4, wherein, when the classification result for a node is key, the classification results for its edges include key-value edges or key-key edges; and when the classification result for a node is value, the classification results for its edges include key-value edges or value-value edges.
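The constraint in Example 5 can be expressed as a small consistency check: given the labels of an edge's two endpoints, only certain edge labels are admissible. The sketch below is one simple reading of that rule (label names and the treatment of "other" endpoints are assumptions):

```python
# Map from the (unordered) pair of endpoint node labels to the edge
# labels consistent with Example 5.
ADMISSIBLE = {
    frozenset(["key", "value"]): {"key-value"},
    frozenset(["key"]):          {"key-key"},    # both endpoints are keys
    frozenset(["value"]):        {"value-value"} # both endpoints are values
}

def admissible_edge_labels(label_i, label_j):
    """Edge labels consistent with the two endpoint node labels; any
    endpoint labelled 'other' admits only the 'other' edge label."""
    return ADMISSIBLE.get(frozenset([label_i, label_j]), {"other"})

def filter_edges(edge_labels, node_labels):
    """Drop predicted edge labels that contradict the node labels."""
    return {e: lab for e, lab in edge_labels.items()
            if lab in admissible_edge_labels(node_labels[e[0]], node_labels[e[1]])}
```

For example, an edge predicted as key-value between two nodes both labelled key would be discarded, so only node/edge predictions that agree survive into the key-value read-out.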
According to one or more embodiments of the present disclosure, Example 6 provides the method of any one of Examples 1 to 5, wherein the features of a node include at least one of image features, text features, and position features of the node, and the features of an edge include at least one of the relative positions and the relative widths and heights of the text lines.
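The edge features named in Example 6 (relative position and relative width/height of the two text lines) could be computed directly from the detector's bounding boxes. A sketch, with the box format (x, y, w, h) assumed:

```python
def edge_features(box_a, box_b):
    """Geometric edge features for two text-line bounding boxes, each
    given as (x, y, w, h): the offset of box_b's top-left corner from
    box_a's, normalised by box_a's size, plus the relative width and
    height of the two lines."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    return [(xb - xa) / wa,   # horizontal offset in units of box_a's width
            (yb - ya) / ha,   # vertical offset in units of box_a's height
            wb / wa,          # relative width
            hb / ha]          # relative height

print(edge_features((10, 10, 60, 20), (80, 10, 50, 20)))
```

Normalising by the first box's size makes the features scale-invariant, so the same "key to the left of its value" geometry looks alike across image resolutions.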
According to one or more embodiments of the present disclosure, Example 7 provides the method of any one of Examples 1 to 5, wherein the node classification model and the edge classification model are end-to-end models.
According to one or more embodiments of the present disclosure, Example 8 provides an information extraction apparatus, the apparatus comprising: a detection module configured to perform text detection on an image to obtain a text region in the image, the text region including a plurality of text lines; a construction module configured to construct a graph network model according to the text region, with each text line in the text region serving as a node of the graph network model; a classification module configured to classify the nodes in the graph network model by a node classification model, and to classify the edges between the nodes in the graph network model by an edge classification model; and an acquisition module configured to obtain at least one key-value pair in the image according to the classification results for the nodes and the classification results for the edges.
According to one or more embodiments of the present disclosure, Example 9 provides the apparatus of Example 8, the apparatus further comprising an extraction module configured to: extract features of the nodes, and extract features of the edges; and aggregate the features of a node's neighbor nodes according to the features of the edges to obtain an embedded representation of the node. The classification module may be configured to: classify the nodes in the graph network model by the node classification model according to the embedded representations of the nodes; and concatenate the embedded representations of the two nodes corresponding to an edge, and classify the edges between the nodes in the graph network model by the edge classification model according to the concatenated embedded representations.
According to one or more embodiments of the present disclosure, Example 10 provides the apparatus of Example 8, wherein the classification module may be configured to classify the edges between the nodes in the graph network model by the edge classification model according to the classification results for the nodes.
According to one or more embodiments of the present disclosure, Example 11 provides the apparatus of any one of Examples 8 to 10, wherein the classification result for a node is one of the following labels: key, value, or other; and the classification result for an edge is one of the following labels: key-value edge, value-value edge, key-key edge, or other.
According to one or more embodiments of the present disclosure, Example 12 provides the apparatus of Example 11, wherein, when the classification result for a node is key, the classification results for its edges include key-value edges or key-key edges; and when the classification result for a node is value, the classification results for its edges include key-value edges or value-value edges.
According to one or more embodiments of the present disclosure, Example 13 provides the apparatus of any one of Examples 8 to 12, wherein the features of a node include at least one of image features, text features, and position features of the node, and the features of an edge include at least one of the relative positions and the relative widths and heights of the text lines.
According to one or more embodiments of the present disclosure, Example 14 provides the apparatus of any one of Examples 8 to 12, wherein the node classification model and the edge classification model are end-to-end models.
The above description is merely a description of preferred embodiments of the present disclosure and of the technical principles applied. Those skilled in the art should understand that the scope of the disclosure involved herein is not limited to technical solutions formed by the specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the disclosed concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.
In addition, although the operations are depicted in a particular order, this should not be understood as requiring that these operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims. With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.

Claims (17)

  1. An information extraction method, the method comprising:
    performing text detection on an image to obtain a text region in the image, the text region including a plurality of text lines;
    constructing a graph network model according to the text region, with each text line in the text region serving as a node of the graph network model;
    classifying nodes in the graph network model by a node classification model, and classifying edges between the nodes in the graph network model by an edge classification model; and
    obtaining at least one key-value pair in the image according to the classification results for the nodes and the classification results for the edges.
  2. The method according to claim 1, wherein the method further comprises:
    extracting features of the nodes, and extracting features of the edges; and
    aggregating the features of a node's neighbor nodes according to the features of the edges to obtain an embedded representation of the node,
    wherein classifying the nodes in the graph network model by the node classification model, and classifying the edges between the nodes in the graph network model by the edge classification model, comprises:
    classifying the nodes in the graph network model by the node classification model according to the embedded representations of the nodes; and
    concatenating the embedded representations of the two nodes corresponding to an edge, and classifying the edges between the nodes in the graph network model by the edge classification model according to the concatenated embedded representations.
  3. The method according to claim 1, wherein classifying the edges between the nodes in the graph network model by the edge classification model comprises:
    classifying the edges between the nodes in the graph network model by the edge classification model according to the classification results for the nodes.
  4. The method according to claim 1, wherein the classification result for a node is one of the following labels: key, value, or other; and the classification result for an edge is one of the following labels: key-value edge, value-value edge, or key-key edge.
  5. The method according to claim 4, wherein, when the classification result for a node is key, the classification results for its edges include key-value edges or key-key edges; and when the classification result for a node is value, the classification results for its edges include key-value edges or value-value edges.
  6. The method according to any one of claims 1 to 5, wherein the features of a node include at least one of image features, text features, and position features of the node, and the features of an edge include at least one of the relative positions and the relative widths and heights of the text lines.
  7. The method according to any one of claims 1 to 5, wherein the node classification model and the edge classification model are end-to-end models.
  8. An information extraction apparatus, the apparatus comprising:
    a detection module configured to perform text detection on an image to obtain a text region in the image, the text region including a plurality of text lines;
    a construction module configured to construct a graph network model according to the text region, with each text line in the text region serving as a node of the graph network model;
    a classification module configured to classify nodes in the graph network model by a node classification model, and to classify edges between the nodes in the graph network model by an edge classification model; and
    an acquisition module configured to obtain at least one key-value pair in the image according to the classification results for the nodes and the classification results for the edges.
  9. The apparatus according to claim 8, wherein the apparatus further comprises an extraction module configured to:
    extract features of the nodes, and extract features of the edges; and
    aggregate the features of a node's neighbor nodes according to the features of the edges to obtain an embedded representation of the node,
    wherein the classification module is specifically configured to:
    classify the nodes in the graph network model by the node classification model according to the embedded representations of the nodes; and
    concatenate the embedded representations of the two nodes corresponding to an edge, and classify the edges between the nodes in the graph network model by the edge classification model according to the concatenated embedded representations.
  10. The apparatus according to claim 8, wherein the classification module is specifically configured to:
    classify the edges between the nodes in the graph network model by the edge classification model according to the classification results for the nodes.
  11. The apparatus according to claim 8, wherein the classification result for a node is one of the following labels: key, value, or other; and the classification result for an edge is one of the following labels: key-value edge, value-value edge, or key-key edge.
  12. The apparatus according to claim 11, wherein, when the classification result for a node is key, the classification results for its edges include key-value edges or key-key edges; and when the classification result for a node is value, the classification results for its edges include key-value edges or value-value edges.
  13. The apparatus according to any one of claims 8 to 12, wherein the features of a node include at least one of image features, text features, and position features of the node, and the features of an edge include at least one of the relative positions and the relative widths and heights of the text lines.
  14. The apparatus according to any one of claims 8 to 12, wherein the node classification model and the edge classification model are end-to-end models.
  15. A device, the device comprising a processor and a memory;
    wherein the processor is configured to execute instructions stored in the memory, so that the device performs the method according to any one of claims 1 to 7.
  16. A computer-readable storage medium comprising instructions that instruct a device to perform the method according to any one of claims 1 to 7.
  17. A computer program product which, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 7.
PCT/CN2022/121551 2021-11-04 2022-09-27 Information extraction method and apparatus, and device, medium and product WO2023077995A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111300845.9 2021-11-04
CN202111300845.9A CN114037985A (en) 2021-11-04 2021-11-04 Information extraction method, device, equipment, medium and product

Publications (1)

Publication Number Publication Date
WO2023077995A1

Family

ID=80142797

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/121551 WO2023077995A1 (en) 2021-11-04 2022-09-27 Information extraction method and apparatus, and device, medium and product

Country Status (2)

Country Link
CN (1) CN114037985A (en)
WO (1) WO2023077995A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114037985A (en) * 2021-11-04 2022-02-11 北京有竹居网络技术有限公司 Information extraction method, device, equipment, medium and product
CN114359912B (en) * 2022-03-22 2022-06-24 杭州实在智能科技有限公司 Software page key information extraction method and system based on graph neural network
CN116011515B (en) * 2022-12-26 2024-01-26 人民网股份有限公司 Geometric neural network model construction method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036395A (en) * 2020-09-04 2020-12-04 联想(北京)有限公司 Text classification identification method and device based on target detection
US20210103797A1 (en) * 2019-10-04 2021-04-08 Lunit Inc. Method and system for analyzing image
CN113536856A (en) * 2020-04-20 2021-10-22 阿里巴巴集团控股有限公司 Image recognition method and system, and data processing method
CN114037985A (en) * 2021-11-04 2022-02-11 北京有竹居网络技术有限公司 Information extraction method, device, equipment, medium and product

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110738238B (en) * 2019-09-18 2023-05-26 平安科技(深圳)有限公司 Classification positioning method and device for certificate information
CN111191715A (en) * 2019-12-27 2020-05-22 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and storage medium
CN112949476B (en) * 2021-03-01 2023-09-29 苏州美能华智能科技有限公司 Text relation detection method, device and storage medium based on graph convolution neural network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xiaojing Liu; Feiyu Gao; Qiong Zhang; Huasha Zhao: "Graph Convolution for Multimodal Information Extraction from Visually Rich Documents", arXiv.org, Cornell University Library, Ithaca, NY, 27 March 2019, XP081158575 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783760A (en) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 Character recognition method and device, electronic equipment and computer readable storage medium
CN111783760B (en) * 2020-06-30 2023-08-08 北京百度网讯科技有限公司 Character recognition method, device, electronic equipment and computer readable storage medium
US11775845B2 (en) 2020-06-30 2023-10-03 Beijing Baidu Netcom Science And Technology Co., Ltd. Character recognition method and apparatus, electronic device and computer readable storage medium

Also Published As

Publication number Publication date
CN114037985A (en) 2022-02-11

Similar Documents

Publication Publication Date Title
WO2023077995A1 (en) Information extraction method and apparatus, and device, medium and product
US20220129731A1 (en) Method and apparatus for training image recognition model, and method and apparatus for recognizing image
US9619735B1 (en) Pure convolutional neural network localization
WO2022089115A1 (en) Image segmentation method and apparatus, and device, and storage medium
WO2022257578A1 (en) Method for recognizing text, and apparatus
US20210406592A1 (en) Method and apparatus for visual question answering, computer device and medium
WO2022012179A1 (en) Method and apparatus for generating feature extraction network, and device and computer-readable medium
CN110826567B (en) Optical character recognition method, device, equipment and storage medium
WO2023143178A1 (en) Object segmentation method and apparatus, device and storage medium
WO2022252881A1 (en) Image processing method and apparatus, and readable medium and electronic device
WO2023078070A1 (en) Character recognition method and apparatus, device, medium, and product
CN112766284B (en) Image recognition method and device, storage medium and electronic equipment
WO2023005386A1 (en) Model training method and apparatus
WO2023142914A1 (en) Date recognition method and apparatus, readable medium and electronic device
WO2023179310A1 (en) Image restoration method and apparatus, device, medium, and product
CN113204691A (en) Information display method, device, equipment and medium
CN111209856B (en) Invoice information identification method and device, electronic equipment and storage medium
CN113408507B (en) Named entity identification method and device based on resume file and electronic equipment
WO2023130925A1 (en) Font recognition method and apparatus, readable medium, and electronic device
CN113420757A (en) Text auditing method and device, electronic equipment and computer readable medium
WO2023065895A1 (en) Text recognition method and apparatus, readable medium, and electronic device
CN110674813B (en) Chinese character recognition method and device, computer readable medium and electronic equipment
WO2023030426A1 (en) Polyp recognition method and apparatus, medium, and device
WO2022100401A1 (en) Image recognition-based price information processing method and apparatus, device, and medium
WO2022052889A1 (en) Image recognition method and apparatus, electronic device, and computer-readable medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22889021

Country of ref document: EP

Kind code of ref document: A1