CN117454851B - PDF document-oriented form data extraction method and device - Google Patents
- Publication number
- CN117454851B CN202311786233.4A
- Authority
- CN
- China
- Prior art keywords
- text
- initial
- reconstruction
- list
- column number
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/12—Use of codes for handling textual entities
- G06F40/151—Transformation
- G06F40/157—Transformation using dictionaries or tables
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/335—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/166—Editing, e.g. inserting or deleting
- G06F40/177—Editing, e.g. inserting or deleting of tables; using ruled lines
- G06F40/18—Editing, e.g. inserting or deleting of tables; using ruled lines of spreadsheets
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
Abstract
In the extraction method, after an initial table is obtained by parsing a PDF document, the text list corresponding to the page on which the initial table is located is first split to obtain a text two-dimensional list. Then, the table category of the initial table is determined based on the numbers of rows and columns of the initial table and the number of columns of the text two-dimensional list. Finally, the initial table is reconstructed based on the determined table category and the text list, and the reconstructed table is taken as the table data extracted from the PDF document. In this way, the efficiency and accuracy of table data extraction can be greatly improved.
Description
Technical Field
One or more embodiments of the present disclosure relate to the field of computer technologies, and in particular to a method and an apparatus for extracting table data from PDF documents.
Background
In most cases, multi-source, heterogeneous, multi-dimensional supply-chain data contains rich and valuable information, and is of great significance in guiding an enterprise's business management, decision support, business-model innovation, and the like. Among data forms, the portable document format (PDF) is a widely used unstructured data format with significant advantages in cross-platform portability, high fidelity, and security, and is therefore widely used in the production and dissemination of various documents. In the field of enterprise applications in particular, PDF documents are important carriers of internal and external enterprise communications, such as bidding documents, periodic reports of listed companies (including annual, semi-annual, and quarterly reports), contracts and agreements, and product specifications. These PDF documents contain a large amount of enterprise information, such as operating conditions, financial indicators, market competitiveness, and product characteristics, and are of great value to enterprises and their stakeholders. However, since PDF documents are often non-editable and contain a variety of unstructured data, such as tables, pictures, and text, efficiently extracting data from them is cumbersome and time-consuming. At present, the methods for extracting data from PDF documents mainly include manual extraction and entry, PDF converters, open-source tools, and intelligent algorithms, but each has certain limitations and disadvantages, as follows:
(1) Data complexity. PDF documents are usually composed of unstructured data such as tables, pictures, and text, which are complex and diverse; common data-conversion methods/tools are inefficient, costly, and of limited usability, and cannot provide visual analysis functions.
(2) Data quality. Owing to factors such as subjective judgment, negligence, or fatigue, manually extracting unstructured data from PDF documents is prone to omissions and errors, and important data may even be overlooked, negatively affecting subsequent analysis and applications.
(3) Data integrity. When an automated tool is used to extract data from a PDF document, often only conventional financial-indicator data can be extracted, while information of great value for data analysis, such as financial notes, pictures, and text, is ignored, affecting data integrity and analysis accuracy.
(4) Data comparison. Structured data manually extracted from PDF documents is usually stored in Excel or Word tables; when statistical analysis of indicators such as year-on-year change, period-on-period change, and year-to-date accumulation is later required, historical data cannot be quickly retrieved and invoked.
(5) Data fusion. Structured data extracted from PDF documents with traditional extraction methods/tools is difficult to summarize and store by business topic in a reasonable, classified manner, and its usability is poor, posing challenges for data fusion.
In order to effectively solve the above-mentioned problems, it is desirable to provide a more efficient PDF document-oriented data extraction method.
Disclosure of Invention
One or more embodiments of the present disclosure describe a method and an apparatus for extracting table data from PDF documents, which can greatly improve the efficiency and accuracy of table data extraction.
In a first aspect, a method for extracting table data from a PDF document is provided, including:
parsing the PDF document to obtain an initial table and multiple pages of text content contained in the PDF document;
converting the multi-page text content into corresponding respective text lists, a single text list comprising a plurality of lines of text;
selecting, from the text lists, a target text list corresponding to the page where the initial table is located;
splitting the target text list according to a preset symbol to obtain a text two-dimensional list;
determining the table category of the initial table according to the first row number and the first column number of the initial table and the second column number of the text two-dimensional list;
the determining the table category of the initial table including: if the first row number is smaller than a preset row number and the first column number is equal to the second column number, determining that the table category is a three-line table; if the difference between the second column number and the first column number is equal to a preset column number, determining that the table category is a frame-missing table; and if the difference between the second column number and the first column number is greater than the preset column number, determining that the table category is a color-ladder table;
reconstructing the initial table according to the determined table category to obtain a reconstructed table; and
determining the reconstructed table as the table data extracted from the PDF document.
In a second aspect, a PDF-document-oriented table data extraction apparatus is provided, including:
a parsing unit, configured to parse the PDF document to obtain an initial table and multiple pages of text content contained in the PDF document;
a conversion unit, configured to convert the multi-page text content into corresponding respective text lists, where a single text list includes a plurality of lines of text;
a selection unit, configured to select, from the text lists, a target text list corresponding to the page where the initial table is located;
a splitting unit, configured to split the target text list according to a preset symbol to obtain a text two-dimensional list;
a determining unit, configured to determine the table category of the initial table according to the first row number and the first column number of the initial table and the second column number of the text two-dimensional list;
the determining unit being specifically configured to: if the first row number is smaller than the preset row number and the first column number is equal to the second column number, determine that the table category is a three-line table; if the difference between the second column number and the first column number is equal to a preset column number, determine that the table category is a frame-missing table; and if the difference between the second column number and the first column number is greater than the preset column number, determine that the table category is a color-ladder table;
a reconstruction unit, configured to reconstruct the initial table according to the determined table category to obtain a reconstructed table;
the determining unit being further configured to determine the reconstructed table as the table data extracted from the PDF document.
According to the table data extraction method and apparatus for PDF documents provided above, after an initial table is obtained by parsing the PDF document, the text list corresponding to the page where the initial table is located is split to obtain a text two-dimensional list. Then, the table category of the initial table is determined based on the numbers of rows and columns of the initial table and the number of columns of the text two-dimensional list. Finally, the initial table is reconstructed based on the determined table category and the text list to obtain a reconstructed table, which serves as the table data extracted from the PDF document. In this way, the efficiency and accuracy of table data extraction can be greatly improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present specification, and that a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an implementation scenario disclosed in one embodiment of the present disclosure;
FIG. 2 illustrates a flowchart of a method for extracting form data for a PDF document, in accordance with an embodiment;
FIG. 3 shows a schematic diagram of a PDF document parsing process in one example;
FIG. 4 shows a text listing schematic in one example;
FIG. 5a shows a schematic diagram of a target text list in one example;
FIG. 5b shows a schematic diagram of a text two-dimensional list in one example;
FIG. 6 illustrates a schematic diagram of a PDF document-oriented form data extraction method in one example;
FIG. 7a shows a schematic view of a document overview in a visual analysis system;
FIG. 7b shows a schematic view of a data extraction in a visual analysis system;
FIG. 7c illustrates a schematic view of a data conversion audit view in a visual analysis system;
FIG. 8 illustrates a schematic diagram of a form data extraction apparatus for PDF document according to one embodiment.
Detailed Description
The following describes the scheme provided in the present specification with reference to the drawings.
Typically, the periodic-report PDF documents published by listed companies contain rich data information, which is usually presented in tabular form, such as balance sheets, income statements, cash flow statements, and notes to the financial statements. Extracting this table data from the PDF documents can provide a more reliable data basis for enterprise decision-making, make it easier to compare data across time points or across enterprises, give a better understanding of a target enterprise's financial changes, and support more targeted plans and decisions.
In order to realize automatic extraction of table data from PDF documents, prior solutions have proposed a number of document structured-data conversion technologies.
PDF documents are typically stored as pictures or binary code, and document parsing methods are used to decode the document structure and parse the data types. Struthopoulos et al. propose a document parsing method based on PDF document structure and keywords, which can automatically identify and extract text information and accurately determine paragraph boundaries and sentence integrity. Zhang et al. studied a rule-based document parsing method that converts PDF documents into XML format and extracts metadata from them. Nguyen et al. introduced a method that converts PDF documents into image format and uses computer vision (CV) and image processing (IP) techniques to identify tables, pictures, and text. Grijava et al. developed a data conversion platform that first extracts text cells, bitmap images, and lines from a scanned PDF document, and then analyzes the document content using machine-learning classification. Rizvi et al. propose the BRExSys framework, which performs page-layout analysis of PDF documents using a Mask region-based convolutional neural network (Mask R-CNN). In addition, Ahmed et al. propose a document parsing method based on multidimensional features such as text blocks, typesetting, and geometric information; however, this method has low accuracy and requires more memory and computing resources to handle large-scale PDF documents.
Data extraction refers to identifying and extracting a specific type of information from the tables, pictures, or text of a PDF document. For table data extraction, the table structure must first be detected and understood, and the data then extracted. Traditional methods rely primarily on predefined templates and rule matching to extract specific field content, but are limited by the cost of template creation and struggle to accommodate different table structures. Machine-learning methods use image segmentation and recognition algorithms such as YOLO and UNet to detect the table structure, and then use optical character recognition (OCR) technology to extract the table data. Hashmi et al. propose a guided-anchor-based method for precisely locating rows and columns in a table image, with strong generalization capability. Jiang et al. propose a deep-learning model based on the table's cell structure, which improves the accuracy of processing heterogeneous table data by learning the characteristics of cells of different types and contents.
Unlike the above methods, the present solution provides a pipeline-style table data extraction method that not only extracts the table subject information in a PDF document but also realizes structural analysis and data extraction for complex tables.
Fig. 1 is a schematic diagram of an implementation scenario disclosed in one embodiment of the present specification. In FIG. 1, a PDF document is first parsed to obtain an initial table and text content. The initial table is then reconstructed based on the text content to obtain the extracted table data. Finally, the extracted table data can be displayed visually for inspection and audit by data analysts, and the audited table data is stored in a database.
FIG. 2 illustrates a flowchart of a method for extracting table data from a PDF document according to one embodiment; the method may be performed by any apparatus, device, platform, or device cluster having computing and processing capabilities. As shown in FIG. 2, the method may include the following steps.
Step S202, analyzing the PDF document to obtain an initial form and text contents of a plurality of pages contained in the PDF document.
In one embodiment, the PDF document may be parsed using the Python-based open-source tool pdfplumber to obtain an initial table and the text content of multiple pages therein (abbreviated as multi-page text content). It should be appreciated that the multi-page text content includes the text content of the page on which the initial table is located.
Fig. 3 shows a schematic diagram of a PDF document parsing process in one example. In FIG. 3, a given PDF document is first read and converted into Python objects in the form of a binary content stream, and the PDF document is then traversed page by page, parsing the various objects on each page, such as lines, rectangles, points, images, and characters. For tables, following the idea of the Nurminen algorithm, the actually existing table lines are first obtained based on information such as one-dimensional lines, two-dimensional rectangles, and connection points; then, by analyzing character-alignment position information, possible virtual lines are inferred, and the lines are combined to construct table cells; the text characters in the cells are then further extracted, and the table data is saved as a text two-dimensional list.
For text content, text is first extracted based on the PDFMiner method: the text content stream is decoded to extract characters, the horizontal and vertical distances between characters are calculated, and spaces and line breaks are inserted between characters to reconstruct the text content structure; the text is then stored line by line as character strings.
Returning to fig. 2, fig. 2 may further include the steps of:
in step S204, the multi-page text content is converted into corresponding respective text lists, the single text list including a plurality of lines of text.
In one embodiment, for any page of text content, the page's text content may be cut into multiple lines of text at line-feed characters, and the lines then arranged into list form to obtain the corresponding text list. In a more specific embodiment, the text list may also indicate an Index, a data Type, a Size, and the like for each line of text.
FIG. 4 shows a schematic diagram of a text list in one example. In FIG. 4, the text list includes an Index column, a Type column, a Size column, and a Value column. The index column holds a text identifier, numbered from 0. The type column holds the data type of the text, such as string (str). The size column holds the number of characters contained in the text. The value column holds the text itself (the character string).
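A minimal sketch of building such a text list from one page's text content (treating Size as the line's character count is an assumption based on FIG. 4, and the field names are illustrative):

```python
def page_text_to_list(page_text):
    """Cut one page's text content into lines at line feeds and record,
    for each line, the Index / Type / Size / Value fields shown in FIG. 4.
    Size is assumed to be the character count of the line."""
    return [
        {"Index": i, "Type": type(line).__name__, "Size": len(line), "Value": line}
        for i, line in enumerate(page_text.split("\n"))
    ]
```

Applied to a two-line page such as `"Revenue 1200\nCost 800"`, this yields two entries indexed 0 and 1, each typed `str`.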
Step S206, selecting a target text list corresponding to the page where the initial table is located from the text lists.
As previously described, the text content for each page is converted to a corresponding text list. Here, a target text list obtained by converting text contents of a page where an initial form is located is extracted.
And step S208, cutting the target text list according to the preset symbol to obtain a text two-dimensional list.
In one embodiment, the preset symbol may be, for example, a space character.
As described above, the target text list includes multiple lines of text, each recorded as a character string; splitting the target text list can thus be understood as splitting each line's character string into several sub-strings, which form a sub-list.
Fig. 5a shows a schematic diagram of a target text list in one example. After the target text list in FIG. 5a is split, the resulting text two-dimensional list may be as shown in FIG. 5b. In FIG. 5b, the sub-list corresponding to each row includes four sub-strings separated by commas, so the text two-dimensional list may also be understood to include 4 columns.
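The splitting step described above can be sketched in Python (the example rows are illustrative, not taken from FIG. 5a):

```python
def split_to_2d_list(target_text_list):
    """Split every line (a character string) of the target text list on
    whitespace, yielding one sub-list of sub-strings per line -- the
    text two-dimensional list."""
    return [line.split() for line in target_text_list]

rows = split_to_2d_list([
    "Item 2022 2021 Change",
    "Revenue 1200 1100 +9.1%",
])
# each line becomes a sub-list of cell strings, here 4 columns per row
```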
It should be noted that, because different PDF documents are formatted differently, many table types occur, such as three-line tables, frame-missing tables, color-ladder tables, cross-page tables, continuous tables, nested tables, and multi-header tables. Since the extraction approach often differs by table type, the table category is judged first.
Step S210, determining the table category of the initial table according to the number of rows and columns of the initial table and the number of columns of the text two-dimensional list.
Specifically, if the number of rows of the initial table D_t is less than a preset number of rows (e.g., 2) and the number of columns of D_t equals the number of columns of the text two-dimensional list D_{l,t}, the table category of D_t is determined to be a three-line table; if the difference n between the number of columns of D_{l,t} and the number of columns of D_t is equal to a preset number of columns (e.g., 2), the table category of D_t is determined to be a frame-missing table; and if the difference n is greater than the preset number of columns, the table category of D_t is determined to be a color-ladder table.
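The three branches can be sketched as a small classifier; the threshold values of 2 follow the examples given in the text, and the category labels are illustrative:

```python
def classify_table(n_rows, n_cols, text_cols, preset_rows=2, preset_cols=2):
    """Decide the table category from the initial table's row/column counts
    and the text two-dimensional list's column count."""
    if n_rows < preset_rows and n_cols == text_cols:
        return "three-line"          # header/body split by horizontal rules
    if text_cols - n_cols == preset_cols:
        return "frame-missing"       # left and right border lines absent
    if text_cols - n_cols > preset_cols:
        return "color-ladder"        # adjacent rows distinguished by shading
    return "unknown"
```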
It should be noted that, since the initial table is parsed by the open-source tool, the parsed table may have the following problems: a three-line table generally uses three horizontal lines to separate the header from the body, but the parser may identify the entire body portion as a single row; a frame-missing table (also called a table missing both ends) usually lacks lines on its left and right sides, so the parser may identify only the middle part of the table; a color-ladder table usually uses colors of different shades to distinguish adjacent rows, but the parser is insensitive to table colors and easily identifies two adjacent rows as the same cell.
In view of these problems with the initial table parsed by the open-source tool, the present solution reconstructs the initial table.
Step S212, reconstructing the initial table according to the determined table category to obtain a reconstructed table.
Specifically, for a three-line table: each row of the initial table's corresponding region in the target text list is split at spaces; the resulting one-dimensional lists are clustered to determine the target column number; and the content of the initial table is then filled into a table having the target column number and the number of rows contained in the corresponding region, yielding the reconstructed table.
The corresponding region of the initial table in the target text list may be determined as follows: match the first i rows (e.g., the first 2 rows) of the initial table against each line in the target text list (e.g., by computing a similarity) to determine the start line of the initial table in the target text list. Then, check line by line downward from the start line whether each line contains a space; if a line has no space, take it as the end line of the initial table in the target text list. Finally, based on the determined start and end lines, the corresponding region of the initial table in the target text list can be determined.
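A simplified sketch of this region search; substring containment stands in for the similarity computation, which the text leaves unspecified, and only the table's first row is matched:

```python
def find_table_region(first_row, text_lines):
    """Locate the initial table's region in the target text list: the start
    line is the first line containing every cell of the table's first row;
    scanning downward, the first line without a space is taken as the end
    line, as described above."""
    start = next(i for i, line in enumerate(text_lines)
                 if all(cell in line for cell in first_row))
    end = len(text_lines) - 1
    for i in range(start, len(text_lines)):
        if " " not in text_lines[i]:
            end = i
            break
    return start, end
```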
In addition, the one-dimensional lists obtained by splitting may be regarded as cells, and the target column number may be obtained by clustering the cells split from each row with a position-based clustering algorithm such as KMeans. It will be appreciated that a new table can then be created from the target column number and the number of rows contained in the corresponding region.
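A sketch of position-based column clustering; a one-dimensional gap sweep is used here as a simplified stand-in for KMeans, and the `gap` threshold is an assumption:

```python
def target_column_count(x_positions, gap=20.0):
    """Estimate the target column number by clustering cells' horizontal
    positions: positions within `gap` of their sorted neighbor fall into
    the same column cluster."""
    clusters = 0
    prev = None
    for x in sorted(x_positions):
        if prev is None or x - prev > gap:
            clusters += 1        # a new column starts here
        prev = x
    return clusters
```

With cell x-positions gathered from every row of the region, the cluster count gives the column number of the new table to be filled.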
Finally, filling the content of the initial table into the table having the target column number and the number of rows of the corresponding region specifically means filling each cell of the initial table into the corresponding position of the newly created table; for example, the content of row i, column j of the initial table is filled into row i, column j of the new table. After every cell of the initial table has been filled in, the reconstructed table corresponding to the initial table is obtained.
Of course, in practical applications, after the cells of the initial table are filled into the new table, it can further be determined whether the new table's line spacings differ; based on line-spacing differences (position differences) and whether the position of the first cell is aligned, it is judged whether a single logical row spans multiple lines, and rows are merged accordingly. The new table after row merging is then determined as the reconstructed table.
For a color-ladder table, the reconstruction method is similar to that of the three-line table, except that the initial table may be preprocessed before its corresponding region in the target text list is split: for example, None columns are removed from the initial table, where a None column is a column containing only None (null values), or containing both None and empty strings.
For a frame-missing table, the missing left and right columns of the initial table can be restored, and the missing content in the column-restored initial table filled with None, to obtain the corresponding reconstructed table.
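A sketch of this frame-missing reconstruction, assuming exactly one column is restored on each side (the counts are parameters, not stated in the text):

```python
def pad_frame_missing(table, n_left=1, n_right=1):
    """Restore a frame-missing table by adding the missing columns on the
    left and right of every row and filling their content with None."""
    return [[None] * n_left + list(row) + [None] * n_right for row in table]
```

The None cells can later be populated from the text two-dimensional list of the same page.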
Step S214, the reconstruction table is determined as the table data extracted from the PDF document.
In the scheme, accurate form data can be obtained by reconstructing the initial form extracted from the PDF document.
Of course, in practical applications, besides the table data itself, subject information associated with the table, such as the table name, measurement unit, and currency unit, also needs to be acquired. A method for acquiring this subject information is described below.
Will initiate form D t Is associated with the target text list L p,i Matching to determine an initial form D t In the target text list L p,i The start line P in s . Judging on the target text list L p,i From the beginning line P s Whether the total number of lines m starting forward is not less than a preset number ρ, and if not, according to the starting line P s And a preset number ρ, from the target text list L p,i And extracting the corresponding area as the area where the table subject information is located. Specifically, the corresponding region refers to the text in the target text list L p,i From the beginning line P s A preset number p of rows forward is started. Under the condition that the preset number rho is smaller than the preset number rho, calculating a difference rho-m between the preset number rho and the preset number m, and according to the difference rho-m, a target text list L p,i And other text list L p,i-1 And determining the area where the table theme information is located. Wherein other text list L p,i-1 Is a text list corresponding to the text content of the last page of the page where the initial table is located. Determining an initial form D by extracting keywords from an area where form subject information is located t Is provided.
Determining the region where the table subject information is located according to the difference ρ − m, the target text list L_{p,i}, and the other text list L_{p,i−1} specifically includes: taking the last ρ − m lines of the other text list L_{p,i−1} (counting forward from its last line) as supplementary content prepended to the target text list L_{p,i}, and determining the target text list L_{p,i} with the supplementary content added as the region where the table subject information is located.
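A minimal sketch of this region selection, under the assumption that the text lists are ordinary Python lists of lines and that the start line index equals the number of lines above the table (names here are illustrative, not from the patent):

```python
def subject_region(target_list, prev_list, start_line, rho):
    """Collect the rho text lines above the table's start line. If the
    current page holds only m < rho lines before the table, prepend the
    last rho - m lines of the previous page's text list."""
    m = start_line                         # lines available above the table
    if m >= rho:
        return target_list[start_line - rho:start_line]
    supplement = prev_list[len(prev_list) - (rho - m):]  # last rho - m lines
    return supplement + target_list[:start_line]
```

Keyword extraction (table name, unit, currency) would then run over the returned lines.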
At this point, the table data and table subject information of each page of the PDF document have been extracted.
Because a table in a PDF document may be displayed across pages, for table data extracted from two or more adjacent pages it is also necessary to judge whether the data constitutes a cross-page table or a continuous table, and to restore and merge the table data with the corresponding method. Continuous tables are merged because, except for the table on the first page, the table on each page of a continuous table carries no subject information; merging ensures the integrity and accuracy of the table subject information, so that data fusion and comparative analysis can be performed better.
The judgment and merging process for cross-page tables and continuous tables is described below.
Assume that, by the method shown in fig. 2, the extracted table data includes a first reconstruction table D_{t,-1} and a second reconstruction table D_{t,1}, where the first reconstruction table D_{t,-1} is located on the previous page and the second reconstruction table D_{t,1} on the next page. It may first be determined whether a first condition is satisfied. The first condition may include: the last line of the first reconstruction table D_{t,-1} matches the last line of the corresponding first text list L_{t,i-1}; the first line of the second reconstruction table D_{t,1} matches the first line of the corresponding second text list L_{t,i}; and the column number of the first reconstruction table D_{t,-1} is equal to the column number of the second reconstruction table D_{t,1} (or the header data of the first reconstruction table D_{t,-1} and the second reconstruction table D_{t,1} are consistent). That is, the first condition comprises three constraints.
If the first condition is satisfied, the first reconstruction table and the second reconstruction table are judged to form a cross-page table; if not, they are judged to be two independent tables.
When the first reconstruction table and the second reconstruction table form a cross-page table, it may be judged whether the similarity between the last row of the first reconstruction table and the first row of the second reconstruction table is greater than a preset threshold σ. If so, the two tables are determined to be a cross-page table split between rows, so after removing the repeated header data (namely, the header data of the second reconstruction table), the first reconstruction table and the second reconstruction table can be merged to obtain a merged table. If the similarity is not greater than the preset threshold σ, the two tables are a cross-page table split within a row, so the last row of the first reconstruction table and the first row of the second reconstruction table can be extracted and merged into one row, after which the remaining parts of the two tables are merged to obtain the merged table.
In addition, a continuous table is a special cross-page table in which each sub-table occupies a full page, that is, the initial table is consistent with the content of its text list; its processing is similar to that of the cross-page table and is not repeated here.
It should also be noted that a merged table obtained by the above method may be a complex-structure table, such as a nested table or a multi-head table. For a complex-structure table, the scheme can further split it.
Specifically, for the above merged table, it may be judged whether the merged table contains an intermediate row with only one non-None cell. If so, the merged table is determined to be a nested table and is split into an upper part and a lower part with that intermediate row as the boundary; if not, no splitting is performed.
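Sketched under the assumption that the boundary row (typically a sub-table title spanning the full width) opens the lower part; the patent does not specify which side the boundary row joins:

```python
def split_nested(merged):
    """Split a nested table at an interior row holding exactly one
    non-None cell; return None when no such boundary row exists."""
    for idx in range(1, len(merged) - 1):        # interior rows only
        if sum(cell is not None for cell in merged[idx]) == 1:
            return merged[:idx], merged[idx:]    # boundary row starts part 2
    return None                                  # not a nested table
```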
In this scheme, after the merged table is split, it may further be judged whether each split table is a multi-head table, as described below.
Assume that the upper and lower parts obtained by splitting the merged table are a first split table and a second split table. For the first split table (or the second split table), its header data may be acquired, and it is judged whether the header data spans more than 1 row. If so, and one of the rows contains None while the other does not, the first split table (or the second split table) is determined to be a multi-head table, and the two rows are merged to obtain the target table.
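The two-row header fold might be sketched as follows; joining the fragments with a space is an assumption, as the patent does not state how the merged header cell text is recombined:

```python
def merge_multiheader(table):
    """If the header spans two rows and exactly one of them contains None
    (a merged cell the parser split apart), fold the two rows into one by
    concatenating the non-empty fragments of each column."""
    top, nxt = table[0], table[1]
    if (None in top) != (None in nxt):           # one row padded, one complete
        header = [" ".join(c for c in (a, b) if c is not None).strip()
                  for a, b in zip(top, nxt)]
        return [header] + table[2:]
    return table                                 # not a multi-head table
```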
The reason for performing the above merging operation when the first/second split table is determined to be a multi-head table is that the strategy adopted by the open-source tool when parsing the document predicts cells at the finest granularity of single rows, which causes the merged cells of a multi-head table to be split apart and padded with None.
It should be further noted that the reconstruction table, merged table, split table, or target table obtained in the embodiments of the present disclosure may be stored in CSV file format, or further converted into JSON format and stored in a database. A data analysis tool can then be used for data cleaning, statistical analysis, visual display, and other operations according to actual needs, so as to gain deeper insight into the financial and business conditions of a target enterprise.
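The CSV/JSON persistence step described above could be sketched with the standard library (function and file names are illustrative; the patent does not prescribe a record layout):

```python
import csv
import json

def store_table(table, csv_path, json_path):
    """Persist an extracted table both as CSV and as a JSON list of
    records keyed by the header row, ready for loading into a database."""
    header, *rows = table
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows(table)
    records = [dict(zip(header, row)) for row in rows]
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=2)
    return records
```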
FIG. 6 illustrates a schematic diagram of the PDF document-oriented table data extraction method in one example. In fig. 6, after the initial table and its corresponding text list are acquired, table subject information including the table name, measurement unit, currency unit, and the like may be extracted for the initial table. In addition, for the initial table, data extraction for irregular tables (such as three-line tables, frame-missing tables, and color-ladder tables), for cross-page tables and continuous tables, and for complex-structure tables (such as nested tables and multi-head tables) can be performed in sequence. The finally extracted table data includes the table subject information and the table data itself, where the table data itself may be stored in JSON or a similar format.
In this scheme, the extracted table data can be displayed to the user, and the user is supported in auditing and analyzing the extracted table data.
In one embodiment, the extracted table data may be presented by a visual analysis system. The visual analysis system may include three views: a document overview view, a data extraction view, and a data conversion audit view. The document overview view is used to display the PDF document; the data extraction view is used to show the distribution of tables of different categories extracted from the PDF document; and the data conversion audit view is used to audit the table data extracted from the PDF document.
The three views are described in detail below.
Fig. 7a shows a schematic view of the document overview view in the visual analysis system. The document overview view includes an a1 area and an a2 area: the a1 area is used to display the PDF document, and the a2 area adopts a two-layer tree structure to give an overview of document elements such as tables, pictures, and text in each chapter and section of the PDF document. The root node represents the document name, the leaf nodes represent the individual chapters of the document, and the chapter names are displayed on the lines connecting the root node to the leaf nodes. The size of a leaf node indicates the number of document elements in the corresponding chapter. Each leaf node displays the sections contained in its chapter in the form of a circular tree map, where each pie chart represents one section, the size of the pie chart represents the number of document elements in the corresponding section, and the pie chart encodes the proportions of tables, pictures, and text in that section. When the mouse hovers over a pie chart, the name of the corresponding section is displayed. Clicking a chapter name or a pie chart jumps to the corresponding position in the PDF document.
Fig. 7b shows a schematic diagram of the data extraction view in the visual analysis system. The left side of the view shows diagrams of the different table types, such as the standard table, three-line table, frame-missing table, color-ladder table, cross-page table, continuous table, nested table, and multi-head table, and the right side shows the total number and audit state of each table type in the form of a bar chart. The user can select the table type to view on the right, and click the corresponding bar of interest to further view its audit situation.
FIG. 7c illustrates a schematic diagram of the data conversion audit view in the visual analysis system, which supports the user in auditing and analyzing the extracted table data. The user can interactively select and filter in the data extraction view to view, trace, analyze, and correct the extracted table data. For auditing the table data, the user can sort by clicking the header of each column, and drag a column header left or right to change the column order so as to organize the table content according to personal analysis habits. The magnifying-glass icon at the upper right corner of the data table represents conversion tracing: clicking it highlights, in the document overview view, the original PDF content corresponding to the data table, making it convenient for the user to compare the data before and after conversion and confirm its accuracy.
Specifically, the data conversion audit view described above may include four regions c1-c4, which are described below.
As shown in area c1, when the user hovers the mouse over a data row of the extracted table data, an "edit" icon and a "remark" icon are displayed on the right; the user can click the "edit" icon to modify and record as needed, or click the "remark" icon to directly record the content as accurate. As shown in area c2, a data row audited as accurate is marked with a light-gray background, and a "remark" icon is displayed on its right so that the audit log can be viewed at any time. In addition, as shown in area c3, if the user finds an error in the data, clicking the "edit" icon on the right of the data row marks that row with a dark-gray background and inserts below it a modification row with a light-gray background that copies the erroneous row as-is; each cell of the modification row is editable, so the user can correct the data directly, and the modified data is displayed in bold. Finally, as shown in area c4, by clicking the "remark" icon on the right of the data row, the user can record a modification log, including whether the data is correct and an audit remark.
Combining the above, the scheme first parses the acquired PDF document and extracts the tables therein, and then performs reconstruction and other processing on the extracted tables to realize data conversion. Specifically, for the table data, the scheme adopts the data extraction method to obtain both the subject information of the table and the table data. To further improve the quality of data conversion, and in view of the accuracy and efficiency problems that may arise during the conversion, the scheme also provides a visual analysis system so that the data can be compared, traced, and analyzed. Finally, the converted structured data is merged into a database, facilitating future retrieval and calling.
In summary, the scheme designs a set of intelligent processing strategies for PDF documents with special content structures and style characteristics, targeting the periodic reports of listed companies, and improves the quality and efficiency of structured conversion of PDF documents. A novel visual analysis system is constructed to display the extracted table data; the system also supports the user in auditing and analyzing the extracted table data.
Corresponding to the above-mentioned method for extracting table data for a PDF document, an embodiment of the present disclosure further provides an apparatus for extracting table data for a PDF document, as shown in fig. 8, where the apparatus may include:
The parsing unit 802 is configured to parse the PDF document to obtain an initial table and multi-page text content contained in the PDF document.
A conversion unit 804, configured to convert the multi-page text content into corresponding respective text lists, where a single text list includes multiple lines of text.
And a selecting unit 806, configured to select, from each text list, a target text list corresponding to the page where the initial table is located.
And the segmentation unit 808 is configured to segment the target text list according to a preset symbol, so as to obtain a text two-dimensional list.
The determining unit 810 is configured to determine a table category of the initial table according to the first number of rows and the first number of columns of the initial table and the second number of columns of the text two-dimensional list.
The determining unit 810 is specifically configured to: if the first row number is smaller than the preset row number and the first column number is equal to the second column number, determine the table category to be a three-line table; if the difference between the second column number and the first column number is equal to the preset column number, determine the table category to be a frame-missing table; and if the difference between the second column number and the first column number is greater than the preset column number, determine the table category to be a color-ladder table.
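The classification rule of the determining unit can be sketched as follows; the default thresholds are placeholders (the patent leaves the preset row and column numbers unspecified), and the fallback "standard table" label is an assumption:

```python
def table_category(first_rows, first_cols, second_cols,
                   preset_rows=3, preset_cols=2):
    """Classify an initial table by comparing its parsed shape against
    the column count of the two-dimensional text list."""
    if first_rows < preset_rows and first_cols == second_cols:
        return "three-line table"
    if second_cols - first_cols == preset_cols:
        return "frame-missing table"
    if second_cols - first_cols > preset_cols:
        return "color-ladder table"
    return "standard table"
```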
And a reconstruction unit 812, configured to reconstruct the initial table according to the determined table category, to obtain a reconstructed table.
The determining unit 810 is further configured to determine the reconstruction table as table data extracted from the PDF document.
In one embodiment, the number of the reconstruction tables is two, and the two reconstruction tables include a first reconstruction table located at a previous page and a second reconstruction table located at a next page; the apparatus further comprises:
a determining unit 814, configured to determine whether a first condition is met, where the first condition includes that a last line of the first reconfiguration table matches a last line of the corresponding first text list, a first line of the second reconfiguration table matches a first line of the corresponding second text list, a column number of the first reconfiguration table is equal to a column number of the second reconfiguration table, or header data of the first reconfiguration table is consistent with header data of the second reconfiguration table;
the determining unit 814 is further configured to determine whether a similarity between a last row of the first reconfiguration table and a first row of the second reconfiguration table is greater than a preset threshold if the first condition is met, and if so, combine the first reconfiguration table and the second reconfiguration table after removing the repeated header data to obtain a combined table; if not, the merging table is obtained by merging the last row of the first reconstruction table and the first row of the second reconstruction table.
In one embodiment, the apparatus further comprises: a splitting unit 816;
a judging unit 814, configured to further judge whether there is an intermediate row that only includes one non-None in the merge table;
and a splitting unit 816, configured to split the merge table into an upper part and a lower part with the intermediate row as the boundary if it is determined that the merge table contains an intermediate row with only one non-None cell.
In one embodiment, the two portions include a first split table and a second split table; the apparatus further comprises: a merging unit 818;
and a merging unit 818, configured to obtain header data of the first/second split tables, and if the number of rows of the header data is greater than 1 row, and one row contains None and the other row does not contain None, merge the two rows to obtain the target table.
In one embodiment, the reconstruction unit 812 is specifically configured to:
under the condition that the table type is a three-line table or a color ladder table, dividing an initial table according to spaces for each row of a corresponding area of the initial table in a target text list, clustering a plurality of one-dimensional lists obtained by dividing to determine a target column number, and correspondingly filling the content in the initial table into a table with the target column number and the row number contained in the corresponding area to obtain a reconstruction table;
When the table type is a frame missing table, the left and right columns of the initial table are filled in, missing contents in the initial table after the filling in columns are filled in by None, and a corresponding reconstruction table is obtained.
In one embodiment, the apparatus further comprises:
a matching unit 820 for matching the first i rows of the initial table with the target text list to determine the initial row of the initial table in the target text list;
an extracting unit 822, configured to extract, when all the lines starting from the start line and going forward in the target text list are not less than a preset number, a corresponding area from the target text list as an area where the table subject information is located according to the start line and the preset number;
the determining unit 810 is further configured to calculate a difference between the preset number and the total number of rows if the total number of rows is less than the preset number, and determine an area where the table subject information is located according to the difference, the target text list, and the other text list; the other text list is a text list corresponding to the text content of the last page of the page where the initial table is located;
the determining unit 810 is further configured to determine the table topic information by extracting keywords from an area where the table topic information is located.
In one embodiment, the determining unit 810 is specifically configured to:
taking the difference value lines from the last line of the other text list to the front as the previous supplementary content of the target text list;
and determining the target text list added with the supplementary content as an area where the table theme information is located.
In one embodiment, the conversion unit 804 is specifically configured to:
for a page of text content, the page of text content is cut into multiple lines of text according to line breaks, and the multiple lines of text form a corresponding text list.
The functions of the functional units of the apparatus in the foregoing embodiments of the present disclosure may be implemented by the steps of the foregoing method embodiments, so that the specific working process of the apparatus provided in one embodiment of the present disclosure is not repeated herein.
According to the table data extraction device for the PDF document, provided by the embodiment of the specification, the extraction efficiency and accuracy of the table data can be greatly improved.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the server embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference is made to the description of the method embodiment for relevant points.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware, or may be embodied in software instructions executed by a processor. The software instructions may be comprised of corresponding software modules that may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. In addition, the ASIC may reside in a server. The processor and the storage medium may reside as discrete components in a server.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing detailed description of the embodiments has further described the objects, technical solutions and advantages of the present specification, and it should be understood that the foregoing description is only a detailed description of the embodiments of the present specification, and is not intended to limit the scope of the present specification, but any modifications, equivalents, improvements, etc. made on the basis of the technical solutions of the present specification should be included in the scope of the present specification.
Claims (9)
1. A form data extraction method for PDF documents comprises the following steps:
analyzing the PDF document to obtain an initial form and a plurality of pages of text contents contained in the PDF document;
Converting the multi-page text content into corresponding respective text lists, a single text list comprising a plurality of lines of text;
selecting a target text list corresponding to the page where the initial table is located from the text lists;
cutting the target text list according to a preset symbol to obtain a text two-dimensional list;
determining a table category of the initial table according to the first row number and the first column number of the initial table and the second column number of the text two-dimensional list;
the determining the table category of the initial table includes: if the first row number is smaller than a preset row number and the first column number is equal to the second column number, determining that the table category is a three-line table; if the difference value between the second column number and the first column number is equal to a preset column number, determining that the table category is a frame missing table; if the difference value between the second column number and the first column number is larger than the preset column number, determining that the table category is a color ladder table;
reconstructing the initial table according to the determined table category to obtain a reconstructed table;
determining the reconstruction table as table data extracted from the PDF document;
The reconstructing the initial table includes:
under the condition that the table type is a three-line table or a color ladder table, dividing the initial table according to spaces for each row of a corresponding area in the target text list, clustering a plurality of one-dimensional lists obtained by dividing to determine a target column number, and correspondingly filling the content in the initial table into a table with the target column number and the row number contained in the corresponding area to obtain the reconstruction table;
and when the table type is a frame missing table, filling left and right columns of the initial table, and filling missing contents in the initial table after filling the columns with None to obtain a corresponding reconstruction table.
2. The method of claim 1, wherein the number of reconstruction tables is two, and the two reconstruction tables include a first reconstruction table at a previous page and a second reconstruction table at a next page; the method further comprises the steps of:
judging whether a first condition is met or not, wherein the first condition comprises that the last line of the first reconstruction table is matched with the last line of a corresponding first text list; the first row of the second reconstruction table is matched with the first row of the corresponding second text list; the column numbers of the first reconstruction table and the second reconstruction table are equal, or the header data of the first reconstruction table and the header data of the second reconstruction table are consistent;
Judging whether the similarity between the last row of the first reconstruction table and the first row of the second reconstruction table is larger than a preset threshold value or not under the condition that the first condition is met, if so, merging the first reconstruction table and the second reconstruction table after removing repeated header data to obtain a merged table; if not, the merging table is obtained by merging the last row and the first row.
3. The method of claim 2, further comprising:
judging whether an intermediate line only containing one non-None exists in the merging table;
if yes, the merging table is split into an upper part and a lower part with the intermediate row as the boundary.
4. A method according to claim 3, wherein the two parts comprise a first split table and a second split table;
and for the first split table and the second split table, acquiring header data in the first split table and the second split table, and if the number of lines of the header data is greater than 1 line, and one line contains None and the other line does not contain None, merging the two lines to obtain the target table.
5. The method of claim 1, further comprising:
matching the first i rows of the initial table with the target text list to determine the initial rows of the initial table in the target text list;
Under the condition that all the lines starting from the initial line and going forward in the target text list are not smaller than the preset number, extracting a corresponding area from the target text list as an area where the table subject information is located according to the initial line and the preset number;
calculating the difference value between the preset number and the total number of lines when the total number of lines is smaller than the preset number, and determining the area where the table subject information is located according to the difference value, the target text list and other text lists; the other text lists are text lists corresponding to the text content of the last page of the page where the initial table is located;
and determining the form subject information by extracting keywords from the area where the form subject information is located.
6. The method of claim 5, wherein the determining the area in which the table theme information is located includes:
-taking the difference number of lines starting from the last line of the other text list onwards as the preceding supplementary content of the target text list;
and determining the target text list added with the supplementary content as the area where the table theme information is located.
7. The method of claim 1, wherein the converting the multi-page text content into a plurality of text listings comprises:
For a page of text content, the page of text content is cut into multiple lines of text according to line breaks, and the multiple lines of text form a corresponding text list.
8. A visual analysis system, comprising:
a document overview view for displaying a target PDF document;
a data extraction view for showing the distribution of different table categories extracted from the target PDF document;
a data conversion audit view for exposing form data extracted from the target PDF document according to the method of claim 1.
9. A PDF document-oriented form data extraction apparatus comprising:
the analysis unit is used for analyzing the PDF document to obtain an initial form and a plurality of pages of text contents contained in the PDF document;
a conversion unit, configured to convert the multi-page text content into corresponding respective text lists, where a single text list includes a plurality of lines of text;
a selecting unit, configured to select, from the text lists, a target text list corresponding to a page where the initial table is located;
the segmentation unit is used for segmenting the target text list according to a preset symbol to obtain a text two-dimensional list;
the determining unit is used for determining the table category of the initial table according to the first line number and the first column number of the initial table and the second column number of the text two-dimensional list;
The determining unit is specifically configured to: if the first row number is smaller than the preset row number and the first column number is equal to the second column number, determine that the table category is a three-line table; if the difference value between the second column number and the first column number is equal to a preset column number, determine that the table category is a frame missing table; if the difference value between the second column number and the first column number is larger than the preset column number, determine that the table category is a color ladder table;
the reconstruction unit is used for reconstructing the initial table according to the determined table category to obtain a reconstructed table;
the determining unit is further configured to determine the reconstruction table as table data extracted from the PDF document;
the reconstruction unit is specifically configured to:
under the condition that the table type is a three-line table or a color ladder table, dividing the initial table according to spaces for each row of a corresponding area in the target text list, clustering a plurality of one-dimensional lists obtained by dividing to determine a target column number, and correspondingly filling the content in the initial table into a table with the target column number and the row number contained in the corresponding area to obtain the reconstruction table;
And when the table type is a frame missing table, filling left and right columns of the initial table, and filling missing contents in the initial table after filling the columns with None to obtain a corresponding reconstruction table.
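The category decision and reconstruction steps recited in claim 9 can be illustrated with a minimal Python sketch. All function names and preset thresholds here are illustrative assumptions, and the mode-of-field-counts heuristic is only a simple stand-in for the claimed clustering step, not the patented implementation.

```python
from collections import Counter

# Hypothetical sketch of the claimed category decision and reconstruction
# steps; names, preset values, and the clustering stand-in are assumptions.

def classify_table(first_rows, first_cols, second_cols,
                   preset_rows=3, preset_cols=2):
    """Compare row/column counts, as recited in the claim, to pick a category."""
    if first_rows < preset_rows and first_cols == second_cols:
        return "three-line"       # few rows and matching column counts
    diff = second_cols - first_cols
    if diff == preset_cols:
        return "frame-missing"    # exactly the preset number of columns differ
    if diff > preset_cols:
        return "color-ladder"     # more than the preset number of columns differ
    return "regular"

def target_column_count(text_rows):
    """Split each text row on spaces and take the most frequent field count
    as the target column number (a simple stand-in for clustering)."""
    counts = Counter(len(row.split()) for row in text_rows)
    return counts.most_common(1)[0][0]

def reconstruct_frame_missing(rows, left_pad=1, right_pad=1):
    """Pad left/right columns and fill the remaining missing cells with None."""
    width = max(len(r) for r in rows) + left_pad + right_pad
    out = []
    for r in rows:
        padded = [None] * left_pad + list(r)
        padded += [None] * (width - len(padded))
        out.append(padded)
    return out
```

For example, a table whose two-dimensional text list has two more columns than the initial table would classify as frame-missing, and `reconstruct_frame_missing` would pad it out with `None` cells on both sides.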
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311786233.4A CN117454851B (en) | 2023-12-25 | 2023-12-25 | PDF document-oriented form data extraction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117454851A CN117454851A (en) | 2024-01-26 |
CN117454851B true CN117454851B (en) | 2024-03-12 |
Family
ID=89580265
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311786233.4A Active CN117454851B (en) | 2023-12-25 | 2023-12-25 | PDF document-oriented form data extraction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117454851B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111340000A (en) * | 2020-03-23 | 2020-06-26 | 深圳智能思创科技有限公司 | Method and system for extracting and optimizing PDF document table |
JP6758448B1 (en) * | 2019-04-15 | 2020-09-23 | 株式会社フィエルテ | Document analysis device, document analysis method and document analysis program |
CN113361257A (en) * | 2021-06-29 | 2021-09-07 | 深圳壹账通智能科技有限公司 | PDF document analysis method, system, electronic device and storage medium |
CN113961685A (en) * | 2021-07-13 | 2022-01-21 | 北京金山数字娱乐科技有限公司 | Information extraction method and device |
CN114036909A (en) * | 2021-11-04 | 2022-02-11 | 深圳市财富趋势科技股份有限公司 | PDF document page-crossing table merging method and device and related equipment |
CN114201620A (en) * | 2021-12-17 | 2022-03-18 | 上海朝阳永续信息技术股份有限公司 | Method, apparatus and medium for mining PDF tables in PDF file |
CN115935928A (en) * | 2022-11-18 | 2023-04-07 | 华能招标有限公司 | Method and device for extracting document information |
CN116127928A (en) * | 2023-04-17 | 2023-05-16 | 广东粤港澳大湾区国家纳米科技创新研究院 | Table data identification method and device, storage medium and computer equipment |
CN116311259A (en) * | 2022-12-07 | 2023-06-23 | 中国矿业大学(北京) | Information extraction method for PDF business document |
Non-Patent Citations (4)
Title |
---|
A Symbol Dominance Based Formulae Recognition Approach for PDF Documents; Xiaode Zhang et al.; 2017 14th IAPR International Conference on Document Analysis and Recognition; 2018-01-29; 1144-1149 *
Recognition and Extraction of Table Information in PDF Documents; Tian Cuihua et al.; Journal of Xiamen University of Technology; 2020-06-30; Vol. 28, No. 3; 70-76 *
Research and Implementation of a Table Data Extraction Method for PDF Files; Tang Haojin; China Masters' Theses Full-text Database; 2015-08-15; 1-75 *
An Interactive Visual Analysis Method Supporting Multi-dimensional Data Deduplication; Zhu Haiyang et al.; Journal of Computer-Aided Design & Computer Graphics; 2022-06-30; Vol. 34, No. 6; 841-851 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7814111B2 (en) | Detection of patterns in data records | |
CN110968667B (en) | Periodical and literature table extraction method based on text state characteristics | |
US11803706B2 (en) | Systems and methods for structure and header extraction | |
US8356045B2 (en) | Method to identify common structures in formatted text documents | |
CN110825882A (en) | Knowledge graph-based information system management method | |
US20210366055A1 (en) | Systems and methods for generating accurate transaction data and manipulation | |
CN110427488B (en) | Document processing method and device | |
CN112926299B (en) | Text comparison method, contract review method and auditing system | |
Klampfl et al. | A comparison of two unsupervised table recognition methods from digital scientific articles | |
CN111191429A (en) | System and method for automatic filling of data table | |
CN113159969A (en) | Financial long text rechecking system | |
CN117648093A (en) | RPA flow automatic generation method based on large model and self-customized demand template | |
CN117454851B (en) | PDF document-oriented form data extraction method and device | |
CN113779218B (en) | Question-answer pair construction method, question-answer pair construction device, computer equipment and storage medium | |
WO2023180343A1 (en) | Analysing communications data | |
US12094232B2 (en) | Automatically determining table locations and table cell types | |
CN112667666A (en) | SQL operation time prediction method and system based on N-gram | |
CN117496545B (en) | PDF document-oriented form data fusion processing method and device | |
Assaf et al. | Improving schema matching with linked data | |
CN113268425B (en) | Rule-based micro-service source file preprocessing method | |
Guo | Research on logical structure annotation in English streaming document based on deep learning | |
Assaf et al. | RUBIX: a framework for improving data integration with linked data | |
CN118350371B (en) | Patent text-oriented mark pair extraction method and system | |
CN102385700B (en) | Off-line handwriting recognizing method and device | |
CN116129454A (en) | Positioning algorithm-based wireless form accurate identification method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||