CN110544263A - Simplified method for detecting a form in a form image - Google Patents
Simplified method for detecting a form in a form image
- Publication number
- CN110544263A (application number CN201910764490.5A)
- Authority
- CN
- China
- Prior art keywords
- line
- image
- finding
- coordinates
- lambda
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/155—Segmentation; Edge detection involving morphological operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20152—Watershed segmentation
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a simplified method for detecting a table in a table image, comprising the following steps: S1, positioning transverse lines; S2, positioning longitudinal lines; S3, parsing into tables. The method determines tables from a matrix generated from the transverse and longitudinal lines of the table, and is efficient, simple, practical, and accurate.
Description
Technical Field
The invention relates to the technical field of form recognition, in particular to a simplified method for detecting a form in a form image.
Background
To detect tables in images, the watershed algorithm is generally used to segment the tables, but because it manipulates the whole image the computation is inefficient, and if a line is too thin or does not extend fully to the edge, several tables may be treated as one table.
Disclosure of the Invention
In order to solve the above problems, the invention provides a simplified method for detecting a table in a table image, which determines tables from a matrix generated from the transverse and longitudinal lines of the table, and is efficient, simple, practical, and accurate.
In order to achieve this purpose, the invention adopts the following technical scheme:
a simplified method of detecting a form in a form image, comprising the steps of:
S1, positioning transverse lines
S1.1, transversely project the table image into a horizontal projection diagram Hr;
S1.2, find low-lying areas and peak areas in the projection with a watershed-like algorithm: descend from the maximum value to the minimum value, finding the low-lying areas, their left and right boundaries, and the peak areas between them;
S1.3, take lines whose peak height exceeds λ1, peak width is smaller than λ2, and left and right low-lying areas exceed λ3 as candidate lines Li; the left and right coordinates of the root of the peak are Li1 and Li2; λ1, λ2, λ3 are empirical values obtained from statistics;
S1.4, for each line Li found in the horizontal projection diagram Hr, find its position on the image, with upper-left coordinates (Vjk, Li1) and lower-right coordinates (Vjk, Li2);
S1.4.1, for the line Li found in the horizontal projection diagram Hr, cut out a transverse line image Ih from the table sub-image In, extending left and right to the margin, with Li1 and Li2 as its upper and lower boundary lines in In, and project Ih in the longitudinal direction to obtain a projection Lh;
S1.4.2, binarize the longitudinal projection Lh with the threshold produced by the Otsu algorithm; for each region with value 1 and width greater than λ1 characters, generate a line, where the region starts at Hj1 and ends at Hj2, and the upper-left and lower-right coordinates of the line are (Hj1, Li1) and (Hj2, Li2);
S1.4.3, repeat steps S1.4.1 and S1.4.2 for each line Li found in the horizontal projection diagram Hr, obtaining the upper-left and lower-right coordinates of the k-th line as (Hjk, Li1) and (Hjk, Li2);
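The valley-and-peak scan of steps S1.2–S1.3 can be sketched as follows. This is an illustrative reconstruction, not the patent's reference implementation; `lam1`, `lam2`, and `lam3` are placeholder values standing in for the empirical thresholds λ1, λ2, λ3.

```python
def find_line_peaks(projection, lam1=5, lam2=4, lam3=2):
    """Scan a 1-D projection for narrow, tall peaks flanked by
    low-lying areas on both sides (a sketch of steps S1.2-S1.3)."""
    peaks = []
    i, n = 0, len(projection)
    while i < n:
        if projection[i] >= lam1:               # entered a candidate peak
            start = i
            while i < n and projection[i] >= lam1:
                i += 1
            width = i - start
            # keep only narrow peaks with low-lying neighbours on both sides
            left_low = start == 0 or projection[start - 1] <= lam3
            right_low = i == n or projection[i] <= lam3
            if width < lam2 and left_low and right_low:
                peaks.append((start, i - 1))    # root coordinates (Li1, Li2)
        else:
            i += 1
    return peaks
```

A narrow spike in the projection survives the filter while a wide plateau (a text block rather than a ruled line) is rejected.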
S2, positioning longitudinal lines
S2.1, longitudinally project the table image into a vertical projection diagram Vc;
S2.2, find low-lying areas and peak areas in the projection with a watershed-like algorithm: descend from the maximum value to the minimum value, finding the low-lying areas, their left and right boundaries, and the peak areas between them;
S2.3, take lines whose peak height exceeds λ1, peak width is smaller than λ2, and left and right low-lying areas exceed λ3 as candidate lines Li, with the left and right coordinates of the root of the peak being Li1 and Li2; λ1, λ2, λ3 are empirical values obtained from statistics;
S2.4, find the position on the table image of each line Li found in the vertical projection diagram Vc, with upper-left coordinates (Li1, Vjk) and lower-right coordinates (Li2, Vjk);
S2.4.1, for the line Li found in the vertical projection diagram Vc, cut a vertical line image Iv from the table sub-image In, extending up and down to the margin, with Li1 and Li2 as its left and right boundary lines in In, and project Iv in the horizontal direction to obtain a projection Lv;
S2.4.2, binarize the horizontal projection Lv with the threshold produced by the Otsu algorithm; for each region with value 1 and width greater than λ1 characters, generate a line, where the region starts at Vj1 and ends at Vj2, and the upper-left and lower-right coordinates of the line are (Li1, Vj1) and (Li2, Vj2);
S2.4.3, repeat steps S2.4.1 and S2.4.2 for each line Li found in the vertical projection diagram Vc, obtaining the upper-left and lower-right coordinates of the k-th line as (Li1, Vjk) and (Li2, Vjk);
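Steps S1.4.2 and S2.4.2 binarize a 1-D projection with the threshold from Otsu's method ("OSTU" in the text is a spelling of Otsu) and keep only runs of 1s wide enough to be line segments. A minimal self-contained sketch, where `min_width` stands in for the "λ1 characters" width criterion:

```python
def otsu_threshold(values):
    """Otsu's method over a 1-D sequence: pick the threshold that
    maximizes between-class variance (returns None if all values equal)."""
    hist = {}
    for v in values:
        hist[v] = hist.get(v, 0) + 1
    total = len(values)
    best_t, best_var = None, -1.0
    for t in sorted(hist)[:-1]:
        w0 = sum(c for v, c in hist.items() if v <= t) / total
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = sum(v * c for v, c in hist.items() if v <= t) / (w0 * total)
        mu1 = sum(v * c for v, c in hist.items() if v > t) / (w1 * total)
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t

def binarize_runs(projection, min_width):
    """Binarize a projection at the Otsu threshold and return the
    (start, end) runs of 1s at least min_width wide -- the (Hj1, Hj2)
    segments of step S1.4.2, sketched."""
    t = otsu_threshold(projection)
    runs, start = [], None
    for i, v in enumerate(projection + [t]):   # sentinel closes a trailing run
        if v > t and start is None:
            start = i
        elif v <= t and start is not None:
            if i - start >= min_width:
                runs.append((start, i - 1))
            start = None
    return runs
```

The run-extraction pass is what splits one candidate line Li into the separate segments that later become matrix elements.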
S3, parsing into tables
S3.1, arrange the horizontal lines H in ascending order into rows and the vertical lines in ascending order into columns to form the table matrix Mt; multiple lines in the same row/column are arranged in the same row/column;
S3.2, detect whether the transverse and longitudinal line elements in each row H cross, and set crossing elements in H to X; if a line crosses a line in the other direction but a remainder of it does not reach the next line in the other direction, extend it to cross that next line and add X at the corresponding position; if a line crosses no line in the other direction, extend it to cross the two nearest lines in the other direction and add X at the corresponding positions;
S3.3, consecutive adjacent crossing elements X on a row/column of the table matrix Mt form a line, so a line segment has at least two intersections X; if there is only one intersection X, complement an intersection X at an adjacent position on the same row/column;
S3.4, find the sets of O fully surrounded by intersections X, or of continuous O, as tables: starting from the second row and second column, proceeding from top to bottom and left to right, find in the table matrix Mt the intersections X at the four corners (upper-left, upper-right, lower-left, lower-right) that form a fully enclosed O, and store it as a table; if an O is not fully surrounded, with only two or three surrounding intersections X, find the next O horizontally to the right until blocked by a continuous column of intersections X on the right, then find the O on the next row vertically downward until blocked by a continuous row of intersections X below; the intersection X common to the continuous horizontal and vertical intersections is the lower-right corner point;
S3.5, output the position of each table.
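The crossing test of step S3.2 can be sketched as a matrix fill. The line representations here — (x1, x2, y) for a horizontal line and (x, y1, y2) for a vertical one — and the tolerance `tol` for lines that stop just short of each other are assumptions of the sketch, not the patent's data structures:

```python
def build_intersections(h_lines, v_lines, tol=2):
    """Mark X (True) where a horizontal and a vertical line cross,
    within a small tolerance (a sketch of step S3.2).
    Returns the table matrix Mt as a row-major boolean matrix."""
    mt = []
    for (hx1, hx2, hy) in h_lines:          # horizontal line: x-span at height y
        row = []
        for (vx, vy1, vy2) in v_lines:      # vertical line: y-span at column x
            crosses = (hx1 - tol <= vx <= hx2 + tol and
                       vy1 - tol <= hy <= vy2 + tol)
            row.append(crosses)
        mt.append(row)
    return mt
```

Because the whole step works on a handful of line coordinates rather than on image pixels, it is far cheaper than segmenting the image itself.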
The method determines tables from a matrix generated from the transverse and longitudinal lines of the table, and is efficient, simple, practical, and accurate.
Drawings
FIG. 1 is a schematic view of line positioning according to an embodiment of the present invention.
FIG. 2 shows the table matrix Mt and its intersections X in an embodiment of the invention.
FIG. 3 shows the tables detected by an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit it in any way. It should be noted that persons skilled in the art can make variations and modifications without departing from the spirit of the invention, all of which fall within its scope.
The embodiment of the invention provides a simplified method for detecting a table in a table image, comprising the following steps:
S1, positioning transverse lines
S1.1, find low-lying areas and peak areas in the projection with a watershed-like algorithm: descend from the maximum value to the minimum value, finding the low-lying areas, their left and right boundaries, and the peak areas between them;
S1.2, take lines whose peak height exceeds λ1, peak width is smaller than λ2, and left and right low-lying areas exceed λ3 as candidate lines Li; the left and right coordinates of the root of the peak are Li1 and Li2; λ1, λ2, λ3 are empirical values obtained from statistics;
S1.3, for each line Li found in the horizontal projection diagram Hr, find its position on the image, with upper-left coordinates (Vj1, Li1) and lower-right coordinates (Vj2, Li2);
S1.3.1, for the line Li found in the horizontal projection diagram Hr, cut out a transverse line image Ih from the table sub-image In, extending left and right to the margin, with Li1 and Li2 as its upper and lower boundary lines in In, and project Ih in the longitudinal direction to obtain a projection Lh;
S1.3.2, binarize the longitudinal projection Lh with the threshold produced by the Otsu algorithm; for each region with value 1 and width greater than λ1 characters, generate a line, where the region starts at Hj1 and ends at Hj2, and the upper-left and lower-right coordinates of the line are (Hj1, Li1) and (Hj2, Li2);
S1.4, similarly, for each line Li found in the vertical projection diagram Vc, find its position on the image, with upper-left coordinates (Li1, Vj1) and lower-right coordinates (Li2, Vj2);
S1.4.1, for the line Li found in the vertical projection diagram Vc, cut a vertical line image Iv from the table sub-image In, extending up and down to the margin, with Li1 and Li2 as its left and right boundary lines in In, and project Iv in the horizontal direction to obtain a projection Lv;
S1.4.2, binarize the horizontal projection Lv with the threshold produced by the Otsu algorithm; for each region with value 1 and width greater than λ1 characters, generate a line, where the region starts at Vj1 and ends at Vj2, and the upper-left and lower-right coordinates of the line are (Li1, Vj1) and (Li2, Vj2);
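Steps S1.3.1–S1.3.2 — crop the horizontal strip between rows Li1 and Li2, project it onto the x-axis, and keep the wide-enough runs — might look like the sketch below. The binary-image representation (a 2-D list with 1 = ink) and the simplified greater-than-zero threshold in place of Otsu's are assumptions of the sketch:

```python
def locate_line_segments(img, li1, li2, min_width):
    """Crop the strip between rows li1 and li2 of a binary image,
    project it column-wise, and return (start, end) column runs at
    least min_width wide (a sketch of steps S1.3.1-S1.3.2)."""
    strip = img[li1:li2 + 1]
    width = len(strip[0])
    # column projection of the strip: count of ink pixels per column
    proj = [sum(row[x] for row in strip) for x in range(width)]
    runs, start = [], None
    for x, v in enumerate(proj + [0]):      # sentinel closes a trailing run
        if v > 0 and start is None:
            start = x
        elif v == 0 and start is not None:
            if x - start >= min_width:
                runs.append((start, x - 1))  # segment (Hj1, Hj2)
            start = None
    return runs
```

The same routine, transposed, covers the vertical-line case of S1.4.1–S1.4.2.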
s2, positioning longitudinal line
S2.1, finding a low-lying area and a peak area in the projection by adopting a watershed-like algorithm, descending from the maximum value to the minimum value, and finding the low-lying area, the left and right boundaries of the low-lying area and the peak area among the low-lying areas;
s2.2, regarding the candidate lines Li, wherein the peak height exceeds lambda 1, the peak width is less than lambda 2, and the left and right low-lying areas exceed lambda 3 (lambda 1, lambda 2 and lambda 3 are empirical values obtained after statistics), and the left and right coordinates of the root of the peak are Li1 and Li 2;
s2.3, for each line Li found by the horizontal projection graph Hr, finding the position of each line Li on the image, wherein the coordinates of the upper left corner are (Vj1, Li1), and the coordinates of the lower right corner are (Vj2, Li 2);
s2.3.1, cutting out a transverse line image Ih from left and right to a margin from the table sub-image In for the line Li found In the horizontal projection image Hr, wherein the upper boundary line and the lower boundary line of the transverse line image In the table sub-image In are Li1 and Li2 respectively, and projecting the transverse line image Ih to the longitudinal direction to obtain a projection Lh;
s2.3.2, binarizing the longitudinal projection Lh according to a threshold value generated by an OSTU algorithm, generating a line for each region with the value of 1 and the width of more than lambda 1 characters, wherein the starting position of the region is Hj1, the ending position of the region is Hj2, the coordinates of the upper left corner of the line are (Hj1, Li1), and the coordinates of the lower right corner of the line are (Hj2, Li 2);
S2.4, similarly, for each line Li found by the vertical projection drawing Vc, finding the position of each line Li on the image, wherein the coordinates of the upper left corner are (Li1, Vj1), and the coordinates of the lower right corner are (Li2, Vj 2);
s2.4.1, cutting a vertical line image Iv from the table sub-image In up and down to the margin for the line Li found In the vertical projection view Vc, wherein the left boundary line of the vertical line image In the table sub-image In is Li1, and the right boundary line of the vertical line image In is Li2, and projecting the vertical line image Iv In the horizontal direction to obtain a projection Lv;
s2.4.2, binarizing the longitudinal projection Lv according to a threshold value generated by an OSTU algorithm, generating a line for each area with the value of 1 and the width exceeding lambda 1 characters, wherein the area comprises a starting position Vj1 and an ending position Vj2, and the coordinates of the upper left corner of the line are (Li1, Vj1) and the coordinates of the lower right corner of the line are (Li2, Vj 2);
S3, parsing into tables
S3.1, arrange the horizontal lines H in ascending order into rows and the vertical lines in ascending order into columns to form the table matrix Mt; multiple lines in the same row/column are arranged in the same row/column;
S3.2, detect whether the transverse and longitudinal line elements in each row H cross, and set crossing elements in H to X; if a line crosses a line in the other direction but a remainder of it does not reach the next line in the other direction, extend it to cross that next line and add X at the corresponding position; if a line crosses no line in the other direction, extend it to cross the two nearest lines in the other direction and add X at the corresponding positions;
S3.3, consecutive adjacent crossing elements X on a row/column of the table matrix Mt form a line, so a line segment has at least two intersections X; if there is only one intersection X, complement an intersection X at an adjacent position on the same row/column, shown as the bold X in the figure;
S3.4, find the sets of O fully surrounded by intersections X, or of continuous O, as tables: starting from the second row and second column, proceeding from top to bottom and left to right, traverse each cell O in the table matrix Mt and find the intersections X at its four corners (upper-left, upper-right, lower-left, lower-right) to form a fully enclosed O, as in the first step in FIG. 2, and store it as a table; if an O is not fully surrounded, with only two or three surrounding intersections X, find the next O horizontally to the right until interrupted by a continuous column of intersections X on the right, then find the O on the next row vertically downward until interrupted by a continuous row of intersections X below; the intersection X common to the continuous horizontal and vertical intersections is the lower-right corner point, as indicated by the filled O in FIG. 2. In effect, the method finds the boundary by a watershed method with O as seed points and X as boundary, and the points on the boundary are the corner points of the table;
S3.5, the detected tables are shown in FIG. 3; output the position of each table.
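Per cell of the table matrix Mt, the four-corner enclosure test of step S3.4 reduces to checking the four surrounding intersections. A minimal sketch over a boolean matrix (True = intersection X), covering only the fully-enclosed case, not the patent's extension for partially surrounded cells:

```python
def enclosed_cells(mt):
    """Return (row, col) cells of the intersection matrix whose four
    corner intersections all exist -- the fully enclosed O of step S3.4.
    Scans from the second row and second column, top-to-bottom,
    left-to-right, as the text prescribes."""
    cells = []
    for r in range(1, len(mt)):
        for c in range(1, len(mt[r])):
            # upper-left, upper-right, lower-left, lower-right corners
            if mt[r-1][c-1] and mt[r-1][c] and mt[r][c-1] and mt[r][c]:
                cells.append((r, c))
    return cells
```

Each returned cell corresponds to one table cell; grouping contiguous cells yields the table regions that S3.5 outputs.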
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.
Claims (1)
1. A simplified method for detecting a form in a form image, comprising the following steps:
S1, positioning transverse lines
S1.1, transversely project the table image into a horizontal projection diagram Hr;
S1.2, find low-lying areas and peak areas in the projection with a watershed-like algorithm: descend from the maximum value to the minimum value, finding the low-lying areas, their left and right boundaries, and the peak areas between them;
S1.3, take lines whose peak height exceeds λ1, peak width is smaller than λ2, and left and right low-lying areas exceed λ3 as candidate lines Li; the left and right coordinates of the root of the peak are Li1 and Li2; λ1, λ2, λ3 are empirical values obtained from statistics;
S1.4, for each line Li found in the horizontal projection diagram Hr, find its position on the image, with upper-left coordinates (Vjk, Li1) and lower-right coordinates (Vjk, Li2);
S1.4.1, for the line Li found in the horizontal projection diagram Hr, cut out a transverse line image Ih from the table sub-image In, extending left and right to the margin, with Li1 and Li2 as its upper and lower boundary lines in In, and project Ih in the longitudinal direction to obtain a projection Lh;
S1.4.2, binarize the longitudinal projection Lh with the threshold produced by the Otsu algorithm; for each region with value 1 and width greater than λ1 characters, generate a line, where the region starts at Hj1 and ends at Hj2, and the upper-left and lower-right coordinates of the line are (Hj1, Li1) and (Hj2, Li2);
S1.4.3, repeat steps S1.4.1 and S1.4.2 for each line Li found in the horizontal projection diagram Hr, obtaining the upper-left and lower-right coordinates of the k-th line as (Hjk, Li1) and (Hjk, Li2);
S2, positioning longitudinal lines
S2.1, longitudinally project the table image into a vertical projection diagram Vc;
S2.2, find low-lying areas and peak areas in the projection with a watershed-like algorithm: descend from the maximum value to the minimum value, finding the low-lying areas, their left and right boundaries, and the peak areas between them;
S2.3, take lines whose peak height exceeds λ1, peak width is smaller than λ2, and left and right low-lying areas exceed λ3 as candidate lines Li, with the left and right coordinates of the root of the peak being Li1 and Li2; λ1, λ2, λ3 are empirical values obtained from statistics;
S2.4, find the position on the table image of each line Li found in the vertical projection diagram Vc, with upper-left coordinates (Li1, Vjk) and lower-right coordinates (Li2, Vjk);
S2.4.1, for the line Li found in the vertical projection diagram Vc, cut a vertical line image Iv from the table sub-image In, extending up and down to the margin, with Li1 and Li2 as its left and right boundary lines in In, and project Iv in the horizontal direction to obtain a projection Lv;
S2.4.2, binarize the horizontal projection Lv with the threshold produced by the Otsu algorithm; for each region with value 1 and width greater than λ1 characters, generate a line, where the region starts at Vj1 and ends at Vj2, and the upper-left and lower-right coordinates of the line are (Li1, Vj1) and (Li2, Vj2);
S2.4.3, repeat steps S2.4.1 and S2.4.2 for each line Li found in the vertical projection diagram Vc, obtaining the upper-left and lower-right coordinates of the k-th line as (Li1, Vjk) and (Li2, Vjk);
S3, parsing into tables
S3.1, arrange the horizontal lines H in ascending order into rows and the vertical lines in ascending order into columns to form the table matrix Mt; multiple lines in the same row/column are arranged in the same row/column;
S3.2, detect whether the transverse and longitudinal line elements in each row H cross, and set crossing elements in H to X; if a line crosses a line in the other direction but a remainder of it does not reach the next line in the other direction, extend it to cross that next line and add X at the corresponding position; if a line crosses no line in the other direction, extend it to cross the two nearest lines in the other direction and add X at the corresponding positions;
S3.3, consecutive adjacent crossing elements X on a row/column of the table matrix Mt form a line, so a line segment has at least two intersections X; if there is only one intersection X, complement an intersection X at an adjacent position on the same row/column;
S3.4, find the sets of O fully surrounded by intersections X, or of continuous O, as tables: starting from the second row and second column, proceeding from top to bottom and left to right, find in the table matrix Mt the intersections X at the four corners (upper-left, upper-right, lower-left, lower-right) that form a fully enclosed O, and store it as a table; if an O is not fully surrounded, with only two or three surrounding intersections X, find the next O horizontally to the right until blocked by a continuous column of intersections X on the right, then find the O on the next row vertically downward until blocked by a continuous row of intersections X below; the intersection X common to the continuous horizontal and vertical intersections is the lower-right corner point;
S3.5, output the position of each table.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910764490.5A CN110544263A (en) | 2019-08-19 | 2019-08-19 | simplified method for detecting form in form image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910764490.5A CN110544263A (en) | 2019-08-19 | 2019-08-19 | simplified method for detecting form in form image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110544263A true CN110544263A (en) | 2019-12-06 |
Family
ID=68711557
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910764490.5A Pending CN110544263A (en) | 2019-08-19 | 2019-08-19 | simplified method for detecting form in form image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110544263A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100111438A1 (en) * | 2008-11-04 | 2010-05-06 | Electronics And Telecommunications Research Institute | Anisotropic diffusion method and apparatus based on direction of edge |
CN106156761A (en) * | 2016-08-10 | 2016-11-23 | 北京交通大学 | The image form detection of facing moving terminal shooting and recognition methods |
CN106897690A (en) * | 2017-02-22 | 2017-06-27 | 南京述酷信息技术有限公司 | PDF table extracting methods |
CN109522805A (en) * | 2018-10-18 | 2019-03-26 | 成都中科信息技术有限公司 | A kind of form processing method for Form ballot paper in community election |
CN109858325A (en) * | 2018-12-11 | 2019-06-07 | 科大讯飞股份有限公司 | A kind of table detection method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20190294663A1 (en) | Method and device for positioning table in pdf document | |
US8548246B2 (en) | Method and system for preprocessing an image for optical character recognition | |
CN101876967B (en) | Method for generating PDF text paragraphs | |
WO1996015510A1 (en) | A method for reducing the size of an image | |
DE102006022062A1 (en) | Method and apparatus for efficient image rotation | |
CN110046462B (en) | Automatic layout method for container profile | |
DE19806985A1 (en) | Organizational process for volumetric data that enables efficient cache rendering accelerations and an efficient graphics hardware design | |
CN110544263A (en) | simplified method for detecting form in form image | |
CN101464998A (en) | Non-gauss veins noise smooth filtering method for textile industry | |
CN103761708A (en) | Image restoration method based on contour matching | |
JPH08194780A (en) | Feature extracting method | |
CN114627041A (en) | Method for detecting defects of photovoltaic module | |
CN106991753A (en) | A kind of image binaryzation method and device | |
JP3420864B2 (en) | Frame extraction device and rectangle extraction device | |
CN101901333B (en) | Method for segmenting word in text image and identification device using same | |
CN114357958A (en) | Table extraction method, device, equipment and storage medium | |
CN110853007B (en) | Self-adaptive drawing file segmentation method based on graphic characteristics and galvanometer processing characteristics | |
CN112506499B (en) | Method for automatically arranging measurement software tags | |
CN117473588B (en) | Nozzle arrangement method, device, equipment and storage medium | |
JPH05300372A (en) | High speed sorting method for median filter | |
CN110532537A (en) | A method of text is cut based on two points of threshold methods and sciagraphy multistage | |
CN111026028B (en) | Method for realizing two-dimensional planar grid division processing for processing workpiece | |
DE10242640A1 (en) | Method for establishing weighting factors for color calculation of chrominance value of texture element for graphics system, involves determining texture element of texture-element | |
CN102117326A (en) | Traversal method used for searching image features | |
EP3327672B1 (en) | Method and device for determining an association between a matrix element of a matrix and a comparison matrix element of a comparison matrix by writing in multiple correspondence tables |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||