CN110544263A - Simplified method for detecting a form in a form image

Simplified method for detecting a form in a form image

Info

Publication number
CN110544263A
CN110544263A
Authority
CN
China
Prior art keywords
line
image
finding
coordinates
lambda
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910764490.5A
Other languages
Chinese (zh)
Inventor
罗胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou University
Original Assignee
Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou University filed Critical Wenzhou University
Priority to CN201910764490.5A priority Critical patent/CN110544263A/en
Publication of CN110544263A publication Critical patent/CN110544263A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/12 Edge-based segmentation
    • G06T7/136 Segmentation involving thresholding
    • G06T7/155 Segmentation involving morphological operators
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20152 Watershed segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a simplified method for detecting a table in a table image, comprising the following steps: S1, locating horizontal lines; S2, locating vertical lines; S3, parsing the lines into tables. The method determines tables from a matrix generated from the horizontal and vertical lines of the table, and is efficient, simple, practical, and accurate.

Description

Simplified method for detecting form in form image
Technical Field
The invention relates to the technical field of form recognition, in particular to a simplified method for detecting a form in a form image.
Background
When detecting tables in images, the watershed algorithm is commonly used to segment the tables. However, because it manipulates the whole image, the computation is inefficient, and if a ruling line is too thin or does not extend fully to the table edge, several tables may be treated as one.
Disclosure of the Invention
To solve these problems, the invention provides a simplified method for detecting tables in a table image, which determines tables from a matrix generated from the horizontal and vertical lines of the table and is efficient, simple, practical, and accurate.
To achieve this purpose, the invention adopts the following technical scheme:
a simplified method of detecting a form in a form image, comprising the steps of:
S1, locating horizontal lines
S1.1, projecting the table image horizontally to obtain the projection profile Hr;
S1.2, finding valley (low-lying) regions and peak regions in the projection with a watershed-like algorithm: descending from the maximum value towards the minimum value, find the valleys, the left and right boundaries of each valley, and the peaks between valleys;
S1.3, taking as candidate lines Li the peaks whose height exceeds λ1, whose width is smaller than λ2 and whose left and right valleys are wider than λ3; the left and right coordinates of the root of the peak are Li1 and Li2; λ1, λ2 and λ3 are empirical values obtained from statistics;
S1.4, for each line Li found in the projection Hr, finding its position on the image, with upper-left corner (Vjk, Li1) and lower-right corner (Vjk, Li2);
S1.4.1, for each line Li found in Hr, cutting the transverse line image Ih out of the table sub-image In, from the left margin to the right margin, the upper and lower boundary rows being Li1 and Li2 respectively, and projecting Ih vertically to obtain the projection Lh;
S1.4.2, binarizing the projection Lh with a threshold generated by the Otsu algorithm; each region whose value is 1 and whose width exceeds λ1 characters generates a line, with start position Hj1 and end position Hj2; the upper-left and lower-right corners of the line are (Hj1, Li1) and (Hj2, Li2);
S1.4.3, repeating steps S1.4.1 and S1.4.2 for each line Li found in Hr, the upper-left and lower-right corners of the k-th line being (Hjk, Li1) and (Hjk, Li2);
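For illustration, the watershed-like peak search of steps S1.2 and S1.3 can be sketched in Python. This is a minimal sketch under stated assumptions: `projection` is a 1-D projection profile, `lam1`, `lam2` and `lam3` stand for the empirical thresholds λ1, λ2, λ3, and the function name and run-based valley measurement are illustrative, not taken from the patent.

```python
def find_peak_lines(projection, lam1, lam2, lam3):
    """Watershed-like peak search on a 1-D projection profile.

    Contiguous runs rising more than lam1 above the profile minimum are
    peaks; a peak is kept as a candidate line (Li1, Li2) when its width
    is below lam2 and the valleys on both sides are wider than lam3.
    """
    base = min(projection)
    above = [v - base > lam1 for v in projection]
    # collect contiguous runs that rise above the valley floor
    peaks, i = [], 0
    while i < len(projection):
        if above[i]:
            j = i
            while j < len(projection) and above[j]:
                j += 1
            peaks.append((i, j))
            i = j
        else:
            i += 1
    lines = []
    for k, (s, e) in enumerate(peaks):
        # valley widths: distance to the neighbouring peaks (or image edge)
        left = s - (peaks[k - 1][1] if k > 0 else 0)
        right = (peaks[k + 1][0] if k + 1 < len(peaks) else len(projection)) - e
        if e - s < lam2 and left > lam3 and right > lam3:
            lines.append((s, e))  # root coordinates (Li1, Li2)
    return lines
```

A narrow, isolated spike in the profile is accepted as a line; a wide peak (e.g. a text block) is rejected by the λ2 width test.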
S2, locating vertical lines
S2.1, projecting the table image vertically to obtain the projection profile Vc;
S2.2, finding valley regions and peak regions in the projection with a watershed-like algorithm: descending from the maximum value towards the minimum value, find the valleys, the left and right boundaries of each valley, and the peaks between valleys;
S2.3, taking as candidate lines Li the peaks whose height exceeds λ1, whose width is smaller than λ2 and whose left and right valleys are wider than λ3; the left and right coordinates of the root of the peak are Li1 and Li2; λ1, λ2 and λ3 are empirical values obtained from statistics;
S2.4, for each line Li found in the projection Vc, finding its position on the table image, with upper-left corner (Li1, Vjk) and lower-right corner (Li2, Vjk);
S2.4.1, for each line Li found in Vc, cutting the vertical line image Iv out of the table sub-image In, from the top margin to the bottom margin, the left boundary column being Li1 and the right boundary column Li2, and projecting Iv horizontally to obtain the projection Lv;
S2.4.2, binarizing the projection Lv with a threshold generated by the Otsu algorithm; each region whose value is 1 and whose width exceeds λ1 characters generates a line, with start position Vj1 and end position Vj2; the upper-left corner of the line is (Li1, Vj1) and the lower-right corner is (Li2, Vj2);
S2.4.3, repeating steps S2.4.1 and S2.4.2 for each line Li found in Vc, the upper-left and lower-right corners of the k-th line being (Li1, Vjk) and (Li2, Vjk);
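Steps S1.4.2 and S2.4.2 binarize a 1-D projection with an Otsu threshold and keep runs wider than λ1 characters. A minimal Python sketch follows; the function names and the exhaustive threshold search are illustrative, since the patent does not specify an implementation.

```python
def otsu_threshold(values):
    """Otsu's threshold for a 1-D sequence: the cut that maximises the
    between-class variance of the two resulting groups."""
    levels = sorted(set(values))
    n = len(values)
    best_t, best_var = levels[0], -1.0
    for t in levels[:-1]:
        lo = [v for v in values if v <= t]
        hi = [v for v in values if v > t]
        w0, w1 = len(lo) / n, len(hi) / n
        var = w0 * w1 * (sum(lo) / len(lo) - sum(hi) / len(hi)) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def line_segments(projection, min_width):
    """Binarise a projection with the Otsu threshold and return the
    (start, end) positions of runs of 1s wider than min_width, i.e. the
    (Hj1, Hj2) / (Vj1, Vj2) endpoints of the located line."""
    t = otsu_threshold(projection)
    mask = [v > t for v in projection]
    segments, i = [], 0
    while i < len(mask):
        if mask[i]:
            j = i
            while j < len(mask) and mask[j]:
                j += 1
            if j - i > min_width:
                segments.append((i, j))
            i = j
        else:
            i += 1
    return segments
```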
S3, parsing the lines into tables
S3.1, arranging the horizontal lines H in ascending order as rows and the vertical lines in ascending order as columns to form the table matrix Mt; multiple lines at the same position are placed in the same row/column;
S3.2, detecting for each element whether its horizontal and vertical lines cross, and setting crossing elements to X; if a line crosses a line in the other direction with a remainder left over, but the remainder does not reach the next line in the other direction, extending the remainder to cross that next line and adding an X at the corresponding position; if a line crosses no line in the other direction at all, extending it to cross the two nearest lines in the other direction and adding X at the corresponding positions;
S3.3, in the table matrix Mt, several consecutive crossing elements X adjacent on a row/column form a line, so every line segment has at least two intersections X; if there is only one intersection X, an intersection X is supplemented at an adjacent position on the same row/column;
S3.4, finding every O (or set of consecutive O) fully surrounded by intersections X as a table: starting from the second row and second column, proceeding from top to bottom and from left to right, finding in the table matrix Mt the intersections X at the four corner points (upper-left, upper-right, lower-left and lower-right) that form a fully enclosed O, and storing it as a table; if an O is surrounded by only two or three intersections X rather than fully enclosed, searching the next O horizontally to the right until blocked by a column of consecutive intersections X, then searching the O of the next row vertically downwards until blocked by a row of consecutive intersections X; the intersection X shared by the vertical and horizontal runs of consecutive X is the lower-right corner point;
S3.5, outputting the position of each table.
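The construction of the table matrix Mt and the crossing test can be sketched as follows. The coordinate conventions and function name are assumptions for illustration, and the extension of near-miss lines described in S3.2 is omitted from this sketch.

```python
def build_table_matrix(h_lines, v_lines):
    """Build the table matrix Mt from detected line segments.

    h_lines are horizontal segments (y, x1, x2); v_lines are vertical
    segments (x, y1, y2). Rows are horizontal lines sorted by y, columns
    vertical lines sorted by x; Mt[r][c] is 'X' where row line r and
    column line c actually cross, and 'O' otherwise.
    """
    h = sorted(h_lines)
    v = sorted(v_lines)
    return [['X' if x1 <= x <= x2 and y1 <= y <= y2 else 'O'
             for (x, y1, y2) in v]
            for (y, x1, x2) in h]
```

For a table whose top horizontal rule stops short of the right edge, the matrix contains an 'O' where the missing intersection would be, which is exactly the case steps S3.2 and S3.3 repair by extending lines and supplementing X.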
The method determines tables from the matrix generated from the horizontal and vertical lines of the table, and is efficient, simple, practical, and accurate.
Drawings
FIG. 1 is a schematic view of a positioning line according to an embodiment of the present invention.
FIG. 2 shows the table matrix Mt and its intersections X in an embodiment of the invention.
FIG. 3 shows the tables detected by an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit it in any way. It should be noted that persons skilled in the art can make variations and modifications without departing from the spirit of the invention, all of which fall within the scope of the present invention.
An embodiment of the invention provides a simplified method for detecting a table in a table image, comprising the following steps:
S1, locating horizontal lines
S1.1, finding valley regions and peak regions in the projection with a watershed-like algorithm: descending from the maximum value towards the minimum value, find the valleys, the left and right boundaries of each valley, and the peaks between valleys;
S1.2, taking as candidate lines Li the peaks whose height exceeds λ1, whose width is smaller than λ2 and whose left and right valleys are wider than λ3; the left and right coordinates of the root of the peak are Li1 and Li2; λ1, λ2 and λ3 are empirical values obtained from statistics;
S1.3, for each line Li found in the projection Hr, finding its position on the image, with upper-left corner (Vj1, Li1) and lower-right corner (Vj2, Li2);
S1.3.1, for each line Li found in Hr, cutting the transverse line image Ih out of the table sub-image In, from the left margin to the right margin, the upper and lower boundary rows being Li1 and Li2 respectively, and projecting Ih vertically to obtain the projection Lh;
S1.3.2, binarizing the projection Lh with a threshold generated by the Otsu algorithm; each region whose value is 1 and whose width exceeds λ1 characters generates a line, with start position Hj1 and end position Hj2; the upper-left corner of the line is (Hj1, Li1) and the lower-right corner is (Hj2, Li2);
S1.4, similarly, for each line Li found in the projection Vc, finding its position on the image, with upper-left corner (Li1, Vj1) and lower-right corner (Li2, Vj2);
S1.4.1, for each line Li found in Vc, cutting the vertical line image Iv out of the table sub-image In, from the top margin to the bottom margin, the left boundary column being Li1 and the right boundary column Li2, and projecting Iv horizontally to obtain the projection Lv;
S1.4.2, binarizing the projection Lv with a threshold generated by the Otsu algorithm; each region whose value is 1 and whose width exceeds λ1 characters generates a line, with start position Vj1 and end position Vj2; the upper-left corner of the line is (Li1, Vj1) and the lower-right corner is (Li2, Vj2);
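The cut-and-project refinement of steps S1.3 to S1.3.2 might look as follows in Python. The binary-image layout (a list of rows with 1 = ink), the function name, and the simple non-zero test standing in for the Otsu threshold are illustrative simplifications.

```python
def locate_horizontal_lines(img, row_peaks, min_width):
    """Turn row-peak candidates into horizontal-line bounding boxes.

    img is a binary image as a list of rows (1 = ink); row_peaks are the
    (Li1, Li2) row ranges found in the projection. Each strip
    Ih = img[Li1:Li2] is projected onto the columns, and runs of
    non-zero projection wider than min_width become boxes
    ((Hj1, Li1), (Hj2, Li2)).
    """
    boxes = []
    for li1, li2 in row_peaks:
        strip = img[li1:li2]                    # transverse line image Ih
        lh = [sum(col) for col in zip(*strip)]  # projection Lh onto columns
        on = [v > 0 for v in lh]
        j = 0
        while j < len(on):
            if on[j]:
                k = j
                while k < len(on) and on[k]:
                    k += 1
                if k - j > min_width:
                    boxes.append(((j, li1), (k, li2)))
                j = k
            else:
                j += 1
    return boxes
```

Vertical lines are handled symmetrically by transposing the image and swapping the coordinate order.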
S2, locating vertical lines
S2.1, finding valley regions and peak regions in the projection with a watershed-like algorithm: descending from the maximum value towards the minimum value, find the valleys, the left and right boundaries of each valley, and the peaks between valleys;
S2.2, taking as candidate lines Li the peaks whose height exceeds λ1, whose width is smaller than λ2 and whose left and right valleys are wider than λ3 (λ1, λ2 and λ3 are empirical values obtained from statistics); the left and right coordinates of the root of the peak are Li1 and Li2;
S2.3, for each line Li found in the projection Hr, finding its position on the image, with upper-left corner (Vj1, Li1) and lower-right corner (Vj2, Li2);
S2.3.1, for each line Li found in Hr, cutting the transverse line image Ih out of the table sub-image In, from the left margin to the right margin, the upper and lower boundary rows being Li1 and Li2 respectively, and projecting Ih vertically to obtain the projection Lh;
S2.3.2, binarizing the projection Lh with a threshold generated by the Otsu algorithm; each region whose value is 1 and whose width exceeds λ1 characters generates a line, with start position Hj1 and end position Hj2; the upper-left corner of the line is (Hj1, Li1) and the lower-right corner is (Hj2, Li2);
S2.4, similarly, for each line Li found in the projection Vc, finding its position on the image, with upper-left corner (Li1, Vj1) and lower-right corner (Li2, Vj2);
S2.4.1, for each line Li found in Vc, cutting the vertical line image Iv out of the table sub-image In, from the top margin to the bottom margin, the left boundary column being Li1 and the right boundary column Li2, and projecting Iv horizontally to obtain the projection Lv;
S2.4.2, binarizing the projection Lv with a threshold generated by the Otsu algorithm; each region whose value is 1 and whose width exceeds λ1 characters generates a line, with start position Vj1 and end position Vj2; the upper-left corner of the line is (Li1, Vj1) and the lower-right corner is (Li2, Vj2);
S3, parsing the lines into tables
S3.1, arranging the horizontal lines H in ascending order as rows and the vertical lines in ascending order as columns to form the table matrix Mt; multiple lines at the same position are placed in the same row/column;
S3.2, detecting for each element whether its horizontal and vertical lines cross, and setting crossing elements to X; if a line crosses a line in the other direction with a remainder left over, but the remainder does not reach the next line in the other direction, extending the remainder to cross that next line and adding an X at the corresponding position; if a line crosses no line in the other direction at all, extending it to cross the two nearest lines in the other direction and adding X at the corresponding positions;
S3.3, in the table matrix Mt, several consecutive crossing elements X adjacent on a row/column form a line, so every line segment has at least two intersections X; if there is only one intersection X, an intersection X is supplemented at an adjacent position on the same row/column, as the bold X in FIG. 2;
S3.4, finding every O (or set of consecutive O) fully surrounded by intersections X as a table: starting from the second row and second column, proceeding from top to bottom and from left to right, traversing each row-column position O in the table matrix Mt and finding the intersections X at its four corner points (upper-left, upper-right, lower-left and lower-right) that form a fully enclosed O, as shown in the first step in FIG. 2, and storing it as a table; if an O is surrounded by only two or three intersections X rather than fully enclosed, searching the next O horizontally to the right until blocked by a column of consecutive intersections X, then searching the O of the next row vertically downwards until blocked by a row of consecutive intersections X; the intersection X shared by the vertical and horizontal runs of consecutive X is the lower-right corner point, as indicated by the filled O in FIG. 2; in effect the method finds the boundary by a watershed process with O as seed points and X as boundaries, and the points on the boundary are the corner points of the table;
S3.5, the detected tables are shown in FIG. 3, and the position of each table is output.
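A minimal sketch of the enclosed-cell search of step S3.4, restricted to single cells whose four corners are all X; merging runs of partially enclosed O into larger cells, as described above, is omitted, and the function name is an illustrative assumption.

```python
def find_enclosed_cells(mt):
    """Scan the table matrix Mt for fully enclosed cells.

    Starting from the second row and second column, top to bottom and
    left to right, a cell is enclosed when the four corner positions
    Mt[r-1][c-1], Mt[r-1][c], Mt[r][c-1], Mt[r][c] all hold 'X'; each
    such cell is reported by its upper-left and lower-right corner
    indices in the matrix.
    """
    cells = []
    for r in range(1, len(mt)):
        for c in range(1, len(mt[0])):
            corners = (mt[r - 1][c - 1], mt[r - 1][c],
                       mt[r][c - 1], mt[r][c])
            if all(x == 'X' for x in corners):
                cells.append(((r - 1, c - 1), (r, c)))
    return cells
```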
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (1)

1. A simplified method for detecting a form in a form image, characterized in that it comprises the following steps:
S1, locating horizontal lines
S1.1, projecting the table image horizontally to obtain the projection profile Hr;
S1.2, finding valley (low-lying) regions and peak regions in the projection with a watershed-like algorithm: descending from the maximum value towards the minimum value, find the valleys, the left and right boundaries of each valley, and the peaks between valleys;
S1.3, taking as candidate lines Li the peaks whose height exceeds λ1, whose width is smaller than λ2 and whose left and right valleys are wider than λ3; the left and right coordinates of the root of the peak are Li1 and Li2; λ1, λ2 and λ3 are empirical values obtained from statistics;
S1.4, for each line Li found in the projection Hr, finding its position on the image, with upper-left corner (Vjk, Li1) and lower-right corner (Vjk, Li2);
S1.4.1, for each line Li found in Hr, cutting the transverse line image Ih out of the table sub-image In, from the left margin to the right margin, the upper and lower boundary rows being Li1 and Li2 respectively, and projecting Ih vertically to obtain the projection Lh;
S1.4.2, binarizing the projection Lh with a threshold generated by the Otsu algorithm; each region whose value is 1 and whose width exceeds λ1 characters generates a line, with start position Hj1 and end position Hj2; the upper-left and lower-right corners of the line are (Hj1, Li1) and (Hj2, Li2);
S1.4.3, repeating steps S1.4.1 and S1.4.2 for each line Li found in Hr, the upper-left and lower-right corners of the k-th line being (Hjk, Li1) and (Hjk, Li2);
S2, locating vertical lines
S2.1, projecting the table image vertically to obtain the projection profile Vc;
S2.2, finding valley regions and peak regions in the projection with a watershed-like algorithm: descending from the maximum value towards the minimum value, find the valleys, the left and right boundaries of each valley, and the peaks between valleys;
S2.3, taking as candidate lines Li the peaks whose height exceeds λ1, whose width is smaller than λ2 and whose left and right valleys are wider than λ3; the left and right coordinates of the root of the peak are Li1 and Li2; λ1, λ2 and λ3 are empirical values obtained from statistics;
S2.4, for each line Li found in the projection Vc, finding its position on the table image, with upper-left corner (Li1, Vjk) and lower-right corner (Li2, Vjk);
S2.4.1, for each line Li found in Vc, cutting the vertical line image Iv out of the table sub-image In, from the top margin to the bottom margin, the left boundary column being Li1 and the right boundary column Li2, and projecting Iv horizontally to obtain the projection Lv;
S2.4.2, binarizing the projection Lv with a threshold generated by the Otsu algorithm; each region whose value is 1 and whose width exceeds λ1 characters generates a line, with start position Vj1 and end position Vj2; the upper-left corner of the line is (Li1, Vj1) and the lower-right corner is (Li2, Vj2);
S2.4.3, repeating steps S2.4.1 and S2.4.2 for each line Li found in Vc, the upper-left and lower-right corners of the k-th line being (Li1, Vjk) and (Li2, Vjk);
S3, parsing the lines into tables
S3.1, arranging the horizontal lines H in ascending order as rows and the vertical lines in ascending order as columns to form the table matrix Mt; multiple lines at the same position are placed in the same row/column;
S3.2, detecting for each element whether its horizontal and vertical lines cross, and setting crossing elements to X; if a line crosses a line in the other direction with a remainder left over, but the remainder does not reach the next line in the other direction, extending the remainder to cross that next line and adding an X at the corresponding position; if a line crosses no line in the other direction at all, extending it to cross the two nearest lines in the other direction and adding X at the corresponding positions;
S3.3, in the table matrix Mt, several consecutive crossing elements X adjacent on a row/column form a line, so every line segment has at least two intersections X; if there is only one intersection X, an intersection X is supplemented at an adjacent position on the same row/column;
S3.4, finding every O (or set of consecutive O) fully surrounded by intersections X as a table: starting from the second row and second column, proceeding from top to bottom and from left to right, finding in the table matrix Mt the intersections X at the four corner points (upper-left, upper-right, lower-left and lower-right) that form a fully enclosed O, and storing it as a table; if an O is surrounded by only two or three intersections X rather than fully enclosed, searching the next O horizontally to the right until blocked by a column of consecutive intersections X, then searching the O of the next row vertically downwards until blocked by a row of consecutive intersections X; the intersection X shared by the vertical and horizontal runs of consecutive X is the lower-right corner point;
S3.5, outputting the position of each table.
CN201910764490.5A 2019-08-19 2019-08-19 simplified method for detecting form in form image Pending CN110544263A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910764490.5A CN110544263A (en) 2019-08-19 2019-08-19 simplified method for detecting form in form image


Publications (1)

Publication Number Publication Date
CN110544263A (en) 2019-12-06

Family

ID=68711557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910764490.5A Pending CN110544263A (en) 2019-08-19 2019-08-19 simplified method for detecting form in form image

Country Status (1)

Country Link
CN (1) CN110544263A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100111438A1 (en) * 2008-11-04 2010-05-06 Electronics And Telecommunications Research Institute Anisotropic diffusion method and apparatus based on direction of edge
CN106156761A (en) * 2016-08-10 2016-11-23 北京交通大学 The image form detection of facing moving terminal shooting and recognition methods
CN106897690A (en) * 2017-02-22 2017-06-27 南京述酷信息技术有限公司 PDF table extracting methods
CN109522805A (en) * 2018-10-18 2019-03-26 成都中科信息技术有限公司 A kind of form processing method for Form ballot paper in community election
CN109858325A (en) * 2018-12-11 2019-06-07 科大讯飞股份有限公司 A kind of table detection method and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination