EP0654751B1 - Method of analyzing data defining an image

Method of analyzing data defining an image

Info

Publication number
EP0654751B1
EP0654751B1 EP94308654A
Authority
EP
European Patent Office
Prior art keywords
data
image
row
column
input image
Prior art date
Legal status
Expired - Lifetime
Application number
EP94308654A
Other languages
German (de)
English (en)
Other versions
EP0654751A2 (fr)
EP0654751A3 (fr)
Inventor
James Mahoney
Current Assignee
Xerox Corp
Original Assignee
Xerox Corp
Priority date
Filing date
Publication date
Application filed by Xerox Corp filed Critical Xerox Corp
Publication of EP0654751A2 publication Critical patent/EP0654751A2/fr
Publication of EP0654751A3 publication Critical patent/EP0654751A3/fr
Application granted granted Critical
Publication of EP0654751B1 publication Critical patent/EP0654751B1/fr
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 - Document-oriented image-based pattern recognition
    • G06V30/41 - Analysis of document content
    • G06V30/412 - Layout analysis of documents structured with printed lines or input boxes, e.g. business forms or tables

Definitions

  • the present invention relates to a method of analyzing data defining an image and, more particularly, to techniques for analyzing an image showing a graphical representation.
  • JP-A-05 012 489 discloses a table-recognizing device that can recognize a table structure in graphic data read from printed matter or a document regardless of the type of lines used.
  • An extraction part is provided to extract horizontal and vertical black picture element runs having a length longer than a fixed length.
  • a ruled line extraction part is provided to extract ruled lines by linking the horizontal and vertical black picture element runs.
  • a table structure extraction part is provided to retrieve the rectangle formed by the extracted ruled lines and to extract it as a cell of the table.
  • Multi-media RISSC Informatics: Retrieving Information with Simple Structural Components deals with the segmentation of documents.
  • the goal is to partition the document, which is given as a pixel area, into regions that capture its layout.
  • the problem can be separated into two parts: determining where the regions are and classifying them according to their layout structure.
  • among such layout structures are tables.
  • This document is also directed to detecting tables.
  • the corresponding method operates on a block of text which constitutes a 2-dimensional area.
  • the algorithm creates a white space density graph by calculating the percentage of white space in each column of the block.
  • a column that serves as a separator in perfect tables consists of 100% white space. Considering slight irregularities, a percentage of less than 100% is considered to be sufficient. If there are local maxima in the white space density graph that are higher than the required percentage of white space, they are candidates for column separators in a possible table. Then, each candidate column is analyzed for lexical structure. If there is lexical structure in each column, the block of text is considered to be a table.
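  • as a minimal sketch of the white-space density idea just described (illustrative code, not the cited method's implementation; the function name and threshold are assumptions): compute the fraction of white pixels in each pixel column of a binary block and flag near-empty columns as candidate separators.

```python
import numpy as np

def separator_candidates(block: np.ndarray, threshold: float = 0.95) -> np.ndarray:
    """Indices of pixel columns that are almost entirely white space.

    `block` is a binary text block with 1 for ink and 0 for background.
    """
    white_density = 1.0 - block.mean(axis=0)  # fraction of white pixels per column
    return np.flatnonzero(white_density >= threshold)
```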
  • the invention is based on a technique for analyzing an image showing a row/column representation such as a table; a matrix; an array; a calendar; a two-dimensional connection diagram such as a programmable logic array (PLA) diagram; a puzzle or game diagram such as a crossword puzzle, a chess or checkers diagram, a go game diagram, or ticktacktoe; or another graphical representation that includes rows and columns and represents information for row/column combinations.
  • the technique is based on the recognition that a row/column representation includes features that extend across the representation in orthogonal directions, with information for each row/column combination represented in a part of the representation where two such features intersect, one feature being a row and the other being a column.
  • the technique therefore analyzes an image by first obtaining the line or band for a row and the line or band for a column; the lines or bands can then be used to obtain the information represented for the row/column combination.
  • the technique obtains input image data defining an input image showing a row/column representation.
  • the technique uses the input image data to obtain row data and column data.
  • the row data indicate parts of the input image within a row of the row/column representation.
  • the column data indicate parts of the input image within a column.
  • the technique uses the row data and the column data to obtain content data indicating information represented for the row/column combination.
  • the technique can, for example, obtain the row data and column data by first obtaining horizontal data and vertical data indicating, respectively, horizontal and vertical lines in the input image. Then, the technique can use the horizontal data to obtain row data indicating parts of the input image between horizontal lines and can also use the vertical data to obtain column data indicating parts of the input image between vertical lines.
  • the row data and the column data together indicate rows and columns.
  • the row data and column data can therefore include two versions of the input image, one a row data image indicating pixels in parts of the input image that are between horizontal lines, the other a column data image indicating pixels between vertical lines.
  • each pixel in a row could be labeled with an integer uniquely identifying its row
  • each pixel in a column could be labeled with an integer uniquely identifying its column.
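  • as a rough illustration of this labeling, the sketch below (a simplification assuming a binary input in which the horizontal lines span the full image width, rather than the band operations described later) assigns each pixel between successive horizontal lines an integer row identifier; the column labeling is the same computation on the transposed image.

```python
import numpy as np

def label_rows(horizontals: np.ndarray) -> np.ndarray:
    """Label each pixel with an integer identifying the row band it lies in.

    `horizontals` is a binary image with 1 on horizontal-line pixels;
    0 in the result marks pixels on lines or outside any band.
    """
    labels = np.zeros(horizontals.shape, dtype=int)
    line_ys = np.flatnonzero(horizontals.any(axis=1))  # y coordinates containing line pixels
    row_id = 0
    for top, bottom in zip(line_ys[:-1], line_ys[1:]):
        if bottom > top + 1:  # a non-empty band between two line rows
            row_id += 1
            labels[top + 1:bottom, :] = row_id  # every pixel in the band gets the row id
    return labels
```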
  • Content data can be obtained for each row/column combination.
  • the content data can include a list of sublists, with each sublist including a list of combination identifiers identifying combinations of columns with one of the rows.
  • Each sublist can include, for each combination, bounding box data indicating a bounding box within which information for the identified combination of a column with a row is represented.
  • the technique can store the content data or use it in various ways.
  • the technique can use the content data to provide control signals to a system.
  • the input image can show a sketch of a row/column representation and the technique can use the content data to obtain data defining an image showing a precisely formed row/column representation similar to the sketch, such as a table.
  • the technique can be implemented with a machine that includes image input circuitry and data indicating image processing instructions.
  • the image input circuitry can receive data defining an image set that shows a row/column representation.
  • the machine's processor can execute the image processing instructions. In executing the image processing instructions, the processor can receive the input image data from the image input circuitry and use the input image data to obtain row data and column data as described above. The processor can then use the row data and column data to obtain content data indicating information represented by the row/column representation for a combination of a row and a column.
  • the machine can be a high-speed image processing server that responds to image processing requests from a network to which it is connected.
  • the machine can also include image output circuitry, and the processor can use the content data to obtain output image data defining an output image that shows a row/column representation similar to the analyzed row/column representation.
  • the machine can be a fax server or a copier.
  • the technique can also be implemented in a software product that includes a storage medium and data stored by the storage medium.
  • the software product can be used in a machine that includes image input circuitry.
  • the data stored by the storage medium can include image processing instructions the machine's processor can execute.
  • the processor can receive input image data from the image input circuitry defining an input image that shows a row/column representation and use the input image data to obtain row data and column data as described above.
  • the processor can then use the row/column data to obtain content data indicating information represented by the row/column representation for a combination of a row and a column.
  • the technique described above is advantageous because it makes it possible to automatically analyze a variety of row/column representations.
  • the content data obtained by the technique can be used to produce an image showing a precisely formed row/column representation in response to a simple sketch by a user.
  • a “combination of a row and a column” or a “row/column combination” is a pairing of a row and a column that both include one position at which information can be represented. The position may be called the "intersection" of the row and the column in the combination.
  • a "row/column representation” is a graphical representation that includes rows and columns and represents information for row/column combinations.
  • the following categories of graphical representations are examples of row/column representations: tables, matrices, arrays, calendars, two-dimensional connection diagrams, puzzles such as crossword puzzles, or game diagrams such as chess or checkers diagrams, go game diagrams, or ticktacktoe games.
  • Fig. 1 shows schematically how an image showing a row/column representation can be analyzed.
  • Fig. 2 shows general acts in analyzing an image showing a row/column representation.
  • Fig. 3 shows general components of a software product and of a machine in which it can be used.
  • image 10 shows row/column representation 12.
  • Image 10 can, for example, be a sketch.
  • Row/column representation 12 illustratively includes two rows and two columns if the rows and columns are features between horizontal and vertical lines, and three rows and three columns if the rows and columns are features that include horizontal and vertical lines.
  • a machine receiving data defining image 10 can respond by automatically obtaining row data 20 indicating parts of image 10 that are within row m.
  • the machine can also automatically obtain column data 22 indicating parts of image 10 that are within column n.
  • the machine can automatically use row data 20 and column data 22 to obtain content data 24 indicating information represented by row/column representation 12 for the combination of row m and column n.
  • the general acts in Fig. 2 begin in box 40 by receiving input image data defining an input image that shows a row/column representation.
  • the act in box 42 uses the input image data to obtain row data indicating parts of the input image within a row of the row/column representation and column data indicating parts of the input image within a column of the row/column representation.
  • the act in box 44 then uses the row data and column data to obtain content data indicating information represented by the row/column representation for the combination of the row and the column.
  • Fig. 3 shows software product 60, an article of manufacture that can be used in a system that includes components like those shown in Fig. 3.
  • Software product 60 includes data storage medium 62 that can be accessed by storage medium access device 64.
  • Data storage medium 62 could, for example, be a magnetic medium such as a set of one or more tapes, diskettes, or floppy disks; an optical medium such as a set of one or more CD-ROMs; or any other appropriate medium for storing data.
  • Data storage medium 62 stores data that storage medium access device 64 can provide to processor 66.
  • Processor 66 is connected for accessing memory 68, which can include program memory storing data indicating instructions that processor 66 can execute and also data memory storing data that processor 66 can access in executing the instructions.
  • Processor 66 is also connected for receiving data defining images from image input circuitry 70.
  • the data could be obtained from facsimile (fax) machine 72; from scanner 74; from editor 76, which could be a forms editor or other interactive image editor controlled by user input devices such as a keyboard and mouse or a pen- or stylus-based input device; or from network 78, which could be a local area network or other network capable of transmitting data defining an image.
  • software product 60 includes data stored by storage medium 62.
  • the stored data include data indicating image processing instructions 80, which processor 66 can execute to perform acts like those in Fig. 2.
  • processor 66 receives input image data defining an input image from image input circuitry 70.
  • the input image shows a row/column representation.
  • Processor 66 uses the input image data to obtain row data indicating parts of the input image within a row of the row/column representation and column data indicating parts of the input image within a column of the row/column representation.
  • Processor 66 uses the row data and column data to obtain content data indicating information represented by the row/column representation for the row/column combination.
  • Processor 66 can also be connected for providing data defining images to image output circuitry 90.
  • software product 60 could include data indicating instructions processor 66 can execute to use the content data to obtain output image data defining an output image.
  • the output image could show a table or other representation showing the information represented for each row/column combination, for example.
  • the output image data could be provided to image output circuitry 90, and could in turn be provided to fax machine 92, to printer 94, to display 96, or to network 98.
  • the content data could also be used to provide control signals.
  • memory 68 could store control instructions processor 66 can execute to use the content data to obtain control data defining control signals.
  • the control data could be provided to control output circuitry 100, which could respond by providing control signals to system 102.
  • the content data could instead be stored in memory 68 for possible future use. This would be appropriate, for example, where information indicating an operation to be performed on an input image has not been obtained at the time data defining the input image is received.
  • Fig. 4 illustrates ways in which a user can provide an image showing a hand sketch of a row/column representation.
  • Fig. 5 illustrates ways in which a user can provide an image showing a row/column representation by interacting with a machine.
  • Image 100 shows an array with two rows and two columns; for each row/column combination, the array includes a feature, illustratively a character.
  • Image 102 shows a table with two rows and two columns; for each row/column combination, the table includes a rectangle containing a feature, illustratively a character.
  • Image 104 shows a matrix with two rows and two columns, enclosed by brackets at left and right; for each row/column combination, the matrix includes a feature, illustratively a character.
  • Image 106 shows a crossword puzzle with three rows and two columns; for each row/column combination, the puzzle includes a square, with some squares shaded and others filled in with characters.
  • Image 108 shows an excerpt from a calendar with rows for weeks and a column for each day of the week; for each row/column combination, the calendar includes a rectangle large enough to include a number for a day of the month and additional information, illustratively a character for the first day of the month.
  • Image 110 shows an excerpt from a chess diagram; for each row/column combination, the diagram includes a square, with alternate squares white and shaded as in a checkerboard, and with some squares including symbols for chess pieces.
  • Image 112 shows a tic-tac-toe game with three rows and three columns; for each row/column combination, the puzzle includes an area bounded on at least two sides, with some areas blank and others filled in with "O" or "X".
  • Image 114 shows an excerpt from a circuit diagram of a programmable logic array (PLA) or another such array of circuitry with input lines extending in one direction and output lines extending in the other, defining rows and columns of intersections; some intersections are unmarked, but others are marked with an "X" to indicate a connection.
  • Image 116 shows an excerpt from a go game diagram with horizontal and vertical lines forming rows and columns of intersections; some intersections are vacant, but others are covered with a white or black circle to indicate a white or black stone. As suggested by the examples, a wide variety of row/column representations can be formed.
  • the images in Fig. 4 can be obtained in any appropriate way.
  • the row/column representations can be sketches produced by marking actions performed on a marking medium by hand.
  • if the marking medium is a sheet, scanner 130 can receive the sheet. Scanner 130 operates on the sheet to provide data defining an image showing a row/column representation.
  • if the marking medium is a marking surface of an electronic device that can sense marks, encoder 132 can receive signals from the electronic device and use the signals to obtain data defining an image showing a row/column representation. This data can then be provided to printer 134 to obtain a sheet on which marks are printed, and this sheet can be provided to scanner 130. Scanner 130 provides data defining an image showing a row/column representation.
  • Fig. 4 also shows that data from encoder 132 could be used directly as data defining an image showing a row/column representation. This would be appropriate if encoder 132 could provide data defining an image in response to marking actions.
  • Fig. 5 shows machine 150, which could be a personal computer, a workstation, or another data processing system.
  • Machine 150 includes processor 152; display 154; keyboard 156; pointing device 158, illustratively a mouse; and screen position indicating device 160, illustratively a stylus.
  • a user can operate keyboard 156 and pointing device 158 to provide signals to processor 152.
  • a user can perform marking actions with screen position indicating device 160 on the surface of display 154 to provide signals to processor 152.
  • processor 152 presents and modifies image 162 on display 154, so that the user can continue to provide signals until image 162 shows a desired row/column representation. Then the user can provide a signal requesting that processor 152 provide data defining image 162.
  • Processor 152 could execute a number of types of software to permit a user to produce an image in the manner described above.
  • Processor 152 could execute document editing software or image editing software, for example.
  • Fig. 6 shows a system in which the general features described above have been implemented.
  • System 180 in Fig. 6 includes workstation 182, a Sun SPARCStation 10 workstation.
  • Scanner 184 can be a conventional scanner such as a Xerox Datacopy GS Plus scanner.
  • Printer 186 can be a conventional printer such as a Xerox laser printer.
  • Network 188 can be a conventional network operating in accordance with a standard protocol, such as the Ethernet protocol.
  • Workstation CPU 190 is connected to receive data from scanner 184 and network 188 and is connected to provide data to printer 186 and network 188.
  • CPU 190 can receive data defining an image showing a row/column representation from scanner 184 as described above in relation to Fig. 4.
  • CPU 190 can receive data defining an image obtained in the manner described above in relation to Fig. 5 from network 188.
  • workstation CPU 190 is connected to access program memory 192 and data memory 194 and other conventional workstation peripherals (not shown).
  • Data memory 194 is illustratively storing image data 196 defining an image showing a row/column representation.
  • Program memory 192 stores instructions CPU 190 can execute to perform operations implementing the general acts in Fig. 2.
  • CPU 190 executes operating system instructions 200 that provide a Unix operating system (Unix is a registered trademark) or other appropriate operating system.
  • the other instructions stored by program memory 192 make calls to operating system instructions 200 in a conventional manner.
  • the instructions can be obtained from source code in a conventional programming language such as Lisp, C, or the like with conventional compiler or interpreter techniques that produce object code.
  • a machine can store data indicating the source code or the resulting object code on a data storage medium in manufacturing a software product as described above in relation to Fig. 3, with the source code or object code being stored for access by a storage medium access device when the software product is used in a machine like system 180.
  • in executing image receiving instructions 202, CPU 190 receives data defining an image and stores it in data memory 194, as illustrated by image data 196.
  • the data defining the image may be received from scanner 184 or network 188.
  • CPU 190 calls indexing instructions 206 and content extraction instructions 208.
  • Image processing instructions 204 also perform other operations relating to analysis of row/column representations.
  • CPU 190 calls analysis instructions 210 to perform basic geometric analysis of the image defined by image data 196, producing row data 220 and column data 222.
  • Row data 220 indicate parts of the image within a row.
  • Column data 222 indicate parts of the image within a column.
  • CPU 190 can call analysis instructions 210 to perform basic geometric analysis of images defined by row data 220 and column data 222, producing content data 224.
  • Content data 224 indicate information represented by the row/column representation for the row/column combination.
  • Fig. 7 shows acts in executing image processing instructions 204 in Fig. 6.
  • Fig. 8 shows acts in executing indexing instructions 206 in Fig. 6.
  • the acts in Figs. 7 and 8 are performed on items of data, each of which defines an image. Each item is referred to as a "data image." Some data images can be used in obtaining others. In general, all of the data images define images with the same number of pixels, and each operation produces an image with the same number of pixels. An operation on two images typically uses values of pairs of pixels to produce, for each pair, a pixel value in an image being produced; within each pair, one pixel is from each image and the two pixels in the pair are both at the same location as the pixel value in the image being produced. Many examples of such operations are described in European Patent Applications EP 654 748 A2 and EP 654 766 A2.
  • the act in box 240 in Fig. 7 begins by receiving data defining an input image.
  • the input image data may have been received previously by executing image receiving instructions 202, and may be provided with a call to image processing instructions 204.
  • the act in box 242 uses the input image data from box 240 to obtain an indexed rows data image and an indexed columns data image. If the input image shows a row/column representation that meets certain constraints, the indexed rows data image includes, for each row, a connected component in which each pixel is labeled with a unique identifier for the row. Similarly, the indexed columns data image includes, for each column, a connected component in which each pixel is labeled with a unique identifier for the column. An implementation of the act in box 242 is discussed in greater detail below.
  • the act in box 250 begins an outer iterative loop that performs an iteration for each row's connected component in the indexed rows data image.
  • the act in box 252 begins an inner iterative loop that performs an iteration for each column's connected component in the indexed columns data image.
  • the act in box 254 begins each inner iteration by obtaining a table item data image showing a table item that is in both the row and the column currently being handled.
  • the act in box 254 can be implemented by comparing the unique identifier of the current row with each pixel's label in the indexed rows data image to obtain a current row data image in which each pixel in the current row's connected component is ON; the act in box 254 can also compare the unique identifier of the current column with each pixel's label in the indexed columns data image to obtain a current column data image in which each pixel in the current column's connected component is ON.
  • the act in box 254 can then AND the current row data image, the current column data image, and a table items data image showing the items in the table to obtain the table item data image.
  • the table items data image can be obtained as discussed below in relation to the implementation of the act in box 242.
  • the act in box 256 then uses the table item data image from box 254 to obtain box data indicating the table item's bounding box.
  • the box data can be a list of four items: a left x coordinate, a top y coordinate, a width, and a height.
  • the act in box 258 then adds the box data from box 256 to a sublist of boxes for the row currently being handled.
  • the act in box 260 adds the current row's sublist to a list of sublists, and when all the rows have been handled, the act in box 262 returns the list of sublists as the content data.
  • the list of sublists indicates a bounding box for an item at each row/column combination, and the indicated bounding box can be used with the input image data from box 240 to obtain data defining an image of the item.
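  • the loop in boxes 250-262 can be sketched as follows, assuming the indexed rows and columns are supplied as integer-labeled images; the dictionaries are an illustrative stand-in for the combination identifiers and box data, not the implementation's actual list format.

```python
import numpy as np

def extract_content(indexed_rows, indexed_cols, table_items):
    """For each row/column combination, AND the row mask, column mask, and
    table-items image, and record the bounding box of what remains as
    (left x, top y, width, height)."""
    items = table_items.astype(bool)
    content = []
    for r in range(1, int(indexed_rows.max()) + 1):      # box 250: outer loop over rows
        row_mask = indexed_rows == r                     # current row data image
        sublist = []
        for c in range(1, int(indexed_cols.max()) + 1):  # box 252: inner loop over columns
            item = row_mask & (indexed_cols == c) & items        # box 254: AND the images
            ys, xs = np.nonzero(item)
            if ys.size:                                  # box 256: bounding box of the item
                box = (int(xs.min()), int(ys.min()),
                       int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
                sublist.append({"row": r, "column": c, "box": box})  # box 258
        content.append(sublist)                          # box 260: add sublist
    return content                                       # box 262: list of sublists
```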
  • Fig. 8 shows how the act in box 242 in Fig. 7 can be implemented.
  • Each box in Fig. 8 represents a data image.
  • Input image data 270 is received from box 240 in Fig. 7.
  • the act in box 242 can obtain table boundary data image 272 by selecting a connected component that has the largest bounding box.
  • the act in box 242 can label each connected component in input image 270 with a unique identifier. Then, each connected component can be handled separately to obtain a bounding box area that is the product of the box width and height.
  • the act in box 242 can perform a spread operation to label the pixels in each connected component with the connected component's bounding box area.
  • the act in box 242 can obtain the maximum bounding box area, and can compare each pixel's label from the spread operation with the maximum bounding box area, keeping a pixel ON if its label is equal to the maximum to obtain table boundary data image 272.
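  • a sketch of this selection, substituting off-the-shelf connected-component labeling for the pixel-parallel label, spread, and maximum operations the description assumes:

```python
import numpy as np
from scipy import ndimage

def table_boundary(image: np.ndarray) -> np.ndarray:
    """Keep only the connected component whose bounding box area is largest,
    a stand-in for table boundary data image 272."""
    labeled, n = ndimage.label(image)
    if n == 0:
        return np.zeros(image.shape, dtype=bool)
    boxes = ndimage.find_objects(labeled)
    areas = [(s[0].stop - s[0].start) * (s[1].stop - s[1].start) for s in boxes]
    best = int(np.argmax(areas)) + 1  # component labels start at 1
    return labeled == best
```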
  • the act in box 242 can then use table boundary data image 272 to obtain gap-filled horizontal and vertical data images 274.
  • the act in box 242 can first use table boundary data image 272 to obtain a horizontal non-gap data image and a vertical non-gap data image.
  • the act in box 242 can then use the horizontal and vertical non-gap data images to obtain horizontals and verticals data images.
  • the act in box 242 can then perform a gap fill operation on each of these data images to obtain gap filled horizontals and verticals data images 274.
  • the act in box 242 can obtain the horizontal non-gap data image by first using table boundary data image 272 to obtain a skeleton data image.
  • the act in box 242 can first obtain the complement of table boundary data image 272, then use the complement to obtain four partial edge data images.
  • the partial edge data images can then be used to obtain an edge direction data image in which pixels on edges in opposite directions are labeled differently.
  • the act in box 242 can obtain a horizontal maximum data image by taking, at each pixel, the maximum of the value from the -x partial edge data image and twice the value from the +x partial edge data image.
  • the act in box 242 can obtain a vertical maximum data image by taking, at each pixel, the maximum of the value from the -y partial edge data image and twice the value from the +y partial edge data image.
  • the act in box 242 can obtain a differences data image for each of the horizontal and vertical maximum data images, and can gate the value at each pixel of the horizontal and vertical maximum data images by the value at the same pixel in the complement of the respective differences data image; in the differences data image, a pixel is ON if it has a four-neighbor with a distinct nonzero value in the maximum data image from which it was obtained.
  • the act in box 242 can perform a read operation to obtain resulting data images in which each pixel is labeled with the value of the revised horizontal or vertical maximum data image at the nearest neighboring ON pixel in the complement of table boundary data image 272; obtain a differences data image for each resulting data image; and obtain the skeleton data image as the union of the differences data images.
  • the act in box 242 can use the skeleton data image to obtain two partial edge data images in the +y and -y directions.
  • the act in box 242 can obtain two intersection data images, one by intersecting the +y partial edge data image with a translation of the -y partial edge data image by one pixel in the -y direction, the other by intersecting the -y partial edge data image with a translation of the +y partial edge data image by one pixel in the +y direction.
  • the act in box 242 can take the union of the intersection data images, obtaining a union data image.
  • the act in box 242 can use the union data image to obtain a horizontally grown data image in which any pixel is ON if it was within two pixels in a horizontal direction from an edge between an ON pixel and an OFF pixel that are horizontal neighbors.
  • the act in box 242 can take the intersection of the skeleton data image with the horizontally grown data image, and can then take the union of the resulting data image with the union data image to obtain a horizontal non-gap data image in which a two-pixel wide gap at a crossing has been eliminated.
  • the act in box 242 can similarly obtain the vertical non-gap data image by using the same skeleton data image to obtain two partial edge data images in the +x and -x directions; obtaining two intersection data images; taking the union of the intersection data images; taking the intersection of the skeleton data image with a vertically grown data image obtained from the union data image; and taking the union of the resulting data image with the union data image to obtain the vertical non-gap data image.
  • the act in box 242 can then obtain a vertical subset data image showing connected components in the vertical non-gap data image that are completely included in connected components in the horizontal non-gap data image.
  • the act in box 242 can then perform a set difference operation to remove the vertical subset data image from the horizontal non-gap data image, obtaining a vertical-free horizontals data image.
  • the act in box 242 can similarly obtain a horizontal subset data image showing connected components in the horizontal non-gap data image that are completely included in connected components in the vertical non-gap data image.
  • the act in box 242 can then perform a set difference operation to remove the horizontal subset data image from the vertical non-gap data image, obtaining a horizontal-free verticals data image.
  • Each subset data image can be obtained from a first image and a second image by first performing a set difference operation to remove the first image from the second image, obtaining a seed data image.
  • the seed data image is then used in a coloring operation to obtain a version of the second image in which pixels are ON in the connected components that include parts of the second image that are not within connected components in the first image.
  • This version can then be removed from the second image to obtain the subset data image showing connected components of the second image that are subsets of connected components in the first image.
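  • the seed and coloring procedure can be sketched as follows, with scipy's component labeling standing in for the set difference and coloring operations:

```python
import numpy as np
from scipy import ndimage

def subset_components(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Connected components of `second` completely contained in components
    of `first`, following the seed/coloring description above."""
    first = first.astype(bool)
    second = second.astype(bool)
    seed = second & ~first                  # set difference: parts escaping `first`
    labeled, _ = ndimage.label(second)
    escaping = np.unique(labeled[seed])     # labels of components touching the seed
    escaping = escaping[escaping != 0]
    colored = np.isin(labeled, escaping)    # coloring grown from the seed
    return second & ~colored                # remove escapers; pure subsets remain
```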
  • the act in box 242 can then use the vertical-free horizontals data image and the horizontal-free verticals data image to obtain horizontals and verticals data images, respectively.
  • the act in box 242 can obtain two distances data images, a first one in which each pixel is labeled with the distance to the nearest connected component in the vertical-free horizontals data image and a second one in which each pixel is labeled with the distance to the nearest connected component in the skeleton data image.
  • the distances data images are used to obtain an exclusive closer data image in which a pixel is ON if its distance in the first distances data image is smaller than its distance in the second distances data image, so that the skeleton is grown out to the full width of the curve in those parts indicated by the vertical-free horizontals data image.
  • the act in box 242 then obtains the intersection of table boundary data image 272 and the exclusive closer data image to obtain the horizontals data image.
  • the act in box 242 can similarly use the horizontal-free verticals data image to obtain the verticals data image.
  • the act in box 242 can perform a gap fill operation on the horizontals data image and table boundary data image 272 using a gap fill radius such as five pixels. If a gap between two ON pixels in the horizontals data image is less than the gap fill radius and if the two ON pixels are in the same connected component in table boundary data image 272, the gap fill operation connects the two ON pixels with other ON pixels extending between them, obtaining a gap filled horizontal data image. Similarly, the act in box 242 can perform a gap fill operation on the verticals data image and table boundary data image 272 using the same gap fill radius, obtaining a gap filled vertical data image.
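  • a sketch of such a gap fill, using a one-dimensional morphological closing per scanline; as a simplification, the requirement that both endpoints lie in the same connected component of table boundary data image 272 is weakened here to requiring the filled pixels to lie inside the boundary image at all:

```python
import numpy as np
from scipy import ndimage

def gap_fill_horizontal(lines: np.ndarray, boundary: np.ndarray, radius: int = 5) -> np.ndarray:
    """Close horizontal gaps narrower than `radius` between ON pixels."""
    lines = lines.astype(bool)
    structure = np.ones((1, radius), dtype=bool)          # 1-D horizontal closing element
    closed = ndimage.binary_closing(lines, structure=structure)
    # keep filled pixels only where they fall inside the table boundary image
    return lines | (closed & boundary.astype(bool))
```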
  • the act in box 242 can use gap filled horizontal and vertical data images 274 to obtain h-band and v-band data images 276.
  • a pixel is ON in the h-band data image if it has nearest neighbors both in the +y and -y directions in the gap filled horizontal data image.
  • each pixel is ON in the v-band data image if it has nearest neighbors both in the +x and -x directions in the gap filled vertical data image.
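  • both band images reduce to cumulative maxima along one axis; a sketch for the h-band (the v-band is the same computation along the x axis):

```python
import numpy as np

def h_band(gap_filled_horizontals: np.ndarray) -> np.ndarray:
    """ON where a pixel has an ON pixel somewhere above it and somewhere
    below it in the same column of the gap filled horizontal data image."""
    h = gap_filled_horizontals.astype(bool)
    above = np.maximum.accumulate(h, axis=0)              # any ON pixel at or above
    below = np.maximum.accumulate(h[::-1], axis=0)[::-1]  # any ON pixel at or below
    return above & below & ~h                             # strictly between the lines
```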
  • the act in box 242 can also use table boundary data image 272 to obtain filled data image 278.
  • the act in box 242 can then AND h-band and v-band data images 276 and filled data image 278 to obtain table h-band and v-band data images 280.
  • a pixel is ON in the table h-band data image if it is ON both in the h-band data image and in filled data image 278.
  • a pixel is ON in the table v-band data image if it is ON both in the v-band data image and in filled data image 278.
  • the act in box 242 can also use table boundary data image 272 to obtain table panes data image 282.
  • the act in box 242 can use table boundary data image 272 to obtain a holes data image and an internals data image, as described in relation to Fig. 7 of the Node-Link Structure Application. Then the act in box 242 can take the union of the holes data image and the internals data image to obtain table panes data image 282 as a filled holes data image.
  • table items data image 284 shows connected components that are within the panes of the table.
  • the act in box 242 could obtain table items data image 284 by obtaining an internals data image from table boundary data image 272.
  • a column constraint can include a width criterion requiring a column to be wider than the narrowest set of items in any column and a height criterion requiring the column to have a height greater than the shortest height of the set of items in any column; similarly, a row constraint can include a height criterion requiring a row to be taller than the shortest set of items in any row and a width criterion requiring the row to have a width greater than the smallest width of the set of items in any row.
  • each pixel is ON in the rows data image if it is within the x and y span of one of the connected components in the table h-band data image that has x and y spans greater than the minimum spans of the set of connected components in table items data image 284 that are within the same connected component in the table h-band data image.
  • each pixel is ON in the columns data image if it is within the x and y span of one of the connected components in the table v-band data image that has x and y spans greater than the minimum spans of the set of connected components in table items data image 284 that are within the same connected component in the table v-band data image.
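  • a simplified sketch of this span constraint: spans are measured from bounding boxes rather than the pixel-parallel span operations, and each band component that passes is kept as its full bounding-box span.

```python
import numpy as np
from scipy import ndimage

def filter_bands(bands: np.ndarray, items: np.ndarray) -> np.ndarray:
    """Keep a band component only if its x and y spans exceed the minimum
    spans of the item components lying inside it."""
    band_labels, _ = ndimage.label(bands)
    item_labels, _ = ndimage.label(items)
    item_boxes = ndimage.find_objects(item_labels)
    out = np.zeros(bands.shape, dtype=bool)
    for b, box in enumerate(ndimage.find_objects(band_labels), start=1):
        # item components whose pixels fall inside this band component
        inside = np.unique(item_labels[(band_labels == b) & (item_labels > 0)])
        if inside.size == 0:
            continue
        min_w = min(item_boxes[i - 1][1].stop - item_boxes[i - 1][1].start for i in inside)
        min_h = min(item_boxes[i - 1][0].stop - item_boxes[i - 1][0].start for i in inside)
        band_h = box[0].stop - box[0].start
        band_w = box[1].stop - box[1].start
        if band_w > min_w and band_h > min_h:
            out[box] = True  # keep the full x/y span of the band
    return out
```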
  • the act in box 242 can use rows and columns data images 286 to obtain indexed rows and columns data images 288.
  • the act in box 242 can use the rows data image to obtain an up edges data image in which each pixel is ON if it is at an upward edge of a connected component in the rows data image.
  • the act in box 242 can then perform a project operation, labeling each pixel that is ON in the up edges data image with one more than the number of pixels above it that are ON.
  • the act in box 242 can use the resulting data image to perform a spread operation, labeling each pixel in a connected component in the rows data image with the maximum value of any of its pixels in the data image resulting from the project operation.
  • the act in box 242 can label each pixel with a unique identifier starting with one, the unique identifiers being assigned in the same order as the labels from the spread operation, obtaining the indexed rows data image.
  • the act in box 242 can use the columns data image to obtain a left edges data image in which each pixel is ON if it is at a leftward edge of a connected component in the columns data image.
  • the act in box 242 can then perform a project operation, labeling each pixel that is ON in the left edges data image with one more than the number of pixels left of it that are ON.
  • the act in box 242 can use the resulting data image to perform a spread operation, labeling each pixel in a connected component in the columns data image with the maximum value of any of its pixels in the data image resulting from the project operation.
  • the act in box 242 can label each pixel with a unique identifier starting with one, the unique identifiers being assigned in the same order as the labels from the spread operation, obtaining the indexed columns data image.
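  • the project and spread operations amount to ranking the components by their top edges (for rows) or leftmost edges (for columns) and assigning sequential identifiers; a serial sketch for rows:

```python
import numpy as np
from scipy import ndimage

def index_rows(rows: np.ndarray) -> np.ndarray:
    """Label each row component 1, 2, 3, ... in top-to-bottom order."""
    labeled, n = ndimage.label(rows)
    if n == 0:
        return labeled
    y_coords = np.indices(rows.shape)[0]
    tops = ndimage.minimum(y_coords, labeled, index=range(1, n + 1))  # top edge per component
    remap = np.zeros(n + 1, dtype=int)
    remap[1:][np.argsort(tops)] = np.arange(1, n + 1)  # rank components by top edge
    return remap[labeled]  # indexed rows data image
```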
  • the list from box 262 in Fig. 7, indicating an item for each row/column combination, can be used for various purposes. For example, it can be used to obtain a precisely formed row/column representation, such as a table.
  • Fig. 9 illustrates how a list obtained from a sketch of a table has been used to obtain a precisely formed table with items copied from within the sketch.
  • input image 310 shows a sketch of a row/column representation that is a table.
  • the sketch includes a separate title bar across the top, four rows, and three columns.
  • the first row includes headings for the columns, and the heading of the second column includes a subdivision into two parts.
  • the table includes a double line between the first and second columns.
  • output image 312 also shows a table which includes the items from within the rows and columns of the sketched table in input image 310.
  • the title bar, which was separate, is not included, the double line between the first and second columns is changed to a single line, and the heading of the second column is copied, including the lines forming its subdivision.
  • a rendering operation can produce a table as in output image 312 using a list of sublists like that returned in box 262 in Fig. 7.
  • the rendering operation can begin by setting up a LaTeX command string or other page description language (PDL) file which can be provided to a printer when completed.
  • the rendering operation can then obtain the list of sublists.
  • the rendering operation can go through the list of sublists to find the maximum height of the bounding boxes; the maximum height can then be used to scale all other heights to the size of the output. Then the rendering operation can perform an iterative loop to create a table string that can be included in the LaTeX command string in math mode.
  • the iterative loop can go through the bounding boxes in the list of sublists, obtaining a list of PostScript files, each defining an item for a row/column combination, and creating a LaTeX table string.
  • Each item can be scaled by dividing its bounding box height by the maximum height of the items, and then multiplying the result by a fixed height for the rows of the precisely formed table.
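  • a sketch of such a rendering step, consuming the list-of-sublists shape from the extraction sketch above; the \includegraphics calls and item file names are illustrative assumptions, not the LaTeX command string the implementation actually emits:

```python
def render_table(content, row_height_pt: float = 12.0) -> str:
    """Build a LaTeX tabular from the list of sublists, scaling each item's
    height by the maximum bounding box height."""
    max_h = max(entry["box"][3] for sublist in content for entry in sublist)
    body = []
    for sublist in content:
        cells = []
        for entry in sublist:
            scaled = entry["box"][3] / max_h * row_height_pt  # scale to fixed row height
            cells.append(r"\includegraphics[height=%.1fpt]{item_%d_%d}"
                         % (scaled, entry["row"], entry["column"]))
        body.append(" & ".join(cells) + r" \\")
    ncols = max(len(sublist) for sublist in content)
    return ("\\begin{tabular}{|%s|}\n\\hline\n" % "|".join("c" * ncols)
            + "\n".join(body) + "\n\\hline\n\\end{tabular}")
```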
  • Fig. 10 shows an alternative implementation that uses an image processing server.
  • System 390 in Fig. 10 includes network 392, workstation 394, storage server 396, and image processing server 398.
  • a user can operate workstation 394 to provide requests on network 392 for storage of data defining images, such as from a scanner or other source.
  • storage server 396 can store the data.
  • the user can operate workstation 394 to provide requests for image processing operations like those described above.
  • image processing server 398 can perform the requested operations, executing instructions like those described above in relation to Fig. 6.
  • Fig. 11 shows how the techniques described above could be applied in a personal computer that can operate as a fax server.
  • Fig. 12 illustrates how the techniques described above could be applied in a copier.
  • System 400 in Fig. 11 includes CPU 402, which can be the CPU of a personal computer such as an IBM PC compatible machine.
  • CPU 402 is connected to receive user input signals from keyboard 404 and mouse 406, and can present images to a user through display 408.
  • CPU 402 is also connected to a number of other peripheral devices, illustratively including disk drive 410, modem 412, scanner 414, and printer 416.
  • Program memory 420 stores operating system (OS) instructions 422, which can be a version of DOS; user interface instructions 424; fax server instructions 426; and image processing instructions 428.
  • Fax server instructions 426 can be similar to the PaperWorks™ software product from Xerox Corporation.
  • Image processing instructions 428 can be implemented as described above in relation to image processing instructions 204 in Fig. 6 and in relation to Figs. 7-9. Fax server instructions 426 and image processing instructions 428 could be obtained in the form of a software product stored on a floppy disk, diskette, or CD-ROM, and accessed for storage in program memory 420 by disk drive 410.
  • Data memory 440 stores input image data 442, row data 444, column data 446, and content data 448 as described above in relation to Figs. 6-8.
  • Data memory 440 can also store output image data 450 if image processing instructions 428 obtain data defining an output image as described above in relation to Fig. 9.
  • System 400 can obtain input image data 442 defining an image that shows a row/column representation in many ways: Data defining an image showing a row/column representation could be produced interactively as described above in relation to Fig. 5, such as by executing user interface instructions 424. Any appropriate user interface techniques could be used, including pen-based techniques. Data defining a previously produced image showing a row/column representation could be retrieved from a storage medium by disk drive 410. Data defining an image showing a row/column representation could be obtained from scanner 414 as described above in relation to Fig. 4. A user could produce data defining an image showing a row/column representation elsewhere and provide it to system 400 through modem 412, such as by making a facsimile transmission to modem 412.
  • CPU 402 could execute fax server instructions 426 in response to a request received by facsimile transmission through modem 412.
  • the request could include a form indicating an analysis operation and also indicating an output image destination such as a fax machine or printer 416.
  • the request could also include data defining an image showing a row/column representation or could indicate an image previously obtained by system 400.
  • Fax server instructions 426 could include calls to image processing instructions 428 to perform acts like those shown in Figs. 7 and 8 if the request indicates an analysis operation. Execution of fax server instructions 426 could further provide data defining an output image, which could be provided to modem 412 for facsimile transmission or to printer 416 for printing.
  • copier 460 can be a digital copier or other electronic reprographics system.
  • Scanning circuitry 462 obtains data defining input image 464 showing a row/column representation.
  • User interface circuitry 470 includes touch sensing device 472, which can be a push button, a heat or pressure sensitive element, a capacitance sensing element, or other device for sensing a touching action. When a user touches device 472, user interface circuitry 470 provides touch data indicating that device 472 has been touched.
  • Processing circuitry 480 uses the touch data to obtain request data indicating a request for an analysis operation. Then, responding to the request, processing circuitry 480 uses data defining input image 464 to automatically obtain row/column data indicating rows and columns in the row/column representation. Processing circuitry 480 then uses the row/column data to obtain content data indicating an item for each of a set of row/column combinations. Processing circuitry 480 then uses the content data to obtain data defining an output image that shows a table or other representation of the item for each row/column combination in the set. This data is provided to printing circuitry 490 for printing of output image 492.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Claims (6)

  1. A method comprising the steps of:
    obtaining input image data defining an input image (10) that shows a row/column representation (12) including rows and columns;
       characterized by the steps of:
    obtaining a table boundary data image (272) by selecting a connected component that has the largest bounding box, labeling the pixels in each connected component with the connected component's bounding box area and keeping a pixel ON if its label is equal to the maximum bounding box area;
    using the table boundary data image (272) to obtain horizontal data and vertical data; the horizontal data indicating horizontal lines in the input image; the vertical data indicating vertical lines in the input image; and
    using the horizontal data and the input image data to obtain row data (20) and likewise using the vertical data and the input image data to obtain column data (22); the row data (20) indicating parts of the input image between the horizontal lines and the column data (22) indicating parts of the input image between the vertical lines, the row/column representation representing information for a combination of the m-th row and the n-th column; and
    using the row data (20) and the column data (22) to obtain content data (24; 286) indicating the information represented by the row/column representation (12) for the combination of the row and the column.
  2. The method of claim 1, in which the input image includes pixels; the act of using the horizontal data to obtain the row data (20) and using the vertical data to obtain the column data (22) comprising the steps of:
    using the horizontal data to obtain a row data image indicating a color value for each pixel in the input image; the row data image indicating, for each pixel in a part of the input image that is between the horizontal lines, a color value that uniquely identifies a row between the horizontal lines; and
    using the vertical data to obtain a column data image indicating a color value for each pixel in the input image; the column data image indicating, for each pixel in a part of the input image that is between the vertical lines, a color value that uniquely identifies a column between the vertical lines.
  3. The method of claim 1 or 2, in which the content data (24) include a list of sublists, each sublist including a list of combination identifiers, each combination identifier identifying a combination of one of the columns of the row/column representation with a row of the row/column representation (12), the combination identifiers together identifying the combinations of all of the columns of the row/column representation with the row.
  4. The method of any of claims 1 to 3, the method further comprising the steps of:
    providing data defining images as output through image output circuitry (90); the input image showing a sketch of a row/column representation (12);
    using the content data to obtain output image data defining an output image that includes a row/column representation similar to the sketch; and
    providing the output image data to the image output circuitry.
  5. A machine comprising:
    image input circuitry (70) for obtaining data defining images as input;
    a processor (66) connected for receiving the data defining images from the image input circuitry (70) and connected for accessing data stored in a memory;
    the data stored in the memory comprising instruction data indicating image processing instructions the processor can execute; the processor, in executing the image processing instructions:
    receiving input image data from the image input circuitry, the input image data defining an input image that shows a row/column representation;
       characterized by executing instructions to:
    obtain a table boundary data image (272) by selecting a connected component that has the largest bounding box, labeling the pixels in each connected component with the connected component's bounding box area and keeping a pixel ON if its label is equal to the maximum bounding box area;
    use the table boundary data image (272) to obtain horizontal data and vertical data; the horizontal data indicating horizontal lines in the input image; the vertical data indicating vertical lines in the input image; and
    use the horizontal data and the input image data to obtain row data (20) and likewise use the vertical data and the input image data to obtain column data (22); the row data (20) indicating the parts of the input image between the horizontal lines and the column data (22) indicating the parts of the input image between the vertical lines; the row/column representation representing information for a combination of the row and the column; and
    use the row data (20) and the column data (22) to obtain content data (24; 286) indicating the information represented by the row/column representation (12) for the combination of the row and the column.
  6. Article de fabrication pour utilisation dans une machine qui inclut :
    des circuits d'entrée d'image (70) pour obtenir des données définissant des images comme entrée ; un dispositif d'accès à un support de stockage (64) pour accéder à un support (62) qui mémorise les données ; et
    un processeur (66) connecté pour recevoir les données définissant les images depuis les circuits d'entrée d'image (70) ; le processeur étant, en outre, connecté pour recevoir les données provenant du dispositif d'accès au support de stockage ;
    l'article comprenant :
    un support de stockage auquel on peut accéder par le dispositif d'accès au support de stockage lorsque l'article est utilisé dans le système ; et
    les données mémorisées par le support ce stockage de sorte que le dispositif d'accès au support de stockage peut délivrer les données mémorisées au processeur lorsque l'article est utilisé dans le système ; les données mémorisées comprenant des données d'instruction indiquant les instructions que le processeur peut exécuter ; le processeur, dans l'exécution des instructions :
    recevant les données d'image d'entrée depuis le circuit d'entrée d'image, les données d'image d'entrée définissant une image d'entrée qui présence une représentation de rangée/colonne ;
       caractérisé par l'exécution des instructions consistant à :
    obtenir une image des données frontières de table (272) en sélectionnant un composant connecté qui présente le pavé de limite le plus grand en étiquetant les pixels dans chaque composant connecté avec les composants connectés reliant la zone du pavé et en conservant un pixel ACTIF si son étiquette est égale à la zone de pavé limite maximale ;
    utiliser l'image des données frontières de table (272) pour obtenir des données horizontales et des données verticales, les données horizontales indiquant les lignes horizontales dans l'image d'entrée ; les données verticales indiquant les lignes verticales dans l'image d'entrée ;
    utiliser les données horizontales et les données d'image d'entrée pour obtenir les données de rangée (20) ainsi qu'utiliser les données verticales et les données d'image d'entrée pour obtenir les données de colonne (22) ; les données de rangée (20) indiquant des parties de l'image d'entrée entre les lignes horizontales et les données de colonne (22) indiquant les parties de l'image d'entrée entre les lignes verticales ; la représentation de rangée/colonne représentant les informations pour une combinaison de la rangée et de la colonne ; et
    utiliser les données de rangée (20) et les données de colonne (22) pour obtenir les données constitutives (24 ; 286) indiquant les informations représentées par la représentation de rangée/colonne (12) pour la combinaison de la rangée et de la colonne.
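For orientation, the pipeline recited in the claims above can be sketched in code. The following Python is a minimal illustration under stated assumptions, not the patented implementation: it assumes a binary NumPy image, substitutes SciPy's generic connected-component labeling for the labeling scheme recited in the claims, and locates lines with a simple run-length threshold (the min_line_fraction parameter is an assumption of this sketch, as are the function names table_boundary_image, line_coordinates, and extract_cells).

import numpy as np
from scipy import ndimage

def table_boundary_image(img):
    # Keep only the pixels of the connected component whose bounding box
    # has the largest area: label each pixel with its component's
    # bounding-box area, then keep pixels whose label is the maximum.
    labels, n = ndimage.label(img)
    if n == 0:
        return np.zeros_like(img, dtype=bool)
    box_area = np.zeros(n + 1)
    for i, (rs, cs) in enumerate(ndimage.find_objects(labels), start=1):
        box_area[i] = (rs.stop - rs.start) * (cs.stop - cs.start)
    per_pixel = box_area[labels]  # each pixel carries its component's box area
    return (labels > 0) & (per_pixel == box_area.max())

def line_coordinates(boundary, min_line_fraction=0.5):
    # Treat a row (column) of the boundary image as a horizontal
    # (vertical) line when at least min_line_fraction of it is ON.
    h, w = boundary.shape
    rows = np.where(boundary.sum(axis=1) >= min_line_fraction * w)[0]
    cols = np.where(boundary.sum(axis=0) >= min_line_fraction * h)[0]
    return rows, cols

def extract_cells(img, rows, cols):
    # Slice the input image into the parts between adjacent horizontal
    # lines (row data) and adjacent vertical lines (column data); each
    # (row, column) combination yields one cell's constituent data.
    cells = {}
    for r, (r0, r1) in enumerate(zip(rows[:-1], rows[1:])):
        for c, (c0, c1) in enumerate(zip(cols[:-1], cols[1:])):
            cells[(r, c)] = img[r0 + 1:r1, c0 + 1:c1]
    return cells

A caller would chain the three steps (boundary = table_boundary_image(img); rows, cols = line_coordinates(boundary); cells = extract_cells(img, rows, cols)); the boundary image, row data, column data, and cell contents correspond loosely to items (272), (20), (22), and (24; 286) in the claims. A fuller implementation would also merge the adjacent coordinates produced by a thick ruled line into a single line before slicing.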
EP94308654A 1993-11-24 1994-11-23 Method of analyzing data defining an image Expired - Lifetime EP0654751B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/157,782 US5392130A (en) 1993-11-24 1993-11-24 Analyzing an image showing a row/column representation
US157782 1993-11-24

Publications (3)

Publication Number Publication Date
EP0654751A2 (fr) 1995-05-24
EP0654751A3 (fr) 1995-11-22
EP0654751B1 (fr) 2002-02-13

Family

ID=22565263

Family Applications (1)

Application Number Title Priority Date Filing Date
EP94308654A Expired - Lifetime EP0654751B1 (fr) 1993-11-24 1994-11-23 Method of analyzing data defining an image

Country Status (4)

Country Link
US (1) US5392130A (fr)
EP (1) EP0654751B1 (fr)
JP (1) JPH07200839A (fr)
DE (1) DE69429853T2 (fr)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5563991A (en) * 1993-11-24 1996-10-08 Xerox Corporation Using an image showing a perimeter relationship representation to obtain data indicating a relationship among distinctions
US5513271A (en) * 1993-11-24 1996-04-30 Xerox Corporation Analyzing an image showing a proportioned parts graph
US5544267A (en) * 1993-11-24 1996-08-06 Xerox Corporation Using a category to analyze an image showing a graphical representation
US5544330A (en) * 1994-07-13 1996-08-06 Emc Corporation Fault tolerant interconnect topology using multiple rings
US6208744B1 (en) * 1994-12-14 2001-03-27 Casio Computer Co., Ltd. Document image processor and method for setting a document format conforming to a document image
JP3068131B2 (ja) * 1995-02-03 2000-07-24 Fuji Xerox Co., Ltd. Document processing apparatus
US5623345A (en) * 1995-03-06 1997-04-22 Motorola, Inc. Facsimile communication with a selective call system and method thereof
US5822593A (en) * 1996-12-06 1998-10-13 Xerox Corporation High-level loop fusion
EP1052593B1 (fr) * 1999-05-13 2015-07-15 Canon Kabushiki Kaisha Appareil et méthode pour faire des recherches dans les formulaires
US7003733B2 (en) * 2001-01-30 2006-02-21 Duemler David W Programmable logic controller programming system
DE10156579C2 (de) * 2001-11-20 2003-11-20 Smi Cognitive Software Gmbh Method and device for an electronic crossword puzzle
KR100472102B1 (ko) * 2002-04-12 2005-03-08 (주)유라비젼 Method and apparatus for generating Go game records
US7089261B2 (en) * 2002-07-25 2006-08-08 International Business Machines Corporation Programmable use of data extracted from common presentation files
US7120275B2 (en) * 2003-01-16 2006-10-10 Microsoft Corporation Ink recognition for use in character-based applications
US7583841B2 (en) * 2005-12-21 2009-09-01 Microsoft Corporation Table detection in ink notes
US7664325B2 (en) * 2005-12-21 2010-02-16 Microsoft Corporation Framework for detecting a structured handwritten object
US7707488B2 (en) * 2006-02-09 2010-04-27 Microsoft Corporation Analyzing lines to detect tables in documents
US8600164B2 (en) * 2008-03-28 2013-12-03 Smart Technologies Ulc Method and tool for recognizing a hand-drawn table
US8634645B2 (en) * 2008-03-28 2014-01-21 Smart Technologies Ulc Method and tool for recognizing a hand-drawn table
US9202140B2 (en) * 2008-09-05 2015-12-01 Siemens Medical Solutions Usa, Inc. Quotient appearance manifold mapping for image classification
US8887038B2 (en) 2010-10-08 2014-11-11 Business Objects Software Limited Extrapolating tabular structure in a freeform document
US9074892B2 (en) 2013-03-15 2015-07-07 Ian Michael Fink System and method of determining a position of a remote object
US9697423B1 (en) * 2015-12-31 2017-07-04 Konica Minolta Laboratory U.S.A., Inc. Identifying the lines of a table
US10452952B2 (en) * 2017-06-30 2019-10-22 Konica Minolta Laboratory U.S.A., Inc. Typesetness score for a table
US10417516B2 (en) * 2017-08-24 2019-09-17 Vastec, Inc. System and method for preprocessing images to improve OCR efficacy
JP6792529B2 (ja) * 2017-08-24 2020-11-25 Sankyo Co., Ltd. Gaming machine
US12008829B2 (en) 2022-02-16 2024-06-11 Vastec, Inc. System and method for improved OCR efficacy through image segmentation

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS57150062A (en) * 1981-03-12 1982-09-16 Fuji Xerox Co Ltd Processing of papers
US4972475A (en) * 1987-02-10 1990-11-20 Veritec Inc. Authenticating pseudo-random code and apparatus
US4884146A (en) * 1987-07-14 1989-11-28 Sharp Kabushiki Kaisha Image display system
US4924078A (en) * 1987-11-25 1990-05-08 Sant Anselmo Carl Identification symbol, system and method
KR930009639B1 (ko) * 1989-07-09 1993-10-08 Hitachi, Ltd. Document data processing method and apparatus using image data
JP2930612B2 (ja) * 1989-10-05 1999-08-03 Ricoh Co., Ltd. Image forming apparatus

Also Published As

Publication number Publication date
EP0654751A2 (fr) 1995-05-24
DE69429853D1 (de) 2002-03-21
DE69429853T2 (de) 2002-06-20
EP0654751A3 (fr) 1995-11-22
JPH07200839A (ja) 1995-08-04
US5392130A (en) 1995-02-21

Similar Documents

Publication Publication Date Title
EP0654751B1 (fr) Method of analyzing data defining an image
US6009196A (en) Method for classifying non-running text in an image
US5563991A (en) Using an image showing a perimeter relationship representation to obtain data indicating a relationship among distinctions
US5522022A (en) Analyzing an image showing a node-link structure
EP0434930B1 (fr) Edition de texte dans une image
US5850490A (en) Analyzing an image of a document using alternative positionings of a class of segments
US5889886A (en) Method and apparatus for detecting running text in an image
US5537491A (en) Analyzing an image or other data to obtain a stable number of groups
US5659639A (en) Analyzing an image showing editing marks to obtain category of editing operation
US5430808A (en) Image segmenting apparatus and methods
US5513271A (en) Analyzing an image showing a proportioned parts graph
US7350142B2 (en) Method and system for creating a table version of a document
CA2118344C (fr) Using a category to analyze an image showing a graphical representation
US5455898A (en) Analyzing an image showing a graphical representation of a layout
EP0543599A2 (fr) Procédé et appareil de détection de marquage à main d'images
NL9301004A (nl) Device for editing and reproducing digital image information.
JPH0869543A (ja) Image editing method and editing system
JP2008108114A (ja) Document processing apparatus and document processing method
JP4390523B2 (ja) Segmentation of a composite image by minimum regions
JP4466241B2 (ja) Document processing method and document processing apparatus
EP1439485B1 (fr) Segmentation of a composite image by means of basic rectangles
EP0654753A2 (fr) Method of analyzing an image showing a graphical representation
JP2023036833A (ja) Information processing apparatus and program
JPH02105981A (ja) Interactive character recognition system
Huang Automatic form document processing

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): DE FR GB

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): DE FR GB

17P Request for examination filed

Effective date: 19960522

17Q First examination report despatched

Effective date: 19981211

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

REG Reference to a national code

Ref country code: GB

Ref legal event code: IF02

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB

REF Corresponds to:

Ref document number: 69429853

Country of ref document: DE

Date of ref document: 20020321

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20021114

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20041117

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20041118

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20050825

Year of fee payment: 12

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20051123

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20060601

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20051123

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20070731

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20061130