GB2278705A - Facsimile machine - Google Patents

Facsimile machine

Info

Publication number
GB2278705A
Authority
GB
United Kingdom
Prior art keywords
fax
neurons
matrix
card
character
Prior art date
Legal status
Withdrawn
Application number
GB9311256A
Other versions
GB9311256D0 (en)
Inventor
Vernon John Spencer
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to GB9311256A
Publication of GB9311256D0
Publication of GB2278705A
Legal status: Withdrawn

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • H04N1/40062Discrimination between different image types, e.g. two-tone, continuous tone
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Character Discrimination (AREA)
  • Facsimiles In General (AREA)

Abstract

A Fax machine has one or more neural networks to recognise characters in a scanned or received electronic facsimile image and convert them into those of a standard digital or graphical character set. Characters in the electronic facsimile image may be presented to the neural networks either individually or as lines or blocks of text. Auto- and hetero-associative neural network topologies suitable for this are described. Performance can be further enhanced by manual or automatic switching between different neural networks for the recognition and classification of particular character sets and between digital, graphical, and mixed digital/graphical modes of operation, depending upon the document format.

Description

Improvements in or relating to FAX machines and FAX modems, or to FAX cards for personal computers

Background of the invention

This invention relates to improvements in the reproduction quality of FAX machines and FAX modems, or of FAX cards for personal computers (PCs), where the term FAX is a common corruption of the word facsimile.
The main advantage of sending information in facsimile form by FAX machine is the ability to transmit and receive documentary information in any format, whether handwritten, graphical, typographical or any combination thereof. In order to enable communication between FAX machines and PCs, external FAX modems and internal FAX cards have also been developed. FAX machines, FAX modems and FAX cards of CCITT group 3 operate digitally in black and white, without grey tones, by converting a scanned and compressed image into a stream of coded bits, or vice versa, according to a recognised FAX protocol for transmission or reception over a telephone line. Their main disadvantage is that a large amount of data must be transmitted in comparison with the actual information content, and that keeping this amount within bounds compromises the quality of reproduction. The two standard resolutions are 8 points per mm horizontally and 3.85 or 7.7 lines per mm vertically, and a typical facsimile image is 1,728 pixels wide and 1,142 or 2,284 pixels long. However, the information received can only be displayed and/or printed graphically, with a much lower quality of reproduction than is available when a standard digital or graphical character code is used for display and/or printing. The invention disclosed in this application seeks to address this particular problem in a straightforward and economical way.
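By way of illustration only, the storage requirement implied by these dimensions can be confirmed with a few lines of arithmetic; the Python fragment below simply restates the figures and is not part of the apparatus:

```python
# Illustrative arithmetic only: storage required for an uncompressed bi-level
# (1 bit per pixel) facsimile page at the two standard vertical resolutions.

WIDTH_PIXELS = 1728                      # 8 points per mm across the scan line

for lines_per_mm, length_pixels in [(3.85, 1142), (7.7, 2284)]:
    total_pixels = WIDTH_PIXELS * length_pixels
    total_bytes = total_pixels // 8      # one bit per black/white pixel
    print(f"{lines_per_mm} lines/mm: {total_pixels:,} pixels, "
          f"{total_bytes:,} bytes uncompressed")

# 3.85 lines/mm: 1,973,376 pixels, 246,672 bytes uncompressed
# 7.7  lines/mm: 3,946,752 pixels, 493,344 bytes uncompressed
```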
Summary of the invention

It is the primary object of the present invention to overcome the aforesaid disadvantages of prior art FAX machines, FAX modems and FAX cards for personal computers. The invention is based on the realisation that most information transmitted by FAX consists of standard written, typed or printed characters, and that a trained neural network can be used to improve the quality of scanned images containing such characters. This is achieved by converting typed, printed or handwritten characters from a facsimile format into either a standard digital ASCII coded text format or a standard graphical format. Once an image is in a digital or a standard graphical format it can be displayed or printed by a PC with a higher standard of reproduction than is possible using the original facsimile format. In the case of a FAX machine a digital character format would need to be converted back into a standard graphical format, which would now be a scanned form of a standard printing character set. Alternatively, the original scanned facsimile format could be converted directly into a higher quality facsimile format using a standard graphical character set, without the intermediate step of conversion into a digital ASCII character set. Although this procedure would normally only be undertaken after reception of a facsimile transmission, it could also be carried out before transmission to improve the quality of the facsimile image transmitted. It would also be possible to train a neural network to recognise handwriting for conversion into a typographical format before transmission as a facsimile document.
One difficulty with character recognition is the differentiation between similar characters, such as between the number "0" and the letter "O", or between the number "1" and the letter "l".
The accurate segmentation of a line of facsimile text into separate characters is also particularly difficult. For example, characters may be broken into parts or adjacent characters merged together, which can lead to a broken "m" being incorrectly classified as "r n", or to a merged "fl" not being classified at all. Advantageously, in one embodiment of the invention a line or block of text is presented to the input layer matrix of a neural network. This gives the neural network the chance to decide whether broken characters should be joined, or merged characters split, at the same time as it classifies the characters in the line of text.
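Purely for illustration, the following sketch (in Python, with hypothetical names) shows a naive segmentation heuristic that splits a line wherever an entirely white column occurs; it demonstrates why a broken "m" is seen as two fragments and a merged "fl" as a single blob, and hence why presenting whole lines to the network is advantageous:

```python
# Naive segmentation by blank columns, for illustration only.  A line of text
# is reduced here to a column profile: 1 if the column contains any black
# pixel, 0 if it is entirely white.

def segment_by_blank_columns(column_profile):
    """Return (start, end) column ranges separated by runs of white columns."""
    segments, start = [], None
    for x, has_ink in enumerate(column_profile):
        if has_ink and start is None:
            start = x
        elif not has_ink and start is not None:
            segments.append((start, x))
            start = None
    if start is not None:
        segments.append((start, len(column_profile)))
    return segments

# A broken "m" (white gap in the middle) is split into two fragments, while a
# merged "fl" (no white column between the strokes) is returned as one blob.
broken_m = [1, 1, 1, 0, 0, 1, 1, 1]
merged_fl = [1, 1, 1, 1, 1, 1]
print(segment_by_blank_columns(broken_m))    # [(0, 3), (5, 8)]
print(segment_by_blank_columns(merged_fl))   # [(0, 6)]
```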
According to one aspect of the present invention there is provided a FAX machine, FAX modem, or a FAX card for a PC, characterised by one or more neural networks set to recognise characters in a scanned or received electronic facsimile image and to convert them into those of a standard digital or graphical character set.
According to another aspect of the invention there is provided a PC card, for use in combination with a FAX modem or a FAX card, characterised by one or more neural networks set to recognise characters in a scanned or received electronic facsimile image and to convert them into those of a standard digital or graphical character set.
The present invention should not be regarded as limited to use in combination with FAX apparatus, as it is suitable for improving the quality of electronic facsimile images in general. Thus yet another aspect of the invention concerns a method of processing an electronic facsimile image, characterised by recognising characters in that image and converting them into those of a standard digital or graphical character set by means of one or more neural networks.
A further aspect of the invention concerns an auto-associative character classifying neural network suitable for use in combination with a FAX machine, FAX modem, a FAX card, or a PC card, characterised by an input layer, a plurality of hidden layers, and an output layer, the neurons in the layers being interconnected in such a way that a j by k matrix in the output layer, corresponding to the j times k pixels of a standard or non standard character matrix, is substantially fully interconnected with an m by n matrix in the input layer, corresponding to the m times n pixels of the same or another standard or non standard character matrix, where j, m are integers greater than four and k, n are integers greater than six.
A yet further aspect of the invention concerns a hetero-associative character classifying neural network suitable for use in combination with a FAX machine, FAX modem, a FAX card, or a PC card, characterised by an input layer, a plurality of hidden layers, and an output layer, the neurons being interconnected between the layers in such a way that j neurons in the output layer, corresponding to the bits of a standard or non standard digital character code plus one, are substantially fully interconnected with an m by n matrix in the input layer, corresponding to the m times n pixels of a standard or non standard character matrix, where j, m are integers greater than four and n is an integer greater than six.
By means of the present invention the quality of reproduction of FAX machines, FAX modems, or FAX cards for personal computers can be considerably enhanced.
Brief description of the Drawings

The foregoing invention can be better understood by a detailed description of the preferred embodiments thereof with reference to the accompanying drawings, in which: Figure 1 illustrates, in block diagram form, a FAX machine according to the invention.
Figure 2 illustrates, in block diagram form, a FAX modem or a FAX card for a PC according to the invention.
Figure 3 presents a schematic cross-section in the x-z plane of an auto-associative neural network topology according to the invention.
Figure 4 presents a schematic cross-section in the y-z plane of an auto-associative neural network topology according to the invention.
Figure 5 presents a schematic cross section in the x-z plane of a hetero-associative neural network topology according to the invention.
Figure 6 presents a schematic cross section in the y-z plane of a hetero-associative neural network topology according to the invention.
Detailed description

A FAX machine comprises four main components: a scanner, a coding/decoding unit, a modem and a printer. In addition, the FAX machine according to one embodiment of the invention, as shown in Fig. 1, comprises neural network means connected via a control unit to the other components of a prior art FAX machine. A FAX modem or a FAX card for a PC comprises a coding/decoding unit and a modem, and relies on software to generate the scanner output and the printer/monitor inputs. In addition, the FAX modem or FAX card according to another embodiment of the invention, as shown in Fig. 2, comprises neural network means connected via a control unit to the other components of a prior art FAX modem or FAX card. The purpose of the neural network means is to convert facsimile text in an outgoing/incoming electronic facsimile image into standard digital or graphical characters before transmission, printing, or display on a PC monitor.
A neural network consists of a set of simple and complex neurons, with multiple inputs and outputs, connected to one another in the form of a network. Each complex neuron comprises an input function, such as the sum of the products of the input signals and their weightings; a transfer function, which determines the activity; and an output function, which determines the output signal from the activity. In a neural network sets of neurons are associated in layers, with an input layer, one or more hidden layers, and an output layer. The input layer comprises simple buffer neurons with single inputs, while the other layers comprise complex neurons with multiple inputs. In a feed forward neural network each layer has a plurality of neurons connected in a particular way to neurons in the following layer. A feedback neural network also has a plurality of neurons in each layer connected to others in the same and/or previous layers. In effect a neural network comprises multiple layers of nonlinear matrix multipliers. The performance of a neural network with a particular topology is determined by the weightings set at the inputs to its neurons. The weightings are defined by training the neural network to recognise and identify distorted facsimile characters either as standard graphical characters or as digital ASCII characters. The knowledge of a neural network is represented by the weightings set during the training process. Unlike a conventional computer, which uses serial processing, a neural network processes in parallel and can therefore operate much faster. Because of the way in which knowledge is distributed across a neural network in the input weightings, it can tolerate a certain amount of damage to its neurons or their interconnections. This enables neural networks to be manufactured more economically than if only zero defects were acceptable.
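Purely by way of illustration, the behaviour of such a complex neuron, and of a feed forward layer built from them, might be sketched as follows; the sigmoid transfer function and the identity output function are merely example choices, and the function names are illustrative:

```python
import math

def complex_neuron(inputs, weights, bias,
                   transfer=lambda activity: 1.0 / (1.0 + math.exp(-activity))):
    """One complex neuron: a weighted-sum input function, a transfer function
    giving the activity, and (here) an identity output function."""
    activity = transfer(sum(x * w for x, w in zip(inputs, weights)) + bias)
    return activity                        # output function: pass activity through

def feed_forward_layer(inputs, layer_weights, layer_biases):
    """A layer applies many neurons to the same input vector - in effect one
    stage of a non-linear matrix multiplication."""
    return [complex_neuron(inputs, w, b)
            for w, b in zip(layer_weights, layer_biases)]

# Example: a layer of two neurons fed by three buffer neurons of the input layer.
print(feed_forward_layer(
    inputs=[1.0, 0.0, 1.0],
    layer_weights=[[0.5, -0.2, 0.3], [-0.4, 0.1, 0.9]],
    layer_biases=[0.0, -0.1],
))
```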
The input weightings of a neural network are determined by special algorithms, such as the delta rule, which cause the input weightings to converge towards their optimum values.
According to this rule the weighting change is proportional to the error between the actual and the desired output. It is not possible to determine the input weightings for the hidden layers with the delta rule, and for these the backward error propagation algorithm is used. According to this algorithm the weighting change is equal to the learning rate times the error plus the momentum times the previous weighting change. Both the learning rate and the momentum have values less than unity. The momentum is normally fixed, and the learning rate is set relatively high at first and then gradually reduced.
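An illustrative sketch of such an update step is given below; it assumes the standard generalised delta rule, in which the error term is also multiplied by the neuron's input, and the names are illustrative rather than taken from any particular training package:

```python
def weight_update(error, input_value, previous_change,
                  learning_rate=0.5, momentum=0.9):
    """One generalised delta-rule step with momentum:
    change = learning rate * error * input + momentum * previous change.
    Both the learning rate and the momentum are less than unity."""
    return learning_rate * error * input_value + momentum * previous_change

# Example: desired output 1.0, actual output 0.2, input 1.0, previous change 0.05.
change = weight_update(error=1.0 - 0.2, input_value=1.0, previous_change=0.05)
print(change)    # 0.5 * 0.8 * 1.0 + 0.9 * 0.05 = 0.445
```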
A typical facsimile image has a width of 1,728 pixels and a length of 1,142 or 2,284 pixels, depending upon the resolution, and requires 246,672 or 493,344 bytes of memory for storage. Initially the facsimile image is stored in memory and then rectified so that the lines of text are horizontal. This is achieved by determining the orientation of the white strips between lines of text. Normally this would only involve rotation of the image through 90° or 180°, as necessary, but it might require other angular adjustments. The image is then scanned to determine the locations of the lines of text.
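One simple way of locating the lines of text, assumed here purely for illustration, is to examine the ink content of each row of the rectified image: rows containing no black pixels form the white strips, and the runs of rows between them are the text lines:

```python
def find_text_lines(image):
    """image: a list of rows, each row a list of 0/1 pixels (1 = black).
    Returns (top, bottom) row ranges of the text lines, separated by the
    all-white strips between them."""
    row_has_ink = [any(row) for row in image]
    lines, top = [], None
    for y, has_ink in enumerate(row_has_ink):
        if has_ink and top is None:
            top = y
        elif not has_ink and top is not None:
            lines.append((top, y))
            top = None
    if top is not None:
        lines.append((top, len(image)))
    return lines

# Two short "lines of text" separated by a white strip.
page = [[0, 0, 0],
        [1, 1, 0],
        [0, 1, 1],
        [0, 0, 0],
        [1, 0, 1]]
print(find_text_lines(page))    # [(1, 3), (4, 5)]
```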
These are then divided into separate characters, with various matrix formats depending upon the character concerned, and in the preferred embodiments each character is presented within a standard 17 by 9 rectangular format to a trained neural network for classification, either hetero-associatively as a digital ASCII code or auto-associatively as a standard graphical character. Unidentified characters could be presented to further neural networks trained to recognise other character sets. Any characters not identified during this process are retained in their original facsimile format within a manipulated image, either in a mixed text/graphical format or in a purely graphical format.
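As a purely illustrative sketch, an isolated character bitmap might be placed into the standard 17 by 9 presentation format by simple centring within the matrix, which is only one of several possible normalisations:

```python
def centre_in_17_by_9(glyph):
    """glyph: a list of rows of 0/1 pixels, at most 17 rows by 9 columns
    (the character matrix is taken here as 17 pixels high by 9 wide).
    Returns a 17 x 9 matrix with the glyph centred in it, ready to be
    presented to the input layer of the classifying neural network."""
    ROWS, COLS = 17, 9
    matrix = [[0] * COLS for _ in range(ROWS)]
    top = (ROWS - len(glyph)) // 2
    left = (COLS - len(glyph[0])) // 2
    for r, row in enumerate(glyph):
        for c, pixel in enumerate(row):
            matrix[top + r][left + c] = pixel
    return matrix

# A small 3 x 3 "plus" glyph centred within the 17 x 9 presentation format.
plus = [[0, 1, 0],
        [1, 1, 1],
        [0, 1, 0]]
framed = centre_in_17_by_9(plus)
print(len(framed), len(framed[0]))    # 17 9
```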
Any text containing abnormally large or small characters, such as that found in letter headings and footings, is not converted but retained in its original facsimile format. In order to cope with various printing formats, such as columns of text, the format of the original document is retained after conversion. This is achieved by replacing any facsimile characters in the document stored in memory by standard graphical characters. The original facsimile document could also be retained separately in memory to enable a direct comparison of the improved and the original facsimile images.
Alternatively, if a neural network had classified characters in terms of their ASCII codes, the original facsimile image with the text blocks blanked could be displayed and overprinted with ASCII text in a mixed mode format.
Although the above embodiment has the advantage that it requires only a relatively small neural network, since it processes the characters in series, it is relatively slow. It is also difficult to separate characters accurately, which leads to recognition errors. Larger neural networks can be trained to recognise lines or blocks of text. For a complete A4 page the inputs and outputs of the neural network could directly correspond to the 1,973,376 or 3,946,752 pixels of the original and improved facsimile images. In a particularly advantageous embodiment complete lines of text are presented to a neural network. This both reduces the redundancy due to the spacing between lines and gives the neural network the chance to recognise broken or merged characters for what they are.
Ideally a neural network for classifying single characters should be fully interconnected between the input and output layers, that is, each neuron in the output layer is indirectly connected to every neuron in the input layer. However, this is neither necessary nor desirable when lines or blocks of facsimile text are presented to the input layer matrix of a neural network. A rectangular system of co-ordinates x, y, z is used, where the x-direction extends across a character matrix width, the y-direction along a character matrix height, and the layers are separated in the z-direction. In this case the neurons aligned along the y-direction need only be substantially interconnected between the input and output layers over a distance equal to the height of two characters, while the neurons aligned along the x-direction need only be substantially interconnected between the input and output layers over a distance equal to the width of two characters. This both limits the number of interconnections required between neurons and increases the reliability of recognition. Most neural network chips provide for 16 inputs per neuron, and it may be necessary to limit the connectivity between neurons to this value. If each neuron is connected to a 3 by 5 matrix of neurons in the previous layer, then this leaves it with a spare input for applying a fixed bias.
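The following sketch, which assumes a simple connection mask, shows how a neuron can be limited to a 5 by 3 window of neurons in the previous layer, using fifteen of the sixteen inputs of a typical neural network chip and leaving the sixteenth free for the fixed bias:

```python
def window_indices(centre_row, centre_col, rows=17, cols=9,
                   window_rows=5, window_cols=3):
    """Indices of the window of previous-layer neurons feeding one neuron,
    for a neuron far enough from the edges of a 17 x 9 layer.  A full 5 x 3
    window gives 15 connections, leaving one of the 16 chip inputs free for
    the fixed bias."""
    half_r, half_c = window_rows // 2, window_cols // 2
    return [(r, c)
            for r in range(centre_row - half_r, centre_row + half_r + 1)
            for c in range(centre_col - half_c, centre_col + half_c + 1)
            if 0 <= r < rows and 0 <= c < cols]

connections = window_indices(centre_row=8, centre_col=4)
print(len(connections), "connections + 1 bias input =", len(connections) + 1)
# 15 connections + 1 bias input = 16
```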
Figs. 3 and 4 represent schematic cross-sections of the x-z and y-z planes of an auto-associative single or multiple character classifying neural network, which comprises a 17 by 9 input layer at the bottom, seven hidden layers in the middle, and a 17 by 9 output layer at the top, each having equal numbers of neurons. If neurons spaced by at least one or two neurons from the edges are connected to a 5 by 3 matrix of neurons in the previous layer, then a 17 by 9 matrix of neurons in the output layer will be fully connected to a 17 by 9 matrix of neurons in the input layer and vice versa.
A single 17 by 9 character classifier with this topology would require just 153 neurons per layer, or 1,377 altogether. For a complete line of facsimile text a neural network with this topology would require 17 times 1,728, that is 29,376 neurons, in each of the nine layers, giving 264,384 neurons altogether. If neurons with 46 inputs were used then the same degree of interconnection as for seven layers and neurons with 16 inputs would be obtained with three hidden layers and each interior neuron connected to a 9 by 5 matrix of neurons in the previous layer. Such networks can be implemented in ASICs if they use binary transfer functions, but other transfer functions, such as sigmoid, require special analogue neural network chips.
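The neuron counts quoted above can be verified with a few lines of illustrative arithmetic:

```python
# Auto-associative classifier: a 17 x 9 input layer, seven hidden layers and a
# 17 x 9 output layer, every layer containing the same number of neurons.
LAYERS = 1 + 7 + 1
print(17 * 9)                    # 153 neurons per layer
print(17 * 9 * LAYERS)           # 1,377 neurons altogether

# The same topology applied to a complete line of facsimile text,
# 17 pixels high by 1,728 pixels wide.
print(17 * 1728)                 # 29,376 neurons per layer
print(17 * 1728 * LAYERS)        # 264,384 neurons altogether
```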
Figs. 5 and 6 represent schematic cross-sections of the x-z and y-z planes of a hetero-associative single character classifying neural network, which comprises a 17 by 9 input layer at the bottom, seven hidden layers in the middle, and a 9 by 1 output layer at the top, with each successive layer containing one fewer neuron along both the x- and the y-directions. If neurons spaced by at least one, two or three neurons from the edges are connected to a 5 by 3 matrix of neurons in the previous layer, then all the neurons in the 9 by 1 output layer will be connected to the 17 by 9 matrix of neurons in the input layer. One of the output neurons can be used as a flag to indicate whether an ASCII coded output on the other eight neurons relates to a recognised character or not. A single 17 by 9 character classifier with this topology would require just 645 neurons altogether. If the network performance with only the central output neuron connected to the 17 by 9 matrix of neurons in the input layer were acceptable, then the number of hidden layers could be reduced to three. If neurons with 46 inputs were used then the same degree of interconnection as for seven layers would be obtained with three hidden layers and each interior neuron connected to a 9 by 5 matrix of neurons in the previous layer. The performance of this single character classifier can be improved by presenting a line of text to an auto-associative multiple character classifier to separate the characters before coding.
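The figure of 645 neurons follows from summing the successively smaller layers, as the following illustrative fragment confirms:

```python
# Hetero-associative classifier: a 17 x 9 input layer, seven hidden layers and
# a 9 x 1 output layer, each successive layer one neuron smaller along x and y.
sizes = [(17 - i, 9 - i) for i in range(9)]    # (17, 9), (16, 8), ..., (9, 1)
print(sizes[0], sizes[-1])                     # (17, 9) (9, 1)
print(sum(h * w for h, w in sizes))            # 645 neurons altogether
```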
The specific topologies described above should not be regarded as limiting the invention, as in general a suitable auto- or hetero-associative character classifying neural network comprises an input layer, a plurality of hidden layers, and an output layer, the neurons in the layers being interconnected in such a way that a j by k matrix in the auto-associative output layer is substantially fully interconnected with an m by n matrix in the input layer, or that j neurons in the hetero-associative output layer are substantially fully interconnected with an m by n matrix in the input layer, where j, m are integers greater than four and k, n are integers greater than six. The m by n matrix in the input layer corresponds to the m times n pixels of a standard or non standard character matrix, the j by k matrix in the auto-associative output layer corresponds to the j times k pixels of the same or another standard or non standard character matrix, and the j neurons in the hetero-associative output layer correspond to the bits of a standard or non standard digital character code plus one. Neurons well spaced from the edges are connected to a p by q matrix of neurons in the previous layer and require at least one more than p times q inputs, where p is an integer greater than two, q is an integer greater than four, and the extra input is for the application of a bias.
Although the invention has been described mainly in terms of improving a received facsimile image, it could also be used to improve the quality of a transmitted facsimile image that had been generated by optical scanning. This is usually the case for documents transmitted by a FAX machine, or for those that have been entered into a PC using an optical scanner.
The document could then be transmitted as a higher quality facsimile image. Alternatively, if the document was separated into part facsimile and part ASCII code, then these could be sent separately. As the facsimile part would now be mostly blank and ASCII text can be transmitted more quickly than facsimile, this could result in reduced transmission times.
This feature could also enable a FAX machine to communicate with a PC without FAX capability through its modem.
The performance of a FAX machine, FAX modem, FAX card or a PC card according to the invention can be enhanced by manual or automatic switching between different neural networks for the recognition and classification of particular character sets. For example, English, French and German character sets can be distinguished by the presence or absence of accents or umlauts. Manual or automatic switching between digital, graphical, and mixed digital/graphical modes of operation, depending upon the document format, could also be provided.
It will, of course, be realised that numerous modifications and variations may be incorporated in the embodiments without departing from the scope of the invention as defined by the following claims.

Claims

    1). A FAX machine, a FAX modem, or a FAX card characterised by one or more neural networks set to recognise characters in a scanned or received electronic facsimile image and to convert them into those of a standard digital or graphical character set.
    2). A PC card, for use in combination with a FAX modem or a FAX card characterised by one or more neural networks set to recognise characters in a scanned or received electronic facsimile image and to convert them into those of a standard digital or graphical character set.
    3). A FAX machine, a FAX modem, or a FAX card as claimed in claim 1, or a PC card as claimed in claim 2, characterised by one or more neural networks that are set or can be set to recognise one or more forms of handwriting and/or typography.
    4). A FAX machine, a FAX modem, or a FAX card as claimed in claims 1 or 3, or a PC card as claimed in claims 2 or 3, characterised by one or more neural networks implemented using one or more transputers.
    5). A FAX machine, a FAX modem, or a FAX card as claimed in claims 1 or 3, or a PC card as claimed in claims 2 or 3, characterised by one or more neural networks implemented using one or more analogue or digital neural network chips.
    6). A FAX machine, a FAX modem, or a FAX card as claimed in claims 1, or 3 to 5, or a PC card as claimed in claims 2, 4 or 5, characterised by one or more multi-layer neural networks with or without feedback interconnections.
    7). A FAX machine, a FAX modem, or a FAX card as claimed in claims 1 or 3 to 6, or a PC card as claimed in claims 2 or 4 to 6, characterised in that it can automatically switch or be manually switched between digital, graphical, and mixed digital/graphical modes of operation.
    8). A FAX machine, a FAX modem, or a FAX card as claimed in claims 1 or 3 to 7, or a PC card as claimed in claims 2 or 4 to 7, characterised in that it can automatically switch or be manually switched between the recognition and classification of particular character sets.
    9). A method of processing an electronic facsimile image, characterised by recognising characters in that image and converting them into those of a standard digital or graphical character set by means of one or more neural networks.
    10). A method of processing an electronic facsimile image as claimed in claim 9, characterised by presenting characters in that image either individually, as lines, or as blocks of text to said one or more neural networks.
    11). An auto-associative character classifying neural network suitable for use in combination with a FAX machine, FAX modem, or a FAX card as claimed in claims 1 or 3 to 8, or a PC card as claimed in claims 2 or 4 to 8, characterised by an input layer, a plurality of hidden layers, and an output layer, the neurons in the layers being interconnected in such a way that a j by k matrix in the output layer, corresponding to the j times k pixels of a standard or non standard character matrix, is substantially fully interconnected with an m by n matrix in the input layer, corresponding to the m times n pixels of the same or another standard or non standard character matrix, where j, m are integers greater than four and k, n are integers greater than six.
    12). An auto-associative character classifying neural network as claimed in claim 11, characterised in that the input and output layers respectively comprise a single j by k matrix and a single m by n matrix of neurons corresponding to the pixels of standard or non standard graphical characters or multiples of said matrices.
    13). A hetero-associative character classifying neural network suitable for use in combination with a FAX machine, FAX modem, or a FAX card as claimed in claims 1 or 3 to 8, or a PC card as claimed in claims 2 or 4 to 8, characterised by an input layer, a plurality of hidden layers, and an output layer, the neurons being interconnected between the layers in such a way that j neurons in the output layer, corresponding to the bits of a standard or non standard digital character code plus one, are substantially fully interconnected with an m by n matrix in the input layer, corresponding to the m times n pixels of a standard or non standard character matrix, where j, m are integers greater than four and n is an integer greater than six.
    14). A hetero-associative character classifying neural network as claimed in claim 13, characterised in that the input layer comprises an m by n matrix, corresponding to the m times n pixels of a standard or non standard graphical character, or multiples of said matrix and that the output layer comprises a set of j neurons, corresponding to the bits of a standard or non standard digital character code plus one, or multiples of said set.
    15). An auto-associative character classifying neural network as claimed in claims 11 or 12, or a hetero-associative character classifying neural network as claimed in claims 13 or 14, characterised in that neurons well spaced from the edges are connected to a p by q matrix of neurons in the previous layer, where p is an integer greater than two and q is an integer greater than four.
    16). An auto- or hetero-associative character classifying neural network as claimed in claim 15, characterised in that neurons connected to a p by q matrix of neurons in the previous layer have at least one more than p times q inputs, the extra input being for the application of a bias.
    17). An auto-associative character classifying neural network as claimed in claims 11, 12, 15 or 16, or a hetero-associative character classifying neural network as claimed in claims 13, 14, 15 or 16, characterised in that interior neurons spaced by at least one or two, or by at least two or four neurons from the edges are connected to a five by three or to a nine by five matrix of neurons in the previous layer.
    18). An auto-associative character classifying neural network as claimed in claims 11, 12, 15, 16 or 17, characterised in that the input layer comprises a single seventeen by nine matrix of neurons or multiples thereof and that the output layer comprises a single seventeen by nine matrix of neurons or multiples of said matrices.
    19). A hetero-associative character classifying neural network as claimed in claims 13 to 17 characterised in that the input layer comprises a single seventeen by nine matrix of neurons or multiples thereof and that the output layer comprises a single nine by one matrix of neurons or multiples of said matrices.
    20). A hetero-associative character classifying neural network as claimed in claims 13 to 17 or 19, characterised in that one neuron of the set or sets of j neurons in the output layer is used as a flag to indicate whether a digitally coded output on the other neurons of said set or sets relates to a recognised input character or not.
GB9311256A 1993-06-01 1993-06-01 Facsimile machine Withdrawn GB2278705A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9311256A GB2278705A (en) 1993-06-01 1993-06-01 Facsimile machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9311256A GB2278705A (en) 1993-06-01 1993-06-01 Facsimile machine

Publications (2)

Publication Number Publication Date
GB9311256D0 (en) 1993-07-21
GB2278705A (en) 1994-12-07

Family

ID=10736428

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9311256A Withdrawn GB2278705A (en) 1993-06-01 1993-06-01 Facsimile machine

Country Status (1)

Country Link
GB (1) GB2278705A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001014992A1 (en) * 1999-08-25 2001-03-01 Kent Ridge Digital Labs Document classification apparatus


Also Published As

Publication number Publication date
GB9311256D0 (en) 1993-07-21


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)