EP0718807A2 - Method for compressing and decompressing standardized portrait images - Google Patents
- Publication number
- EP0718807A2 (application EP95420341A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- feature
- features
- images
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/94—Vector quantisation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/008—Vector quantisation
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/20—Individual registration on entry or exit involving the use of a pass
- G07C9/22—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder
- G07C9/25—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition
- G07C9/257—Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition electronically
Definitions
- the present application is related to:
- EP-A-651,355 entitled "Method And Apparatus For Image Compression, Storage and Retrieval On Magnetic Transaction Cards".
- EP-A-651,354 entitled "Compression Method For A Standardized Image Library".
- the present invention relates to the field of digital image compression and decompression.
- the present technique enables the compressed storage of gray-scale portrait images in under 500 bits.
- image compression seeks to reduce the storage requirements for an image. Decompression restores the image. Not all compression/decompression processes restore images to their original form. Those which do are called “lossless" methods. In general, lossless methods do not compress images as highly as do lossy methods which change the image and introduce some degradation in image quality. In applications where high-compression ratios are desired, lossy methods are used most frequently.
- Images can be compressed as they contain spatial correlation. This correlation implies that in most instances differences in neighboring pixel values are small compared to the dynamic range of the image. A basic rule-of-thumb is that more correlation implies a greater potential for higher compression ratios without loss of visual image fidelity.
- the vast majority of image compression methods have their foundations in broad statistical measures. Some methods are more sophisticated and vary the compression algorithm based upon local statistics (see M. Rabbani and J. P. Jones, "Digital Image Compression Techniques", Vol. T77, SPIE Press, Bellingham, Washington, 1991). However, all of these techniques are applied to the entire image as there is no prior knowledge of image features and image position. The statistics account for correlations between neighboring pixels; they do not account for correlations between groups of pixels in corresponding locations of different images.
- Algorithms have been developed to handle motion sequences of images such as sequential frames of a movie (see Bernd Jahne, “Digital Image Processing: Concepts, Algorithms, and Scientific Application", Springer-Verlag, Berlin, 1991). Images taken close in time have a high degree of correlation between them, and the determination of the differences between images as the movement of image segments leads to large compression ratios. This type of image-to-image correlation works well for images which undergo incremental distortions.
- preserving the orientation and quantization of the original image is less important than maintaining the visual information contained within the image.
- identity of the child in the portrait can be ascertained with equal ease from either the original image or an image processed to aid in compression, then there is no loss in putting the processed image into the library.
- This principle can be applied to build the library of processed images by putting the original images into a standardized format. For missing children portraits this might include orienting the head of each child to make the eyes horizontal, centering the head relative to the image boundaries. Once constructed, these standardized images will be well compressed as the knowledge of their standardization adds image-to-image correlation.
- vector quantization (VQ) is a compression method useful in finding correlation between portions of an image.
- codebooks are formed from a collection of a number of representative images, known as the training set. Images are partitioned into image blocks, and the image blocks are then considered as vectors in a high-dimensional vector space, e.g., for an 8 x 8 image block, the space has 64 dimensions. Image blocks are selected from each image of the training set of images. Once all the vectors are determined from the training set, clusters are found and representative elements are assigned to each cluster. The clusters are selected to minimize the overall combined distances between a member of the training set and the representative for the cluster the member is assigned to.
- a selection technique is the Linde-Buzo-Gray (LBG) algorithm (see Y. Linde, et al., "An Algorithm For Vector Quantizer Design", IEEE Transactions On Communications, Vol. COM-28, No. 1, January, 1980, pp. 84-95).
- the number of clusters is determined by the number of bits budgeted for describing the image block. Given n bits, the codebook can contain up to 2^n cluster representatives or code vectors.
- Previous compression methods used a codebook formation process based on training with the so-called LBG algorithm.
- This algorithm is one form of a class of algorithms referred to as clustering algorithms.
- the point of the algorithm is to find a predetermined number of groups within a data set and to select a representative for that group in an optimal way.
- the dataset consists of blocks of pixels from a number of images, and the algorithm operates to find a predetermined number of blocks which best represents the dataset. While there are many metrics to measure what is good, the most common one is the minimum distance in a Euclidean sense, i.e., sum of the squares of differences between pixels.
- the representative of a group is often the centroid of the group, i.e., average of the blocks in the group.
- the LBG algorithm proceeds by first making an assignment of groups, selecting the centroid for each group, and reassigning the elements of the dataset to a centroid by selecting the closest centroid.
- the reassignment forms a new grouping and the algorithm can continue in an iterative fashion until convergence occurs.
- in many instances, the reassignment process reduces the overall number of groups.
- the result is sub-optimal, since the addition of another group will give better overall results.
- a method for maintaining the number of active groups is desirable.
- Another problem that can occur is that a group can be reduced to having an extremely small number of members. In the case where a group has a single member, the centroid is that member, and the distance between the member and its centroid is zero, and the cluster remains. The result here is that some groups are large and a better ensemble result could be attained by splitting a large group.
- the preferred method is to maintain the number of active groups by counting the number of groups, and for each group determine the extent, i.e., diameter of the group, and if some groups have no member, split the group of the largest extent into two smaller groups.
- the problem of excessively small groups is handled by treating the data elements of these groups as outliers and eliminating them from the dataset. This process must be executed with caution, otherwise too many of the data elements will be discarded.
- One type of correlation within facial images is the approximate mirror image symmetry between the left and right sides of the face.
- a large degree of correlation exists between the portions of the face close to the centerline.
- the image blocks used to render the part of the face above and below the eyes exhibit a high degree of symmetric correlation.
- the level of symmetry drops off due to the variability of the appearance of the nose when viewed from slightly different angles. What is needed is a method to further reduce the number of bits necessary to store a compressed portrait image by exploiting the natural symmetry of the human face in the regions around the facial centerline without imposing deleterious symmetry constraints on the nose.
- Some areas of a portrait image do not contribute any significant value to the identification of an individual. For instance, shoulder regions are of minimal value to the identification process, and moreover, this region is usually covered by clothing which is highly variable even for the same individual. Since little value is placed in such regions the allocation of bits to encode the image may also be reduced. In the present invention some of these areas have been allocated few if any bits, and the image data is synthesized from image data of neighboring blocks. This permits a greater allocation of bits to encode more important regions.
- One of the preferred methods of the present invention is a method for forming an addressable collection of image features, representing features in a selected class of digitized images comprising the steps of:
- the overall purpose of the present invention is to provide a technique for producing facial images, of adequate quality, for use in the identification process of a cardholder of a transaction card.
- the present invention enables the compression, storage, and retrieval of a portrait image where the compressed portrait image data can be stored within a single track of a transaction card as specified by international standards.
- the present invention further improves on prior art compression methods by adding an improved training method for codebook generation, a luminance balancing method to standardize the lighting conditions of facial images, and linked image blocks for symmetric facial regions.
- Another object of the present invention is to provide a decoding technique for quickly decoding a coded image.
- Yet another object of the present invention is to provide an improved coded portrait image.
- Still another object of the present invention is the ability to reject, from encoding, portions of an image that are not of interest.
- a further object of the present invention is to provide a technique that enables the insertion of likely image content in the absence of real image data.
- FIG. 1 illustrates, in block diagram form, an overview of the process flow of the present invention.
- Figures 2A, 2B, and 2C illustrate a frontal facial portrait that is tilted, rotated to a standardized position, and sized to a standardized size, respectively.
- FIG. 3 illustrates, in flow chart form, the method of standardization of the present invention.
- Figure 4A shows the positions and sizes of template elements within a template.
- Figure 4B illustrates, using darker shaded areas, the positions and sizes of the template elements within the template that have a left-to-right flip property.
- Figure 4C illustrates, using darker shaded areas, the positions and sizes of the template elements within the template that have a top-to-bottom flip property.
- Figure 4D illustrates, using darker shaded areas, the positions and sizes of the template elements within the template that are linked.
- Figure 5 illustrates, in table form, the portrait features and their characteristics in the template of a specific embodiment of the present invention.
- Figures 6A and 6B illustrate the template element data record for each element in the template of a specific embodiment of the present invention.
- Figure 7 illustrates, in flow chart form, the method of training to construct codebooks for the present invention.
- Figure 8 illustrates a collection of codevectors associated with each of the feature types used in the specific embodiment of the present invention.
- Figure 9A illustrates, in flow chart form, the model of compression.
- Figure 9B illustrates, in flow chart form, the method of constructing the compressed bit-stream.
- Figure 10 illustrates the codevector numbering and labeling for a compressed image.
- Figure 11 illustrates the compressed bit representation for an image.
- Figure 12 illustrates a transaction card with a data storage means.
- FIGS 13A and 13B illustrate, in flow chart form, the method of decompression for the present invention.
- Figure 14 illustrates the codevectors as extracted from the feature type codevector collections with the lighter shaded codevectors having at least one flip property.
- Figure 15 illustrates the codevectors after execution of all flip properties.
- Figure 16 illustrates the final image.
- Figure 17 illustrates a preferred system arrangement on which the method of the present invention may be executed.
- FIG. 1 gives an overview of the functionality of the present invention.
- a collection of training images 10 is loaded into a standardizer 12 which processes the training images 10 into standardized training images of consistent format.
- the standardized training images are further processed, using a feature template 14, to generate training image blocks, the functionality of which is represented by trainer 16.
- the trainer 16 forms codebooks 18 consisting of codevectors based on the training image blocks.
- the codebooks, combined with the feature template 14, become the basis for the compression and decompression process.
- a standardized image, for example 20 is formed from an original image such as a portrait, using the functionality of the standardizer 12.
- when a standardized image 20 that is to be encoded is sent to a data compressor 22, the compressor generates a compressed image 24.
- the compressed image can then be stored and/or transmitted for future use.
- a decompression process performed by a decompressor 26, using the same feature template and codebook data as was used in the compressor 22, provides a decompressed image 28.
- Figure 2A represents an image that is a frontal facial portrait.
- the face is tilted and translated (off-centered) with respect to the center of the image.
- Dependent on the source of images, other variations in the positioning and sizing of the face within the image will be encountered.
- the size, position, and orientation of the face is to be standardized.
- the image is placed into a digital format as a matrix of pixel values.
- the digital format (pixel values) of the image is derived by scanning and/or other well known digital processing techniques that are not part of the present invention.
- the digital image is then displayed on a display device, for example, the display 202 shown in Figure 17, and a standardization process is applied to form a standardized geometric image.
- the images are standardized to provide a quality match with the template elements associated with the feature template 14 (to be described in detail later in this description of the invention).
- the process starts with the image of Figure 2A by locating the center of the left and right eyes of the image subject.
- in Figure 2B a new digital image, a partially standardized geometric image, is formed by rotating and translating the original image of Figure 2A, if necessary. The rotation and translation are performed in software using well known image processing operations so as to position the left and right eye centers along a predetermined horizontal axis and equally spaced about a central vertical axis.
- Figure 2C illustrates the image of Figure 2B sized by a scaling process to form the standardized geometric image.
- the selection process is based upon the existence of a frontal facial image of a person whose image is to be processed with the template 14. Included in the selection process is the creation of the digital matrix representation of the image.
- the digital matrix is next loaded, operation 32, into the system for display to an operator (see the system of Figure 17). As previously discussed, the operator locates the left and right eye points, operation 34, and performs any needed rotation, translation and rescaling of the facial portion of the image to form the standardized geometrical image as per operation 36.
- the standardized geometric image is then stored per operation 38.
- the standard geometry of the image was set to be an image size of 56 pixels in width and 64 pixels in height with the eye centers located 28 pixels from the top of the image and 8 pixels on either side of an imaginary vertical center line. Identifying the centers of the left and the right eye is done by displaying the initial image to a human operator who points to the centers using a cursor that is driven by means of a device such as a mouse, digital writing tablet, light pen or touch sensitive screen.
- An alternate approach would be to automate the process using a feature search program.
- the human operator localizes the eye positions, and a processor fine-tunes the location through an eye-finding search method restricted to a small neighborhood around the operator-specified location.
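- By way of illustration, the geometric standardization can be sketched in C. The following routine maps the located eye centers onto the standard positions (row 28, columns 20 and 36 of the 56 x 64 image, per the geometry above) with a similarity transform; the function name is illustrative and nearest-neighbor sampling is used for brevity:

```c
#include <math.h>

#define STD_W 56
#define STD_H 64
/* Standard eye positions: 28 rows from the top, 8 columns either
   side of the vertical center line (column 28). */
#define EYE_Y  28.0
#define LEYE_X 20.0
#define REYE_X 36.0

/* Map each standardized pixel back into the source image with the
   rotate/scale/translate transform that carries the located eye
   centers onto the standard eye positions. */
void standardize_geometry(const unsigned char *src, int sw, int sh,
                          double lx, double ly,  /* located left eye  */
                          double rx, double ry,  /* located right eye */
                          unsigned char dst[STD_H][STD_W])
{
    double dxs = rx - lx, dys = ry - ly;     /* source eye vector   */
    double dxt = REYE_X - LEYE_X;            /* standard: 16 pixels */
    double scale = sqrt(dxs * dxs + dys * dys) / dxt;
    double angle = atan2(dys, dxs);          /* tilt of source face */
    double ca = cos(angle), sa = sin(angle);

    for (int y = 0; y < STD_H; y++) {
        for (int x = 0; x < STD_W; x++) {
            /* Offset from the standard left-eye position, rotated
               and scaled back into source coordinates. */
            double ux = x - LEYE_X, uy = y - EYE_Y;
            double sx = lx + scale * (ca * ux - sa * uy);
            double sy = ly + scale * (sa * ux + ca * uy);
            int ix = (int)(sx + 0.5), iy = (int)(sy + 0.5);
            dst[y][x] = (ix >= 0 && ix < sw && iy >= 0 && iy < sh)
                      ? src[iy * sw + ix] : 0;
        }
    }
}
```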
- the luminance standardization procedure takes place.
- the luminance of the standardized geometric image is processed to reduce the variability in perceived lighting of image subjects, to reduce specular highlights, adjust skin tones to predetermined values, and to reduce shadows with the resultant standardized geometric image being stored for future use.
- the final step of the image standardization process, prior to storage, is to shift the facial mean luminance, i.e., the average lightness found in the general vicinity of the nose, to a preset value.
- the preset value is 165; for a medium skin tone the value is 155, and for a dark skin tone the value is 135.
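- A minimal C sketch of this mean-shift step follows; the patent specifies only "the general vicinity of the nose", so the sampling window used here is an assumed placement:

```c
/* Shift the facial mean luminance to a preset value (165, 155 or
   135 depending on skin tone). The nose-vicinity rectangle below is
   a hypothetical placement near the image center. */
void shift_facial_mean(unsigned char img[64][56], int preset)
{
    int x0 = 24, x1 = 32, y0 = 34, y1 = 44;  /* assumed nose window */
    long sum = 0, n = 0;

    for (int y = y0; y < y1; y++)
        for (int x = x0; x < x1; x++) { sum += img[y][x]; n++; }

    int shift = preset - (int)(sum / n);

    for (int y = 0; y < 64; y++)
        for (int x = 0; x < 56; x++) {
            int v = img[y][x] + shift;
            img[y][x] = (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
        }
}
```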
- the formed standardized digital image is now represented by a storable matrix of pixel values.
- Figure 4A illustrates the layout of a template 14 that is to be used with the standardized image.
- the template 14 is partitioned into 64 template elements labeled A through M.
- the elements are arranged in accordance with 13 corresponding features of a human face, for example, the template elements labeled A correspond to the hair feature at the top of the head and the template elements labeled G correspond to the eyes.
- the tables of Figures 5, 6A, and 6B provide further description of the remaining template elements.
- while the preferred embodiment of the invention is implemented with 64 template elements and 13 features, it is to be understood that these numbers may be varied to suit the situation and are not to be construed as limiting the method of this invention. Also to be noted is that some of the regions of the template are not assigned to any element.
- the template size matches the geometry of the standardized image with 56 pixels in width and 64 pixels in height.
- the sizes of the template elements are based upon the sizes of the facial features they are intended to represent. For example, G is the relative size of an eye in a standardized image and both instances of element G are positioned in the locations of the eyes in a standardized image.
- template elements are assigned a top-to-bottom flip property that will also be described later. It is to be noted that template elements may be assigned more than one property.
- Figure 4D represents, with the darker shaded region, the location of template elements which are part of a link.
- the linkage is horizontal between each pair of darkened template elements, for example, G at the left of center is linked to G at the right of center.
- while 7 linked pairs are shown in the preferred embodiment, linkages can occur in groups larger than two and between any set of like labeled elements.
- the template 14 consists of a sequence of template elements representing a sequence of data records where each record, in the preferred embodiment, describes the location, size, label, left-to-right property, top-to-bottom property, and linkage. Records with other and/or additional factors may be created as the need arises.
- the template 14 records the distribution and size of template elements.
- Each template element has assigned to it a codebook 18 (see Figure 1) and a spatial location corresponding to the image.
- the template 14 consists of 64 template elements composed of rectangular pixel regions. These template elements are assigned to one of 13 different codebooks, each corresponding to a different type of facial feature.
- the codebooks 18 are collections of uniformly-sized codevectors of either 4x16, 8x8, 8x5, 4x10, 4x6 or 8x4 pixels.
- the codevectors which populate the codebooks 18 are produced by the trainer 16 ( Figure 1) from the image blocks extracted from the training images.
- a table describes the properties of the template elements A-M shown in Figures 4A-D.
- the first listed property of the codebook is the number of codevectors the codebook contains, and this is either 128 or 256. These numbers are both powers of 2, in particular 2^7 and 2^8. This is advantageous as the codebook indexes used to specify a codevector utilize the full range of a 7-bit or 8-bit index.
- the index length of the codebooks is shown in the table of Figure 5 as either an 8 or a 7.
- the dimensions of the image blocks for the codevectors are the second and third properties listed and are given as the block width in pixels and the block height in pixels. The number of pixels per block is the product of the block width and block height.
- the fourth listed property is the number of occurrences of template elements in the feature template of the specific embodiment which are assigned to the codebook.
- the unique property, the fifth listed feature property, represents how many of these template elements are uniquely chosen (which eliminates one member of each pair of linked template elements).
- the sixth listed feature property is the number of bits allocated to store the selected codevectors from the codebook. This is the product of the number of unique template elements and the index length in bits. The sum of the entries in the feature bits row is 442, the total number of bits required to store the compressed image as an unconstrained binary record.
- the feature template thus carries all the information needed to build both the maps of Figures 4A-D and the table shown in Figure 5.
- the tables of Figures 6A and 6B represent the maps of Figures 4A-D in data form.
- the first block, block 700 represents loading standardized training images. This is a collection of images considered to be representative of the portrait images which will be sent to the compressor 22 to be compressed using codebooks 18 formed by this training process.
- First image blocks are extracted from the image for a selected codebook type, as represented by block 702.
- the extracted image blocks are oriented based on their symmetry type, such as the top-to-bottom and the left-to-right flip properties described above. This is represented by block 704.
- decision block 706 checks for the presence of another image in the standardized image training set. In general, 2,000 images are recommended for this training process. If there is another image in the set, the process loops back to block 702, as shown in the diagram.
- centroids are initialized as random image blocks.
- the centroids are initialized as the first sequence of image blocks.
- each image block is assigned to the nearest centroid that was determined in block 708.
- the centroids that are closer to each other than a threshold value are merged. For example, if two centroids have been determined to be less than a predetermined distance apart they are flagged as being too close, in which case they are merged and all of their image blocks are assigned to a single centroid, leaving the other centroid unassigned.
- centroids of large extent are split with unassigned centroids.
- if a centroid has a very large distance between different codevectors assigned to it, then this centroid is split into two centroids, where one of the new centroids comes from an unassigned centroid of the previous step.
- the least populated centroids are unassigned and then used to split the remaining centroids that have been determined to be of large extent.
- the closest centroids are merged together. Once again taking the newly-unassigned centroids resulting from the merge process in block 720, more centroids of large extent are split. Referring to block 722, a new position is found for each of the centroids.
- This process is required because in shuffling the assignment of image blocks to the various centroids the location of the center of the codevectors assigned to the centroid has actually changed. A new position for the center of the group of codevectors assigned to each centroid group is calculated to determine how the centroid has moved through the image block reassignment process. Referring to block 724, an assignment of each one of these image blocks to the nearest centroid is made, since the centroids have now moved around a bit, it is possible that some of the image blocks that have been assigned to one centroid may in fact now be closer to another centroid.
- the centroids are reordered from the most to the least populated. This reordering accelerates the reassignment process in future iterations.
- a convergence test is performed. If convergence has not been completed, the process returns to block 712 where centroids that are too close together are merged. If convergence has been completed, the process proceeds to block 730 where the centroids are stored as the codevectors of the newly-formed codebooks.
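- The training loop of Figure 7 can be condensed into the following C sketch. It keeps the assign/merge/split/re-center structure, with a fixed iteration count standing in for the convergence test; the merge threshold, the use of a group's radius as a stand-in for its diameter, and the omission of the outlier-elimination and centroid-reordering steps are all simplifications of this sketch:

```c
#include <float.h>
#include <string.h>

#define D       64      /* pixels per block, e.g. an 8x8 block       */
#define K       256     /* codevectors in the codebook (2^8)         */
#define MAXN    200000  /* training blocks (about 2,000 images)      */
#define MERGE_T 50.0    /* merge threshold on squared distance (assumed) */

static double dist2(const double *a, const double *b)
{
    double s = 0.0;
    for (int i = 0; i < D; i++) { double d = a[i] - b[i]; s += d * d; }
    return s;
}

void train(double block[][D], int n, double cent[K][D], int iters)
{
    static int assign[MAXN], count[K];
    static double sum[K][D];
    int dead[K];

    for (int it = 0; it < iters; it++) {
        /* Blocks 710/724: assign each block to its nearest centroid. */
        for (int b = 0; b < n; b++) {
            int best = 0; double bd = DBL_MAX;
            for (int k = 0; k < K; k++) {
                double d = dist2(block[b], cent[k]);
                if (d < bd) { bd = d; best = k; }
            }
            assign[b] = best;
        }

        /* Block 712: merge centroids closer than the threshold,
           leaving the second centroid of each merged pair unassigned. */
        memset(dead, 0, sizeof dead);
        for (int i = 0; i < K; i++) {
            if (dead[i]) continue;
            for (int j = i + 1; j < K; j++)
                if (!dead[j] && dist2(cent[i], cent[j]) < MERGE_T) {
                    for (int b = 0; b < n; b++)
                        if (assign[b] == j) assign[b] = i;
                    dead[j] = 1;
                }
        }

        /* Blocks 714-720: hand each unassigned slot to the group of
           largest extent (radius used here), splitting it in two. */
        for (int j = 0; j < K; j++) {
            if (!dead[j]) continue;
            int big = -1; double ext = -1.0;
            for (int k = 0; k < K; k++) {
                if (dead[k]) continue;
                double e = 0.0;
                for (int b = 0; b < n; b++)
                    if (assign[b] == k) {
                        double d = dist2(block[b], cent[k]);
                        if (d > e) e = d;
                    }
                if (e > ext) { ext = e; big = k; }
            }
            if (big < 0) break;
            int flip = 0;                /* move alternate members */
            for (int b = 0; b < n; b++)
                if (assign[b] == big && (flip ^= 1)) assign[b] = j;
            dead[j] = 0;
        }

        /* Block 722: re-center every centroid on its members. */
        memset(count, 0, sizeof count);
        memset(sum, 0, sizeof sum);
        for (int b = 0; b < n; b++) {
            count[assign[b]]++;
            for (int i = 0; i < D; i++) sum[assign[b]][i] += block[b][i];
        }
        for (int k = 0; k < K; k++)
            if (count[k] > 0)
                for (int i = 0; i < D; i++)
                    cent[k][i] = sum[k][i] / count[k];
    }
}
```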
- in Figure 8 a partial collection of codevectors is shown for the codebooks corresponding to the feature types A, G, and M.
- the feature element G123 of Figure 8 corresponds to the left and right eye elements in the templates of Figures 3, 4A-4D, 5, and 10.
- the template element A46 of Figure 8 corresponds to hair in the upper left corner of the template shown in Figures 3, 4A-4D, 5, and 10.
- Figure 9A illustrates in flow chart form, the process for finding the best-matching codevectors for the various template elements of an image.
- the process begins with the loading of a standardized image or the loading of an image to be standardized, as shown in blocks 900 and 902.
- the standardized image corresponds to block 20 of Figure 1 and is the result of the process represented by block 902.
- the image block that corresponds to the next template element is extracted from the standardized image, as represented by block 904.
- the extracted image block is then oriented as shown in block 906, based on the symmetry type of the horizontal flip and/or the vertical flip property.
- the index of the best-matched codevector from the codebook is found by comparing the image block with all the codevectors in the codebook that correspond to the feature type for that template element.
- the comparative block 910 determines whether or not the block is linked, based on the information stored in the template. If the block is linked the flow proceeds to block 914, where a value is stored which represents how good a match occurred.
- the mean-square-error comparison between these two image blocks and the codevector is used as a measure of the goodness of the match though other measures of goodness can be readily conceived.
- the value from the match as in the preferred embodiment, is compared with other link blocks in the group.
- the codevector with the best value is selected, in this case, the lowest value for the mean-square-error test.
- This vector is then used in block 912 as the index of the best-match codevector from that codebook. If an image block is not linked the process proceeds directly to block 912. From block 912 the process proceeds to block 920, another comparative block, which determines whether this is the last element in the template. If it is not, the process returns to block 904 and extracts the next image block corresponding to the next template element. If it was the final image block then the process proceeds to building the compressed image bit stream in block 922. The building process is described in Figure 9B.
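- The heart of block 908 is a minimum mean-square-error search over one codebook, sketched below in C. The winning error is returned as well, since that is the value the members of a link group compare when selecting the single codevector they will share; the array layout and names are illustrative:

```c
#include <limits.h>

#define MAX_LEN 64   /* largest block: 4x16 or 8x8 = 64 pixels */

/* Block 908: find the index of the codevector that best matches an
   extracted (and, if required, flipped) image block. `cv` holds the
   codebook's ncv codevectors of len pixels each; the error of the
   winner is returned through err_out for link-group comparison. */
int best_match(const unsigned char *blk,
               const unsigned char cv[][MAX_LEN],
               int ncv, int len, long *err_out)
{
    int best = 0;
    long best_err = LONG_MAX;

    for (int k = 0; k < ncv; k++) {
        long e = 0;
        for (int i = 0; i < len; i++) {
            long d = (long)blk[i] - (long)cv[k][i];
            e += d * d;              /* sum of squared differences */
        }
        if (e < best_err) { best_err = e; best = k; }
    }
    if (err_out) *err_out = best_err;
    return best;
}
```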
- the process of Figure 9B begins in block 950, where the bit stream pointer, BSP, is set to zero and the template element pointer, TP, is also set to zero.
- in block 952 the template element number TP is retrieved.
- Block 954 is a decision block wherein it is determined if the template element is linked. If the template element is not linked the process proceeds to block 958, to be described later. If the template element is linked the process proceeds to the first decision block 956, to determine if this is a first occurrence of the linkage group. If the element is the first occurrence of the linkage group the process proceeds to block 958, to be described below. If it is not the first occurrence of the linkage group the process proceeds to block 966, to be described below.
- in block 958 the number of bits for the template element, labeled as "BN", is retrieved. This number of bits is used to encode the codevector index of the template element. In the preferred embodiment this is either 7 or 8 bits per template element. Proceeding down to block 960, the codevector index is encoded with BN bits.
- bit stream pointer BSP is incremented by BN
- template element pointer is incremented by one.
- Decision point 968 asks if the template elements have been exhausted and, if the answer is "yes", the bit stream is complete per block 970. If the answer is "no" and more template elements exist, the process loops back to block 952 and the process continues.
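- The bit-stream construction of Figure 9B amounts to packing the 7-bit or 8-bit indices, most significant bit first, while skipping all but the first occurrence of each link group. A C sketch follows; the record fields mirror the template data of Figures 6A and 6B, though the names are illustrative:

```c
#include <stdint.h>
#include <string.h>

typedef struct {
    int bits;           /* BN: codevector index length, 7 or 8 */
    int linked;         /* element belongs to a link group     */
    int first_of_link;  /* first occurrence within its group   */
} element_t;

/* Append bn bits of index, most significant bit first, at bit
   position *bsp. Leading zeroes pad short indices automatically. */
static void put_bits(uint8_t *stream, int *bsp, unsigned index, int bn)
{
    for (int i = bn - 1; i >= 0; i--, (*bsp)++)
        if (index & (1u << i))
            stream[*bsp >> 3] |= (uint8_t)(0x80 >> (*bsp & 7));
}

/* Figure 9B: walk the template elements in order, writing the index
   of each unique element. Returns the bit count (442 bits in the
   preferred embodiment). */
int build_stream(const element_t *el, const int *best_index,
                 int n_elements, uint8_t *stream)
{
    int bsp = 0;                              /* block 950 */
    memset(stream, 0, 56);                    /* 442 bits fit in 56 bytes */
    for (int tp = 0; tp < n_elements; tp++) { /* blocks 952-968 */
        if (el[tp].linked && !el[tp].first_of_link)
            continue;                         /* block 966: index shared */
        put_bits(stream, &bsp, (unsigned)best_index[tp], el[tp].bits);
    }
    return bsp;
}
```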
- Figure 10 illustrates the results for a specific standardized image of the best-match codevector comparison process described in Figures 9A and 9B.
- each of the template elements has both a letter and a number assigned to it.
- the letter indicates the feature type corresponding to the template element, and the number indicates the index of the best-matched codevector according to the embodied measure of goodness.
- the table of Figure 11 shows the best codevector number, with the numbers arranged in a sequence, proceeding down the left column and then down the right column, that corresponds to the sequence of the template elements shown in the tables of Figures 6A and 6B. It is important to keep the sequence of template elements consistent with Figures 6A and 6B so that the process shown in Figure 9B has the bit stream pointer interpreting the compressed image bit stream correctly.
- the index length for each of the best codevector numbers shown in Figure 11 corresponds to the feature type index length from the table shown in Figure 5 as the index length.
- the representation of the best codevector number is then shown as a binary representation, in the bit representation column, with the bit representation having a length corresponding to the index length.
- if the binary number does not have sufficient digits it is padded with leading zeroes so that its bit representation has a length corresponding to the index length in the preceding column. If the binary digits in the bit representation columns are sequenced, starting with the left column and proceeding down and then into the right column, the resulting 442-bit binary number corresponds to the final output compressed image, the compressed image 24 in Figure 1.
- Figure 12 illustrates a transaction card 120 with a means of digital storage.
- this storage can be accomplished by a variety of means, such as magnetic encoding or bar code patterns.
- a common method is for the magnetic storage area 122 to have multiple tracks, as indicated in Figure 12 by Track 1, Track 2, and Track 3.
- in FIG. 13A the compressed bit stream is decompressed to form a decompressed image.
- the bit stream pointer BSP is initialized to zero and the template element pointer TP is set to zero in block 302.
- the process proceeds to block 304 wherein the template element TP is retrieved.
- a decision is made as to whether or not the template element is linked. If it is not linked the process proceeds to block 310. If it is linked the process proceeds to decision block 308.
- Decision block 308 determines if this is the first occurrence of a linked group. If it is the process proceeds to block 310. If it is not the process proceeds to block 312.
- the number of bits BN for the codevector index and the codebook type are extracted from the template element.
- the bits, BSP through BSP + BN - 1 are extracted from the compressed bit stream in block 314.
- the bit stream pointer is incremented by BN in block 316.
- the process then flows to block 318. Retracing the flow back to block 312, when there is a link group which is not the first occurrence of that group, the codevector index from the previous occurrence of the link group is copied, and the process then proceeds to block 318.
- in block 318 the codevector at that index is retrieved from the indicated codebook type. This codevector represents a feature type from a particular image.
- the template element indicates how the blocks should be oriented.
- in block 320 the block is oriented as indicated by the template element. If indicated, a horizontal, left-to-right flip or a vertical, top-to-bottom flip is executed. In block 322 the block is inserted into the image at the location indicated by the template element. In block 324, the template pointer TP is incremented by one. The decision block 326 determines if all the template elements have been used. If not, the process returns to block 304 and continues. If all the template elements have been used the process moves to point A in Figure 13B.
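- The decompression loop of FIG. 13A can likewise be sketched in C. The element record again mirrors Figures 6A and 6B with illustrative names, and the codevector fetch, orientation, and insertion of blocks 318-322 are folded into the same loop:

```c
#include <stdint.h>

#define MAX_GROUPS 7   /* link groups in the preferred embodiment */

typedef struct {
    int bits, linked, first_of_link, group;  /* per Figures 6A/6B     */
    int x, y, w, h;                          /* element location/size */
    int flip_lr, flip_tb;                    /* flip properties       */
    const unsigned char *codebook;           /* w*h bytes per codevector */
} element_t;

/* Read bn bits, most significant bit first, from bit position *bsp. */
static unsigned get_bits(const uint8_t *s, int *bsp, int bn)
{
    unsigned v = 0;
    for (int i = 0; i < bn; i++, (*bsp)++)
        v = (v << 1) | ((s[*bsp >> 3] >> (7 - (*bsp & 7))) & 1u);
    return v;
}

void decompress(const uint8_t *stream, const element_t *el, int n,
                unsigned char img[64][56])
{
    int bsp = 0;                              /* block 302 */
    unsigned link_idx[MAX_GROUPS] = {0};

    for (int tp = 0; tp < n; tp++) {          /* blocks 304-326 */
        unsigned idx;
        if (el[tp].linked && !el[tp].first_of_link) {
            idx = link_idx[el[tp].group];     /* block 312: copy index */
        } else {
            idx = get_bits(stream, &bsp, el[tp].bits); /* blocks 310-316 */
            if (el[tp].linked) link_idx[el[tp].group] = idx;
        }
        /* Blocks 318-322: fetch the codevector, orient it per the
           element's flip properties, and insert it into the image. */
        const unsigned char *cv =
            el[tp].codebook + idx * (unsigned)(el[tp].w * el[tp].h);
        for (int r = 0; r < el[tp].h; r++)
            for (int c = 0; c < el[tp].w; c++) {
                int sr = el[tp].flip_tb ? el[tp].h - 1 - r : r;
                int sc = el[tp].flip_lr ? el[tp].w - 1 - c : c;
                img[el[tp].y + r][el[tp].x + c] = cv[sr * el[tp].w + sc];
            }
    }
}
```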
- in FIG. 13B the remainder of the image reconstruction process is shown.
- the process constructs those regions with no template elements assigned to them. Plausible pixels are used for those regions, per block 328.
- the image blocks have seams between them which have to be smoothed; this process is represented in block 330. Smoothing of the seams is achieved by taking averages across the seams, both horizontally and vertically.
- the contrast of the reconstructed image is then enhanced. This is done by making sure that the full dynamic range of the image, 0-255, is used in the reconstructed image. In the preferred embodiment this is done by a simple linear re-scaling.
- block 334 spatially-dependent random noise is added.
- the reconstructed image is outputted.
- the outputted image corresponds to the decompressed image of Figure 1, block 28.
- Figures 4B and 4C indicated which template elements possessed the left-to-right and top-to-bottom flipping property, respectively.
- the template elements with these flip properties are also indicated with the TRUE/FALSE flags in the tables of Figure 6A and 6B.
- the codevectors in Figure 14 that are to be flipped are identified by diagonal lines through the boxes representing pixels.
- Figure 15 represents the application of the flipping properties to the codevectors in Figure 14, where all codevectors in Figure 14 which correspond to the darkened template elements in Figure 4B are flipped left-to-right and all codevectors in Figure 14 which correspond to the darkened template elements in Figure 4C are flipped top-to-bottom. It should be noted that some template elements undergo both flips in the transformation of the codevectors from Figure 14 into the codevector orientation of Figure 15 and that the flips take place within the associated element.
- the next step is the formation, by image processing operations, of a final image, shown in Figure 16, based on the oriented codevector mosaic of Figure 15.
- the mosaic of Figure 15 may have certain visually objectionable artifacts as a result of its construction from codevectors. These artifacts can be diminished with some combination of image processing algorithms.
- a combination of well known image processing operations are applied including, smoothing across the codevector boundaries, contrast enhancement, linear interpolation to fill missing image regions, and the addition of spatially dependent random noise.
- the smoothing operation is described by considering three successive pixels, P1, P2, and P3, where P1 and P2 are in one codevector and P3 is in an adjoining codevector.
- the pixel P2 is replaced by the result of: (P1 + 2*P2 + P3) / 4.
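- In C, applied along one vertical seam (the horizontal case is analogous), where seam is the first column of the adjoining codevector and is assumed to be at least 2:

```c
/* Seam smoothing: P2 -> (P1 + 2*P2 + P3) / 4 at a codevector
   boundary. P1 and P2 lie in one codevector, P3 in the adjoining
   one, just across the vertical seam. */
void smooth_vertical_seam(unsigned char img[64][56], int seam)
{
    for (int y = 0; y < 64; y++) {
        int p1 = img[y][seam - 2];
        int p2 = img[y][seam - 1];
        int p3 = img[y][seam];
        img[y][seam - 1] = (unsigned char)((p1 + 2 * p2 + p3) / 4);
    }
}
```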
- the contrast enhancement is achieved by determining the minimum pixel value, min, and the maximum pixel value, max, for the mosaic; each pixel value p is then linearly remapped to 255 * (p - min) / (max - min) to span the full dynamic range.
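- A short C sketch of this re-scaling; the remapping formula follows directly from stretching the [min, max] range onto 0-255:

```c
/* Contrast enhancement: linearly rescale the mosaic so its pixel
   values span the full 0-255 range. */
void stretch_contrast(unsigned char img[64][56])
{
    int min = 255, max = 0;
    for (int y = 0; y < 64; y++)
        for (int x = 0; x < 56; x++) {
            if (img[y][x] < min) min = img[y][x];
            if (img[y][x] > max) max = img[y][x];
        }
    if (max == min) return;               /* flat image: nothing to do */
    for (int y = 0; y < 64; y++)
        for (int x = 0; x < 56; x++)
            img[y][x] = (unsigned char)(255 * (img[y][x] - min)
                                             / (max - min));
}
```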
- the regions of the feature template not corresponding to any template element are filled using linear interpolation.
- the known values of the boundary pixels are used to calculate an average pixel value.
- the unknown corner opposite the known boundary is set to this average value.
- the remainder of the unassigned interior pixels are calculated by linear interpolation.
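- One plausible reading of this fill procedure is sketched below in C for a top-left corner region whose right column and bottom row border rendered blocks; the geometry and the bilinear blend are assumptions, not taken from the patent text:

```c
/* Fill an unassigned corner region occupying rows 0..h-1 and
   columns 0..w-1. The column at x = w and the row at y = h border
   rendered blocks, so those pixels are known. The far corner (0,0)
   is set to the boundary average; the interior is then linearly
   interpolated from the four corner values. */
void fill_corner(unsigned char img[64][56], int w, int h)
{
    long sum = 0;
    int n = 0;
    for (int y = 0; y <= h; y++) { sum += img[y][w]; n++; } /* known column */
    for (int x = 0; x <  w; x++) { sum += img[h][x]; n++; } /* known row    */

    double c00 = (double)sum / n;   /* synthetic far corner (0,0)    */
    double c10 = img[0][w];         /* known: top of boundary column */
    double c01 = img[h][0];         /* known: left of boundary row   */
    double c11 = img[h][w];         /* known: where row meets column */

    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            double fx = (double)x / w, fy = (double)y / h;
            double v = (1 - fx) * (1 - fy) * c00 + fx * (1 - fy) * c10
                     + (1 - fx) * fy * c01 + fx * fy * c11;
            img[y][x] = (unsigned char)(v + 0.5);
        }
}
```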
- in the noise term n(i,j), i is the column of the affected pixel, j is the row of the affected pixel, and rand is a pseudo-random, floating-point number in the range (-1, 1).
- the value n(i,j) is added to the pixel at location (i,j). If the resultant pixel is greater than 255 it is clipped to 255, and if it is less than zero it is set to 0.
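- A C sketch of the noise step follows. The amplitude of n(i,j) is not specified in this excerpt, so the amp() function below is purely a hypothetical placeholder; only the rand range and the clipping are taken from the text:

```c
#include <stdlib.h>

/* Hypothetical spatial amplitude for n(i,j) = amp(i,j) * rand; the
   linear falloff with row is a placeholder, not the patent's form. */
static double amp(int i, int j)
{
    (void)i;
    return 4.0 * j / 63.0;
}

/* Block 334: add spatially-dependent random noise with clipping. */
void add_noise(unsigned char img[64][56])
{
    for (int j = 0; j < 64; j++)          /* j: row of affected pixel */
        for (int i = 0; i < 56; i++) {    /* i: column                */
            double rnd = 2.0 * rand() / RAND_MAX - 1.0;  /* (-1, 1) */
            int v = img[j][i] + (int)(amp(i, j) * rnd);
            img[j][i] = (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
        }
}
```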
- Figure 16 represents an image after processing by these operations. It should be understood that other image processing operations may be used in other situations, and the preferred embodiment should not be considered limiting.
- FIG 17 illustrates an apparatus 100 on which the present method may be implemented.
- the apparatus 100 is comprised of a means 102 for converting a non-digital image, such as a photo print 80, or a negative image 82, into a digital representation of an image.
- a scanner 104 that outputs signals representing pixel values in analog form.
- An analog-to-digital converter 106 is then used to convert the analog pixel values to digital values representative of the scanned image.
- Other sources of digital images may be directly inputted into a workstation 200.
- the workstation 200 is a SUN SPARC 10, running UNIX as the operating system and encoded using standard C programming language.
- the program portion of the present invention is set forth in full in the attached Appendices A-E.
- Display of the digital images is by way of the display 202 operating under software, keyboard 204, and mouse 206 control.
- Digital images may also be introduced into the system by means of a CD reader 208 or other like device.
- the templates created by the present method and apparatus may be downloaded to a CD writer 210 for storage on a CD, hard copy printed by a printer 212, downloaded to a transaction card writer 216 to be encoded onto the data storage area, or transmitted for further processing or storage at remote locations by means of a modem 214 and transmission lines.
- the number of bits required to represent a compressed image is substantially reduced.
- linked template elements improves image quality without increasing the number of bits required for storage of the compressed image.
Abstract
Description
- The present application is related to:
- EP-A-651,355 entitled "Method And Apparatus For Image Compression, Storage and Retrieval On Magnetic Transaction Cards".
- EP-A-651,354 entitled "Compression Method For A Standardized Image Library".
- US-A-5,473,327 entitled "Method And Apparatus For Data Encoding With Reserved Values".
- US 08/361,352 filed December 21, 1994, by Ray, Ellson, and Elbaz, entitled "Method And Apparatus For The Formation Of Standardized Image Templates".
- So far as permitted, the teachings of the above referenced Applications are incorporated by reference as if set forth in full herein.
- The present invention relates to the field of digital image compression and decompression. In particular, the present technique enables the compressed storage of gray-scale portrait images in under 500 bits.
- In general, image compression seeks to reduce the storage requirements for an image. Decompression restores the image. Not all compression/decompression processes restore images to their original form. Those which do are called "lossless" methods. In general, lossless methods do not compress images as highly as do lossy methods which change the image and introduce some degradation in image quality. In applications where high-compression ratios are desired, lossy methods are used most frequently.
- Images can be compressed as they contain spatial correlation. This correlation implies that in most instances differences in neighboring pixel values are small compared to the dynamic range of the image. A basic rule-of-thumb is that more correlation implies a greater potential for higher compression ratios without loss of visual image fidelity. The vast majority of image compression methods have their foundations in broad statistical measures. Some methods are more sophisticated and vary the compression algorithm based upon local statistics (see M. Rabbani and J. P. Jones, "Digital Image Compression Techniques", Vol. T77, SPIE Press, Bellingham, Washington, 1991). However, all of these techniques are applied to the entire image as there is no prior knowledge of image features and image position. The statistics account for correlations between neighboring pixels; they do not account for correlations between groups of pixels in corresponding locations of different images.
- Algorithms have been developed to handle motion sequences of images such as sequential frames of a movie (see Bernd Jahne, "Digital Image Processing: Concepts, Algorithms, and Scientific Application", Springer-Verlag, Berlin, 1991). Images taken close in time have a high degree of correlation between them, and the determination of the differences between images as the movement of image segments leads to large compression ratios. This type of image-to-image correlation works well for images which undergo incremental distortions.
- Other collections of images have image-to-image correlation, but not to the degree that motion sequences possess and do not compress well with motion algorithms. Consider a library of pictures of missing children. For this collection of images, there will be a large degree of image-to-image correlation based upon pixel location as faces share certain common features. This correlation across different images, just like the spatial correlation in a given image, can be exploited to improve compression.
- Analysis of some image libraries yields knowledge of the relative importance of image fidelity based on location in the images. Indeed, maintaining good image fidelity on the face of a child would be more important than fidelity in the hair or shoulders which in turn would be more important than the background. Image compression can be more aggressive in regions where visual image fidelity is less important.
- In many applications, preserving the orientation and quantization of the original image is less important than maintaining the visual information contained within the image. In particular, for images in the missing children library, if the identity of the child in the portrait can be ascertained with equal ease from either the original image or an image processed to aid in compression, then there is no loss in putting the processed image into the library. This principle can be applied to build the library of processed images by putting the original images into a standardized format. For missing children portraits this might include orienting the head of each child to make the eyes horizontal, centering the head relative to the image boundaries. Once constructed, these standardized images will be well compressed as the knowledge of their standardization adds image-to-image correlation.
- Techniques from a compression method known as vector quantization (VQ) are useful in finding correlation between portions of an image. Compression by VQ is well suited for fixed-rate, lossy, high-ratio compression applications (see R. M. Gray, "Vector Quantization", IEEE ASSP Magazine, Vol. 1, April, 1984, pp. 4-29). This method breaks the image into small patches known as "image blocks". These blocks are then matched against other image blocks in a predetermined set of image blocks, commonly known as a codebook. The matching algorithm is commonly the minimum-squared-error (MSE). Since the set of image blocks is predetermined, one of the entries of the set can be referenced by a simple index. As a result a multi-pixel block can be referenced by a single number. Using such a method the number of bits for an image can be budgeted. When a greater number of bits is allocated per image block, either the size of the codebook can be increased or the size of an image block can be made smaller.
- In the prior art, codebooks are formed from a collection of a number of representative images, known as the training set. Images are partitioned into image blocks, and the image blocks are then considered as vectors in a high-dimensional vector space, e.g., for an 8 x 8 image block, the space has 64 dimensions. Image blocks are selected from each image of the training set of images. Once all the vectors are determined from the training set, clusters are found and representative elements are assigned to each cluster. The clusters are selected to minimize the overall combined distances between a member of the training set and the representative for the cluster the member is assigned to. A selection technique is the Linde-Buzo-Gray (LBG) algorithm (see Y. Linde, et al., "An Algorithm For Vector Quantizer Design", IEEE Transactions On Communications, Vol. COM-28, No. 1, January, 1980, pp. 84-95). The number of clusters is determined by the number of bits budgeted for describing the image block. Given n bits, the codebook can contain up to 2^n cluster representatives or code vectors.
- The above referenced patent applications, U.S. Patent Application Serial Number 08/145,051 by Ray, Ellson and Gandhi, and U.S. Patent Application Serial Number 08/145,284 by Ray and Ellson, both describe a system for enabling very high-compression ratios with minimal loss in image quality by taking advantage of standardized features in a library of images.
- These patent applications describe a process for extracting the common features of images in the library and using this as a basis for image standardization. Once standardized into a standardized library image, the image can be compressed and subsequently decompressed into lossy representations of the original library image.
- An overview of the teachings of the above referenced patent applications consist of:
- Select the image features of greatest importance.
- Process a set of representative images from the library to enhance the selected features.
- Locate selected features in the representative images.
- Determine constraints for locations of image features.
- Process the image to meet the image feature location constraints.
- Assign regions of the image based on presence of features or a desired level of image quality.
- Determine the image-to-image correlation of the images for each subregion.
- Allocate capacity for storage of image information for each subregion based on a partition of subregions into image blocks and codebook size.
- Construct codebooks to take advantage of the correlation.
- Process the image to enhance features.
- Locate selected features in the image.
- Standardize the image by processing the image to meet the image feature location constraints.
- Partition the image based on the subregions and their image blocks.
- For each region, determine the entry in the codebook which best approximates the image.
- Store the series of codebook values for each image block as this is the compressed image.
- Extract the codebook values from the series of codebook values.
- Determine the codebook based on corresponding subregion position in the codebook value series.
- Extract an image block based on the codebook value from the above determined codebook.
- Copy the image block into the appropriate image block location in the subregion.
- Continue inserting image blocks until all image block locations are filled in the entire image.
- In order to store a compressed facial image in a manner consistent with international standards for a single track of a magnetic transaction card, the compressed facial image data must be below 500 bits (see ISO 7811). This capacity is compromised further by the existence of certain reserved characters which further reduce the bit capacity of the magnetic track. In U.S. Patent Application Serial Number 08/144,753, Filed October 29, 1993, by Ray and Ellson, and entitled "Method And Apparatus For Data Encoding With Reserved Values" the data limits are shown to have an information-theoretical maximum storage capacity for one of the tracks of 451 bits. That application also describes an encoding method for storing 442 bits of unconstrained data.
- When the target number of bits is extremely small, as is the case for facial image storage in 442 bits, the above described compression/decompression process fails to provide facial images of consistent quality. Consistent image quality is desirable when attempting to identify a person from an image. Additional improvements are necessary in the quality of the compressed images for more demanding verification systems. More specifically, opportunities for improvements exist in codebook formation, image standardization and image block symmetry.
- Previous compression methods used a codebook formation process based on training with the so-called LBG algorithm. This algorithm is one form of a class of algorithms referred to as clustering algorithms. The point of the algorithm is to find a predetermined number of groups within a data set and to select a representative for that group in an optimal way. In the case of the LBG algorithm, the dataset consists of blocks of pixels from a number of images, and the algorithm operates to find a predetermined number of blocks which best represents the dataset. While there are many metrics to measure what is good, the most common one is the minimum distance in a Euclidean sense, i.e., sum of the squares of differences between pixels. The representative of a group is often the centroid of the group, i.e., average of the blocks in the group. The LBG algorithm proceeds by first making an assignment of groups, selecting the centroid for each group, and reassigning the elements of the dataset to a centroid by selecting the closest centroid. The reassignment forms a new grouping and the algorithm can continue in an iterative fashion until convergence occurs.
- In many instances, while the number of groups begins at the desired number, the reassignment process reduces the overall number of groups. The result is sub-optimal, since the addition of another group will give better overall results. Hence, a method for maintaining the number of active groups is desirable. Another problem that can occur is that a group can be reduced to having an extremely small number of members. In the case where a group has a single member, the centroid is that member, and the distance between the member and its centroid is zero, and the cluster remains. The result here is that some groups are large and a better ensemble result could be attained by splitting a large group.
- The preferred method is to maintain the number of active groups by counting the number of groups, and for each group determine the extent, i.e., diameter of the group, and if some groups have no member, split the group of the largest extent into two smaller groups. The problem of excessively small groups is handled by treating the data elements of these groups as outliers and eliminating them from the dataset. This process must be executed with caution, otherwise too many of the data elements will be discarded.
- One type of correlation within facial images is the approximate mirror image symmetry between the left and right sides of the face. Frequently in near front perspective portraits, a large degree of correlation exists between the portions of the face close to the centerline. In particular, the image blocks used to render the part of the face above and below the eyes exhibit a high degree of symmetric correlation. However, along the centerline of the face the level of symmetry drops off due to the variability of the appearance of the nose when viewed from slightly different angles. What is needed is a method to further reduce the number of bits necessary to store a compressed portrait image by exploiting the natural symmetry of the human face in the regions around the facial centerline without imposing deleterious symmetry constraints on the nose.
- While the geometry of the human face may have inherent symmetry, the lighting conditions for portraits may be highly asymmetric. This results in a luminance imbalance between the left and right sides of a human facial portrait. What is needed is a method for balancing the lightness of a human facial portrait in order to achieve a higher degree of facial image portrait standardization and to enhance the natural symmetry of the human facial image.
- With the standardization of both the luminance and the location of image features, specialized codebooks can be developed to better represent the expected image content at a specified location in an image. A method of codebook specialization is described by Sexton in U.S. Patent No. 5,086,480, entitled "Video Image Processing", in which the use of two codebooks is taught. This method finds the best codevector from amongst the two codebooks by exhaustively searching both codebooks, and then flags the codebook in which the best match was found. The net result is a "super-codebook" containing two codebooks of possibly different numbers of codevectors where the flag indicates the codebook selected. Codebook selection does not arise from a priori knowledge of the contents of a region of the image; Sexton calculates which codebook is to be used for every codevector in every image. An opportunity for greater compression is to eliminate the need to store a codebook flag.
- Some areas of a portrait image do not contribute any significant value to the identification of an individual. For instance, shoulder regions are of minimal value to the identification process, and moreover, this region is usually covered by clothing which is highly variable even for the same individual. Since little value is placed in such regions the allocation of bits to encode the image may also be reduced. In the present invention some of these areas have been allocated few if any bits, and the image data is synthesized from image data of neighboring blocks. This permits a greater allocation of bits to encode more important regions.
- One of the preferred methods of the present invention is a method for forming an addressable collection of image features, representing features in a selected class of digitized images comprising the steps of:
- a. standardizing the digitized images from the selected class so as to place at least one expected feature at a normalized orientation and position;
- b. determining expected locations for features represented by the standardized images of the selected class;
- c. extracting from the standardized images of the selected class the image content appearing at the expected locations for each feature; and
- d. forming for each feature an addressable collection of like features from the extracted image content of step c.
- An apparatus embodiment of the present invention for forming an addressable collection of image features by codevectors representing a selected class of images is comprised of:
- means for establishing a standard for the selected class of images including preferred locations for selected image features;
- means for locating at least one selected image feature from within each of the selected class of images;
- means for manipulating each of the selected class of images to form standardized geometric images wherein the at least one selected image feature is positioned according to the established standard;
- means for extracting, from the standardized geometric images, feature blocks, each representing an image feature of interest;
- grouping means for collecting all like feature blocks into one group; and
- means for forming, for each group, an addressable collection of image features by codevectors which represent the respective group.
- The overall purpose of the present invention is to provide a technique for producing facial images, of adequate quality, for use in the identification process of a cardholder of a transaction card. The present invention enables the compression, storage, and retrieval of a portrait image where the compressed portrait image data can be stored within a single track of a transaction card as specified by international standards. The present invention further improves on prior art compression methods by adding an improved training method for codebook generation, a luminance balancing method to standardize the lighting conditions of facial images, and linked image blocks for symmetric facial regions.
- From the foregoing it can be seen that it is a primary object of the present invention to encode a portrait image with a minimum number of bits.
- Another object of the present invention is to provide a decoding technique for quickly decoding a coded image.
- Yet another object of the present invention is to provide an improved coded portrait image.
- Still another object of the present invention is the ability to reject, from encoding, portions of an image that are not of interest.
- A further object of the present invention is to provide a technique that enables the insertion of likely image content in the absence of real image data.
- The above and other objects of the present invention will become more apparent when taken in conjunction with the following description and drawing wherein like characters indicate like parts and which drawings form a part of the present invention disclosure.
- Figure 1 illustrates, in block diagram form, an overview of the process flow of the present invention.
- Figures 2A, 2B, and 2C illustrate a frontal facial portrait that is tilted, rotated to a standardized position, and sized to a standardized size, respectively.
- Figure 3 illustrates, in flow chart form, the method of standardization of the present invention.
- Figure 4A shows the positions and sizes of template elements within a template.
- Figure 4B illustrates, using darker shaded areas, the positions and sizes of the template elements within the template that have a left-to-right flip property.
- Figure 4C illustrates, using darker shaded areas, the positions and sizes of the template elements within the template that have a top-to-bottom flip property.
- Figure 4D illustrates, using darker shaded areas, the positions and sizes of the template elements within the template that are linked.
- Figure 5 illustrates, in table form, the portrait features and their characteristics in the template of a specific embodiment of the present invention.
- Figures 6A and 6B illustrate the template element data record for each element in the template of a specific embodiment of the present invention.
- Figure 7 illustrates, in flow chart form, the method of training to construct codebooks for the present invention.
- Figure 8 illustrates a collection of codevectors associated with each of the feature types used in the specific embodiment of the present invention.
- Figure 9A illustrates, in flow chart form, the method of compression.
- Figure 9B illustrates, in flow chart form, the method of constructing the compressed bit-stream.
- Figure 10 illustrates the codevector numbering and labeling for a compressed image.
- Figure 11 illustrates the compressed bit representation for an image.
- Figure 12 illustrates a transaction card with a data storage means.
- Figures 13A and 13B illustrate, in flow chart form, the method of decompression for the present invention.
- Figure 14 illustrates the codevectors as extracted from the feature type codevector collections with the lighter shaded codevectors having at least one flip property.
- Figure 15 illustrates the codevectors after execution of all flip properties.
- Figure 16 illustrates the final image.
- Figure 17 illustrates a preferred system arrangement on which the method of the present invention may be executed.
- The block diagram of Figure 1 gives an overview of the functionality of the present invention. Within Figure 1, a collection of
training images 10 is loaded into a standardizer 12 which processes the training images 10 into standardized training images of consistent format. The standardized training images are further processed, using a feature template 14, to generate training image blocks, the functionality of which is represented by trainer 16. The trainer 16 forms codebooks 18 consisting of codevectors based on the training image blocks. The codebooks, combined with the feature template 14, become the basis for the compression and decompression process. A standardized image, for example 20, is formed from an original image, such as a portrait, using the functionality of the standardizer 12. When a standardized image 20 that is to be encoded is sent to a data compressor 22, the compressor generates a compressed image 24. The compressed image can then be stored and/or transmitted for future use. A decompression process, performed by a decompressor 26 using the same feature template and codebook data as was used in the compressor 22, provides a decompressed image 28.
- Figure 2A represents an image that is a frontal facial portrait. In this example the face is tilted and translated (off-centered) with respect to the center of the image. Depending on the source of the images, other variations in the positioning and sizing of the face within the image will be encountered. To achieve maximum results with the present invention, the size, position, and orientation of the face are to be standardized. In order to operate upon the image, the image is placed into a digital format as a matrix of pixel values. The digital format (pixel values) of the image is derived by scanning and/or other well known digital processing techniques that are not part of the present invention. The digital image is then displayed on a display device, for example, the
display 202 shown in Figure 17, and a standardization process is applied to form a standardized geometric image. The images are standardized to provide a quality match with the template elements associated with the feature template 14 (to be described in detail later in this description of the invention). The process starts with the image of Figure 2A by locating the centers of the left and right eyes of the image subject. In Figure 2B a new digital image, a partially standardized geometric image, is formed by rotating and translating the original image of Figure 2A, if necessary. The rotation and translation are performed in software using well known image processing operations so as to position the left and right eye centers along a predetermined horizontal axis and equally spaced about a central vertical axis. Figure 2C illustrates the image of Figure 2B sized by a scaling process to form the standardized geometric image.
- Referring now to Figure 3, the process flow for forming the standardized geometrical image is set forth in the left column of flow blocks, commencing with the
operation block 30, labeled "select an image" to identify its function. All of the flow blocks shown in the drawings are labeled according to their function. The selection process is based upon the existence of a frontal facial image of a person whose image is to be processed with the template 14. Included in the selection process is the creation of the digital matrix representation of the image. The digital matrix is next loaded, operation 32, into the system for display to an operator (see the system of Figure 17). As previously discussed, the operator locates the left and right eye points, operation 34, and performs any needed rotation, translation, and rescaling of the facial portion of the image to form the standardized geometrical image, as per operation 36. The standardized geometric image is then stored per operation 38.
- More specifically, in the preferred embodiment of the invention, the standard geometry of the image was set to be an image size of 56 pixels in width and 64 pixels in height, with the eye centers located 28 pixels from the top of the image and 8 pixels on either side of an imaginary vertical center line. Identifying the centers of the left and the right eye is done by displaying the initial image to a human operator, who points to the centers using a cursor driven by a device such as a mouse, digital writing tablet, light pen, or touch-sensitive screen. An alternate approach would be to automate the process using a feature search program. In one embodiment the human operator localizes the eye positions, and a processor fine-tunes the location through an eye-finding search method restricted to a small neighborhood around the operator-specified location.
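- As an illustration of the geometric standardization just described, the following C sketch computes a similarity transform (rotation, scale, and translation) that maps the located eye centers onto the standard positions. Only the 56x64 geometry and eye placement come from the preferred embodiment (eyes on row 28, 8 pixels either side of the center line x = 28, i.e. columns 20 and 36); the data types and function names are assumptions.

```c
#include <math.h>

typedef struct { double x, y; } Point;

/* Standard eye positions in the 56x64 standardized frame. */
static const Point STD_LEFT  = { 20.0, 28.0 };
static const Point STD_RIGHT = { 36.0, 28.0 };

/* Similarity transform p' = s*R*p + t mapping the located
 * eye centers onto the standard positions.                 */
typedef struct { double s, cosA, sinA, tx, ty; } Xform;

static Xform eye_align(Point left, Point right)
{
    Xform X;
    double dx  = right.x - left.x, dy = right.y - left.y;
    double ang = atan2(dy, dx);            /* tilt of the eye line       */
    X.s    = (STD_RIGHT.x - STD_LEFT.x) / hypot(dx, dy); /* 16-px spacing */
    X.cosA = cos(-ang);                    /* rotate the eye line level  */
    X.sinA = sin(-ang);
    /* translate so the left eye lands exactly on STD_LEFT */
    X.tx = STD_LEFT.x - X.s * (X.cosA * left.x - X.sinA * left.y);
    X.ty = STD_LEFT.y - X.s * (X.sinA * left.x + X.cosA * left.y);
    return X;
}

/* Map a source pixel coordinate into the standardized frame. */
static Point apply(const Xform *X, Point p)
{
    Point q;
    q.x = X->s * (X->cosA * p.x - X->sinA * p.y) + X->tx;
    q.y = X->s * (X->sinA * p.x + X->cosA * p.y) + X->ty;
    return q;
}
```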
- With the image size and eye position adjustments made, the luminance standardization procedure takes place. In the right-hand column of blocks, evenly labeled 40-54, the luminance of the standardized geometric image is processed to reduce the variability in the perceived lighting of image subjects, to reduce specular highlights, to adjust skin tones to predetermined values, and to reduce shadows, with the resultant standardized geometric image being stored for future use. There are three spatial scales used to standardize the variations in the luminance of the training images: large, for light level and direction; medium, for correcting asymmetric shadows from side lights; and small, for reduction of specular highlights from glasses, jewelry, and skin. These procedures change the mean luminance level in the image. Variations in luminance level, referred to as contrast, are also adjusted in order to enhance certain features which are useful for identification but are diminished when converting color images to gray-scale.
- The final step of the image standardization process, prior to storage, is to shift the facial mean luminance, i.e., the average lightness found in the general vicinity of the nose, to a preset value. In the preferred embodiment of the present invention for a light skin toned person the preset value is 165, for a medium skin tone 155, and for a dark skin tone the value is 135. The formed standardized digital image is now represented by a storable matrix of pixel values.
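- A minimal C sketch of this mean shift is given below; the rectangle used for the vicinity of the nose is an assumed placeholder, and the preset target would be 165, 155, or 135 as described above.

```c
#define W 56
#define H 64

/* Shift the facial mean luminance to a preset target value.
 * The nose-vicinity rectangle below is an assumption, not a
 * figure from the embodiment.                               */
static void shift_mean_luminance(unsigned char img[H][W], int target)
{
    long sum = 0, n = 0;
    for (int y = 32; y < 48; y++)        /* assumed nose vicinity */
        for (int x = 20; x < 36; x++) {
            sum += img[y][x];
            n++;
        }
    int shift = target - (int)(sum / n);
    for (int y = 0; y < H; y++)          /* apply shift, clip to 0-255 */
        for (int x = 0; x < W; x++) {
            int v = img[y][x] + shift;
            img[y][x] = (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
        }
}
```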
- For more information about the standardization of an image, one skilled in the art may refer to the European Application filed on even date and claiming the priority of US 08/361,352, which is incorporated by reference.
- Figure 4A illustrates the layout of a template 14 that is to be used with the standardized image. The template 14 is partitioned into 64 template elements labeled A through M. The elements are arranged in accordance with 13 corresponding features of a human face; for example, the template elements labeled A correspond to the hair feature at the top of the head and the template elements labeled G correspond to the eyes. The tables of Figures 5, 6A, and 6B provide further description of the remaining template elements. Although the preferred embodiment of the invention is implemented with 64 template elements and 13 features, it is to be understood that these numbers may be varied to suit the situation and are not to be construed as limiting the method of this invention. Also to be noted is that some of the regions of the template are not assigned to any element. These unassigned regions will not have their image content based on retrieval of information from codebooks; the method for assigning image content to these regions is based on the assignment of adjoining regions and is described below. The template size matches the geometry of the standardized image: 56 pixels in width and 64 pixels in height. The sizes of the template elements are based upon the sizes of the facial features they are intended to represent. For example, G is the relative size of an eye in a standardized image, and both instances of element G are positioned in the locations of the eyes in a standardized image.
- Referring now to Figure 4B, the darker shaded template elements are assigned a left-to-right flip property that will be described in detail later.
- Referring to Figure 4C, the darker shaded template elements are assigned a top-to-bottom flip property that will also be described later. It is to be noted that template elements may be assigned more than one property.
- Another property of template elements is linkage. Figure 4D represents, with the darker shaded region, the location of template elements which are part of a link. In this specific embodiment, there exist 7 linked pairs of elements. The linkage is horizontal between each pair of darkened template elements, for example, G at the left of center is linked to G at the right of center. Although 7 linked pairs are shown as the preferred embodiment, linkages can occur in groups larger than two and between any set of like labeled elements.
- As can be observed, the template 14 consists of a sequence of template elements representing a sequence of data records, where each record, in the preferred embodiment, describes the location, size, label, left-to-right property, top-to-bottom property, and linkage. Records with other and/or additional factors may be created as the need arises.
- The template 14 records the distribution and size of the template elements. Each template element has assigned to it a codebook 18 (see Figure 1) and a spatial location corresponding to the image. As previously stated, the template 14 consists of 64 template elements composed of rectangular pixel regions. These template elements are assigned to one of 13 different codebooks, each corresponding to a different type of facial feature. The codebooks 18 are collections of uniformly-sized codevectors of either 4x16, 8x8, 8x5, 4x10, 4x6 or 8x4 pixels. The codevectors which populate the codebooks 18 are produced by the trainer 16 (Figure 1) from the image blocks extracted from the training images.
- In Figure 5, a table describes the properties of the template elements A-M shown in Figures 4A-D. The first listed property of the codebook is the number of codevectors the codebook contains, either 128 or 256. These numbers are both powers of 2; in particular, they are 2^7 and 2^8. This is advantageous because the codebook indexes used to specify a codevector utilize the full range of a 7-bit or 8-bit index. The index length of each codebook is shown in the table of Figure 5 as either 8 or 7. The dimensions of the image blocks for the codevectors are the second and third properties listed and are given as the block width in pixels and the block height in pixels. The number of pixels per block is the product of the block width and block height. The fourth listed property is the number of occurrences of template elements in the feature template of the specific embodiment which are assigned to the codebook. The unique property, the fifth listed, represents how many of these template elements are uniquely chosen (which eliminates one member of each pair of linked template elements). The sixth listed property is the number of bits allocated to store the selected codevectors from the codebook; this is the product of the number of unique template elements and the index length in bits. The sum of the entries in the feature bits row is 442, the total number of bits required to store the compressed image as an unconstrained binary record. The feature template thus carries all the information needed to build both the maps of Figures 4A-D and the table shown in Figure 5. The tables of Figures 6A and 6B represent the maps of Figures 4A-D in data form.
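- The 442-bit total can be checked from the counts given above: 64 template elements less one member of each of the 7 linked pairs leaves 57 unique elements, and with only 7-bit and 8-bit index lengths, the unique split consistent with 442 bits is 43 eight-bit and 14 seven-bit indexes. The 43/14 split is inferred here from the arithmetic, not quoted from Figure 5.

```c
#include <stdio.h>

int main(void)
{
    int unique    = 64 - 7;   /* 64 elements minus 7 linked duplicates */
    int eight_bit = 43;       /* inferred: 8a + 7b = 442, a + b = 57   */
    int seven_bit = 14;
    int total = eight_bit * 8 + seven_bit * 7;
    printf("unique=%d  total bits=%d\n", unique, total);  /* 57, 442 */
    return 0;
}
```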
- Referring now to the operational flow of the trainer 16 shown in Figure 7, the first block, block 700, represents loading the standardized training images. This is a collection of images considered to be representative of the portrait images which will be sent to the compressor 22 to be compressed using the codebooks 18 formed by this training process. First, image blocks are extracted from the image for a selected codebook type, as represented by block 702. Next, the extracted image blocks are oriented based on their symmetry type, such as the top-to-bottom and left-to-right flip properties described above; this is represented by block 704. The next block, decision block 706, checks for the presence of another image in the standardized image training set. In general, 2,000 images are recommended for this training process. If there is another image in the set, the process loops back to block 702, as shown in the diagram. If not, the process proceeds to block 708, where the centroids are initialized as image blocks; in the preferred embodiment, the centroids are initialized as the first sequence of image blocks. Referring next to block 710, each image block is assigned to the nearest centroid determined in block 708. In block 712 the centroids that are closer to each other than a threshold value are merged. For example, if two centroids have been determined to be less than a predetermined distance apart, they are flagged as being too close, in which case they are merged, all of their image blocks are assigned to a single centroid, and the other centroid is left unassigned. Referring to block 714, centroids of large extent are split using the unassigned centroids. This means that if a centroid has a very large distance between the different image blocks assigned to it, the centroid is split into two centroids, where one of the two comes from an unassigned centroid from the previous step. In block 716 the least populated centroids are unassigned and then used to split the remaining centroids that have been determined to be of large extent. In block 718 the closest centroids are merged together. Once again taking the newly-unassigned centroids resulting from the merge process, in block 720 more centroids of large extent are split. Referring to block 722, a new position is found for each of the centroids. This step is required because, in shuffling the assignment of image blocks among the various centroids, the location of the center of the image blocks assigned to each centroid has actually changed; a new position for the center of each centroid's group is calculated to determine how the centroid has moved through the image block reassignment process. Referring to block 724, each image block is again assigned to the nearest centroid; since the centroids have now moved, it is possible that some of the image blocks assigned to one centroid may in fact be closer to another centroid.
- Referring to block 726, the centroids are reordered from the most to the least populated. This reordering accelerates the reassignment process in future iterations. In block 728, a convergence test is performed. If convergence has not been reached, the process returns to block 712, where centroids that are too close together are merged. If convergence has been reached, the process proceeds to block 730, where the centroids are stored as the codevectors of the newly-formed codebooks.
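- A condensed C sketch of this training iteration is given below. It is a simplification, not the code of Appendices A-E; the dataset size, codebook size, block dimension, and merge threshold are assumed placeholders.

```c
#include <float.h>
#include <string.h>

#define NB  8192   /* assumed number of training image blocks     */
#define NC  256    /* codebook size; 256 or 128 per Figure 5      */
#define DIM 64     /* assumed pixels per block, e.g. an 8x8 block */

static float blocks[NB][DIM];   /* oriented training image blocks    */
static float cent[NC][DIM];     /* centroids, the future codevectors */
static int   assign_of[NB];

static float d2(const float *a, const float *b)
{
    float s = 0;
    for (int k = 0; k < DIM; k++) { float d = a[k] - b[k]; s += d * d; }
    return s;
}

/* Assign every block to its nearest centroid (blocks 710 and 724). */
static void assign_blocks(void)
{
    for (int b = 0; b < NB; b++) {
        int best = 0; float bd = FLT_MAX;
        for (int c = 0; c < NC; c++) {
            float d = d2(blocks[b], cent[c]);
            if (d < bd) { bd = d; best = c; }
        }
        assign_of[b] = best;
    }
}

/* Move each centroid to the mean of its members (block 722). */
static void update_centroids(void)
{
    static float sum[NC][DIM];
    static int   n[NC];
    memset(sum, 0, sizeof sum);
    memset(n, 0, sizeof n);
    for (int b = 0; b < NB; b++) {
        int c = assign_of[b];
        n[c]++;
        for (int k = 0; k < DIM; k++) sum[c][k] += blocks[b][k];
    }
    for (int c = 0; c < NC; c++)
        if (n[c] > 0)
            for (int k = 0; k < DIM; k++) cent[c][k] = sum[c][k] / n[c];
}

/* One pass of the Figure 7 loop: assign, merge over-close centroid
 * pairs, re-seed each freed centroid by splitting the centroid of
 * largest extent, then recompute positions (blocks 712 through 724). */
static void train_pass(float merge_thresh)
{
    float extent[NC];
    assign_blocks();

    /* extent of each centroid: worst distance to an assigned block */
    for (int c = 0; c < NC; c++) extent[c] = 0;
    for (int b = 0; b < NB; b++)
        if (d2(blocks[b], cent[assign_of[b]]) > extent[assign_of[b]])
            extent[assign_of[b]] = d2(blocks[b], cent[assign_of[b]]);

    for (int a = 0; a < NC; a++)
        for (int b = a + 1; b < NC; b++)
            if (d2(cent[a], cent[b]) < merge_thresh) {
                /* b is freed (its blocks fall to a on the next pass)
                 * and reused at once to split the widest centroid.  */
                int widest = 0;
                for (int c = 0; c < NC; c++)
                    if (extent[c] > extent[widest]) widest = c;
                memcpy(cent[b], cent[widest], sizeof cent[b]);
                cent[b][0] += 0.5f;      /* perturb so the halves part */
                extent[widest] = 0;      /* avoid splitting it twice   */
            }

    assign_blocks();
    update_centroids();
}
```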
- Figure 9A, illustrates in flow chart form, the process for finding the best-matching codevectors for the various template elements of an image. The process begins with the loading of a standardized image or the loading of an image to be standardized, as shown in
blocks block 902. - In
block 904 the next image block that corresponds to the first template element is extracted from the standardized image. The extracted image block is then oriented as shown inblock 906, based on the symmetry type of the horizontal flip and/or the vertical flip property. Inblock 908 the index of the best-matched codevector from the codebook is found by comparing the image block with all the codevectors in the codebook that correspond to the feature type for that template element. Thecomparative block 910, determines whether or not the block is linked, as based on the information stored in the template. If the block is linked the flow proceeds to block 914 the value is stored which represents how good of a match occurs. In the specific embodiment of the present invention the mean-square-error comparison between these two image blocks and the codevector is used as a measure of the goodness of the match though other measures of goodness can be readily conceived. Inblock 916 the value from the match, as in the preferred embodiment, is compared with other link blocks in the group. Inblock 918 the codevector with the best value is selected, in this case, the lowest value for the mean-square-error test. This vector is then used inblock 912 as the index of the best-match codevector from that codebook. If an image block is not linked proceed directly to block 912. Fromblock 912 the process proceeds to block 920, another comparative block which determines whether this is the last element in the template. If it is not, the process returns to block 904 and extracts the next image block corresponding to the next template element. If it was the final image block then the process proceeds building the compressed image bit stream inblock 922. The building process is described in Figure 9B. - Proceed now to Figure 9B, where in
block 950 the bit stream pointer, BSP, is set to zero and the template element pointer, TP, is also set to zero. Inblock 952 the template element number TP is retrieved.Block 954 is a decision block wherein it is determined if the template element is linked. If the template element is not linked the process proceeds to block 958, to be described later. If the template element is linked the process proceeds to thefirst decision block 956, to determine if this is a first occurrence of the linkage group. If the element is the first occurrence of the linkage group the process proceeds to block 958, to be described below. If it is not the first occurrence of the linkage group the process proceeds to block 966, to be described below. - Referring now to block 958 a number of bits from the template element, labeled as "BN" are retrieved. This number of bits is used to encode the codevector index of the template element. In the preferred embodiment these are either 7 or 8 bits per template element. Proceeding down to
box 960, the codevector index is encoded with BN bits. - At
block 962, the number of bits, BN, starting at bit location BSP are inserted. Atblock 964 the bit stream pointer BSP is incremented by BN, and inblock 966, the template element pointer is incremented by one.Decision point 968 asks if the template elements have been exhausted and, if the answer is "yes", the bit stream is complete perblock 970. If the answer is "no" and more template elements exist, the process loops back to block 952 and the process continues. - Figure 10 illustrates the results for a specific standardized image of the best-match codevector comparison process described in Figures 9A and 9B. In Figure 10 each of the template elements has both a letter and a number assigned to it. The letter indicates the feature type corresponding to the template element, and the number indicates the index of the best-matched codevector according to the embodied measure of goodness.
- The table of Figure 11 shows the best codevector number, with the numbers arranged in a sequence, from the left column to the right column and, heading down, that corresponds to the sequence of the template elements shown in the tables of Figure 6A and 6B. It is important to maintain the sequence of template elements consistent with Figures 6A and 6B so that the process shown in Figure 9B has the bit stream pointer interpreting the compressed image bit stream correctly. The index length for each of the best codevector numbers shown in Figure 11 corresponds to the feature type index length from the table shown in Figure 5 as the index length. The representation of the best codevector number is then shown as a binary representation, in the bit representation column, with the bit representation having a length corresponding to the index length. In the cases where the binary number does not have sufficient digits it is padded with leading zeroes in order to have a binary length corresponding to the index length in the preceding column of its bit representation. If the binary digits in the bit representation columns are sequenced, starting with the left column and proceeding down and then into the right column, the resulting 442-bit binary number would correspond to the final output compressed image. This would correspond to the
compressed image 24 in Figure 1. - Figure 12 illustrates a
transaction card 120 with a means of digital storage. This storage means can be accomplished by a variety of means such as magnetic encoding, or by bar code patterns. In the case of the magnetic storage, a common method is for themagnetic storage area 122 to have multiple tracks as indicated in Figure 12 byTrack 1,Track 2, andTrack 3.. - Referring now to Figure 13A. In Figure 13A the compressed bit stream is decompressed to form a decompressed image. The bit stream pointer BSP is initialized to zero and the template element pointer TP is set to zero in
block 302. The process proceeds to block 304 wherein the template element TP is retrieved. In block 306 a decision is made as to whether or not the template element is linked. If it is not linked the process proceeds to block 310. If it is linked the process proceeds todecision block 308.Decision block 308 determines if this is the first occurrence of a linked group. If it is the process proceeds to block 310. If it is not the process proceeds to block 312. - Referring now to block 310, the number of bits BN for the codevector index and the codebook type are extracted from the template element. The bits, BSP through BSP + BN - 1, are extracted from the compressed bit stream in
block 314. The bit stream pointer is incremented by BN inblock 316. The process then flows to block 318. Retracing the flow back to block 312, when there is a link group which is not the first occurrence of that group, the codevector index from the previous occurrence of the link group is copied. That previous occurrence is moved intoblock 318. Inblock 318 the index codevector from the indicated codebook type is retrieved. This codevector represents a feature type from a particular image. The template element indicates how the blocks should be oriented. Inblock 320 we orient the blocks as indicated by the template elements. If indicated, a horizontal, left-to-right flip or a vertical, top-to-bottom flip, is executed. Inblock 322 the block in the location indicated by the template element is inserted into the image. Inblock 324, the template pointer TP is incremented by one. Thedecision block 326 determines if all the template elements have been used. If not, the process returns to block 304 and continues. If all the template have been used the process moves to point A in Figure 13B. - In Figure 13B the process for building the index is shown. Next the process constructs those regions with no template elements assigned to them. Plausible pixels are used for those regions, per
block 328. Image blocks inblock 330 have seams between them which have to be smoothed, and this process, is represented inblock 330. Smoothing of the seams is achieved by taking averages across the seams, both horizontally and vertically. Inblock 332 the contrast of the reconstructed image is enhanced. This is done by making sure that the full dynamic range of the image 0-255 is used in the reconstructed image. In the preferred embodiment this is done by a simple linear re-scaling. In block 334 spatially-dependent random noise is added. This is done relative to the center of the image with the center of the image being very minor and the image noise on the periphery being much more accentuated. Inblock 336 the reconstructed image is outputted. The outputted image corresponds to the decompressed image of Figure 1, block 28. - Figures 4B and 4C indicated which template elements possessed the left-to-right and top-to-bottom flipping property, respectively. The template elements with these flip properties are also indicated with the TRUE/FALSE flags in the tables of Figure 6A and 6B. The codevectors in Figure 14 that are to be flipped are identified by diagonal lines through the boxes representing pixels. Figure 15 represents the application of the flipping properties to the codevectors in Figure 14, where all codevectors in Figure 14 which correspond to the darkened template elements in Figure 4B are flipped left-to-right and all codevectors in Figure 14 which correspond to the darkened template elements in Figure 4C are flipped top-to-bottom. It should be noted that some template elements undergo both flips in the transformation of the codevectors from Figure 14 into the codevector orientation of Figure 15 and that the flips take place within the associated element.
- The next step is the formation, by image processing operations, of a final image, shown in Figure 16, based on the oriented codevector mosaic of Figure 15. The mosaic of Figure 15 may have certain visually objectionable artifacts as a result of its construction from codevectors. These artifacts can be diminished with some combination of image processing algorithms.
- In the preferred embodiment, a combination of well known image processing operations is applied, including smoothing across the codevector boundaries, contrast enhancement, linear interpolation to fill missing image regions, and the addition of spatially dependent random noise. The smoothing operation is described by considering three successive pixels, P1, P2, and P3, where P1 and P2 are in one codevector and P3 is in an adjoining codevector. The pixel P2 is replaced by an average taken across the codevector boundary of the pixels P1, P2, and P3.
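- Since the exact replacement formula for P2 is not reproduced in this text, the following C sketch assumes a plain three-pixel average across the boundary, consistent with the seam-averaging described for block 330.

```c
#define W 56
#define H 64

/* Smooth one horizontal seam between two rows of codevectors.
 * The exact replacement formula of the embodiment is not given
 * in this text; a plain three-pixel average is assumed here.   */
static void smooth_row_seam(unsigned char img[H][W], int seam_y)
{
    for (int x = 0; x < W; x++) {
        int p1 = img[seam_y - 2][x];   /* P1, P2 inside one block      */
        int p2 = img[seam_y - 1][x];
        int p3 = img[seam_y][x];       /* P3 in the adjoining block    */
        img[seam_y - 1][x] = (unsigned char)((p1 + p2 + p3) / 3);
    }
}
```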
- The regions of the feature template not corresponding to any template element are filled using linear interpolation. For each region, the known values of the boundary pixels are used to calculate an average pixel value. The unknown corner opposite the known boundary is set to this average value. The remaining unassigned interior pixels are calculated by linear interpolation. In the preferred embodiment of the present invention, there are four such unassigned regions, each located in a corner of the feature template.
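- A C sketch of this corner fill is given below for the top-left region; the traversal order and the use of the region's border row and column are assumptions beyond what is described above.

```c
#define W 56
#define H 64

/* Fill an unassigned top-left region (rows 0..h-1, cols 0..w-1)
 * per the described scheme: average the known boundary (row h and
 * column w), set the opposite corner (0,0) to that average, then
 * linearly interpolate the remaining pixels.                      */
static void fill_top_left(unsigned char img[H][W], int h, int w)
{
    long sum = 0; int n = 0;
    for (int x = 0; x <= w; x++) { sum += img[h][x]; n++; }  /* row below */
    for (int y = 0; y <  h; y++) { sum += img[y][w]; n++; }  /* col right */
    img[0][0] = (unsigned char)(sum / n);  /* opposite corner = average  */

    for (int x = 1; x < w; x++)            /* top border row             */
        img[0][x] = (unsigned char)(img[0][0]
                   + (img[0][w] - img[0][0]) * x / w);
    for (int y = 1; y < h; y++)            /* left border column         */
        img[y][0] = (unsigned char)(img[0][0]
                   + (img[h][0] - img[0][0]) * y / h);
    for (int y = 1; y < h; y++)            /* interior, row by row       */
        for (int x = 1; x < w; x++)
            img[y][x] = (unsigned char)(img[y][0]
                       + (img[y][w] - img[y][0]) * x / w);
}
```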
- The spatially dependent random noise n(i,j) to be added is determined from:
v = the noise magnitude,
i = the column of the affected pixel,
j = the row of the affected pixel, and
rand, a pseudo-random, floating-point number in the range (-1, 1).
The value n(i,j) is added to the pixel at location (i,j). If the resultant pixel value is greater than 255 it is clipped to 255, and if it is less than zero it is set to 0. Figure 16 represents an image after processing by these operations. It should be understood that other image processing operations may be used in other situations, and the preferred embodiment should not be considered limiting.
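- The exact expression for n(i,j) is likewise not reproduced in this text; the following C sketch assumes a radial weighting that is near zero at the image center and strongest at the periphery, as described for block 334.

```c
#include <math.h>
#include <stdlib.h>

#define W 56
#define H 64

static double frand(void)                /* pseudo-random in (-1, 1) */
{
    return 2.0 * rand() / RAND_MAX - 1.0;
}

/* Add spatially dependent random noise of magnitude v.  The radial
 * ramp below is an assumption; only the clipping behavior and the
 * meanings of v, i, j, and rand come from the description above.  */
static void add_noise(unsigned char img[H][W], double v)
{
    double cx = W / 2.0, cy = H / 2.0;
    double rmax = sqrt(cx * cx + cy * cy);
    for (int j = 0; j < H; j++)
        for (int i = 0; i < W; i++) {
            double r = sqrt((i - cx) * (i - cx) + (j - cy) * (j - cy));
            int n = (int)(v * (r / rmax) * frand());   /* n(i,j) */
            int p = img[j][i] + n;
            img[j][i] = (unsigned char)(p < 0 ? 0 : p > 255 ? 255 : p);
        }
}
```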
- Figure 17 illustrates an apparatus 100 on which the present method may be implemented. The apparatus 100 is comprised of a means 102 for converting a non-digital image, such as a photo print 80 or a negative image 82, into a digital representation of an image. Generally, the conversion is performed with a scanner 104 that outputs signals representing pixel values in analog form. An analog-to-digital converter 106 is then used to convert the analog pixel values to digital values representative of the scanned image. Other sources of digital images may be input directly into a workstation 200. In the preferred apparatus embodiment of the invention the workstation 200 is a SUN SPARC 10, running UNIX as the operating system, with the method coded in the standard C programming language. The program portion of the present invention is set forth in full in the attached Appendices A-E. Display of the digital images is by way of the display 202 operating under software, keyboard 204, and mouse 206 control. Digital images may also be introduced into the system by means of a CD reader 208 or other like device. The templates created by the present method and apparatus may be downloaded to a CD writer 210 for storage on a CD, printed as hard copy by a printer 212, downloaded to a transaction card writer 216 to be encoded onto the data storage area, or transmitted for further processing or storage at remote locations by means of a modem 214 and transmission lines.
- The number of bits required to represent a compressed image is substantially reduced.
- Reducing the image-to-image variation of skin mean luminance improves the ability of codebooks in the facial region to provide well matched blocks.
- The use of linked template elements improves image quality without increasing the number of bits required for storage of the compressed image.
- Allows regions of the image not associated with any template element to be reconstructed based on neighboring template elements.
- Provides higher quality codebooks through improved training procedure.
- While there has been shown what are considered to be the preferred embodiments of the invention, it will be manifest that many changes and modifications may be made therein without departing from the essential spirit of the invention. It is intended, therefore, in the annexed claims, to cover all such changes and modifications as may fall within the true scope of the invention.
-
Claims (10)
- 1. A method for forming an addressable collection of image features, representing features in a selected class of digitized images, comprising the steps of:
a. standardizing the digitized images from the selected class so as to place at least one expected feature at a normalized orientation and position;
b. determining expected locations for features represented by the standardized images of the selected class;
c. extracting from the standardized images of the selected class the image content appearing at the expected locations for each feature; and
d. forming for each feature an addressable collection of like features from the extracted image content of step c.
- 2. The method according to Claim 1 wherein the step of standardizing is comprised of the steps of:
locating at least two features appearing in at least one of the digitized images forming the selected class; and
positioning the located features at predetermined locations.
- 3. The method according to Claim 1 wherein the step of standardizing is further comprised of the step of:
maintaining the relative orientation and positions of the features in the digitized images when performing the standardizing step a.
- 4. The method according to Claim 1 wherein the step of standardizing is comprised of the steps of:
representing each image feature of the digitized image as a collection of pixel values; and
adjusting at least one collection of pixel values representing an image feature to have a predetermined average value.
- 5. A method for forming an addressable collection of image features by codevectors representing a selected class of images comprising the steps of:
a. establishing a standard for the selected class of images including preferred locations for selected image features;
b. selecting one image from the selected class of images;
c. locating at least one selected image feature within the selected image;
d. manipulating the selected image to form a standardized geometric image wherein the at least one selected image feature is positioned according to the standard established in step a;
e. extracting from the standardized geometric image, feature blocks, each representing an image feature of interest;
f. repeating steps b through e for the remainder of the plurality of images in the selected class;
g. collecting all like feature blocks into one group; and
h. forming for each group an addressable collection of feature codevectors which represents the respective group.
- 6. The method according to Claim 5 and further comprising the step of:
forming a template element representing the association of the position and dimensions of a feature block within the standardized geometric image with one of said addressable collection of feature codevectors.
- 7. The method according to Claim 6 and further comprising the steps of:
associating a vertical symmetry property with at least one template element; and
flipping vertically all extracted image blocks for template elements with the vertical symmetry property after the extracting step.
- 8. The method according to Claim 6 and further comprising the steps of:
associating a horizontal symmetry property with at least one template element; and
flipping horizontally all extracted image blocks for template elements with the horizontal symmetry property after the extracting step.
- 9. The method according to Claim 5 and further comprising the compression of a featured image by the steps of:
i. applying steps c through e to the featured image;
ii. comparing the extracted feature blocks from step i with the addressable collection of feature codevectors associated with the extracted feature; and
iii. recording the address of the best match for each feature block.
- 10. The method according to Claim 9 and further comprising the step of:
forming a sequence of template elements each representing the association of the position and dimensions of a feature block within the standardized geometric image with one of said addressable collection of feature codevectors.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US36133894A | 1994-12-21 | 1994-12-21 | |
US361338 | 1994-12-21 |
Publications (3)
Publication Number | Publication Date |
---|---|
EP0718807A2 (en) | 1996-06-26
EP0718807A3 (en) | 1996-07-10
EP0718807B1 (en) | 1998-08-26
Family
ID=23421636
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP95420341A Expired - Lifetime EP0718807B1 (en) | 1994-12-21 | 1995-12-05 | Method for compressing and decompressing standardized portrait images |
Country Status (7)
Country | Link |
---|---|
EP (1) | EP0718807B1 (en) |
JP (1) | JPH08265753A (en) |
CN (1) | CN1150282A (en) |
AR (1) | AR000238A1 (en) |
BR (1) | BR9505965A (en) |
DE (1) | DE69504289T2 (en) |
ZA (1) | ZA959491B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2319380A (en) * | 1996-11-15 | 1998-05-20 | Daewoo Electronics Co Ltd | User identification for set-top box |
EP1091560A1 (en) * | 1999-10-05 | 2001-04-11 | Hewlett-Packard Company | Method and apparatus for scanning oversized documents |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100369049C (en) * | 2005-02-18 | 2008-02-13 | 富士通株式会社 | Precise dividing device and method for grayscale character |
CN103020576B (en) * | 2011-09-20 | 2015-09-30 | 华晶科技股份有限公司 | Characteristic data compression device, multi-direction human face detecting system and method for detecting thereof |
JP5962937B2 (en) * | 2012-04-20 | 2016-08-03 | ホアウェイ・テクノロジーズ・カンパニー・リミテッド | Image processing method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1988009101A1 (en) * | 1987-05-06 | 1988-11-17 | British Telecommunications Public Limited Company | Video image processing |
WO1992002000A1 (en) * | 1990-07-17 | 1992-02-06 | British Telecommunications Public Limited Company | A method of processing an image |
US5331544A (en) * | 1992-04-23 | 1994-07-19 | A. C. Nielsen Company | Market research method and system for collecting retail store and shopper market research data |
EP0651354A2 (en) * | 1993-10-29 | 1995-05-03 | Eastman Kodak Company | Compression method for a standardized image library |
EP0651355A2 (en) * | 1993-10-29 | 1995-05-03 | Eastman Kodak Company | Method and apparatus for image compression, storage and retrieval on magnetic transaction cards |
-
1995
- 1995-11-08 ZA ZA959491A patent/ZA959491B/en unknown
- 1995-11-30 AR AR33446795A patent/AR000238A1/en unknown
- 1995-12-05 DE DE69504289T patent/DE69504289T2/en not_active Expired - Fee Related
- 1995-12-05 EP EP95420341A patent/EP0718807B1/en not_active Expired - Lifetime
- 1995-12-20 BR BR9505965A patent/BR9505965A/en not_active Application Discontinuation
- 1995-12-20 JP JP7332355A patent/JPH08265753A/en active Pending
- 1995-12-21 CN CN95121165A patent/CN1150282A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1988009101A1 (en) * | 1987-05-06 | 1988-11-17 | British Telecommunications Public Limited Company | Video image processing |
WO1992002000A1 (en) * | 1990-07-17 | 1992-02-06 | British Telecommunications Public Limited Company | A method of processing an image |
US5331544A (en) * | 1992-04-23 | 1994-07-19 | A. C. Nielsen Company | Market research method and system for collecting retail store and shopper market research data |
EP0651354A2 (en) * | 1993-10-29 | 1995-05-03 | Eastman Kodak Company | Compression method for a standardized image library |
EP0651355A2 (en) * | 1993-10-29 | 1995-05-03 | Eastman Kodak Company | Method and apparatus for image compression, storage and retrieval on magnetic transaction cards |
Non-Patent Citations (2)
Title |
---|
SUTHERLAND ET AL: "Automatic Face Recognition", First International Conference on Intelligent Systems Engineering, Edinburgh, August 1992, pages 29-34, XP000565073 *
RABBANI ET AL: "Digital Image Compression Techniques", SPIE Press, Bellingham, Washington, 1991, XP000566680, Chapter 12 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2319380A (en) * | 1996-11-15 | 1998-05-20 | Daewoo Electronics Co Ltd | User identification for set-top box |
GB2319380B (en) * | 1996-11-15 | 1998-09-30 | Daewoo Electronics Co Ltd | Method and apparatus for controlling a descrambling device |
EP1091560A1 (en) * | 1999-10-05 | 2001-04-11 | Hewlett-Packard Company | Method and apparatus for scanning oversized documents |
US6975434B1 (en) | 1999-10-05 | 2005-12-13 | Hewlett-Packard Development Company, L.P. | Method and apparatus for scanning oversized documents |
Also Published As
Publication number | Publication date |
---|---|
DE69504289D1 (en) | 1998-10-01 |
ZA959491B (en) | 1996-06-29 |
AR000238A1 (en) | 1997-05-28 |
CN1150282A (en) | 1997-05-21 |
JPH08265753A (en) | 1996-10-11 |
BR9505965A (en) | 1997-12-23 |
DE69504289T2 (en) | 1999-03-04 |
EP0718807B1 (en) | 1998-08-26 |
EP0718807A3 (en) | 1996-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5574573A (en) | Compression method for a standardized image library | |
US6438268B1 (en) | Vector quantization codebook generation method | |
JP2968582B2 (en) | Method and apparatus for processing digital data | |
US7295704B2 (en) | Image compression usable with animated images | |
US5450504A (en) | Method for finding a most likely matching of a target facial image in a data base of facial images | |
US5802203A (en) | Image segmentation using robust mixture models | |
EP1418507A2 (en) | Method and system for multiple cue integration | |
EP0651355A2 (en) | Method and apparatus for image compression, storage and retrieval on magnetic transaction cards | |
JPH1055444A (en) | Recognition of face using feature vector with dct as base | |
Elad et al. | Low bit-rate compression of facial images | |
Bing et al. | An adjustable algorithm for color quantization | |
CN108765261A (en) | Image conversion method and device, electronic equipment, computer storage media, program | |
JPH0879536A (en) | Picture processing method | |
EP0718807B1 (en) | Method for compressing and decompressing standardized portrait images | |
CN114596374A (en) | Image compression method and device | |
US5394191A (en) | Methods for synthesis of texture signals and for transmission and/or storage of said signals, and devices and systems for performing said methods | |
KR20040034342A (en) | Method and apparatus for extracting feature vector for use in face recognition and retrieval | |
US5727089A (en) | Method and apparatus for multiple quality transaction card images | |
EP0718788A2 (en) | Method and apparatus for the formation of standardized image templates | |
Fu | Color image quality measures and retrieval | |
Matsui et al. | Feature selection by genetic algorithm for MRI segmentation | |
WO2006022741A1 (en) | Image compression usable with animated images | |
JPH0998295A (en) | Data compression method for color image | |
Kim et al. | Color image vector quantization using an enhanced self-organizing neural network | |
Chang et al. | Image coding by a neural net classification process |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
PUAL | Search report despatched |
Free format text: ORIGINAL CODE: 0009013 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): DE FR GB |
|
AK | Designated contracting states |
Kind code of ref document: A3 Designated state(s): DE FR GB |
|
17P | Request for examination filed |
Effective date: 19961220 |
|
17Q | First examination report despatched |
Effective date: 19970321 |
|
GRAG | Despatch of communication of intention to grant |
Free format text: ORIGINAL CODE: EPIDOS AGRA |
|
GRAG | Despatch of communication of intention to grant |
Free format text: ORIGINAL CODE: EPIDOS AGRA |
|
GRAH | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOS IGRA |
|
GRAH | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOS IGRA |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): DE FR GB |
|
REF | Corresponds to: |
Ref document number: 69504289 Country of ref document: DE Date of ref document: 19981001 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 19981203 Year of fee payment: 4 |
|
ET | Fr: translation filed | ||
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 19981230 Year of fee payment: 4 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed | ||
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 19991205 |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 19991205 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20000831 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20001003 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: ST |
|
REG | Reference to a national code |
Ref country code: HK Ref legal event code: WD Ref document number: 1010760 Country of ref document: HK |