EP0718788A2 - Method and apparatus for producing standardized image templates - Google Patents
Method and apparatus for producing standardized image templates
- Publication number
- EP0718788A2 (application EP95420342A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- template
- image
- feature
- features
- elements
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/30—Individual registration on entry or exit not involving the use of a pass
- G07C9/32—Individual registration on entry or exit not involving the use of a pass in combination with an identity check
- G07C9/37—Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
Definitions
- the present application is related to:
- EP-A-651,355 entitled "Method And Apparatus For Image Compression, Storage and Retrieval On Magnetic Transaction Cards".
- EP-A-651,354 entitled "Compression Method For A Standardized Image Library".
- the present invention relates to the field of digital image processing and more particularly to a method and associated apparatus for forming digital standardized image feature templates for facilitating a reduction in the number of bits needed to adequately represent an image.
- preserving the orientation and quantization of the original image is less important than maintaining the visual information contained within the image.
- identity of the child in the portrait can be ascertained with equal ease from either the original image or an image processed to aid in compression, then there is no loss in putting the processed image into the library.
- This principle can be applied to build the library of processed images by putting the original images into a standardized format. For missing children portraits this might include orienting the head of each child to make the eyes horizontal, centering the head relative to the image boundaries. Once constructed, these standardized images will be well compressed as the knowledge of their standardization adds image-to-image correlation.
- VQ: vector quantization
- the number of bits for an image can be budgeted.
- the size of the codebook can be increased.
- the number of image blocks can be increased (and hence the size of each block reduced).
- Codebooks are determined by first forming a collection of representative images, known as the training image set. Next, images are partitioned into image blocks, and the image blocks are then considered as vectors in a high-dimensional vector space, i.e., for an 8 x 8 image block, the space has 64 dimensions. Image blocks are selected from predetermined regions within each image of the training set. Once all the vectors are determined from the training set, clusters are found and a representative element is assigned to each cluster. The clusters are selected to minimize the overall combined distance between each member of the training set and the representative of the cluster to which that member is assigned. One selection technique is the Linde-Buzo-Gray (LBG) algorithm (see Y. Linde, A. Buzo, and R. M. Gray, "An Algorithm for Vector Quantizer Design," IEEE Transactions on Communications, Jan. 1980).
- LBG: Linde-Buzo-Gray
- the number of clusters is determined by the number of bits budgeted for describing the image block. Given n bits, the codebook can contain up to 2^n cluster representatives or code vectors.
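The clustering described above can be sketched in a few lines. This is an illustrative LBG-style (k-means) routine, not the patent's implementation; the toy 4-dimensional training vectors and the deterministic initialization are assumptions made for the example.

```python
def train_codebook(vectors, n_bits, iters=10):
    """LBG-style clustering: derive up to 2**n_bits code vectors
    from a set of training vectors (image blocks viewed as vectors)."""
    k = 2 ** n_bits
    # deterministic initialization: evenly spaced training vectors
    codebook = [vectors[i * len(vectors) // k] for i in range(k)]
    for _ in range(iters):
        # assignment step: each vector joins its nearest code vector
        clusters = [[] for _ in range(k)]
        for v in vectors:
            best = min(range(k),
                       key=lambda c: sum((a - b) ** 2 for a, b in zip(v, codebook[c])))
            clusters[best].append(v)
        # update step: each code vector moves to its cluster centroid
        for c, members in enumerate(clusters):
            if members:
                codebook[c] = tuple(sum(col) / len(members) for col in zip(*members))
    return codebook

# toy training set of 4-dimensional "image blocks" with two natural clusters
training = [(0, 0, 0, 0), (1, 1, 0, 0), (10, 10, 10, 10), (11, 9, 10, 10)] * 5
cb = train_codebook(training, n_bits=1)   # a 1-bit budget gives 2 code vectors
```

With a budget of n = 1 bit the routine settles on the two natural cluster centroids of the toy set.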
- the lighting conditions for portraits may be highly asymmetric. This results in a luminance imbalance between the left and right sides of a human facial portrait. What is needed is a method for balancing the lightness of a human facial portrait in order to achieve a higher degree of facial image portrait standardization and enhance the natural symmetry of the human facial image.
- codebooks can be developed to better represent the expected image content at a specified location in an image.
- This compression method finds the best codevector from amongst two codebooks by exhaustively searching both codebooks, and then it flags the codebook in which the best match was found. The net result is a "super-codebook" containing two codebooks of possibly different numbers of codevectors where the flag indicates the codebook selected. Codebook selection does not arise from a priori knowledge of the contents of a region of the image; Sexton calculates which codebook to use for every codevector in every image. An opportunity for greater compression is to eliminate the need to store the codebook flag.
- Some areas of the image do not contribute any significant value to identifying an individual. For instance, shoulder regions are of minimal value to the identification process, and moreover, this region is usually covered by clothing which is highly variable even for the same individual. Since little value is placed in such regions the allocation of bits to encode the image should also be reduced. In the present invention some of these areas have been allocated few if any bits, and the image data is synthesized from image data of neighboring blocks. This permits a greater allocation of bits to encode more important regions.
- the present technique facilitates the formation of an image feature template that finds particular utility in the compression and decompression of like-featured images. More specifically, the feature template enables the compression and decompression of large collections of images which have consistent sets of like image features that can be aligned and scaled to position these features into well correlated regions.
- the feature template of the present invention comprises:
- the preferred methodology for forming a feature template comprises the steps of:
- Figures 1A, 1B, and 1C illustrate a frontal facial portrait that is tilted, rotated and translated to a standardized position, and sized to a standardized size, respectively.
- Figure 2 illustrates, in flow chart form, the method for standardizing an image.
- Figure 3A shows the positions and sizes of the template elements that form a template.
- Figure 3B illustrates, by darker shaded areas, the positions and sizes of the template elements of the template which have a left-to-right flip property.
- Figure 3C illustrates, by darker shaded areas, the positions and sizes of the template elements of the template which have a top-to-bottom flip property.
- Figure 3D illustrates, by darker shaded areas, the positions and sizes of the template elements of the template which are linked.
- Figure 4 illustrates, in table form, the portrait features, their associated labels, and their characteristics.
- Figures 5A and 5B illustrate the template element data record for the elements in the template illustrated in Figures 3A-3D.
- Figure 6 illustrates a collection of tiles associated with each of the feature types A-M used in the specific embodiment of the present invention.
- Figure 7 illustrates the tile numbering and labeling for a compressed image.
- Figure 8 illustrates the tiles as extracted from the feature type tile collections with the lighter shaded tiles having at least one flip property.
- Figure 9 illustrates the tiles after execution of all flip properties.
- Figure 10 illustrates the final image.
- Figure 11 illustrates a preferred apparatus arrangement on which the method of the present invention may be executed.
- Figure 1A represents an image that is a front facial portrait.
- the face is tilted and translated with respect to the center of the image.
- other variations in the positioning and sizing of the face within the borders of the image may be encountered.
- the size, position and orientation of the face is to be standardized.
- the image is placed into a digital format, generally as a matrix of pixel values.
- the digital format (pixel values) of the image is derived by scanning the original image to convert the original image into electrical signal values that are digitized.
- the digital image format is then used to replicate the image on a display to facilitate the application of a standardization process to the displayed image and to the pixel values forming the displayed image to form a standardized geometric image.
- the images are standardized to provide a quality match with the template elements associated with the template (to be described in detail later in the description of the invention).
- the process starts at Figure 1A by locating the center of the left and right eyes of the face in the image.
- Figure 1B a new digital image of the face image, representing a partially standardized geometric image, is formed by rotating and translating the face image of Figure 1A, as necessary, by well known image processing operations, so as to position the left and right eye centers along a predetermined horizontal axis and equally spaced about a central vertical axis.
- Figure 1C illustrates the face image of Figure 1B sized to form a standardized geometric face image by scaling the image to a standard size.
- the method of forming the standardized geometrical image is set forth in the left column of flow blocks commencing with the block labeled "select an image".
- the selection process is based upon the availability of a front facial image of the person that is to have their image processed with the template of the invention. Included in the selection process is the creation of a digital matrix representation of the available image.
- the digital matrix is next loaded into a system (shown in Figure 11) for display to an operator. As previously discussed, the operator locates the left and right eyes and performs any needed rotation, translation and rescaling of the image to form the standardized geometrical image.
- the image standard was set to an image size of 56 pixels in width and 64 pixels in height, with the eye centers located 28 pixels from the top border of the image and 8 pixels on either side of a vertical center line. Identifying the centers of the left and right eyes is done by displaying the initial image to a human operator who points to the centers with a locating device such as a mouse, tablet, light pen or touch-sensitive screen. An alternate approach would be to automate the process using a feature-search program. In a hybrid approach, the human operator coarsely localizes the eye positions, and a processor fine-tunes each location through an eye-finding search method restricted to a small neighborhood around the operator-specified location. The next step in standardization is to alter the image so that the eyes occupy a predetermined position. In general, this consists of the standard image processing operations of image translation, scaling and rotation.
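The rotation, translation and scaling described above amount to a similarity transform fixed by the two eye centers. The sketch below is an illustration, not code from the patent's appendices: it maps detected eye centers to the standard locations, taken from the stated geometry as columns 28 - 8 = 20 and 28 + 8 = 36, row 28, in the 56 x 64 image. A full implementation would also resample the pixel grid; only the coordinate mapping is shown.

```python
# Standard geometry from the text: a 56 x 64 image with the eye centers
# 28 pixels from the top and 8 pixels either side of the vertical
# center line, i.e. at columns 28 - 8 = 20 and 28 + 8 = 36.
STD_LEFT_EYE = complex(20, 28)
STD_RIGHT_EYE = complex(36, 28)

def eye_alignment_transform(left_eye, right_eye):
    """Return a point-mapping function from source to standardized coordinates.

    Treating 2-D points as complex numbers, the similarity transform
    z -> a*z + b (one rotation, one uniform scale, one translation) is
    completely determined by the two eye-center correspondences.
    """
    src_l = complex(*left_eye)
    src_r = complex(*right_eye)
    a = (STD_RIGHT_EYE - STD_LEFT_EYE) / (src_r - src_l)  # rotation + scale
    b = STD_LEFT_EYE - a * src_l                          # translation
    def apply(x, y):
        z = a * complex(x, y) + b
        return (z.real, z.imag)
    return apply

# a tilted input portrait with eyes found at (100, 120) and (130, 110)
to_standard = eye_alignment_transform((100, 120), (130, 110))
```

Applying `to_standard` to the two detected eye centers lands them exactly on the standard positions.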
- the standardized geometric image is stored, and the luminance standardization procedure takes place.
- the procedure is represented by the even-numbered flow blocks labeled 40-52.
- the functional operation represented by block 50 shifts the facial mean luminance, i.e., the average lightness found in the general vicinity of the nose, to a preset value.
- the preset value for a light skin toned person is 165, for medium skin 155, for dark skin the value is 135.
- the formed standardized digital image from block 50 is now represented by a storable matrix of pixel values that is stored in response to function block 52.
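The luminance shift of block 50 can be sketched as follows; the nose-region sampling interface and the clamping to the 8-bit range are our assumptions about details the text leaves open.

```python
def standardize_luminance(pixels, nose_region, preset):
    """Shift all pixels so the mean luminance over the nose region equals
    `preset` (165 light, 155 medium, 135 dark skin tone, per the text),
    clamping to the 8-bit range afterwards.

    pixels      : 2-D list of 8-bit luminance values
    nose_region : iterable of (row, col) indices sampled near the nose
    """
    mean = sum(pixels[r][c] for r, c in nose_region) / len(nose_region)
    shift = preset - mean
    return [[min(255, max(0, round(p + shift))) for p in row] for row in pixels]

image = [[100, 110],
         [120, 130]]
balanced = standardize_luminance(image, [(0, 0), (1, 1)], preset=165)
```

Here the sampled mean is 115, so every pixel is shifted up by 50.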
- Figure 3A illustrates the layout of a template 30 that is to be used with the standardized image of Figure 2.
- the template 30 is partitioned into 64 template elements labeled A through M.
- the elements are arranged in accordance with 13 corresponding features of a human face, for example, the template elements labeled A correspond to the hair feature at the top of the head and the template elements labeled G correspond to the eyes.
- Template elements with like labels share in the representation of a feature.
- the tables of Figures 4, 5A, and 5B provide a further description of the remaining template elements.
- the preferred embodiment of the invention is implemented with 64 template elements and 13 features it is to be understood that these numbers may be varied to suit the situation and are not to be construed as limiting the method of this invention.
- the template size matches that of the standardized image with 56 pixels in width and 64 in height.
- the sizes of the template elements are based upon the sizes of the facial features that they are intended to represent. For example, G matches the relative size of an eye in a standardized image, and both elements assigned to G are positioned at the locations of the eyes in a standardized image.
- Figure 3D represents, with the darker shaded region, the location of template elements which are part of a link.
- the linkage is horizontal between each pair of darkened template elements, for example, G at the left of center is linked to G at the right of center.
- although 7 linked pairs are shown in the preferred embodiment, linkages can occur in groups larger than two and between any set of like-labeled elements.
- the template 30 is in fact a sequence of data records where each record, in the preferred embodiment, describes the location, size, label, left-to-right property, top-to-bottom property, and linkage of each template element. Data records with other and/or additional factors may be created as the need arises.
- the template 30 records the distribution and size of template elements.
- Each template element has assigned to it a codebook and a spatial location in the image. (Note that some portions of the template have no template element; these regions will be described in detail later.)
- the template shown in Figure 3A consists of 64 template elements composed of rectangular pixel regions. These template elements are assigned to one of 13 different codebooks (labeled A - M).
- the codebooks are collections of uniformly-sized codevectors of either 4x16, 8x8, 8x5, 4x10, 4x6, or 8x4 pixels.
- the codevectors which populate the codebooks are derived from a library of image features.
- the labels A through M represent feature types for human faces.
- the human feature associated with each of the labels A-M in the label row is set forth in the row directly below the label row.
- the remainder of Figure 4 provides information as to the width and height of template elements for each of the associated labels along with the number of occurrences and the number of unique occurrences for each feature.
- a unique occurrence indicates the number of independent template elements that are linked (linked elements count as only a single unique occurrence).
- Figures 5A and 5B illustrate the template element data records. These data records represent the attributes of each template element and include data record fields for the upper-left pixel coordinates, the width, the height, the left-to-right flip property, the top-to-bottom flip property, the linkage group, and the feature type. A linkage group value of -1 indicates that no linkage occurs; any other value identifies the template elements belonging to that group. For example, the top two template elements D of Figure 3D are linked and are given the same linkage group number 0 in the linkage group column of the table of Figures 5A and 5B.
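The record fields enumerated above map naturally onto a small record type. The field names and the example coordinates below are ours; only the set of fields and the -1 no-linkage convention come from the text.

```python
from dataclasses import dataclass

@dataclass
class TemplateElement:
    x: int                 # upper-left pixel column
    y: int                 # upper-left pixel row
    width: int
    height: int
    lr_flip: bool          # left-to-right flip property
    tb_flip: bool          # top-to-bottom flip property
    linkage_group: int     # -1 means no linkage; equal values share one tile
    feature_type: str      # one of 'A' .. 'M'

def linked(a: TemplateElement, b: TemplateElement) -> bool:
    """Two elements are linked when they share a non-negative linkage group."""
    return a.linkage_group != -1 and a.linkage_group == b.linkage_group

# the two linked D elements of Figure 3D share linkage group 0
# (coordinates here are illustrative, not taken from the figure)
d_left = TemplateElement(8, 0, 8, 8, True, False, 0, 'D')
d_right = TemplateElement(40, 0, 8, 8, False, False, 0, 'D')
```

Linked elements store a single tile number between them, which is where part of the bit savings comes from.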
- the feature types referenced in Figure 4 are shown in Figure 6 as collections of tiles.
- tile 1 within the collection for feature type G, the eye feature, is a picture of an eye.
- the other tiles, 2 through 2^n, in the collection for feature type G are other pictures of eyes.
- the number of tiles in each collection for each feature type is 2^n for some positive integer n. It should be noted that tiles within a collection share visually similar properties, as they represent the same image feature. Tiles from different feature types will, in general, be visually dissimilar.
- Figure 7 represents an image as an assignment of template elements to tiles.
- Each of the template elements of Figure 7 has a number associated with it, and this number corresponds to a tile for the feature type of the template element.
- the template element 60 is for feature type A and has the associated tile number 46 in the collection of hair feature type tiles A in Figure 6.
- the template element 62 for the eye feature type is numbered 123, and it corresponds to the tile with number 123 in the eye feature type collection labeled G in Figure 6.
- template elements within the same linked group (such as the eye feature type template elements 62 and 64) have identical tile numbers. For ease of identification of linked elements, they appear in bold number printing in Figure 7.
- the tile numbers assigned to each template element in Figure 7 are used to retrieve the like numbered tile from the like labeled feature type collection of tiles.
- the retrieved tile is positioned in the same location as the template element containing the tile number.
- the resulting assembly of tiles produces the mosaic of Figure 8.
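The retrieval-and-placement step can be sketched as follows. The data layout (tiles as 2-D lists, elements as tuples) and the toy tile numbers are illustrative assumptions, not the patent's storage format.

```python
def assemble_mosaic(width, height, elements, tile_numbers, collections):
    """Rebuild the mosaic of Figure 8: for each template element, fetch the
    like-numbered tile from the like-labeled feature-type collection and
    paste it at the element's location.

    elements     : list of (x, y, w, h, feature_type)
    tile_numbers : one tile number per element (linked elements share one)
    collections  : feature_type -> {tile_number: h x w tile (list of rows)}
    """
    canvas = [[0] * width for _ in range(height)]
    for (x, y, w, h, ftype), number in zip(elements, tile_numbers):
        tile = collections[ftype][number]
        for r in range(h):
            for c in range(w):
                canvas[y + r][x + c] = tile[r][c]
    return canvas

# toy example: a 4 x 2 image built from two 2 x 2 elements
collections = {'A': {46: [[1, 1], [1, 1]]},
               'G': {123: [[9, 9], [9, 9]]}}
elements = [(0, 0, 2, 2, 'A'), (2, 0, 2, 2, 'G')]
mosaic = assemble_mosaic(4, 2, elements, [46, 123], collections)
```

The compressed image is then just the per-element tile numbers; the pixels come entirely from the shared collections.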
- Figures 3B and 3C indicate which template elements possess the left-to-right and top-to-bottom flip properties, respectively.
- the template elements with these flip properties are also indicated with the TRUE/FALSE flags in the table of Figures 5A and 5B.
- the tiles in Figure 8 that are to be flipped are identified by diagonal lines through the boxes representing pixels.
- Figure 9 represents the application of the flipping properties to the tiles in Figure 8, where all tiles in Figure 8 which correspond to the darkened template elements in Figure 3B are flipped left-to-right and all tiles in Figure 8 which correspond to the darkened template elements in Figure 3C are flipped top-to-bottom. It should be noted that some template elements undergo both flips in the transformation of the tiles from Figure 8 into the tile orientation of Figure 9 and that the flips take place within the associated element.
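The two flip properties can be sketched directly; which tiles get flipped is driven by the per-element flags of Figures 5A and 5B, and a tile carrying both flags undergoes both flips within its own element.

```python
def apply_flips(tile, lr_flip, tb_flip):
    """Reorient one tile (a list of pixel rows) within its own element:
    optionally flip left-to-right, optionally flip top-to-bottom."""
    rows = [row[::-1] for row in tile] if lr_flip else [row[:] for row in tile]
    return rows[::-1] if tb_flip else rows

tile = [[1, 2],
        [3, 4]]
```

Flipping lets one stored tile serve mirrored features (e.g. left and right eyes), halving the tiles needed for symmetric regions.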
- the next step is the formation, by image processing operations, of a final image based on the oriented tile mosaic of Figure 9.
- the mosaic of Figure 9 may have certain visually objectionable artifacts as a result of its construction from tiles. These artifacts can be diminished with some combination of image processing algorithms.
- a combination of well known image processing operations are applied including smoothing across the tile boundaries, contrast enhancement, linear interpolation to fill missing image regions and addition of spatially dependent random noise.
- the smoothing operation is described by considering three successive pixels, P1, P2, and P3, where P1 and P2 are in one tile and P3 is in an adjoining tile.
- the pixel P2 is replaced by the result of (P1 + 2*P2 + P3) / 4.
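Applied along a pixel row that crosses a vertical tile boundary, the rule looks like this; rounding the result back to an integer pixel value is our assumption, since the text does not state the rounding.

```python
def smooth_row_boundary(row, boundary):
    """Smooth across a vertical tile boundary within one pixel row.

    `boundary` indexes the first pixel of the right-hand tile, so
    P1 = row[boundary - 2], P2 = row[boundary - 1] (the last pixel of the
    left tile) and P3 = row[boundary]. P2 is replaced by
    (P1 + 2*P2 + P3) / 4, rounded back to an integer pixel value.
    """
    p1, p2, p3 = row[boundary - 2], row[boundary - 1], row[boundary]
    out = row[:]
    out[boundary - 1] = round((p1 + 2 * p2 + p3) / 4)
    return out

smoothed = smooth_row_boundary([100, 100, 200, 200], 2)
```

The 100-to-200 step at the boundary is softened: the last left-tile pixel moves from 100 to 125.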
- the contrast enhancement is achieved by determining the minimum pixel value, min, and the maximum pixel value, max, for the mosaic.
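The text stops at computing min and max; a conventional use of those two values is a full-range linear stretch, sketched here as an assumption rather than as the patent's exact mapping.

```python
def stretch_contrast(pixels):
    """Linearly stretch pixel values to the full 8-bit range using the
    mosaic's minimum and maximum; degenerate (flat) images are returned
    unchanged. (The patent computes min and max but does not spell out
    the mapping; this full-range stretch is one conventional choice.)"""
    flat = [p for row in pixels for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [row[:] for row in pixels]
    return [[round((p - lo) * 255 / (hi - lo)) for p in row] for row in pixels]

enhanced = stretch_contrast([[50, 100],
                             [150, 200]])
```

The narrow 50-200 range of the toy mosaic is expanded to cover 0-255.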
- the regions of the feature template not corresponding to any template element are filled using linear interpolation.
- the known values of the boundary pixels are used to calculate an average pixel value.
- the unknown corner opposite the known boundary is set to this average value.
- the remainder of the unassigned interior pixels are calculated by linear interpolation.
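One plausible reading of the three interpolation steps above is sketched below; the exact region geometry in the patent may differ, so treat this as an illustration only.

```python
def fill_region(known_boundary, height):
    """Fill an unassigned rectangular region adjoining a known boundary row.

    Per the steps in the text: average the known boundary pixels, set the
    opposite edge of the region to that average, and fill the interior by
    linear interpolation between the known boundary and the average.
    """
    avg = round(sum(known_boundary) / len(known_boundary))
    region = []
    for r in range(height):
        t = r / (height - 1)       # 0 at the known boundary, 1 at the far edge
        region.append([round((1 - t) * p + t * avg) for p in known_boundary])
    return region

# a 2-pixel-wide, 3-row region below a known boundary row of [100, 200]
region = fill_region([100, 200], 3)
```

The synthesized rows fade smoothly from the known boundary toward its average value of 150.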
- i column of the affected pixel
- j row of the affected pixel
- rand is a pseudo-random, floating-point number in the range (-1, 1).
- the value n(i,j) is added to the pixel at location (i,j). If the resultant pixel value is greater than 255 it is set to 255, and if it is less than zero it is set to 0.
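Since this summary does not reproduce the patent's exact expression for n(i,j), the sketch below takes the spatial amplitude as a caller-supplied (hypothetical) function and keeps only what the text does state: rand uniform in (-1, 1) and clamping of the result to [0, 255].

```python
import random

def add_spatial_noise(pixels, amplitude, seed=0):
    """Add spatially dependent noise n(i, j) = amplitude(i, j) * rand to each
    pixel, with rand a pseudo-random float in (-1, 1), then clamp the result
    to [0, 255]. `amplitude` stands in for the patent's (unreproduced)
    spatial term and is purely hypothetical; i is the column, j the row.
    """
    rng = random.Random(seed)   # seeded for reproducibility in this sketch
    out = []
    for j, row in enumerate(pixels):
        new_row = []
        for i, p in enumerate(row):
            n = amplitude(i, j) * rng.uniform(-1, 1)
            new_row.append(min(255, max(0, round(p + n))))
        out.append(new_row)
    return out

base = [[0, 255], [128, 128]]
noisy = add_spatial_noise(base, amplitude=lambda i, j: 10)
```

With a constant amplitude of 10 every pixel moves by at most 10 and stays within the 8-bit range.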
- Figure 10 represents an image after processing by these operations. It should be understood that other image processing operations may be used in other situations, and the preferred embodiment should not be considered limiting.
- Figure 11 illustrates an apparatus 100 on which the present method may be implemented.
- the apparatus 100 is comprised of a means 102 for converting a non-digital image, such as a photo print 80, or a negative image 82, into a digital representation of an image.
- a scanner 104 which outputs signals representing pixel values in analog form.
- An analog-to-digital converter 106 is then used to convert the analog pixel values to digital values representative of the scanned image.
- Other sources of digital images may be input directly into a workstation 200.
- the workstation 200 is a SUN SPARC 10, running UNIX as the operating system, with the software written in the standard C programming language.
- the program portion of the present invention is set forth in full in the attached Appendices A and B.
- Display of the digital images is by way of the display 202 operating under software, keyboard 204 and mouse 206 control.
- Digital images may also be introduced into the system by means of a CD reader 208 or other like device.
- the templates created by the present method and apparatus may be downloaded to a CD writer 210 for storage on a CD, hard copy printed by printer 212 written onto a storage card (such as a transaction card), or transmitted for further processing or storage at remote locations by means of a modem 214 and transmission lines.
- Other uses for the present invention include compression of images other than portraits.
- Other feature types can be represented, for example, the features associated with banking checks, such as the bank and account numbers along with signatures, dollar amounts, addresses and the like. Like the features of the human face, these features tend to be positioned at the same locations on each check.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US36135294A | 1994-12-21 | 1994-12-21 | |
US361352 | 1999-07-26 |
Publications (2)
Publication Number | Publication Date |
---|---|
EP0718788A2 true EP0718788A2 (fr) | 1996-06-26 |
EP0718788A3 EP0718788A3 (fr) | 1996-07-17 |
Family
ID=23421698
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP95420342A Withdrawn EP0718788A2 (fr) | 1994-12-21 | 1995-12-05 | Procédé et dispositif pour produire des patrons d'images standardisées |
Country Status (6)
Country | Link |
---|---|
EP (1) | EP0718788A2 (fr) |
JP (1) | JPH08249469A (fr) |
CN (1) | CN1150283A (fr) |
AR (1) | AR000239A1 (fr) |
BR (1) | BR9505966A (fr) |
ZA (1) | ZA959492B (fr) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102982501B (zh) * | 2012-11-19 | 2015-07-01 | 山东神思电子技术股份有限公司 | 一种图像样本标定方法 |
CN104021138B (zh) * | 2014-04-23 | 2017-09-01 | 北京智谷睿拓技术服务有限公司 | 图像检索方法及图像检索装置 |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1988009101A1 (fr) * | 1987-05-06 | 1988-11-17 | British Telecommunications Public Limited Company | Traitement d'images video |
GB2231699A (en) * | 1989-05-10 | 1990-11-21 | Nat Res Dev | Obtaining information characterising a person or animal |
WO1992002000A1 (fr) * | 1990-07-17 | 1992-02-06 | British Telecommunications Public Limited Company | Procede de traitement d'image |
US5151951A (en) * | 1990-03-15 | 1992-09-29 | Sharp Kabushiki Kaisha | Character recognition device which divides a single character region into subregions to obtain a character code |
US5237627A (en) * | 1991-06-27 | 1993-08-17 | Hewlett-Packard Company | Noise tolerant optical character recognition system |
US5246253A (en) * | 1991-10-17 | 1993-09-21 | Mykrantz John R | Garden planning kit |
US5331544A (en) * | 1992-04-23 | 1994-07-19 | A. C. Nielsen Company | Market research method and system for collecting retail store and shopper market research data |
US5365596A (en) * | 1992-12-17 | 1994-11-15 | Philip Morris Incorporated | Methods and apparatus for automatic image inspection of continuously moving objects |
EP0651354A2 (fr) * | 1993-10-29 | 1995-05-03 | Eastman Kodak Company | Procédé de compression pour une librairie d'images standardisées |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3364957B2 (ja) * | 1992-08-24 | 2003-01-08 | カシオ計算機株式会社 | モンタージュ作成装置及び顔画像作成方法 |
JPH08141212A (ja) * | 1994-11-24 | 1996-06-04 | Taito Corp | モンタージュ機能を有するゲーム機 |
1995
- 1995-11-08 ZA ZA959492A patent/ZA959492B/xx unknown
- 1995-11-30 AR AR33446895A patent/AR000239A1/es unknown
- 1995-12-05 EP EP95420342A patent/EP0718788A2/fr not_active Withdrawn
- 1995-12-20 JP JP7332356A patent/JPH08249469A/ja active Pending
- 1995-12-20 BR BR9505966A patent/BR9505966A/pt not_active Application Discontinuation
- 1995-12-21 CN CN95121123A patent/CN1150283A/zh active Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10038902B2 (en) | 2009-11-06 | 2018-07-31 | Adobe Systems Incorporated | Compression of a collection of images using pattern separation and re-organization |
US11412217B2 (en) | 2009-11-06 | 2022-08-09 | Adobe Inc. | Compression of a collection of images using pattern separation and re-organization |
Also Published As
Publication number | Publication date |
---|---|
CN1150283A (zh) | 1997-05-21 |
AR000239A1 (es) | 1997-05-28 |
JPH08249469A (ja) | 1996-09-27 |
BR9505966A (pt) | 1997-12-23 |
ZA959492B (en) | 1996-07-10 |
EP0718788A3 (fr) | 1996-07-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA1235514A (fr) | Systeme de reconnaissance video | |
US5754697A (en) | Selective document image data compression technique | |
US6246791B1 (en) | Compression/decompression algorithm for image documents having text, graphical and color content | |
US5574573A (en) | Compression method for a standardized image library | |
US6587583B1 (en) | Compression/decompression algorithm for image documents having text, graphical and color content | |
US9042650B2 (en) | Rule-based segmentation for objects with frontal view in color images | |
US5905807A (en) | Apparatus for extracting feature points from a facial image | |
US5373566A (en) | Neural network-based diacritical marker recognition system and method | |
US7006714B2 (en) | Image retrieval device, image retrieval method and storage medium storing similar-image retrieval program | |
KR100698426B1 (ko) | 화상 처리 장치 및 화상 처리 방법 | |
US20110090253A1 (en) | Augmented reality language translation system and method | |
EP1418507A2 (fr) | Procéde et système d'integration de plusieurs charactéristiques | |
CN108765261A (zh) | 图像变换方法和装置、电子设备、计算机存储介质、程序 | |
CN110427972A (zh) | 证件视频特征提取方法、装置、计算机设备和存储介质 | |
US5638190A (en) | Context sensitive color quantization system and method | |
US20030081678A1 (en) | Image processing apparatus and its method, and program | |
EP0718807B1 (fr) | Procédé pour comprimer et décomprimer des images portrait standardisées | |
EP0718788A2 (fr) | Procédé et dispositif pour produire des patrons d'images standardisées | |
US5727089A (en) | Method and apparatus for multiple quality transaction card images | |
CN104376314B (zh) | 一种面向谷歌眼镜物联网网站系统的构成方法 | |
US8064706B1 (en) | Image compression by object segregation | |
JP2001203899A (ja) | カラー量子化の方法および装置 | |
EP4266264A1 (fr) | Identification sans contrainte et élastique de documents d'identité dans une image rvb | |
CN115797958A (zh) | 货币识别方法、装置、设备及存储介质 | |
Kumar et al. | A Review on Various Approaches of Face Recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
 | PUAL | Search report despatched | Free format text: ORIGINAL CODE: 0009013 |
 | AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): DE FR GB |
 | AK | Designated contracting states | Kind code of ref document: A3; Designated state(s): DE FR GB |
 | 17P | Request for examination filed | Effective date: 19961220 |
 | 17Q | First examination report despatched | Effective date: 19970320 |
 | GRAG | Despatch of communication of intention to grant | Free format text: ORIGINAL CODE: EPIDOS AGRA |
 | GRAG | Despatch of communication of intention to grant | Free format text: ORIGINAL CODE: EPIDOS AGRA |
 | GRAH | Despatch of communication of intention to grant a patent | Free format text: ORIGINAL CODE: EPIDOS IGRA |
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
 | 18D | Application deemed to be withdrawn | Effective date: 19990522 |
 | REG | Reference to a national code | Ref country code: HK; Ref legal event code: WD; Ref document number: 1011435; Country of ref document: HK |