EP2831806A1 - Method for detecting a perspectively distorted polygonal structure in an image of an identification document
- Publication number
- EP2831806A1 (application EP13712765.0A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- edge
- polygonal
- image
- edges
- detecting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
- G06V30/41—Analysis of document content
- G06V30/412—Layout analysis of documents structured with printed lines or input boxes, e.g. business forms or tables
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
Definitions
- The present invention relates to the detection of a perspectively distorted polygonal structure, for example a perspectively distorted rectangle, in an image of an identification document.
- To verify the authenticity of an identification document such as a passport or ID card, an image of the document is usually captured; mobile document verification devices such as smartphones are used for this purpose.
- One problem with capturing an image of an identification document by means of a camera of a mobile document verification device is the perspective distortion that can arise when the device is tilted relative to the identification document.
- The quality of the captured image also depends on the quality of the camera as well as on external environmental conditions.
- For line detection, the Hough transformation can be used, as described in the document by C. R. Jung and R.
- The invention is based on the finding that, in the case of a perspective distortion of a polygonal structure, the originally straight edges of the polygonal structure still run in straight lines. Due to the perspective distortion, however, the lengths of the edges change; they are, for example, shortened or lengthened. A tilted shot of a rectangle thus appears as a trapezoid with a long base, a shorter top edge and two obliquely converging sides.
- This finding allows particularly simple detection of, for example, originally rectangular structures by detecting perspectively distorted quadrilateral structures whose edges run straight within a tolerance range, for example +/-5%. In this way, perspectively distorted polygonal structures can be detected particularly efficiently and, if necessary, equalized.
- The invention relates to a method for detecting a perspectively distorted polygonal structure in an image of an identification document, comprising: detecting edges in the image in order to obtain an edge image, detecting a plurality of polygonal edge structures in the edge image, determining a metric for each polygonal edge structure, and selecting the polygonal edge structure having the largest metric as the perspectively distorted polygonal structure.
- The metrics of the perspectively distorted polygonal edge structures can be compared in order to find the largest metric.
- the edge image is an image of edges detectable in the image and may have a plurality of polygonal edge structures, which may be, for example, the edges of the identification document or a passport side, or the edges of the geometric structures depicted therein.
- the respective edge structure can be given, for example, by an edge course, for example by a gray level edge course.
- The identification document can be one of the following documents, with or without electronics: an identity document such as an identity card or passport, an access control card, an authorization card, a company card, a tax stamp or ticket, a birth certificate, a driver's license, a vehicle registration document, or a means of payment.
- The identification document can be single-layered or multi-layered and paper-based and/or plastic-based.
- the identification document can be constructed from plastic-based films which are joined together to form a card body by means of gluing and / or lamination, the films preferably having similar material properties.
- the identification document may further comprise a chip for storing data.
- For detecting edges, an edge detection is performed, in particular by means of the Canny algorithm. Instead of the Canny algorithm, however, any algorithm known per se for edge detection can be used.
- the edge structures can be determined directly by the edges detected in this way.
- the edges are subjected to a transformation to obtain transformed edges, which can be represented by lines.
- the transformation can be the Hough transformation, by means of which defined line images are provided as edge structures.
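The Hough transformation is only named here, not spelled out. As a purely illustrative sketch (the function name, the accumulator binning and the toy image below are our own assumptions, not the patent's implementation), a minimal line-voting scheme over a binary edge image could look like this:

```python
import math

def hough_lines(edge_image, n_theta=180, rho_step=1.0):
    """Vote in (theta, rho) space for every set pixel of a binary edge image.

    Returns the accumulator as a dict mapping (theta_index, rho_index) -> votes.
    """
    h = len(edge_image)
    w = len(edge_image[0])
    diag = math.hypot(h, w)  # offset so rho indices stay non-negative
    acc = {}
    for y in range(h):
        for x in range(w):
            if not edge_image[y][x]:
                continue
            for t in range(n_theta):
                theta = math.pi * t / n_theta
                rho = x * math.cos(theta) + y * math.sin(theta)
                r = int(round((rho + diag) / rho_step))
                acc[(t, r)] = acc.get((t, r), 0) + 1
    return acc

# A horizontal line of set pixels should produce a dominant vote at theta = 90 deg.
img = [[0] * 5 for _ in range(5)]
for x in range(5):
    img[2][x] = 1  # row y == 2
acc = hough_lines(img)
best = max(acc, key=acc.get)
```

A peak in the accumulator corresponds to a detected line; a real implementation would additionally extract local maxima and de-duplicate near-identical peaks.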
- According to one embodiment, parallel edge pairs are detected whose edges are at a distance from each other that does not exceed a predetermined threshold.
- The parallelism range can specify, for example, an angular range within which adjacent edge pairs are considered parallel. The parallelism range can be, for example, 0°, +/-1°, +/-2°, +/-5° or +/-10°.
- The predetermined angular range within which the edges of an edge pair intersect can comprise, for example, a range of 89° to 91°, 85° to 95°, 80° to 100° or 50° to 140°.
- Edge pairs are detected whose edges meet within the angular range.
- In this way, corners of the distorted polygonal structure can be detected, whereby it can be determined whether the distorted polygonal structure is, for example, a quadrangular structure resulting from a tilted shot of a rectangle. Further, by detecting the corners, it can be determined whether the distorted polygonal structure lies in a certain region (ROI: Region of Interest) of the image or of the identification document.
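The grouping of edges into parallel pairs and the detection of corners as intersections of adjacent edges can be illustrated as follows. This is a minimal sketch; the segment representation (two endpoints per edge) and the helper names are our own assumptions:

```python
import math

def line_angle(p, q):
    """Direction of the line through p and q, folded into [0, 180) degrees."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])) % 180.0

def is_parallel(l1, l2, tol_deg=5.0):
    """True if the two segments' directions differ by at most tol_deg."""
    d = abs(line_angle(*l1) - line_angle(*l2))
    return min(d, 180.0 - d) <= tol_deg

def intersect(l1, l2):
    """Intersection of the infinite lines through the two segments (a corner
    candidate); returns None for (near-)parallel lines."""
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(den) < 1e-12:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# A tilted shot of a rectangle: long base, shorter top, two slanted sides.
base = ((0.0, 0.0), (10.0, 0.0))
top = ((2.0, 5.0), (8.0, 5.0))
left = ((0.0, 0.0), (2.0, 5.0))
corner = intersect(base, left)
```

Base and top form a parallel pair, while base and left side meet in a corner of the distorted quadrilateral.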
- ROI Region of Interest
- the polygonal edge structures are dilated.
- The edge structures are thereby widened in order, for example, to enable a more accurate detection of the perspectively distorted polygonal structure.
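Dilation of the edge structures can be sketched as a standard morphological operation on a binary image. This is a minimal illustration with a square structuring element, not the patent's specific implementation:

```python
def dilate(binary, radius=1):
    """Morphological dilation of a binary image with a (2*radius+1)^2 square
    structuring element: every set pixel grows into its neighbourhood."""
    h, w = len(binary), len(binary[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if binary[y][x]:
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            out[ny][nx] = 1
    return out

# A single set pixel grows into a 3x3 block.
img = [[0] * 5 for _ in range(5)]
img[2][2] = 1
d = dilate(img)
```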
- a number of pixels representing the respective edge structure is detected.
- The pixels representing the respective edge structure are, for example, gray-scale values. By prior dilation of the edge structure, the number of pixels actually representing it can be determined even more accurately.
- According to one embodiment, a relative number of the pixels representing the respective edge structure is determined, calculated as the ratio between the number of pixels actually representing the respective edge structure and the maximum number of pixels possible for representing it. The maximum possible number of pixels corresponds to the number of pixels the edge structure would occupy if it were completely present in the edge image.
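The relative pixel count can be illustrated by rasterizing the connecting lines of a hypothesis (here with Bresenham's line algorithm) and counting which of those pixels are actually set in the edge image. The function names and the toy image are our own assumptions:

```python
def line_pixels(p, q):
    """Integer pixel coordinates along the segment p-q (Bresenham)."""
    (x0, y0), (x1, y1) = p, q
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    pixels = []
    while True:
        pixels.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pixels

def support(edge_image, corners):
    """Fraction of the hypothesis outline actually covered by edge pixels."""
    hit = total = 0
    n = len(corners)
    for i in range(n):
        for (x, y) in line_pixels(corners[i], corners[(i + 1) % n]):
            total += 1
            if edge_image[y][x]:
                hit += 1
    return hit / total

# Edge image containing only the top side of a square hypothesis outline.
img = [[0] * 6 for _ in range(6)]
for x in range(1, 5):
    img[1][x] = 1
quad = [(1, 1), (4, 1), (4, 4), (1, 4)]
s = support(img, quad)
```

The hypothesis with the largest such support value would then be selected.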
- According to one embodiment, the metric is determined only for those polygonal edge structures which are arranged within a predetermined image section, the so-called ROI. In this way, it is ensured that the detection of the perspectively distorted polygonal structure focuses on that image section in which a polygonal structure is actually expected.
- the perspective distorted polygonal structure is equalized by means of at least one equalization parameter, in particular equalized in perspective.
- the equalization parameter indicates, for example, by what length amount the respective edge should be shortened or extended.
- According to one embodiment, the at least one equalization parameter is calculated on the basis of average values of the lengths of mutually opposite edges. In this case, for example, the lengths of opposite edges are added and averaged.
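A sketch of this averaging, assuming the four corners of the distorted quadrilateral are already known; the corner ordering and the function name are illustrative assumptions:

```python
import math

def target_rectangle(corners):
    """Estimate width and height of the rectified rectangle as the mean length
    of each pair of opposite sides of the distorted quadrilateral.

    corners are expected in order: top-left, top-right, bottom-right, bottom-left.
    """
    def dist(p, q):
        return math.hypot(q[0] - p[0], q[1] - p[1])
    tl, tr, br, bl = corners
    width = (dist(tl, tr) + dist(bl, br)) / 2.0
    height = (dist(tl, bl) + dist(tr, br)) / 2.0
    return width, height

# Trapezoid from a tilted shot of a rectangle: top side 6, base 10.
w, h = target_rectangle([(2, 0), (8, 0), (10, 5), (0, 5)])
```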
- the perspective distorted polygonal structure is equalized by means of homography.
- the perspective distorted polygon structure is a perspective distorted rectangle
- the polygonal edge structures are quadrangular edge structures.
- According to one embodiment, the edge structure is filtered in order to remove text or high-frequency structures.
- the invention relates to a device, in particular a smartphone, for detecting a perspectively distorted polygonal structure in an image of an identification document, having an optical pick-up device for picking up the image and a processor which is configured to perform a detection of edges in the image in order to obtain an edge image, to detect a plurality of polygonal edge structures in the edge image, to determine a metric for each polygonal edge structure, and to select that polygonal edge structure as the perspective distorted polygon structure having the largest metric.
- the optical recording device can be, for example, a camera of a smartphone.
- the processor can be set up to carry out the method according to the invention for detecting a perspective-distorted polygonal structure in an image.
- The invention relates to a computer program having a program code for carrying out the method according to the invention when the program code is executed on a computer.
- Fig. 1 shows a schematic representation of an identification document;
- Fig. 2 shows a perspectively distorted image of an identification document;
- Fig. 3 shows a flowchart of a method for detecting a perspectively distorted polygonal structure;
- Fig. 4 shows a flowchart of a method for detecting a perspectively distorted polygonal structure in an image;
- Fig. 5 shows a flowchart of a text filtering with threshold value formation.
- FIG. 1 schematically shows an identification document 101 with an image of a person 103 and a text field 105.
- The border of the identification document 101 is a polygonal structure which is predetermined by the edges of the identification document 101 and is, for example, rectangular. This means that adjacent edges or their extensions meet perpendicularly. The same applies to the edges of the image 103 of the person, which form a polygonal structure.
- the text field 105 may be a text field of a machine-readable zone of the identification document 101, the edges of which form a polygonal structure, for example a rectangle.
- The image 201 shown in Fig. 2 may arise if the recording device, for example a smartphone, is tilted relative to the identification document 101 during capture.
- the polygonal structures 101, 103, 105 are distorted in perspective and thereby converted into perspectively distorted polygonal structures, which are represented by polygonal edge structures 203, 205, 207.
- the polygonal edge structures 203, 205 and 207 can be detected.
- each polygonal edge structure 203, 205 and 207 corresponds to a perspectively distorted polygonal structure 101, 103, 105.
- An ROI can be predetermined for the respective perspectively distorted polygonal structure, for example for the border of the identification document 101.
- the edge image can have further edge image structures, for example curved lines, which are also detected during edge detection, and which are not shown for reasons of clarity.
- The polygonal edge structure 203 is determined, for example, by the edges 209, 211, 213 and 215. The edges 209 and 211 run parallel to each other within a parallelism range, for example +/-5°. However, they have different lengths.
- The side edges 213 and 215, by contrast, run obliquely toward one another. Corner points of the polygonal edge structure 203 are formed by the intersections of adjacent edges. These corner points can therefore be detected on the basis of the edge structure.
- The edge structure 209 may be converted into a line image by the Hough transform in order to obtain a more accurate representation. The same applies to the edge structures 205 and 207.
- FIG. 3 shows a flowchart of a method for detecting a perspective distorted polygon structure in an image of an identification document.
- the method includes detecting 301 edges in the image to obtain an edge image.
- the Hough transformation can be carried out.
- a plurality of polygonal edge structures are detected in the edge image.
- the polygonal edge structures 205 and 207 can be detected.
- a metric is determined for each detected polygonal edge structure.
- the metric can be determined, for example, by determining the respective number of pixels or the respective relative number of pixels for the pixels representing the respective polygonal edge structure.
- The polygonal edge structure having the largest metric, for example the largest number of pixels, is selected as the detected perspectively distorted polygonal structure. Thereafter, the detected perspectively distorted polygonal structure can optionally be equalized by a reverse perspective distortion.
- The method includes detecting edges 401 in order to obtain an edge image having one or a plurality of polygonal edge structures.
- The edge image may, for example, be in the form of an edge map, which can be provided by means of the Canny edge detector.
- Such an edge detector is described in J. Canny, "A Computational Approach to Edge Detection", IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), 1986.
- an automatic threshold value selection can also be carried out, in which, for example, only edges above a predetermined brightness threshold, for example on the gray scale image, are taken into account.
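As a simple stand-in for the Canny detector (which additionally performs smoothing, non-maximum suppression and hysteresis), a brightness-thresholded gradient-magnitude edge map over the gray-scale image can be sketched as follows; the threshold value and the toy image are illustrative:

```python
def edge_map(gray, threshold):
    """Binary edge map: central-difference gradient magnitude above a threshold.

    A minimal stand-in for a full Canny detector."""
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]
            gy = gray[y + 1][x] - gray[y - 1][x]
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges[y][x] = 1
    return edges

# A vertical brightness step produces a column of edge pixels.
gray = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
edges = edge_map(gray, threshold=100)
```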
- The filtering 403 may be a text filtering and is performed in order to remove, for example, high-frequency structures that would falsify the line detection.
- The filtering of the edge image can be carried out by local adaptive thresholding on the gray-scale image and by labeling and evaluating the resulting regions.
- a line detection 405 can be carried out, in which, for example, the Hough algorithm is used.
- A plurality of lines within an ROI can be obtained. These lines or edges are grouped in pairs using a condition on their direction: parallel edges or lines are grouped in pairs, and likewise edges or lines which meet or intersect within a predetermined angular range and thus represent corners of a polygonal structure.
- This yields a plurality of polygonal edge structures which can be saved in a list as hypotheses, i.e. as possible perspectively distorted polygonal structures, or as models.
- the list can be provided, for example, by means of a database.
- The edge image can be dilated in order, for example, to tolerate a certain amount of curvature; without dilation, slightly curved lines along the current hypothesis would not be fully weighted.
- To calculate the weight of each hypothesis, the connecting lines between the four corner points of the hypothesis are considered. The relative number of edge-image pixels lying along these lines (the support), normalized to the length of the hypothesis, is recorded. The hypothesis with the largest support is output as the result of the localization and, if it is a rectangular structure, can be used to equalize the enclosed area.
- A hypothesis can be determined by the intersections of two pairs of lines or edges whose angle lies within the angular range and differs by no more than th_angle degrees, and whose distance from one another is less than a threshold value th_distance. It can be assumed that the original image is an orthogonal, undistorted view of the wanted polygonal structure.
- the desired length of the rectified rectangular structure may be specified or determined as the target length.
- Pairs of lines can be selected taking into account the minimum distance and the directional deviation, which are checked using the thresholds th_distance and th_angle; for example, 0° ≤ th_angle ≤ 90°.
- The threshold th_distance can be determined from the dimensions of the ROI. According to one embodiment, the intersections of two line pairs form a hypothesis when the four intersections lie within the ROI.
- the hypotheses can be weighted by determining the respective metric.
- that polygonal edge structure is selected which most likely represents the perspective distorted polygon structure.
- metrics can be determined, as described above.
- In the step of equalization 411, the detected perspectively distorted polygonal structure is equalized.
- According to one embodiment, the method can be used for the extraction of targets, for example for augmented reality (AR) applications.
- A known aspect ratio may be useful for defining the region relevant to the search, for example an edge structure, or may replace the corresponding estimate in the equalization step.
- a region of interest is understood as meaning an image region in which a search is to be made, for example, for a quadrangular polygonal structure.
- Rectangular edge structures can be equalized by rectification. The dimensions of the original image are estimated by averaging the lengths of each of the two pairs of opposite lines involved. By using the desired target length, the width is then also defined. This determines four point correspondences.
- the equalization can be carried out, for example, on the basis of a homography estimation and inverse mapping.
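A homography estimation from the four point correspondences can be sketched with the standard direct linear transform, normalizing h33 = 1. This is a self-contained illustration with a hand-rolled linear solver, not the patent's implementation; a production system would also perform the inverse mapping to resample the image pixels:

```python
def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """3x3 homography H (with h33 = 1) mapping the four src points onto dst."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, p):
    """Map point p = (x, y) through homography H."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Map the corners of a detected trapezoid onto an axis-aligned 8x5 rectangle.
src = [(2, 0), (8, 0), (10, 5), (0, 5)]
dst = [(0, 0), (8, 0), (8, 5), (0, 5)]
H = homography(src, dst)
```

Applying the inverse of H to every pixel of the target rectangle (inverse mapping) then yields the equalized image.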
- A comparison metric, i.e. a measure of error, can be determined.
- the perspective distorted polygonal structure is a perspective distorted rectangular structure.
- An automatic rectification, i.e. equalization, can be carried out.
- undistorted corners can be determined by averaging over the pixel width and height on the basis of the corresponding hypothesis.
- homographic equalization can be performed for rectification.
- the degree of perspective distortion may be determined from a model rectangle.
- the model rectangle can be distorted in perspective in such a way that a perspective distorted model structure results, which corresponds to the detected, perspective distorted polygonal structure.
- By applying a backward equalization to the perspectively distorted polygonal structure, the latter can be equalized.
- the method described above can be carried out, for example, by means of a stack-based or a recursive method.
- look-up tables can be used to indicate forbidden and allowed directions of the edge pairs.
- the image may be a video frame or serve as a tracking target, for example, to enable an immediate contactless interaction.
- According to one embodiment, the identification document is planar and has a rectangular boundary, which results in a quadrangular edge structure in the image.
- According to one embodiment, an aspect ratio is known or can be determined by means of an estimate. As a result, an equalization of the perspectively distorted polygonal structure, which in this case is a quadrangular structure, can be performed particularly easily.
- the identification document is localized by the determination of edges by an edge detection with an optional subsequent line detection by means of the Hough transformation.
- According to one embodiment, four regions corresponding to the ROI can first be processed: an upper region, a lower region, a left region and a right region. After this, a selection of parallel lines and subsequently of pairs of parallel lines or edges can be carried out.
- The current hypotheses can be evaluated on the basis of features such as edge-image support, connecting lines, aspect ratio and inner angles.
- the support can be determined, for example, by the number of pixels along the connecting lines.
- For the individual features, a metric is calculated whose weighted sum indicates the quality of the hypothesis.
- the hypothesis with the maximum sum is output in each case. This information can be used together with the current aspect ratio for extraction and equalization of the identification document image.
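The weighted sum of per-feature metrics can be sketched as follows; the feature names, values and weights are purely illustrative assumptions:

```python
def hypothesis_score(features, weights):
    """Weighted sum of per-feature metrics; higher means a better hypothesis."""
    return sum(weights[name] * value for name, value in features.items())

# Illustrative feature metrics (normalised to [0, 1]) and weights.
weights = {"support": 0.6, "aspect_ratio": 0.2, "inner_angles": 0.2}
candidates = [
    {"support": 0.9, "aspect_ratio": 0.8, "inner_angles": 0.7},
    {"support": 0.5, "aspect_ratio": 0.9, "inner_angles": 0.9},
]
best = max(candidates, key=lambda f: hypothesis_score(f, weights))
```

The hypothesis with the maximum score is output.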
- Fig. 5 shows a flowchart of a text filtering: a thresholding 501, which may be locally adaptive, a labeling 503 and a subsequent filtering 505 are performed, in which, for example, the area 507 or the aspect ratio 509 of a character is evaluated.
- The text filtering is based on the assumption that text areas may vary both in their nature and in their spatial arrangement. In particular, text areas can cause problems in the edge or line detection.
- Text filtering is an efficient measure that merely assesses the geometry of the regions of a threshold image in order to filter out the textual components. This may, for example, be based on the assumption that text regions are approximately square and filled to a certain degree.
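The geometric assessment of threshold-image regions can be sketched with a simple connected-component labeling followed by an aspect-ratio and fill-ratio test. The thresholds and helper names are our own assumptions:

```python
def label_regions(binary):
    """4-connected component labeling of a binary image via flood fill.

    Returns one pixel list per region, in scan order."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    regions = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                stack, pixels = [(sx, sy)], []
                labels[sy][sx] = len(regions) + 1
                while stack:
                    x, y = stack.pop()
                    pixels.append((x, y))
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if 0 <= nx < w and 0 <= ny < h and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = len(regions) + 1
                            stack.append((nx, ny))
                regions.append(pixels)
    return regions

def looks_like_text(pixels, max_aspect=2.0, min_fill=0.5):
    """Heuristic: text glyphs are roughly square and reasonably filled."""
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    bw = max(xs) - min(xs) + 1
    bh = max(ys) - min(ys) + 1
    aspect = max(bw, bh) / min(bw, bh)
    fill = len(pixels) / (bw * bh)
    return aspect <= max_aspect and fill >= min_fill

# A compact 2x2 blob (text-like) and a long 1x6 line (not text-like).
img = [[0] * 8 for _ in range(6)]
img[1][1] = img[1][2] = img[2][1] = img[2][2] = 1  # blob
for x in range(1, 7):
    img[4][x] = 1                                   # line
regions = label_regions(img)
flags = [looks_like_text(r) for r in regions]
```

Regions flagged as text would be removed before the line detection.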
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102012205079A DE102012205079A1 (de) | 2012-03-29 | 2012-03-29 | Verfahren zum Detektieren einer perspektivisch verzerrten Mehreckstruktur in einem Bild eines Identifikationsdokumentes |
PCT/EP2013/056389 WO2013144136A1 (de) | 2012-03-29 | 2013-03-26 | Verfahren zum detektieren einer perspektivisch verzerrten mehreckstruktur in einem bild eines identifikationsdokumentes |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2831806A1 true EP2831806A1 (de) | 2015-02-04 |
Family
ID=48013985
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP13712765.0A Ceased EP2831806A1 (de) | 2012-03-29 | 2013-03-26 | Verfahren zum detektieren einer perspektivisch verzerrten mehreckstruktur in einem bild eines identifikationsdokumentes |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP2831806A1 (de) |
DE (1) | DE102012205079A1 (de) |
WO (1) | WO2013144136A1 (de) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102014207439A1 (de) * | 2014-04-17 | 2015-10-22 | IDnow GmbH | Maskierung von sensiblen Daten bei der Benutzer-Identifikation |
DE102015108330A1 (de) * | 2015-05-27 | 2016-12-01 | Bundesdruckerei Gmbh | Elektronisches Zugangskontrollverfahren |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110025860A1 (en) * | 2009-08-03 | 2011-02-03 | Terumitsu Katougi | Image output apparatus, captured image processing system, and recording medium |
EP2388735A2 (de) * | 2010-05-21 | 2011-11-23 | Hand Held Products, Inc. | Interaktive Benutzerschnittstelle zum Erfassen eines Dokuments in einem Bildsignal |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE19713511A1 (de) * | 1997-04-01 | 1998-10-08 | Cwa Gmbh | Verfahren zur automatischen Wiedererkennung von beschrifteten Kennzeichen anhand der Zeichenfolge sowie geometrischer Merkmale |
US6763141B2 (en) * | 2000-12-28 | 2004-07-13 | Xerox Corporation | Estimation of local defocus distance and geometric distortion based on scanned image features |
US7171056B2 (en) * | 2003-02-22 | 2007-01-30 | Microsoft Corp. | System and method for converting whiteboard content into an electronic document |
DE102009060791A1 (de) * | 2009-12-22 | 2011-06-30 | Automotive Lighting Reutlingen GmbH, 72762 | Lichtmodul für eine Beleuchtungseinrichtung eines Kraftfahrzeugs sowie Beleuchtungseinrichtung eines Kraftfahrzeugs mit einem solchen Lichtmodul |
-
2012
- 2012-03-29 DE DE102012205079A patent/DE102012205079A1/de not_active Withdrawn
-
2013
- 2013-03-26 EP EP13712765.0A patent/EP2831806A1/de not_active Ceased
- 2013-03-26 WO PCT/EP2013/056389 patent/WO2013144136A1/de active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110025860A1 (en) * | 2009-08-03 | 2011-02-03 | Terumitsu Katougi | Image output apparatus, captured image processing system, and recording medium |
EP2388735A2 (de) * | 2010-05-21 | 2011-11-23 | Hand Held Products, Inc. | Interaktive Benutzerschnittstelle zum Erfassen eines Dokuments in einem Bildsignal |
Non-Patent Citations (1)
Title |
---|
See also references of WO2013144136A1 * |
Also Published As
Publication number | Publication date |
---|---|
DE102012205079A1 (de) | 2013-10-02 |
WO2013144136A1 (de) | 2013-10-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20141028 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: DRESSEL, OLAF Inventor name: FRITZE, FRANK Inventor name: REITMAYR, GERHARD Inventor name: HARTL, ANDREAS |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: HARTL, ANDREAS Inventor name: DRESSEL, OLAF Inventor name: FRITZE, FRANK Inventor name: REITMEYR, GERHARD |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: HARTL, ANDREAS Inventor name: REITMAYR, GERHARD Inventor name: FRITZE, FRANK Inventor name: DRESSEL, OLAF |
|
DAX | Request for extension of the european patent (deleted) | ||
17Q | First examination report despatched |
Effective date: 20160829 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R003 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
|
18R | Application refused |
Effective date: 20181011 |
|
P01 | Opt-out of the competence of the unified patent court (upc) registered |
Effective date: 20230526 |