WO2022131517A1 - Method for separating fingerprint from fingerprint overlay image using deep-learning algorithm, and device therefor - Google Patents

Method for separating fingerprint from fingerprint overlay image using deep-learning algorithm, and device therefor

Info

Publication number
WO2022131517A1
Authority
WO
WIPO (PCT)
Prior art keywords
fingerprint
image
deep learning
fingerprints
learning network
Prior art date
Application number
PCT/KR2021/014664
Other languages
French (fr)
Korean (ko)
Inventor
이병호
채민석
이주현
유동헌
조재범
Original Assignee
서울대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 서울대학교산학협력단
Publication of WO2022131517A1 publication Critical patent/WO2022131517A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1347 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/02 Knowledge representation; Symbolic representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/02 Knowledge representation; Symbolic representation
    • G06N5/022 Knowledge engineering; Knowledge acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/60 Rotation of whole images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1382 Detecting the live character of the finger, i.e. distinguishing from a fake or cadaver finger

Definitions

  • An embodiment of the present invention relates to a method and an apparatus for separating a fingerprint from a fingerprint-superimposed image in which a plurality of fingerprints overlap, and more particularly, to a method and an apparatus for separating a fingerprint using several checkpoints of a deep learning algorithm.
  • Overlapping latent fingerprints are often found at crime scenes, but in order to use them as court evidence, the image corresponding to each latent fingerprint must be separated from the overlapped image.
  • Conventional techniques include irradiating the overlapping latent fingerprints with a high-power laser, detecting the fluorescence intensity of the fingerprints in various wavelength bands, and separating them on that basis, as well as separating them based on spectra obtained by analyzing samples of the overlapping latent fingerprints through mass spectrometry.
  • The technical problem to be achieved by the embodiment of the present invention is to provide a method and an apparatus for separating overlapping fingerprints using a deep learning algorithm.
  • An example of a fingerprint separation method for achieving the above technical problem includes: obtaining a fingerprint-separated image by separating a detection target fingerprint, through a first deep learning network trained to separate superimposed fingerprints, from a fingerprint-superimposed image in which at least two fingerprints overlap; generating a replacement image in which the region of interest where the plurality of fingerprints overlap in the fingerprint-superimposed image is replaced with the region of the fingerprint-separated image corresponding to the region of interest; and separating the background and the detection target fingerprint from the replacement image through a second deep learning network trained to separate the background and the fingerprint, and outputting the detection target fingerprint.
  • An example of a fingerprint separation apparatus includes: a fingerprint separation unit that obtains a fingerprint-separated image by separating a detection target fingerprint, through a first deep learning network trained to separate superimposed fingerprints, from a fingerprint-superimposed image in which at least two fingerprints overlap; a region-of-interest setting unit that generates a replacement image in which the region of interest where fingerprints overlap in the fingerprint-superimposed image is replaced with the region of the fingerprint-separated image corresponding to the region of interest; and a fingerprint output unit that separates the background and the detection target fingerprint from the replacement image through a second deep learning network trained to separate the background and the fingerprint, and outputs the detection target fingerprint.
  • According to an embodiment of the present invention, fingerprint separation performance can be improved by using several checkpoints of a deep learning algorithm.
  • Latent fingerprints obtained at a crime scene can be restored quickly on site, which helps expedite the initial investigation.
  • Like a lie detector, which is not admitted as evidence but still helps an investigation, the results restored by the present invention can support the initial investigation even if they are difficult to admit as evidence, and the restored results may later be accepted as evidence after final verification by an authorized examiner.
  • Compared with fingerprint separation using an existing deep learning algorithm, an improved target fingerprint image can be obtained, and unnecessary or incorrect fingerprint information can be reduced, lessening confusion in the initial investigation.
  • FIG. 1 is a view showing an example of a fingerprint separation device according to an embodiment of the present invention.
  • FIG. 2 is a flowchart illustrating an example of a fingerprint separation method according to an embodiment of the present invention.
  • FIGS. 3 and 4 are diagrams schematically showing a fingerprint separation method according to an embodiment of the present invention.
  • FIG. 5 is a diagram showing an example of a deep learning network according to an embodiment of the present invention.
  • FIGS. 6 and 7 are diagrams illustrating an example of a method of generating training data for a deep learning network according to an embodiment of the present invention.
  • FIG. 8 is a diagram illustrating an example of a preprocessing process of training data for a deep learning network according to an embodiment of the present invention.
  • FIG. 9 is a view showing an example of a method of generating a checkpoint for each fingerprint type according to an embodiment of the present invention.
  • FIG. 10 is a diagram illustrating an example of a fingerprint separation result according to an embodiment of the present invention.
  • FIG. 11 is a diagram illustrating the configuration of an example of a fingerprint separation device according to an embodiment of the present invention.
  • FIG. 1 is a view showing an example of a fingerprint separation device according to an embodiment of the present invention.
  • Referring to FIG. 1, the fingerprint separation apparatus 100 separates and outputs a detection target fingerprint using a deep learning algorithm when it receives a fingerprint-superimposed image in which at least two fingerprints overlap.
  • Here, the fingerprint-superimposed image refers to an image in which at least two latent fingerprints partially or entirely overlap each other on a background image.
  • For example, the fingerprint-superimposed image may be obtained by photographing a latent fingerprint on an object at a crime scene with a camera and then applying a predetermined preprocessing process.
  • Accordingly, this embodiment proposes a fingerprint separation method using a first deep learning network specialized in separating overlapping fingerprints and a second deep learning network specialized in separating the background and the fingerprint.
  • Since fingerprint separation performance differs depending on the deep learning network structure, the training epoch, the training data, and so on, this embodiment loads a first checkpoint of a deep learning network that performs well at separating overlapping fingerprints and a second checkpoint of a deep learning network that performs well at separating the background and the fingerprint, sets up the first deep learning network and the second deep learning network respectively, and then separates the fingerprint through these networks.
  • FIG. 2 is a flowchart illustrating an example of a fingerprint separation method according to an embodiment of the present invention.
  • Referring to FIG. 2, the fingerprint separation apparatus 100 first separates the detection target fingerprint from the fingerprint-superimposed image using the first deep learning network trained to separate overlapping fingerprints (S200).
  • For example, the fingerprint separation apparatus 100 may set the first deep learning network by loading the first checkpoint of a deep learning network trained to separate overlapping fingerprints.
  • Here, a checkpoint is data that defines the deep learning network, such as the parameters and variable values of the network at a specific point in the training process. By loading a checkpoint, a deep learning network trained up to that point can be set up.
  • The first deep learning network may be set according to the fingerprint type. For example, if there are a plurality of first checkpoints corresponding to a plurality of fingerprint types as shown in FIG. 9, the fingerprint separation apparatus 100 may load the first checkpoint corresponding to the type of the detection target fingerprint to be separated from the fingerprint-superimposed image, set the first deep learning network with it, and generate a fingerprint-separated image in which the detection target fingerprint is primarily separated. Checkpoints for a plurality of fingerprint types are discussed again with reference to FIG. 9.
  • The fingerprint separation apparatus 100 designates the region where fingerprints overlap in the fingerprint-superimposed image as a region of interest and replaces the region of interest with the corresponding region of the fingerprint-separated image (S210). For example, as shown in FIG. 4, the fingerprint separation apparatus 100 may designate the region where two fingerprints overlap in the fingerprint-superimposed image 430 as the region of interest 432 and generate a replacement image 440 in which the region of interest 432 is replaced with the corresponding region of the fingerprint-separated image 420 containing the primarily separated detection target fingerprint.
  • The fingerprint separation apparatus 100 then secondarily separates and outputs the detection target fingerprint from the replacement image using the second deep learning network trained to separate the background and the fingerprint (S220).
  • For example, the fingerprint separation apparatus 100 may set the second deep learning network by loading the second checkpoint of a deep learning network trained to separate the background and the fingerprint.
  • Loading the first checkpoint described above sets the first deep learning network, and loading the second checkpoint sets the second deep learning network.
  • Like the first deep learning network, the second deep learning network may also be set according to the fingerprint type. Second checkpoints for a plurality of fingerprint types are discussed again with reference to FIG. 9.
  • FIGS. 3 and 4 are diagrams schematically illustrating a fingerprint separation method according to an embodiment of the present invention.
  • Referring to FIGS. 3 and 4 together, the fingerprint separation apparatus 100 inputs the fingerprint-superimposed image 300, 410, in which at least two fingerprints overlap, to the first deep learning network 310.
  • The first deep learning network 310 may be set by loading the first checkpoint 315 trained and stored in advance for separating overlapping fingerprints.
  • The fingerprint separation apparatus 100 acquires the fingerprint-separated image 320, 420 in which the detection target fingerprint 412 is primarily separated through the first deep learning network 310.
  • The fingerprint separation apparatus 100 designates the region where the plurality of fingerprints overlap in the fingerprint-superimposed image 300, 410 as a region of interest (ROI) 432 and replaces the region of interest 432 with the corresponding region 422 of the fingerprint-separated image 320, 420 obtained through the first deep learning network 310.
  • The fingerprint separation apparatus 100 may provide a screen interface through which the user can input the region of interest 432.
  • The fingerprint separation apparatus 100 inputs the replacement image 330, 440, in which the region of interest 432 is replaced with the corresponding region 422 of the fingerprint-separated image 320, to the second deep learning network 340.
  • The second deep learning network 340 may be set by loading the second checkpoint 345 trained and stored in advance for separating the background and the fingerprint.
  • The fingerprint separation apparatus 100 separates and outputs the detection target fingerprint 412 from the replacement image 330, 440 through the second deep learning network 340.
  • In other words, the second deep learning network 340 regards everything other than the detection target fingerprint 412 in the replacement image 330, 440, including the other fingerprints, as background and outputs a final image 350, 450 in which the detection target fingerprint 412 is separated.
  • The fingerprint separation apparatus 100 may acquire the fingerprint-superimposed image 410 through a preprocessing process.
  • Referring to FIG. 4, the fingerprint separation apparatus 100 rotates the captured image 400 so that the detection target fingerprint 412 faces a preset direction (for example, the longitudinal direction) and crops the overlapping fingerprint area so that the number of pixels per unit length becomes a predefined value, thereby matching the resolution and generating the fingerprint-superimposed image 410.
  • For example, the fingerprint separation apparatus 100 may adjust the resolution of the fingerprint-superimposed image 410 to 500 ppi (pixels per inch), meaning that a fingerprint that is 1 inch long in reality occupies 500 pixels in the image.
  • An image 400 whose scale does not match can be used after resizing it to 500 ppi.
  • Since the actual size of the fingerprint must be known for this, a distance of 1 cm may be specified on the original image 400 using a ruler scale before preprocessing.
  • In another embodiment, a vertical line may be drawn on the detection target fingerprint 412 to designate it and, at the same time, the resolution may be adjusted by calculating the length of that fingerprint. Drawing the line on the original image 400 may be done manually by the user. In this case, the fingerprint separation apparatus may rotate the fingerprint-superimposed image 410 according to the vertical line and identify the detection target fingerprint 412.
  • FIG. 5 is a diagram illustrating an example of a deep learning network according to an embodiment of the present invention.
  • Fingerprint separation technology using a deep learning algorithm builds a training dataset by synthesizing artificial fingerprints, which reflect the characteristics of actual latent fingerprints found in the field, with background images, and then trains the deep learning network to extract the original artificial fingerprint.
  • In this process, the deep learning network identifies fingerprints aligned in a certain direction in the fingerprint-superimposed image as the separation target and learns to remove the other fingerprints and the background. Therefore, fingerprint separation using deep learning combines a process of separating fingerprints from each other with a process of separating the fingerprint from the background.
  • The network parameters learned up to intermediate points of the training process are stored as checkpoints, and each checkpoint differs in fingerprint separation performance depending on the deep learning network structure, the training epoch, and the variables of the preprocessing process used to synthesize the artificial fingerprints. The more appropriate the checkpoint loaded and executed for the fingerprint separation situation, the better the separation performance.
  • In this embodiment, a first checkpoint with good performance at separating overlapping fingerprints and a second checkpoint with good performance at separating the background and the fingerprint are identified and stored in advance, and then each checkpoint is loaded to separate the fingerprints.
  • FIGS. 6 and 7 are diagrams illustrating an example of a method of generating training data for a deep learning network according to an embodiment of the present invention.
  • Referring to FIGS. 6 and 7, the fingerprint separation apparatus 100 generates training data 630 in which a background image 600 and at least two artificial fingerprints 610 and 620 are superimposed.
  • The background image 600 may be a surface image of various objects from which fingerprints can be detected (for example, various kinds of paper such as receipts).
  • The artificial fingerprints 610 and 620 are fingerprints generated in advance by various conventional methods.
  • At least two artificial fingerprints 610 and 620 may overlap in different directions.
  • For example, as shown in FIG. 7, the fingerprint separation apparatus 100 may arrange the first artificial fingerprint 710 in a predefined direction (for example, the vertical direction), rotate the second artificial fingerprint 720 by a certain angle from the vertical direction, and then superimpose them on the background image 700 to generate the training data 730.
  • Here, the fingerprint arranged in the predefined direction (that is, the vertical direction), namely the first artificial fingerprint 710, becomes the target that the deep learning network separates.
  • After constructing the training dataset in this way, the deep learning network can be trained to separate and output the fingerprint arranged in the predefined direction (for example, vertical), that is, the first artificial fingerprint 710.
  • Since only a preprocessing step of rotating the detection target fingerprint into the predefined direction is needed for the fingerprint-superimposed image input to the deep learning network, a large number of fingerprint images can be processed quickly.
  • The deep learning networks used in this embodiment are the first deep learning network trained to separate overlapping fingerprints and the second deep learning network trained to separate the background and the fingerprint, as described with reference to FIGS. 2 and 3.
  • Both the first deep learning network and the second deep learning network may be generated by training with the training data 630 and 730 in which at least two artificial fingerprints 610, 620, 710 and 720 are superimposed on the background images 600 and 700.
  • However, the first deep learning network is trained to specialize in separating superimposed fingerprints, while the second deep learning network is trained to specialize in separating the background and the fingerprint.
  • Depending on the training epoch of the deep learning network, the type of training data, and so on, the deep learning network may show better performance at separating overlapping fingerprints or at separating the background and the fingerprint.
  • Accordingly, the fingerprint separation apparatus 100 may store, as the first checkpoint, the deep learning network at the point in the training process when it shows the best performance at separating overlapping fingerprints, and may store, as the second checkpoint, the deep learning network at the point when it shows the best performance at separating the background and the fingerprint.
  • For example, the fingerprint separation apparatus 100 may train the deep learning network up to a certain point using a plurality of training data 630 and 730 created by superimposing the background images 600 and 700 with at least two artificial fingerprints 610, 620, 710 and 720, and store a first checkpoint defining the network trained up to that point. The fingerprint separation apparatus 100 may then load the first checkpoint to set the first deep learning network.
  • The fingerprint separation apparatus 100 may also train the deep learning network up to a certain point using training data created by superimposing a background image 600, 700 with a single artificial fingerprint 610, 710, or using training data in which first images (a background image 600, 700 superimposed with a single artificial fingerprint 610, 710) and second images (a background image 600, 700 superimposed with at least two artificial fingerprints 610, 620, 710 and 720) are mixed, for example at a ratio of 5:5, and store a second checkpoint defining the deep learning network trained up to that point. The fingerprint separation apparatus 100 may then load the second checkpoint to set the second deep learning network.
  • FIG. 8 is a diagram illustrating an example of a preprocessing process of learning data for a deep learning network according to an embodiment of the present invention.
  • The fingerprint separation apparatus 100 may perform a preprocessing process so that the artificial fingerprints in the training dataset resemble actual latent fingerprints found in the field. For example, the fingerprint separation apparatus 100 may add curvature to all or part of the ridges of the artificial fingerprint 800 (810), modulate the thickness of all or part of the ridges (820), or adjust the sharpness of all or part of the ridges, for example by blurring (830). The fingerprint separation apparatus 100 may perform this preprocessing automatically, or may receive the curvature, thickness, sharpness, and the like from the user through a user interface and apply them to the artificial fingerprint 800. The fingerprint separation apparatus 100 may rotate or mirror the preprocessed artificial fingerprint and superimpose it with other artificial fingerprints to generate the training data 840.
  • FIG. 9 is a diagram illustrating an example of a method of generating a checkpoint for each fingerprint type according to an embodiment of the present invention.
  • Referring to FIG. 9, the fingerprint separation apparatus 100 generates a training dataset 910 for each of a plurality of fingerprint types 900 and then trains the deep learning network using the training dataset 910 for each fingerprint type.
  • Various types of fingerprints may exist, such as a type in which the ridges are arranged in circles and a type in which the ridges are arranged in a horizontal direction. It is assumed that the training dataset 910 for each fingerprint type is predefined.
  • The fingerprint separation apparatus 100 trains the deep learning network using a first training dataset of a first fingerprint type and stores a checkpoint of the trained network, then trains the deep learning network using a second training dataset of a second fingerprint type and stores a checkpoint of that network. In this way, N checkpoints 920 for N fingerprint types 900 may be generated and stored.
  • The fingerprint separation apparatus 100 may load any one of the N checkpoints 920 according to the type of the detection target fingerprint in the fingerprint-superimposed image. For example, if the detection target fingerprint in the fingerprint-superimposed image is arranged horizontally, the fingerprint separation apparatus 100 may load the checkpoint trained with the training dataset of the horizontally arranged fingerprint type.
  • The fingerprint separation apparatus 100 may display information on the fingerprint type of each checkpoint (for example, at least one representative image for each fingerprint type) through the screen interface so that the user can easily select the checkpoint of the fingerprint type most similar to the detection target fingerprint.
  • The training dataset 910 for each fingerprint type may be defined to generate a plurality of first checkpoints specialized for separating overlapping fingerprints, or to generate a plurality of second checkpoints specialized for separating the background and the fingerprint.
  • FIG. 10 is a diagram illustrating an example of a fingerprint separation result according to an embodiment of the present invention.
  • To separate a detection target fingerprint, a process of separating different overlapping fingerprints and a process of separating the background and the fingerprint must be performed in combination. If these two processes are not performed properly, the ridges of the fingerprint become blurred, or ridges that did not exist in the fingerprint-superimposed image appear in the separated detection target fingerprint image 1010.
  • Such an inaccurate fingerprint image 1010 may hinder the initial investigation by providing fingerprint information (for example, minutiae, core, or delta) that differs from the truth.
  • In contrast, when the detection target fingerprint is separated using the first checkpoint specialized for separating overlapping fingerprints and the second checkpoint specialized for separating the background and the fingerprint, the fingerprint 1020 can be separated accurately.
  • An inaccurate ridge portion 1015 exists in the fingerprint-separated image 1010 produced with a conventional deep learning algorithm, whereas accurate ridges are expressed in the fingerprint-separated image 1020 produced according to this embodiment.
  • FIG. 11 is a diagram showing the configuration of an example of a fingerprint separation device according to an embodiment of the present invention.
  • Referring to FIG. 11, the fingerprint separation apparatus 100 includes a learning unit 1100, a preprocessing unit 1110, a fingerprint separation unit 1120, a region-of-interest setting unit 1130, and a fingerprint output unit 1140.
  • The learning unit 1100 trains the first deep learning network to specialize in separating overlapping fingerprints and the second deep learning network to specialize in separating the background and the fingerprint, and then stores the first checkpoint and the second checkpoint of the trained first and second deep learning networks, respectively.
  • Checkpoints may also be generated and stored for each fingerprint type, an example of which is shown in FIG. 9.
  • The preprocessing unit 1110 rotates the image obtained by photographing the overlapping fingerprints so that the detection target fingerprint faces a preset direction (for example, the longitudinal direction) and crops the overlapping fingerprint area to a predefined resolution, thereby creating the fingerprint-superimposed image.
  • The fingerprint separation unit 1120 generates a fingerprint-separated image in which the detection target fingerprint is primarily separated from the fingerprint-superimposed image through the first deep learning network set by loading the first checkpoint.
  • The region-of-interest setting unit 1130 generates a replacement image in which the region of interest where a plurality of fingerprints overlap in the fingerprint-superimposed image is replaced with the corresponding region of the fingerprint-separated image. For example, when the region of interest 432 is set by the user as shown in FIG. 4, the replacement image is created by replacing the region of interest 432 with the region 422 of the fingerprint-separated image 420 containing the primarily separated detection target fingerprint.
  • The fingerprint output unit 1140 loads the second checkpoint and outputs the secondarily separated detection target fingerprint from the replacement image through the second deep learning network set with it.
  • The fingerprint output unit 1140 may normalize the value of each pixel of the detection target fingerprint output through the second deep learning network to a value between 0 and 1. For example, if each pixel has a value between 0 and 255, it can be normalized by dividing by 255. The fingerprint output unit 1140 may also output a detection target fingerprint in which pixels below a predefined value are removed; a sketch of this post-processing appears at the end of this list.
  • The present invention can also be implemented as computer-readable code on a computer-readable recording medium.
  • The computer-readable recording medium includes all types of recording devices in which data readable by a computer system is stored. Examples of computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices.
  • The computer-readable recording medium may also be distributed over network-connected computer systems so that the computer-readable code is stored and executed in a distributed manner.
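As a hedged illustration of the post-processing performed by the fingerprint output unit described above (normalizing pixel values and removing pixels at or below a predefined value), the following minimal Python sketch may help; the threshold value and the grayscale array layout are assumptions, not taken from the patent.

```python
import numpy as np

def postprocess_output(fingerprint: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Normalize a separated fingerprint (0-255 grayscale) to [0, 1] and
    remove (zero out) pixels at or below a predefined threshold."""
    normalized = fingerprint.astype(np.float32) / 255.0  # divide 0-255 values by 255
    normalized[normalized <= threshold] = 0.0            # drop pixels at or below the predefined value
    return normalized
```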

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

Disclosed are a method for separating a fingerprint from a fingerprint overlay image using a deep-learning algorithm, and a device therefor. The fingerprint separation device: obtains a fingerprint separation image by separating a detection target fingerprint from a fingerprint overlay image, in which at least two fingerprints are overlaid, through a first deep-learning network trained to separate the overlaid fingerprints; generates a replacement image in which a region of interest, in which the plurality of fingerprints are overlaid in the fingerprint overlay image, is replaced with a region, corresponding to the region of interest, in the fingerprint separation image; and separates the background and the detection target fingerprint in the replacement image through a second deep-learning network trained to separate the background and the fingerprint, and outputs the detection target fingerprint.

Description

Method and device for separating fingerprints from a superimposed fingerprint image using a deep learning algorithm
An embodiment of the present invention relates to a method and an apparatus for separating a fingerprint from a fingerprint-superimposed image in which a plurality of fingerprints overlap, and more particularly, to a method and an apparatus for separating a fingerprint using several checkpoints of a deep learning algorithm.
Overlapping latent fingerprints are often found at crime scenes, but in order to use them as court evidence, the image corresponding to each latent fingerprint must be separated from the overlapped image. Various conventional techniques exist for separating overlapping fingerprints: a technique that, based on a photograph taken in the visible light band, estimates the directions of the ridges of each latent fingerprint and separates the differently oriented ridges mixed in the overlapping area; a technique that irradiates the overlapping latent fingerprints with a high-power laser, detects the fluorescence intensity of the fingerprints in various wavelength bands, and separates them on that basis; and a technique that separates them based on spectra obtained by analyzing a sample of the overlapping latent fingerprints through mass spectrometry. However, the image separation technique using the orientation field of the ridges gives poor results when ambiguous feature points exist in the ridge directions of the overlapped area or when the directions coincide. The fluorescence-based technique can separate only samples deposited with a time difference and, because the fluorescence intensity peaks in a specific wavelength band, relies on inference through additional observation after measurement, so its applicability is limited. The technique that separates overlapping fingerprints through mass spectrometry has the disadvantage of damaging the sample in the process of ionizing it.
The technical problem to be achieved by the embodiment of the present invention is to provide a method and an apparatus for separating overlapping fingerprints using a deep learning algorithm.
An example of a fingerprint separation method according to an embodiment of the present invention for achieving the above technical problem includes: obtaining a fingerprint-separated image by separating a detection target fingerprint, through a first deep learning network trained to separate superimposed fingerprints, from a fingerprint-superimposed image in which at least two fingerprints overlap; generating a replacement image in which the region of interest where the plurality of fingerprints overlap in the fingerprint-superimposed image is replaced with the region of the fingerprint-separated image corresponding to the region of interest; and separating the background and the detection target fingerprint from the replacement image through a second deep learning network trained to separate the background and the fingerprint, and outputting the detection target fingerprint.
An example of a fingerprint separation apparatus according to an embodiment of the present invention for achieving the above technical problem includes: a fingerprint separation unit that obtains a fingerprint-separated image by separating a detection target fingerprint, through a first deep learning network trained to separate superimposed fingerprints, from a fingerprint-superimposed image in which at least two fingerprints overlap; a region-of-interest setting unit that generates a replacement image in which the region of interest where fingerprints overlap in the fingerprint-superimposed image is replaced with the region of the fingerprint-separated image corresponding to the region of interest; and a fingerprint output unit that separates the background and the detection target fingerprint from the replacement image through a second deep learning network trained to separate the background and the fingerprint, and outputs the detection target fingerprint.
According to an embodiment of the present invention, fingerprint separation performance can be improved by using several checkpoints of a deep learning algorithm. In addition, latent fingerprints obtained at a crime scene can be restored quickly on site, which helps expedite the initial investigation. Like a lie detector, which is not admitted as evidence but still helps an investigation, the results restored by the present invention can support the initial investigation even if they are difficult to admit as evidence, and the restored results may later be accepted as evidence after final verification by an authorized examiner. Through the present invention, an improved target fingerprint image can be obtained compared with fingerprint separation using an existing deep learning algorithm, and unnecessary or incorrect fingerprint information can be reduced, lessening confusion in the initial investigation.
FIG. 1 is a view showing an example of a fingerprint separation apparatus according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an example of a fingerprint separation method according to an embodiment of the present invention;
FIGS. 3 and 4 are diagrams schematically showing a fingerprint separation method according to an embodiment of the present invention;
FIG. 5 is a diagram showing an example of a deep learning network according to an embodiment of the present invention;
FIGS. 6 and 7 are diagrams illustrating an example of a method of generating training data for a deep learning network according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating an example of a preprocessing process of training data for a deep learning network according to an embodiment of the present invention;
FIG. 9 is a view showing an example of a method of generating a checkpoint for each fingerprint type according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating an example of a fingerprint separation result according to an embodiment of the present invention; and
FIG. 11 is a diagram illustrating the configuration of an example of a fingerprint separation apparatus according to an embodiment of the present invention.
Hereinafter, a fingerprint separation method and apparatus according to an embodiment of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a view showing an example of a fingerprint separation apparatus according to an embodiment of the present invention.
Referring to FIG. 1, the fingerprint separation apparatus 100 separates and outputs a detection target fingerprint using a deep learning algorithm when it receives a fingerprint-superimposed image in which at least two fingerprints overlap. Here, the fingerprint-superimposed image refers to an image in which at least two latent fingerprints partially or entirely overlap each other on a background image. For example, the fingerprint-superimposed image may be obtained by photographing a latent fingerprint on an object at a crime scene with a camera and then applying a predetermined preprocessing process.
To separate the desired detection target fingerprint from the fingerprint-superimposed image, the overlapping fingerprints must be separated from one another and from the background. However, when the overlapping fingerprints and the background image are all removed at once with a single deep learning algorithm, the complexity increases with the complexity of the background and the difficulty of the separation, so the separation result for the detection target fingerprint may be inaccurate.
Accordingly, this embodiment proposes a fingerprint separation method that uses a first deep learning network specialized in separating overlapping fingerprints and a second deep learning network specialized in separating the background and the fingerprint. In other words, since fingerprint separation performance differs depending on the deep learning network structure, the training epoch, the training data, and so on, this embodiment loads a first checkpoint of a deep learning network that performs well at separating overlapping fingerprints and a second checkpoint of a deep learning network that performs well at separating the background and the fingerprint, sets up the first deep learning network and the second deep learning network respectively, and then separates the fingerprint through these networks.
FIG. 2 is a flowchart illustrating an example of a fingerprint separation method according to an embodiment of the present invention.
Referring to FIG. 2, the fingerprint separation apparatus 100 first separates the detection target fingerprint from the fingerprint-superimposed image using the first deep learning network trained to separate overlapping fingerprints (S200). For example, the fingerprint separation apparatus 100 may set the first deep learning network by loading the first checkpoint of a deep learning network trained to separate overlapping fingerprints. Here, a checkpoint is data that defines the deep learning network, such as the parameters and variable values of the network at a specific point in the training process; by loading a checkpoint, a deep learning network trained up to that point can be set up.
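The patent does not name a deep learning framework, so the checkpoint loading described above is illustrated here only as a PyTorch-style sketch; the network class, file names, state-dictionary keys, and fingerprint-type labels are hypothetical.

```python
import torch

from model import FingerprintSeparationNet  # hypothetical network class; the architecture is not disclosed

# Hypothetical mapping from fingerprint type to a stored first checkpoint (see FIG. 9).
FIRST_CHECKPOINTS = {
    "circular": "ckpt_first_circular.pth",
    "horizontal": "ckpt_first_horizontal.pth",
}

def load_network(checkpoint_path: str, device: str = "cpu") -> torch.nn.Module:
    """Set up a deep learning network from a stored checkpoint
    (the network parameters saved at a specific point of training)."""
    net = FingerprintSeparationNet()
    state = torch.load(checkpoint_path, map_location=device)
    net.load_state_dict(state["model_state"])  # assumed key layout of the saved checkpoint
    net.eval()
    return net

# Example: choose the first checkpoint according to the detection target fingerprint type.
first_net = load_network(FIRST_CHECKPOINTS["horizontal"])
```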
The first deep learning network may be set according to the fingerprint type. For example, if there are a plurality of first checkpoints corresponding to a plurality of fingerprint types as shown in FIG. 9, the fingerprint separation apparatus 100 may load the first checkpoint corresponding to the type of the detection target fingerprint to be separated from the fingerprint-superimposed image, set the first deep learning network with it, and generate a fingerprint-separated image in which the detection target fingerprint is primarily separated. Checkpoints for a plurality of fingerprint types are discussed again with reference to FIG. 9.
The fingerprint separation apparatus 100 designates the region where fingerprints overlap in the fingerprint-superimposed image as a region of interest and replaces the region of interest with the corresponding region of the fingerprint-separated image (S210). For example, as shown in FIG. 4, the fingerprint separation apparatus 100 may designate the region where two fingerprints overlap in the fingerprint-superimposed image 430 as the region of interest 432 and generate a replacement image 440 in which the region of interest 432 is replaced with the corresponding region of the fingerprint-separated image 420 containing the primarily separated detection target fingerprint.
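A minimal sketch of the ROI replacement step (S210), assuming the superimposed and separated images are aligned grayscale arrays of the same size and the region of interest is a user-supplied rectangle; the patent leaves these details open.

```python
import numpy as np

def make_replacement_image(superimposed: np.ndarray,
                           separated: np.ndarray,
                           roi: tuple) -> np.ndarray:
    """Replace the overlapping-fingerprint ROI of the superimposed image with the
    corresponding region of the first-stage fingerprint-separated image."""
    y0, y1, x0, x1 = roi  # rectangle designated by the user, e.g. via a screen interface
    replacement = superimposed.copy()
    replacement[y0:y1, x0:x1] = separated[y0:y1, x0:x1]
    return replacement
```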
The fingerprint separation apparatus 100 then secondarily separates and outputs the detection target fingerprint from the replacement image using the second deep learning network trained to separate the background and the fingerprint (S220). For example, the fingerprint separation apparatus 100 may set the second deep learning network by loading the second checkpoint of a deep learning network trained to separate the background and the fingerprint. Loading the first checkpoint described above sets the first deep learning network, and loading the second checkpoint sets the second deep learning network. Like the first deep learning network, the second deep learning network may also be set according to the fingerprint type. Second checkpoints for a plurality of fingerprint types are discussed again with reference to FIG. 9.
FIGS. 3 and 4 are diagrams schematically illustrating a fingerprint separation method according to an embodiment of the present invention.
Referring to FIGS. 3 and 4 together, the fingerprint separation apparatus 100 inputs the fingerprint-superimposed image 300, 410, in which at least two fingerprints overlap, to the first deep learning network 310. The first deep learning network 310 may be set by loading the first checkpoint 315 trained and stored in advance for separating overlapping fingerprints. The fingerprint separation apparatus 100 acquires the fingerprint-separated image 320, 420 in which the detection target fingerprint 412 is primarily separated through the first deep learning network 310.
The fingerprint separation apparatus 100 designates the region where the plurality of fingerprints overlap in the fingerprint-superimposed image 300, 410 as a region of interest (ROI) 432 and replaces the region of interest 432 with the corresponding region 422 of the fingerprint-separated image 320, 420 obtained through the first deep learning network 310. The fingerprint separation apparatus 100 may provide a screen interface through which the user can input the region of interest 432.
The fingerprint separation apparatus 100 inputs the replacement image 330, 440, in which the region of interest 432 is replaced with the corresponding region 422 of the fingerprint-separated image 320, to the second deep learning network 340. The second deep learning network 340 may be set by loading the second checkpoint 345 trained and stored in advance for separating the background and the fingerprint.
The fingerprint separation apparatus 100 separates and outputs the detection target fingerprint 412 from the replacement image 330, 440 through the second deep learning network 340. In other words, the second deep learning network 340 regards everything other than the detection target fingerprint 412 in the replacement image 330, 440, including the other fingerprints, as background and outputs a final image 350, 450 in which the detection target fingerprint 412 is separated.
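Putting the steps of FIGS. 3 and 4 together, the two-stage inference could be sketched as follows; it reuses the hypothetical load_network and make_replacement_image helpers from the earlier sketches, and the checkpoint file names are placeholders rather than anything disclosed in the patent.

```python
import numpy as np
import torch

def run_inference(net: torch.nn.Module, image: np.ndarray) -> np.ndarray:
    """Run one grayscale image (H x W, values 0-255) through a separation network."""
    x = torch.from_numpy(image).float().div(255.0).unsqueeze(0).unsqueeze(0)  # shape 1 x 1 x H x W
    with torch.no_grad():
        y = net(x)
    return (y.squeeze().clamp(0, 1).numpy() * 255).astype(np.uint8)

def separate_fingerprint(superimposed: np.ndarray, roi: tuple) -> np.ndarray:
    """Two-stage separation: the first network separates the overlapping fingerprints,
    the ROI is replaced, and the second network removes the background and other fingerprints."""
    first_net = load_network("ckpt_first.pth")    # first checkpoint: overlapping-fingerprint separation
    second_net = load_network("ckpt_second.pth")  # second checkpoint: background/fingerprint separation

    separated = run_inference(first_net, superimposed)                  # fingerprint-separated image (320, 420)
    replacement = make_replacement_image(superimposed, separated, roi)  # replacement image (330, 440)
    final = run_inference(second_net, replacement)                      # final image (350, 450)
    return final
```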
The fingerprint separation apparatus 100 may acquire the fingerprint-superimposed image 410 through a preprocessing process. Referring to FIG. 4, the fingerprint separation apparatus 100 may rotate the captured image 400 so that the detection target fingerprint 412 faces a preset direction (for example, the longitudinal direction) and crop the overlapping fingerprint area so that the number of pixels per unit length becomes a predefined value, thereby matching the resolution and generating the fingerprint-superimposed image 410. For example, the fingerprint separation apparatus 100 may adjust the resolution of the fingerprint-superimposed image 410 to 500 ppi (pixels per inch), meaning that a fingerprint that is 1 inch long in reality occupies 500 pixels in the image. An image 400 whose scale does not match can be used after resizing it to 500 ppi. Since the actual size of the fingerprint must be known for this, a distance of 1 cm may be specified on the original image 400 using a ruler scale before preprocessing. In another embodiment, a vertical line may be drawn on the detection target fingerprint 412 to designate it and, at the same time, the resolution may be adjusted by calculating the length of that fingerprint. Drawing the line on the original image 400 may be done manually by the user. In this case, the fingerprint separation apparatus may rotate the fingerprint-superimposed image 410 according to the vertical line and identify the detection target fingerprint 412.
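A rough sketch of the rotation and 500 ppi rescaling described above, assuming the user has marked a reference length (for example, 1 cm on a ruler scale) and a rotation angle in the original image; OpenCV is used here purely for illustration.

```python
import cv2
import numpy as np

def rescale_to_500ppi(image: np.ndarray, ref_length_px: float, ref_length_cm: float = 1.0) -> np.ndarray:
    """Rescale an image so that the fingerprint occupies about 500 pixels per inch.
    ref_length_px is the measured pixel length of the user-marked reference distance."""
    current_ppi = ref_length_px / ref_length_cm * 2.54  # pixels per inch implied by the reference mark
    scale = 500.0 / current_ppi
    h, w = image.shape[:2]
    return cv2.resize(image, (int(w * scale), int(h * scale)), interpolation=cv2.INTER_CUBIC)

def rotate_to_preset_direction(image: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate the image so the detection target fingerprint faces the preset (e.g. vertical) direction."""
    h, w = image.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(image, m, (w, h), borderValue=255)
```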
도 5는 본 발명의 실시 예에 따른 딥러닝 네트워크의 일 예를 도시한 도면이다.5 is a diagram illustrating an example of a deep learning network according to an embodiment of the present invention.
도 5를 참조하면, 딥러닝 알고리즘은 영상으로부터 패턴 인식, 복원, 분리 등의 영상 처리 작업에 뛰어난 성능을 보인다는 점은 최근의 많은 연구에 의해 증명된 바 있다. 딥러닝 알고리즘을 이용한 지문 분리 기술은, 실제 현장의 유류 지문의 특성이 반영된 인조지문과 배경영상을 합성하여 학습데이터셋을 구축한 다음 원조 인조지문을 추출하도록 딥러닝 네트워크를 학습시킨다. 이 과정에서 딥러닝 네트워크는 지문중첩영상에서 일정 방향으로 정렬된 지문을 분리 대상으로 파악하고, 그 외의 배경과 다른 지문을 제거하는 방향으로 학습한다. 따라서 딥러닝을 이용한 지문분리는 지문끼리 분리하는 과정과 지문과 배경을 분리하는 과정이 복합적으로 진행된다. 이때 딥러닝 과정에서 중간과정까지 학습한 네트워크 파라미터를 체크포인트로 저장하며, 각각의 체크포인트는 딥러닝 네트워크 구조와 딥러닝 학습 에포크(epoch), 인조지문을 합성하는 전처리 과정의 변수 등에 따라 지문 분리 성능에 차이가 존재한다. 따라서 지문 분리 상황에 맞는 적절한 체크포인트를 불러와 실행할수록 분리 성능이 향상된다.Referring to FIG. 5 , it has been proven by many recent studies that the deep learning algorithm shows excellent performance in image processing tasks such as pattern recognition, restoration, and separation from images. Fingerprint separation technology using a deep learning algorithm builds a learning dataset by synthesizing an artificial fingerprint that reflects the characteristics of an actual field oil fingerprint and a background image, and then trains the deep learning network to extract the original artificial fingerprint. In this process, the deep learning network identifies fingerprints aligned in a certain direction in the fingerprint superimposed image as separation targets, and learns in the direction of removing other fingerprints from the background. Therefore, fingerprint separation using deep learning involves a process of separating fingerprints and a process of separating fingerprints from the background. At this time, the network parameters learned from the deep learning process to the intermediate process are stored as checkpoints, and each checkpoint separates the fingerprints according to the deep learning network structure, the deep learning learning epoch, and the variables of the preprocessing process for synthesizing artificial fingerprints. There is a difference in performance. Therefore, the more the appropriate checkpoint is called and executed according to the fingerprint separation situation, the better the separation performance.
본 실시 예는 중첩된 지문의 분리 성능이 좋은 제1 체크포인트와 배경과 지문의 분리 성능이 좋은 제2 체크포인트는 미리 파악하여 저장한 후 각 체크포인트를 로딩하여 지문을 분리한다. In this embodiment, the first checkpoint with good separation performance of the overlapping fingerprint and the second checkpoint with good separation performance of the background and fingerprint are identified and stored in advance, and then each checkpoint is loaded to separate the fingerprints.
FIGS. 6 and 7 are diagrams illustrating an example of a method of generating training data for a deep learning network according to an embodiment of the present invention.
Referring to FIGS. 6 and 7, the fingerprint separation apparatus 100 generates training data 630 by superimposing a background image 600 and at least two artificial fingerprints 610 and 620. The background image 600 may be a surface image of various objects on which fingerprints can be found (for example, various kinds of paper such as receipts). The artificial fingerprints 610 and 620 are fingerprints generated in advance by various conventional methods.
The at least two artificial fingerprints 610 and 620 may be superimposed in different directions. For example, as shown in FIG. 7, the fingerprint separation apparatus 100 may place a first artificial fingerprint 710 in a predefined direction (for example, the vertical direction), rotate a second artificial fingerprint 720 by a certain angle from the vertical direction, and then superimpose both with the background image 700 to generate the training data 730. Here, the fingerprint arranged in the predefined direction (that is, the vertical direction), namely the first artificial fingerprint 710, becomes the target that the deep learning network separates.
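A minimal sketch of this synthesis step, assuming 8-bit grayscale images of equal size and a simple multiplicative overlay; the blending rule, the rotation angle, and the helper name are illustrative assumptions rather than the exact procedure of the embodiment.

```python
import numpy as np
from scipy.ndimage import rotate

def synthesize_training_pair(background, fp_target, fp_other, angle_deg=40.0):
    """Compose one training sample: the target fingerprint stays vertical,
    the second fingerprint is rotated, and both are overlaid on the background.
    Returns (input image, label image); the label is the target fingerprint."""
    fp_rotated = rotate(fp_other, angle_deg, reshape=False, cval=255)

    # Treat ridges as dark ink: multiplying normalized intensities lets
    # overlapping ridges darken each other, roughly like real latent prints.
    composite = (background / 255.0) * (fp_target / 255.0) * (fp_rotated / 255.0)
    return (composite * 255).astype(np.uint8), fp_target.astype(np.uint8)
```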
After the training dataset is built in this way, the deep learning network can be trained to separate and output the fingerprint arranged in the predefined direction (for example, the vertical direction), that is, the first artificial fingerprint 710. The present embodiment can be processed quickly even on an ordinary laptop with modest specifications, and the resulting image is simple and clear. Since the only pre-processing required for a fingerprint superimposed image input to the deep learning network is rotating the detection target fingerprint into the predefined direction, a large number of fingerprint images can be processed quickly.
The deep learning networks used in the present embodiment are, as described with reference to FIGS. 2 and 3, a first deep learning network trained to separate superimposed fingerprints and a second deep learning network trained to separate the background and the fingerprint.
Both the first deep learning network and the second deep learning network may be generated by training with the training data 630 and 730 in which at least two artificial fingerprints 610, 620, 710 and 720 are superimposed on the background images 600 and 700. However, the first deep learning network is trained to specialize in separating superimposed fingerprints, and the second deep learning network is trained to specialize in separating the background from the fingerprint. Depending on the training epoch of the deep learning network, the type of training data, and so on, the network may show superior performance in separating overlapping fingerprints or in separating the background from the fingerprint. Therefore, during the training process, the fingerprint separation apparatus 100 may store a first checkpoint of the deep learning network at the point in time at which it performs well at separating fingerprints from each other, and a second checkpoint at the point in time at which it performs well at separating the background from the fingerprint.
For example, the fingerprint separation apparatus 100 may store a first checkpoint that defines the deep learning network trained up to a certain point in time using a plurality of training data 630 and 730 created by superimposing the background images 600 and 700 with at least two artificial fingerprints 610, 620, 710 and 720. The fingerprint separation apparatus 100 may later load the first checkpoint to set up the first deep learning network.
The fingerprint separation apparatus 100 may also store a second checkpoint that defines the deep learning network trained up to a certain point in time using training data created by superimposing the background images 600 and 700 with a single artificial fingerprint 610 or 710, or using training data in which first images (a background image 600 or 700 overlaid with one artificial fingerprint 610 or 710) and second images (a background image 600 or 700 overlaid with at least two artificial fingerprints 610, 620, 710 and 720) are mixed (for example, at a ratio of first images to second images of 5:5). The fingerprint separation apparatus 100 may later load the second checkpoint to set up the second deep learning network.
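As a non-authoritative sketch, the 5:5 mixture described for the second checkpoint's training data could be assembled as follows, reusing the hypothetical synthesize_training_pair helper from the earlier sketch; a blank (all-white) second fingerprint stands in for the single-fingerprint case.

```python
import random
import numpy as np

def build_second_dataset(backgrounds, fingerprints, n_samples):
    """Mix single-fingerprint and two-fingerprint composites at a 5:5 ratio,
    as suggested for the data behind the second checkpoint."""
    dataset = []
    for _ in range(n_samples):
        bg = random.choice(backgrounds)
        target = random.choice(fingerprints)
        if random.random() < 0.5:
            # First image type: background + one artificial fingerprint.
            sample = synthesize_training_pair(bg, target, np.full_like(target, 255))
        else:
            # Second image type: background + two overlapping fingerprints.
            other = random.choice(fingerprints)
            sample = synthesize_training_pair(bg, target, other,
                                              angle_deg=random.uniform(20, 70))
        dataset.append(sample)
    return dataset
```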
FIG. 8 is a diagram illustrating an example of a pre-processing process for training data for a deep learning network according to an embodiment of the present invention.
Referring to FIG. 8, the fingerprint separation apparatus 100 may perform a pre-processing process so that the artificial fingerprints of the training dataset resemble latent fingerprints found in the field. For example, the fingerprint separation apparatus 100 may perform pre-processing that adds curvature to all or part of the ridges of an artificial fingerprint 800 (810), modulates the thickness of all or part of the ridges (820), or adjusts (for example, blurs) the sharpness of all or part of the ridges (830). The fingerprint separation apparatus 100 may perform this pre-processing automatically, or may receive the ridge curvature, thickness, sharpness, and the like from the user through a user interface and reflect them in the artificial fingerprint 800. The fingerprint separation apparatus 100 may rotate or mirror the pre-processed artificial fingerprint and superimpose it with another artificial fingerprint to generate the training data 840.
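The following is a rough sketch of such ridge-level pre-processing on a grayscale artificial fingerprint; the sinusoidal warp, morphological thickening, and Gaussian blur used here are illustrative stand-ins for the curvature (810), thickness (820), and sharpness (830) adjustments, not the specific operators of the embodiment.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, grey_dilation, map_coordinates

def augment_artificial_fingerprint(fp, wave_amp=2.0, wave_period=40.0,
                                   thicken=1, blur_sigma=1.0):
    """Apply field-like distortions: ridge curvature, ridge thickness
    modulation, and sharpness reduction, in that order."""
    h, w = fp.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")

    # 810: bend the ridges with a small sinusoidal horizontal displacement.
    xx_warped = xx + wave_amp * np.sin(2 * np.pi * yy / wave_period)
    warped = map_coordinates(fp, [yy, xx_warped], order=1, cval=255)

    # 820: thicken dark ridges by dilating the inverted image.
    thick = 255 - grey_dilation(255 - warped, size=(thicken + 1, thicken + 1))

    # 830: soften ridge sharpness with a Gaussian blur.
    return gaussian_filter(thick, sigma=blur_sigma)
```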
FIG. 9 is a diagram illustrating an example of a method of generating checkpoints for each fingerprint type according to an embodiment of the present invention.
Referring to FIG. 9, the fingerprint separation apparatus 100 may generate training datasets 910 for a plurality of fingerprint types 900 and then train a deep learning network using the training dataset 910 for each fingerprint type. For example, various fingerprint types may exist, such as a type in which the fingerprint ridges are arranged in circles and a type in which the fingerprint ridges are arranged in the horizontal direction. It is assumed that the training dataset 910 for each fingerprint type is predefined.
The fingerprint separation apparatus 100 trains the deep learning network using a first training dataset of a first fingerprint type and then stores a checkpoint of the trained deep learning network. It likewise trains the deep learning network using a second training dataset of a second fingerprint type and stores a checkpoint of that trained network. In this way, N checkpoints 920 for N fingerprint types 900 may be generated and stored.
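Assuming the per-type datasets 910 are available as a mapping and reusing the hypothetical save_checkpoint helper sketched earlier, the per-type checkpoint generation could be outlined as follows; make_network and train_fn are placeholders for whatever network constructor and training routine are used.

```python
def build_type_checkpoints(datasets_by_type, make_network, train_fn, epochs=50):
    """Train one network per fingerprint type and store its checkpoint (920)."""
    checkpoint_paths = {}
    for fp_type, dataset in datasets_by_type.items():
        net = make_network()
        train_fn(net, dataset, epochs=epochs)   # assumed training routine
        path = f"ckpt_type_{fp_type}.pt"
        save_checkpoint(net, epochs, path)      # helper sketched earlier
        checkpoint_paths[fp_type] = path
    return checkpoint_paths
```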
The fingerprint separation apparatus 100 may load any one of the N checkpoints 920 according to the type of the detection target fingerprint in the fingerprint superimposed image. For example, if the detection target fingerprint in the fingerprint superimposed image is of the horizontally arranged form, the fingerprint separation apparatus 100 may load the checkpoint trained with the training dataset of the horizontally arranged fingerprint type. The fingerprint separation apparatus 100 may display information about the fingerprint type of each checkpoint (for example, at least one representative image for each fingerprint type) through a screen interface so that the user can easily select the checkpoint whose fingerprint type is similar to the detection target fingerprint.
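Selecting a checkpoint by fingerprint type could then be as simple as the following lookup; the type keys and file names are hypothetical.

```python
# Hypothetical registry mapping fingerprint types to stored checkpoints (920).
CHECKPOINTS_BY_TYPE = {
    "whorl":      "ckpt_type_whorl.pt",       # ridges arranged in circles
    "horizontal": "ckpt_type_horizontal.pt",  # ridges arranged horizontally
    # ... one entry per fingerprint type, N in total
}

def select_checkpoint(fingerprint_type):
    """Return the checkpoint path matching the detection target's type,
    e.g. as chosen by the user from representative images on screen."""
    try:
        return CHECKPOINTS_BY_TYPE[fingerprint_type]
    except KeyError:
        raise ValueError(f"No checkpoint stored for type '{fingerprint_type}'")
```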
The training dataset 910 for each fingerprint type may be defined so as to produce a plurality of first checkpoints specialized in separating overlapping fingerprints, or so as to produce a plurality of second checkpoints specialized in separating the background and the fingerprint.
FIG. 10 is a diagram schematically illustrating an example of a fingerprint separation result according to an embodiment of the present invention.
Referring to FIG. 10, for the fingerprint superimposed image 1000, the process of separating the different overlapping fingerprints and the process of separating the background from the fingerprint must be carried out together. If these two processes are not performed properly, problems arise such as the ridges of the fingerprint becoming blurred or ridges that were not present in the fingerprint superimposed image appearing in the image 1010 of the separated detection target fingerprint. An inaccurate fingerprint image 1010 provides fingerprint information that differs from the facts (for example, minutiae, cores, or deltas) and can hamper the initial investigation. When the detection target fingerprint is separated using the first checkpoint specialized in separating overlapping fingerprints and the second checkpoint specialized in separating the background, as in the present embodiment, the fingerprint 1020 can be separated accurately. The fingerprint separation image 1010 obtained with a conventional deep learning algorithm contains an inaccurate ridge portion 1015, whereas the fingerprint separation image 1020 obtained with the present embodiment shows the ridges accurately.
FIG. 11 is a diagram showing the configuration of an example of a fingerprint separation apparatus according to an embodiment of the present invention.
Referring to FIG. 11, the fingerprint separation apparatus 100 includes a learning unit 1100, a pre-processing unit 1110, a fingerprint separation unit 1120, a region-of-interest setting unit 1130, and a fingerprint output unit 1140.
The learning unit 1100 trains the first deep learning network to specialize in separating overlapping fingerprints and the second deep learning network to specialize in separating the background and the fingerprint, and then stores the first checkpoint and the second checkpoint of the trained first and second deep learning networks, respectively. In another embodiment, checkpoints may be generated and stored for each fingerprint type, an example of which is shown in FIG. 9.
The pre-processing unit 1110 rotates the image in which the overlapping fingerprints were photographed so that the detection target fingerprint is oriented in a preset direction (for example, the longitudinal direction), and crops the overlapping fingerprint region so that the image has a predefined resolution, thereby generating the fingerprint superimposed image. An example of this pre-processing is shown in FIG. 4.
The fingerprint separation unit 1120 loads the first checkpoint to set up the first deep learning network and generates a fingerprint separation image in which the detection target fingerprint has been separated from the fingerprint superimposed image in a first pass.
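As an illustration, and assuming a PyTorch model restored with the load_checkpoint helper sketched earlier, the first-pass separation could be run roughly as follows on the 500 ppi grayscale input.

```python
import torch

def first_pass_separation(model, overlapped_img):
    """Run the first network on the superimposed image and return the
    first-pass fingerprint separation image as an 8-bit array."""
    x = torch.from_numpy(overlapped_img).float().div(255.0)[None, None]  # 1x1xHxW
    with torch.no_grad():
        y = model(x)
    return (y.squeeze().clamp(0, 1).numpy() * 255).astype("uint8")
```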
The region-of-interest setting unit 1130 generates a replacement image in which a region of interest of the fingerprint superimposed image, where a plurality of fingerprints overlap, is replaced with the corresponding region of the fingerprint separation image. For example, when the user sets a region of interest 432 as shown in FIG. 4, a replacement image is generated by substituting the region of interest 432 with the corresponding region 422 of the fingerprint separation image 420 of the first-pass separated detection target fingerprint.
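A minimal sketch of the region-of-interest replacement on NumPy arrays; the (x, y, w, h) box is assumed to be supplied by the user through the interface.

```python
def replace_roi(overlapped_img, separated_img, roi):
    """Build the replacement image: copy the first-pass separation result
    into the user-selected region of interest of the superimposed image."""
    x, y, w, h = roi  # assumed to come from the user interface
    replaced = overlapped_img.copy()
    replaced[y:y + h, x:x + w] = separated_img[y:y + h, x:x + w]
    return replaced
```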
The fingerprint output unit 1140 loads the second checkpoint to set up the second deep learning network, performs a second-pass separation of the detection target fingerprint from the replacement image, and outputs the result. The fingerprint output unit 1140 may normalize the value of each pixel of the detection target fingerprint output through the second deep learning network to a value between 0 and 1. For example, if each pixel has a value between 0 and 255, it can be normalized to a value between 0 and 1 by dividing by 255. The fingerprint output unit 1140 may output the detection target fingerprint with pixels at or below a predefined value removed.
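A minimal sketch of this output step, assuming an 8-bit result from the second network and a hypothetical threshold of 0.1 for the predefined value.

```python
import numpy as np

def postprocess_output(fingerprint_8bit, threshold=0.1):
    """Normalize the second network's output to [0, 1] and suppress
    pixels below the predefined value before outputting the fingerprint."""
    normalized = fingerprint_8bit.astype(np.float32) / 255.0
    normalized[normalized < threshold] = 0.0
    return normalized
```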
The present invention can also be embodied as computer-readable code on a computer-readable recording medium. The computer-readable recording medium includes all types of recording devices in which data readable by a computer system is stored. Examples of the computer-readable recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices. The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.
The present invention has been described above with reference to its preferred embodiments. Those of ordinary skill in the art to which the present invention pertains will understand that it can be implemented in modified forms without departing from its essential characteristics. The disclosed embodiments should therefore be considered in an illustrative rather than a restrictive sense. The scope of the present invention is indicated in the claims rather than in the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.

Claims (13)

  1. A fingerprint separation method comprising:
    obtaining a fingerprint separation image by separating a detection target fingerprint from a fingerprint superimposed image, in which at least two fingerprints are superimposed, through a first deep learning network trained to separate superimposed fingerprints;
    generating a replacement image in which a region of interest of the fingerprint superimposed image, where a plurality of fingerprints overlap, is replaced with the region of the fingerprint separation image corresponding to the region of interest; and
    separating the background and the detection target fingerprint from the replacement image through a second deep learning network trained to separate a background and a fingerprint, and outputting the result.
  2. The fingerprint separation method of claim 1, wherein the obtaining of the fingerprint separation image comprises:
    setting the first deep learning network by loading a first checkpoint of a deep learning network trained to separate superimposed fingerprints.
  3. The fingerprint separation method of claim 2, wherein the setting comprises:
    loading, from among a plurality of first checkpoints corresponding to a plurality of fingerprint types, a first checkpoint corresponding to the fingerprint type of the detection target fingerprint.
  4. The fingerprint separation method of claim 1, wherein the outputting comprises:
    setting the second deep learning network by loading a second checkpoint of a deep learning network trained to separate a background and a fingerprint.
  5. The fingerprint separation method of claim 4, wherein the setting comprises:
    loading, from among a plurality of second checkpoints corresponding to a plurality of fingerprint types, a second checkpoint corresponding to the fingerprint type of the detection target fingerprint.
  6. The fingerprint separation method of claim 1, further comprising,
    before the obtaining of the fingerprint separation image, rotating the fingerprint superimposed image so that the detection target fingerprint is oriented in a vertical direction.
  7. The fingerprint separation method of claim 1, further comprising,
    before the obtaining of the fingerprint separation image, adjusting the resolution of the fingerprint superimposed image so that the number of pixels per unit length in the longitudinal direction of the detection target fingerprint becomes a predefined value.
  8. The fingerprint separation method of claim 1, further comprising,
    before the obtaining of the fingerprint separation image, training the first deep learning network,
    wherein the training comprises:
    generating a training dataset in which a plurality of fingerprints are superimposed on various backgrounds using a plurality of background images and a plurality of artificial fingerprints;
    training the first deep learning network to separate fingerprints using the training dataset; and
    storing a first checkpoint of the trained first deep learning network.
  9. The fingerprint separation method of claim 1, further comprising,
    before the obtaining of the fingerprint separation image, training the second deep learning network,
    wherein the training comprises:
    generating a training dataset in which a plurality of fingerprints are superimposed on various backgrounds using a plurality of background images and a plurality of artificial fingerprints;
    training the second deep learning network to separate a background and a fingerprint using the training dataset; and
    storing a second checkpoint of the trained second deep learning network.
  10. A fingerprint separation apparatus comprising:
    a fingerprint separation unit configured to obtain a fingerprint separation image by separating a detection target fingerprint from a fingerprint superimposed image, in which at least two fingerprints are superimposed, through a first deep learning network trained to separate superimposed fingerprints;
    a region-of-interest setting unit configured to generate a replacement image in which a region of interest of the fingerprint superimposed image, where fingerprints overlap, is replaced with the region of the fingerprint separation image corresponding to the region of interest; and
    a fingerprint output unit configured to separate the background and the detection target fingerprint from the replacement image through a second deep learning network trained to separate a background and a fingerprint, and to output the result.
  11. The fingerprint separation apparatus of claim 10, further comprising
    a learning unit configured to generate a training dataset in which a plurality of fingerprints are superimposed on various backgrounds using a plurality of background images and a plurality of artificial fingerprints, and to train the first deep learning network or the second deep learning network using the training dataset.
  12. The fingerprint separation apparatus of claim 10, further comprising
    a pre-processing unit configured to adjust the resolution of the fingerprint superimposed image so that the number of pixels per unit length in the longitudinal direction of the detection target fingerprint becomes a predefined value, and to rotate the fingerprint superimposed image so that the detection target fingerprint is oriented in a vertical direction.
  13. A computer-readable recording medium on which a program for performing the method of claim 1 is recorded.
PCT/KR2021/014664 2020-12-18 2021-10-20 Method for separating fingerprint from fingerprint overlay image using deep-learning algorithm, and device therefor WO2022131517A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200178921A KR102570081B1 (en) 2020-12-18 2020-12-18 Method for separating fingerprint from overlapped fingerprint image using deep learning algorithm, and apparatus therefor
KR10-2020-0178921 2020-12-18

Publications (1)

Publication Number Publication Date
WO2022131517A1 true WO2022131517A1 (en) 2022-06-23

Family

ID=82059627

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/014664 WO2022131517A1 (en) 2020-12-18 2021-10-20 Method for separating fingerprint from fingerprint overlay image using deep-learning algorithm, and device therefor

Country Status (2)

Country Link
KR (1) KR102570081B1 (en)
WO (1) WO2022131517A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214246B (en) * 2017-07-04 2021-02-12 清华大学深圳研究生院 Fingerprint retrieval method based on global direction information
US11373438B2 (en) * 2019-02-11 2022-06-28 Board Of Trustees Of Michigan State University Fixed length fingerprint representation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180018499A1 (en) * 2015-02-13 2018-01-18 Byd Company Limited Method for calculating area of fingerprint overlapping region and electronic device thereof
KR20170002892A (en) * 2015-06-30 2017-01-09 삼성전자주식회사 Method and apparatus for detecting fake fingerprint, method and apparatus for recognizing fingerprint
KR101881097B1 (en) * 2017-02-22 2018-07-23 서울대학교산학협력단 Method of extracting fingerprint and fingerprint extraction apparatus for the same

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FANGLIN CHEN ; JIANJIANG FENG ; ANIL K. JAIN ; JIE ZHOU ; JIN ZHANG: "Separating Overlapped Fingerprints", IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, IEEE, USA, vol. 6, no. 2, 1 June 2011 (2011-06-01), USA, pages 346 - 359, XP011322765, ISSN: 1556-6013, DOI: 10.1109/TIFS.2011.2114345 *
YOO DONGHEON; CHO JAEBUM; LEE JUHYUN; CHAE MINSEOK; LEE BYOUNGHYO; LEE BYOUNGHO: "FinSNet: End-to-End Separation of Overlapped Fingerprints Using Deep Learning", IEEE ACCESS, IEEE, USA, vol. 8, 17 November 2020 (2020-11-17), USA , pages 209020 - 209029, XP011822760, DOI: 10.1109/ACCESS.2020.3038707 *

Also Published As

Publication number Publication date
KR20220088163A (en) 2022-06-27
KR102570081B1 (en) 2023-08-23

Similar Documents

Publication Publication Date Title
US7505608B2 (en) Methods and apparatus for adaptive foreground background analysis
CN107862663A (en) Image processing method, device, readable storage medium storing program for executing and computer equipment
US20020136450A1 (en) Red-eye detection based on red region detection with eye confirmation
US20070116364A1 (en) Apparatus and method for feature recognition
US8938117B2 (en) Pattern recognition apparatus and method therefor configured to recognize object and another lower-order object
EP2312858A1 (en) Image processing apparatus, imaging apparatus, image processing method, and program
CN111612104B (en) Vehicle loss assessment image acquisition method, device, medium and electronic equipment
US11861810B2 (en) Image dehazing method, apparatus, and device, and computer storage medium
CN110909750B (en) Image difference detection method and device, storage medium and terminal
JP3490910B2 (en) Face area detection device
KR20000035050A (en) Method for photographing and recognizing a face
CN114913338A (en) Segmentation model training method and device, and image recognition method and device
WO2022131517A1 (en) Method for separating fingerprint from fingerprint overlay image using deep-learning algorithm, and device therefor
Jain et al. A hybrid approach for detection and recognition of traffic text sign using MSER and OCR
WO2022131469A1 (en) Method and device for separating fingerprint from overlapping fingerprint image by using deep learning algorithm
WO2021091053A1 (en) Location measurement system using image similarity analysis, and method thereof
JPH09130714A (en) Image information extracting device and method
CN114529488A (en) Image fusion method, device and equipment and storage medium
CN111353348B (en) Image processing method, device, acquisition equipment and storage medium
JP2018055591A (en) Information processing apparatus, information processing method and program
JP2007025901A (en) Image processor and image processing method
JP2004199200A (en) Pattern recognition device, imaging apparatus, information processing system, pattern recognition method, recording medium and program
JP2021009493A (en) Image processing device, control method of image processing device, and program
Singh et al. Vehicle number plate recognition using matlab
RU2774058C1 (en) Method for definition (recognition) of the fact of presentation of digital copy of the document in screen reshoot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21906822

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21906822

Country of ref document: EP

Kind code of ref document: A1