WO2022162760A1 - Determination method, determination program, and information processing device - Google Patents
- Publication number
- WO2022162760A1 (PCT/JP2021/002736)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- face image
- determination
- data
- information included
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
- G06V10/431—Frequency domain transformation; Autocorrelation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/95—Pattern authentication; Markers therefor; Forgery detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
Definitions
- The present disclosure relates to a determination method, a determination program, and an information processing device.
- An object of the present invention is to provide a determination method, a determination program, and an information processing apparatus capable of improving the accuracy of determining whether face image data is a morphing image.
- The determination method causes a computer to execute a process including: when face image data is acquired, generating face image data from which noise has been removed from the face image data by a specific algorithm; generating difference image data between the acquired face image data and the generated face image data; determining, based on information included in the difference image data, whether the acquired face image data is a composite image; and, when the acquired face image data is not determined to be a composite image, determining whether the acquired face image data is a composite image based on information included in frequency data generated from the difference image data.
- FIG. 1 is a diagram illustrating a morphing image.
- FIGS. 2(a) to 2(i) are diagrams for explaining the principle of the embodiment.
- FIG. 3A is a functional block diagram illustrating the overall configuration of an information processing apparatus, and FIG. 3B is a block diagram illustrating a hardware configuration.
- FIG. 4 is a flowchart illustrating feature extraction processing.
- FIG. 5 is a flowchart illustrating learning processing.
- FIG. 6 is a diagram illustrating teacher data.
- FIG. 7 is a flowchart illustrating determination processing.
- Face authentication is a technology that uses facial features to verify a person's identity.
- In face authentication, when identity confirmation is required, facial feature data obtained by a sensor is compared (verified) against pre-registered facial feature data, and identity is verified by determining whether the degree of similarity exceeds a threshold.
- Face authentication is used at passport registration offices, ID card registration offices, entrance/exit management, and the like.
- In face authentication, a fraudulent act called a morphing attack may be performed.
- In a morphing attack, as illustrated in FIG. 1, a source image of a first person and a target image of a second person are synthesized by a morphing technique to generate a composite image. This composite image is called a morphing image.
- The source image of the first person is the actual face image of the first person, not a modified face image.
- Likewise, the target image of the second person is the actual face image of the second person, not a modified face image.
- Morphing images include fusion images (so-called cut-and-paste images) obtained by partially combining two face images. A cut-and-paste image is, for example, an image in which the eyes or mouth have been cut out and replaced with those of someone else.
- Morphing images also include interpolation images obtained by interpolating two face images, for example by averaging them.
- A morphing image has both the facial features of the first person and those of the second person. Therefore, if a morphing image is registered as registered facial feature data, both the first person and the second person may be successfully authenticated.
- For example, when a passport is created, if a morphing image synthesized from the face image data (source image) of a first person and the face image data (target image) of a second person is registered, the single issued passport can be used by both the first person and the second person.
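As an illustration of how an interpolation-type morphing image can arise, the following sketch blends two aligned face images by weighted averaging. This is a hypothetical example: the blend weight `alpha` and the use of a plain pixel average are assumptions, not something the description prescribes.

```python
# Sketch of an interpolation-type morphing image: a pixel-wise weighted
# average of two pre-aligned face images (illustrative assumption only).
import numpy as np

def interpolate_morph(source: np.ndarray, target: np.ndarray,
                      alpha: float = 0.5) -> np.ndarray:
    """Blend two aligned face images; alpha weights the source image."""
    assert source.shape == target.shape, "images must be aligned to the same size"
    blended = alpha * source.astype(np.float64) + (1.0 - alpha) * target.astype(np.float64)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Two toy 2x2 "face images" stand in for aligned photographs.
src = np.full((2, 2), 100, dtype=np.uint8)
tgt = np.full((2, 2), 200, dtype=np.uint8)
morph = interpolate_morph(src, tgt)  # every pixel becomes the average, 150
```

With `alpha = 0.5` the result carries equal contributions from both persons, which is why such an image can match either registered identity.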
- As countermeasures, a technique based on PRNU (Photo Response Non-Uniformity) analysis and a technique that uses residual noise based on deep learning can be considered. Specifically, the latter determines whether target face image data is a morphing image by using the residual noise obtained as the difference between the face image data and the face image data after noise removal.
- However, the technique using residual noise uses only spatial domain information, so it may be able to detect only specific types of morphing images.
- Morphing images include interpolation images and cut-and-paste images, which are synthesized from multiple face images of different persons. The individual face images are acquired, for example, by different cameras, in which case noise of different intensity remains in each image. Even if every face image is acquired by the same camera, the capture timing and environment differ, so noise of different intensity still remains in each image.
- Here, difference image data between the morphing image and the noise-removed morphing image is obtained.
- The removed noise component remains in this difference image data, so the difference image data represents residual noise. Because noise of different intensities remains in this residual noise, discontinuities in noise intensity appear, and signal anomalies caused by the morphing process can be detected.
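The residual-noise idea above can be sketched as follows. A 3x3 box blur stands in for the noise-removal step purely for illustration; the description only requires "a specific algorithm", so the choice of filter here is an assumption.

```python
# Minimal sketch of residual noise: subtract a denoised copy of the image
# from the original. The 3x3 mean filter is an illustrative assumption.
import numpy as np

def box_denoise(image: np.ndarray) -> np.ndarray:
    """3x3 mean filter with edge padding (stand-in noise-removal algorithm)."""
    padded = np.pad(image.astype(np.float64), 1, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float64)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

def residual_noise(image: np.ndarray) -> np.ndarray:
    """Difference image: original minus denoised, i.e. the residual noise."""
    return image.astype(np.float64) - box_denoise(image)

flat = np.full((8, 8), 100.0)   # a noiseless, uniform image
res = residual_noise(flat)      # residual is (near) zero everywhere
```

On a real morphing image, regions originating from different source photographs would leave residuals of visibly different intensity, which is the discontinuity the determination exploits.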
- FIG. 2(a) is a diagram illustrating actual face image data of the first person.
- FIG. 2(b) is a diagram illustrating actual face image data of the second person.
- No trace of morphing processing remains in the residual noise of these face images. Therefore, as illustrated in FIG. 2(c), even when the residual noise is transformed into frequency space, no signal anomaly due to morphing processing appears as a feature.
- FIG. 2(d) is a diagram illustrating a cut-and-paste image obtained by partially combining the face image data of the first person and the face image data of the second person.
- FIG. 2(e) shows the spatial domain information when the residual noise of this cut-and-paste image is expressed in a predetermined spatial domain. In this spatial domain information, the edges of the cut-and-pasted portions can easily be detected as signal anomalies.
- FIG. 2(f) is a diagram illustrating the frequency domain information when the residual noise of the cut-and-paste image is transformed into frequency space. As illustrated in FIG. 2(f), features such as a vertical line passing through the center appear as signal anomalies more readily than in the case of FIG. 2(c).
- FIG. 2(g) is a diagram illustrating an interpolation image obtained by averaging the face image data of the first person and the face image data of the second person.
- FIG. 2(h) shows the spatial domain information when the residual noise of this interpolation image is expressed in a predetermined spatial domain. Since the interpolation image does not contain edges, signal-anomaly features are less likely to appear in this spatial domain information.
- FIG. 2(i) is a diagram illustrating the frequency domain information when the residual noise of the interpolation image is transformed into frequency space. As indicated by the arrows in FIG. 2(i), peaks, lines, and the like that do not appear in the spatial domain information appear as features of signal anomalies.
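Transforming residual noise into frequency space, where periodic artifacts show up as distinct peaks, can be sketched with a 2-D FFT. The sinusoidal "residual" below is a synthetic stand-in for a real interpolation artifact, used only to make the peak visible.

```python
# Sketch: 2-D DFT magnitude spectrum of residual noise. A periodic
# artifact (invisible as an edge in the spatial domain) appears as
# symmetric peaks in the shifted spectrum.
import numpy as np

def magnitude_spectrum(residual: np.ndarray) -> np.ndarray:
    """Log-scaled, center-shifted magnitude of the 2-D DFT."""
    spectrum = np.fft.fftshift(np.fft.fft2(residual))
    return np.log1p(np.abs(spectrum))  # log scale makes peaks/lines visible

# Synthetic residual with a horizontal periodic pattern (period 8 pixels).
y, x = np.mgrid[0:64, 0:64]
residual = np.sin(2 * np.pi * x / 8)
spec = magnitude_spectrum(residual)  # peaks at (32, 32 +/- 8)
```

The DC component sits at the center (32, 32) after the shift, and the 8-pixel period produces peaks 8 bins to either side of it, which is exactly the kind of "peak that does not appear in the spatial domain" the embodiment looks for.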
- Therefore, re-determining a candidate image based on frequency domain information improves the determination accuracy for morphing images. Moreover, since no special device or the like needs to be added, costs can be suppressed.
- FIG. 3A is a functional block diagram illustrating the overall configuration of the information processing device 100.
- the information processing apparatus 100 includes a feature extraction processing unit 10, a learning processing unit 20, a determination processing unit 30, and an output processing unit 40.
- the feature extraction processing unit 10 includes a face image acquisition unit 11, a color space conversion unit 12, a noise filter unit 13, a difference image generation unit 14, a first feature extraction unit 15, a second feature extraction unit 16, a feature score calculation unit 17, A determination unit 18 and an output unit 19 are provided.
- the learning processing unit 20 includes a teacher data storage unit 21 , a teacher data acquisition unit 22 , a teacher data classification unit 23 and a model creation unit 24 .
- the determination processing section 30 includes a face image acquisition section 31 and a determination section 32 .
- FIG. 3B is a block diagram illustrating the hardware configuration of the feature extraction processing unit 10, the learning processing unit 20, the determination processing unit 30, and the output processing unit 40.
- the information processing apparatus 100 includes a CPU 101, a RAM 102, a storage device 103, a display device 104, an interface 105, and the like.
- a CPU (Central Processing Unit) 101 is a central processing unit.
- CPU 101 includes one or more cores.
- a RAM (Random Access Memory) 102 is a volatile memory that temporarily stores programs executed by the CPU 101, data processed by the CPU 101, and the like.
- The storage device 103 is a non-volatile storage device such as a ROM (Read Only Memory) or a solid-state drive (SSD).
- the storage device 103 stores a determination program according to this embodiment.
- The display device 104 is, for example, a liquid crystal display.
- the interface 105 is an interface device with an external device.
- face image data can be obtained from an external device via the interface 105 .
- the feature extraction processing unit 10, the learning processing unit 20, the determination processing unit 30, and the output processing unit 40 of the information processing apparatus 100 are implemented by the CPU 101 executing the determination program.
- Hardware such as a dedicated circuit may be used as the feature extraction processing unit 10, the learning processing unit 20, the determination processing unit 30, and the output processing unit 40.
- FIG. 4 is a flowchart illustrating feature extraction processing executed by the feature extraction processing unit 10 .
- the facial image acquiring unit 11 acquires facial image data (step S1).
- the color space conversion unit 12 converts the color space of the face image data acquired in step S1 into a predetermined color space (step S2).
- For example, the color space conversion unit 12 converts the face image data into the HSV color space, which consists of three components: hue (H), saturation (S), and value (V, brightness).
- In the HSV color space, luminance (image intensity) can be separated from saturation (color information).
- the noise filter unit 13 generates noise-removed face image data by performing noise removal processing on the face image data obtained in step S2 (step S3).
- the noise filter unit 13 performs noise removal processing using a specific algorithm. A known technique or the like for removing image noise can be used for the noise removal processing.
- The difference image generation unit 14 generates difference image data between the face image data obtained in step S2 and the face image data obtained in step S3 (step S4). By generating the difference image data, the residual noise remaining in the face image acquired in step S1 can be obtained. If the processing of step S2 is not performed, difference image data between the face image data obtained in step S1 and the face image data after noise removal may be generated instead.
- The first feature extraction unit 15 extracts the signal-anomaly feature caused by the morphing process as a first feature from the spatial domain information of the difference image data (step S5).
- the first feature extraction unit 15 extracts vector values of spatial domain information such as LBP (Local Binary Pattern) and CoHOG (Co-occurrence Histograms of Oriented Gradients) as features of signal anomalies.
- the first feature extraction unit 15 can extract the vector value of the spatial domain by using a statistic obtained by comparing the pixel value of the target pixel and the pixel values of the pixels surrounding the target pixel.
- the first feature extraction unit 15 may extract vector values in the spatial domain by using deep learning.
- the first feature extraction unit 15 may use feature amounts represented by numerical expressions.
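A minimal LBP-style spatial-domain feature, built by comparing each pixel with its eight neighbours as described above, could be sketched as below. This is illustrative only; production systems would typically use a library implementation such as scikit-image's `local_binary_pattern`, and the 256-bin histogram is one common but not mandated choice.

```python
# Minimal LBP (Local Binary Pattern) sketch: each interior pixel is encoded
# by an 8-bit comparison with its neighbours, and the normalised histogram
# of codes serves as a spatial-domain feature vector.
import numpy as np

def lbp_histogram(image: np.ndarray) -> np.ndarray:
    """Return a normalised 256-bin LBP histogram of a grayscale image."""
    h, w = image.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # clockwise neighbour offsets starting at the top-left neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = image[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()

feat = lbp_histogram(np.arange(25, dtype=np.uint8).reshape(5, 5))
```

The histogram is the "vector value of spatial domain information" in the sense used above: a statistic derived from comparing each target pixel with its surrounding pixels.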
- the feature score calculation unit 17 calculates the probability that the face image data acquired in step S1 is morphing image data as a feature score from the first feature extracted in step S5 (step S6). For example, the statistic of vector values obtained in step S5 can be used as the feature score.
- The determination unit 18 determines whether the feature score calculated in step S6 exceeds a threshold (step S7).
- This threshold can be determined in advance from, for example, the variation in feature scores calculated from the spatial domain for a plurality of pieces of face image data.
- If "Yes" is determined in step S7, the output unit 19 outputs the feature score calculated in step S6 (step S8).
- If "No" is determined in step S7, the second feature extraction unit 16 generates frequency information in the frequency domain from the difference image data (step S9).
- the second feature extraction unit 16 can generate frequency information in the frequency domain by performing a digital Fourier transform on the difference image data.
- the second feature extraction unit 16 extracts the signal abnormality feature caused by the morphing process as a second feature from the frequency information generated in step S9 (step S10).
- For example, the second feature extraction unit 16 extracts a frequency-domain vector value such as a GLCM (Gray-Level Co-occurrence Matrix) as the signal-anomaly feature.
- the second feature extraction unit 16 can extract vector values in the frequency domain by using statistics obtained by comparing the pixel value of the pixel of interest and the pixel values of pixels surrounding the pixel of interest.
- the second feature extraction unit 16 may extract vector values in the frequency domain by using deep learning.
- the second feature extraction unit 16 may use feature amounts represented by numerical expressions.
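A GLCM-style co-occurrence feature can be sketched as follows. The grey-level count (8) and the single horizontal offset are assumptions made for illustration; they are not values taken from the embodiment.

```python
# Minimal GLCM (gray-level co-occurrence matrix) sketch for one offset:
# count how often grey-level pairs occur in horizontally adjacent pixels,
# then normalise to a joint probability matrix (a texture statistic).
import numpy as np

def glcm(image: np.ndarray, levels: int = 8) -> np.ndarray:
    """Normalised co-occurrence matrix for the offset (0, 1)."""
    # quantise 8-bit intensities down to a small number of grey levels
    q = np.clip((image.astype(np.float64) / 256 * levels).astype(np.intp),
                0, levels - 1)
    mat = np.zeros((levels, levels), dtype=np.float64)
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    np.add.at(mat, (left, right), 1)  # accumulate pair counts
    return mat / mat.sum()

img = np.full((4, 4), 128, dtype=np.uint8)  # uniform toy image
P = glcm(img)  # all mass lands on one diagonal entry
```

Like the LBP sketch, this is a statistic obtained by comparing the value of a target pixel with a surrounding pixel; here the comparison is tabulated as a joint distribution rather than a bit code.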
- the feature score calculation unit 17 calculates the probability that the face image data acquired in step S1 is morphing image data as a feature score from the frequency features extracted in step S10 (step S11). For example, the vector value statistic obtained in step S10 can be used as the feature score.
- the determination unit 18 determines whether or not the feature score calculated in step S11 exceeds the threshold (step S12).
- the threshold in step S12 can be determined in advance from, for example, a variation value when feature scores are calculated from the frequency domain for a plurality of pieces of face image data.
- If "Yes" is determined in step S12, the output processing unit 40 outputs the feature score calculated in step S11 (step S8).
- If "No" is determined in step S12, the feature score calculation unit 17 calculates, from the first feature extracted in step S5 and the second feature extracted in step S10, the probability that the face image data acquired in step S1 is morphing image data as a feature score (step S13). After that, the output processing unit 40 outputs the feature score calculated in step S13 (step S8).
- FIG. 5 is a flowchart illustrating learning processing executed by the learning processing unit 20.
- the teacher data acquisition unit 22 acquires each teacher data stored in the teacher data storage unit 21 (step S21).
- The teacher data are face-image teacher data and include actual, unmodified face image data and morphing image data.
- As illustrated in FIG. 6, each piece of teacher data is associated with either an identifier indicating actual face image data (Bonafide) or an identifier indicating a morphing image (Morphing).
- These teacher data are created in advance by a user or the like and stored in the teacher data storage unit 21.
- the feature extraction process of FIG. 4 is executed for each teacher data acquired in step S21.
- The teacher data classification unit 23 classifies the feature scores output by the output unit 19 for each piece of teacher data into actual face image data and morphing image data according to the identifiers stored in the teacher data storage unit 21 (step S22).
- The model creation unit 24 creates a classification model based on the classification result of step S22 (step S23). For example, a classification model is created by drawing a separating hyperplane (boundary plane) from the relationship between the space in which the feature scores of the teacher data are distributed and the identifiers.
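In one dimension, a separating hyperplane over feature scores reduces to a threshold; the following sketch places it midway between the class means. This is a deliberately simple stand-in for the machine-learned classification model, not the method the embodiment specifies.

```python
# Minimal sketch of learning a decision boundary from labelled feature
# scores: in 1-D, the "separating hyperplane" is just a threshold, here
# placed halfway between the Bonafide and Morphing class means.
import numpy as np

def fit_threshold(scores: np.ndarray, labels: np.ndarray) -> float:
    """labels: 1 = Morphing, 0 = Bonafide. Returns the decision threshold."""
    morphing_mean = scores[labels == 1].mean()
    bonafide_mean = scores[labels == 0].mean()
    return (morphing_mean + bonafide_mean) / 2.0

def classify(score: float, threshold: float) -> str:
    """Assign the identifier used for the teacher data."""
    return "Morphing" if score >= threshold else "Bonafide"

scores = np.array([0.1, 0.2, 0.8, 0.9])   # toy feature scores
labels = np.array([0, 0, 1, 1])           # Bonafide / Morphing identifiers
thr = fit_threshold(scores, labels)
```

In practice a margin-maximising model such as an SVM would draw the hyperplane in the full feature space, but the decision flow at inference time is the same: compare a score against the learned boundary.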
- FIG. 7 is a flowchart illustrating determination processing executed by the determination processing unit 30 .
- the facial image acquiring unit 31 acquires facial image data (step S31).
- the feature extraction process of FIG. 4 is performed on the face image data acquired in step S31.
- This face image data is, for example, face image data for creating a passport, and is therefore input from an external device via the interface 105.
- The determination unit 32 determines, based on the classification model created by the model creation unit 24, whether the face image data corresponding to the feature score output by the output unit 19 is an actual image or a morphing image (step S32).
- The determination result of step S32 is output by the output processing unit 40.
- the determination result output by the output processing unit 40 is displayed on the display device 104 .
- According to this embodiment, when face image data is not determined to be a morphing image based on spatial domain information, frequency domain information is used to re-determine whether it is a morphing image.
- Using frequency domain information improves determination accuracy. Moreover, since no special image sensor or the like is required, costs can be reduced.
- Furthermore, the frequency domain information is used only for re-determination, so the processing time can be shortened.
- the noise filter unit 13 is an example of a face image data generation unit that generates face image data from which noise has been removed by a specific algorithm when face image data is acquired.
- the difference image generation unit 14 is an example of a difference image data generation unit that generates difference image data between the obtained face image data and the generated face image data.
- the determination unit 18 is an example of a first determination unit that determines whether the acquired face image data is a composite image based on information included in the difference image data.
- the determination unit 18 is also an example of a second determination unit that determines whether the acquired face image data is a composite image based on information included in the frequency data.
- The determination processing unit 30 is an example of a determination processing unit that further determines, by a classification model machine-learned using teacher data of a plurality of pieces of face image data, whether the face image data subjected to the determination of whether it is a composite image is a composite image.
- the second feature extraction unit 16 is an example of a frequency data generation unit that generates the frequency data from the difference image data by digital Fourier transform.
Description
11 face image acquisition unit
12 color space conversion unit
13 noise filter unit
14 difference image generation unit
15 first feature extraction unit
16 second feature extraction unit
17 feature score calculation unit
18 determination unit
19 output unit
20 learning processing unit
21 teacher data storage unit
22 teacher data acquisition unit
23 teacher data classification unit
24 model creation unit
30 determination processing unit
31 face image acquisition unit
32 determination unit
40 output processing unit
100 information processing device
Claims (21)
1. A determination method wherein a computer executes a process of: when face image data is acquired, generating face image data from which noise has been removed from the face image data by a specific algorithm; generating difference image data between the acquired face image data and the generated face image data; determining, based on information included in the difference image data, whether the acquired face image data is a composite image; and, when the acquired face image data is not determined to be a composite image, determining, based on information included in frequency data generated from the difference image data, whether the acquired face image data is a composite image.
2. The determination method according to claim 1, wherein, when determining whether the acquired face image data is a composite image based on information included in the difference image data, the determination is made by detecting discontinuities in noise intensity.
3. The determination method according to claim 1 or 2, wherein the computer further executes a process of determining, by a classification model machine-learned using teacher data of a plurality of pieces of face image data, whether the face image data subjected to the determination of whether it is a composite image is a composite image.
4. The determination method according to any one of claims 1 to 3, wherein the color space of the acquired face image data is an HSV color space.
5. The determination method according to any one of claims 1 to 4, wherein the frequency data is generated from the difference image data by a digital Fourier transform.
6. The determination method according to any one of claims 1 to 5, wherein the information included in the difference image data is a statistic obtained by comparing the pixel value of a target pixel with the pixel values of pixels surrounding the target pixel.
7. The determination method according to any one of claims 1 to 6, wherein a feature amount represented by a numerical expression is used as the information included in the difference image data.
8. A determination program that causes a computer to execute: a process of, when face image data is acquired, generating face image data from which noise has been removed from the face image data by a specific algorithm; a process of generating difference image data between the acquired face image data and the generated face image data; a process of determining, based on information included in the difference image data, whether the acquired face image data is a composite image; and a process of, when the acquired face image data is not determined to be a composite image, determining, based on information included in frequency data generated from the difference image data, whether the acquired face image data is a composite image.
9. The determination program according to claim 8, wherein, when determining whether the acquired face image data is a composite image based on information included in the difference image data, the determination is made by detecting discontinuities in noise intensity.
10. The determination program according to claim 8 or 9, causing the computer to further execute a process of determining, by a classification model machine-learned using teacher data of a plurality of pieces of face image data, whether the face image data subjected to the determination of whether it is a composite image is a composite image.
11. The determination program according to any one of claims 8 to 10, wherein the color space of the acquired face image data is an HSV color space.
12. The determination program according to any one of claims 8 to 11, wherein the frequency data is generated from the difference image data by a digital Fourier transform.
13. The determination program according to any one of claims 8 to 12, wherein the information included in the difference image data is a statistic obtained by comparing the pixel value of a target pixel with the pixel values of pixels surrounding the target pixel.
14. The determination program according to any one of claims 8 to 13, wherein a feature amount represented by a numerical expression is used as the information included in the difference image data.
15. An information processing device comprising: a face image data generation unit that, when face image data is acquired, generates face image data from which noise has been removed from the face image data by a specific algorithm; a difference image data generation unit that generates difference image data between the acquired face image data and the generated face image data; a first determination unit that determines, based on information included in the difference image data, whether the acquired face image data is a composite image; and a second determination unit that, when the acquired face image data is not determined to be a composite image, determines, based on information included in frequency data generated from the difference image data, whether the acquired face image data is a composite image.
16. The information processing device according to claim 15, wherein the first determination unit, when determining whether the acquired face image data is a composite image based on information included in the difference image data, makes the determination by detecting discontinuities in noise intensity.
17. The information processing device according to claim 15 or 16, further comprising a determination processing unit that determines, by a classification model machine-learned using teacher data of a plurality of pieces of face image data, whether the face image data subjected to the determination of whether it is a composite image is a composite image.
18. The information processing device according to any one of claims 15 to 17, wherein the color space of the acquired face image data is an HSV color space.
19. The information processing device according to any one of claims 15 to 18, further comprising a frequency data generation unit that generates the frequency data from the difference image data by a digital Fourier transform.
20. The information processing device according to any one of claims 15 to 19, wherein the information included in the difference image data is a statistic obtained by comparing the pixel value of a target pixel with the pixel values of pixels surrounding the target pixel.
21. The information processing device according to any one of claims 15 to 20, wherein the first determination unit uses, as the information included in the difference image data, a feature amount represented by a numerical expression.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180082849.6A CN116724332A (zh) | 2021-01-27 | 2021-01-27 | 判定方法、判定程序、以及信息处理装置 |
JP2022577856A JP7415202B2 (ja) | 2021-01-27 | 2021-01-27 | 判定方法、判定プログラム、および情報処理装置 |
EP21922784.0A EP4287112A4 (en) | 2021-01-27 | 2021-01-27 | DETERMINATION METHOD, DETERMINATION PROGRAM AND INFORMATION PROCESSING DEVICE |
PCT/JP2021/002736 WO2022162760A1 (ja) | 2021-01-27 | 2021-01-27 | 判定方法、判定プログラム、および情報処理装置 |
US18/323,718 US20230298140A1 (en) | 2021-01-27 | 2023-05-25 | Determination method, non-transitory computer-readable recording medium storing determination program, and information processing device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/002736 WO2022162760A1 (ja) | 2021-01-27 | 2021-01-27 | 判定方法、判定プログラム、および情報処理装置 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/323,718 Continuation US20230298140A1 (en) | 2021-01-27 | 2023-05-25 | Determination method, non-transitory computer-readable recording medium storing determination program, and information processing device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022162760A1 | 2022-08-04 |
Family
ID=82653178
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/002736 WO2022162760A1 (ja) | 2021-01-27 | 2021-01-27 | 判定方法、判定プログラム、および情報処理装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230298140A1 (ja) |
EP (1) | EP4287112A4 (ja) |
JP (1) | JP7415202B2 (ja) |
CN (1) | CN116724332A (ja) |
WO (1) | WO2022162760A1 (ja) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019096130A (ja) * | 2017-11-24 | 2019-06-20 | Kddi株式会社 | モーフィング画像生成装置及びモーフィング画像生成方法 |
US20200218885A1 (en) * | 2017-06-20 | 2020-07-09 | Hochschule Darmstadt | Detecting artificial facial images using facial landmarks |
JP2020525947A (ja) | 2017-06-30 | 2020-08-27 | ノルウェージャン ユニバーシティ オブ サイエンス アンド テクノロジー(エヌティーエヌユー) | 操作された画像の検出 |
Non-Patent Citations (2)
Title |
---|
See also references of EP4287112A4 |
YU, XINRUI ET AL.: "Face Morphing Detection using Generative Adversarial Networks", INTERNATIONAL CONFERENCE ON ELECTRO INFORMATION TECHNOLOGY (EIT, 22 May 2019 (2019-05-22), pages 288 - 291, XP033615664, DOI: 10.1109/EIT.2019.8834162 * |
Also Published As
Publication number | Publication date |
---|---|
JPWO2022162760A1 (ja) | 2022-08-04 |
CN116724332A (zh) | 2023-09-08 |
US20230298140A1 (en) | 2023-09-21 |
EP4287112A1 (en) | 2023-12-06 |
JP7415202B2 (ja) | 2024-01-17 |
EP4287112A4 (en) | 2024-01-10 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21922784; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2022577856; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 202180082849.6; Country of ref document: CN |
| WWE | Wipo information: entry into national phase | Ref document number: 2021922784; Country of ref document: EP |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2021922784; Country of ref document: EP; Effective date: 20230828 |