KR20190118993A - Surrogate Interview Prevention Method and Processing Technology Using Deep Learning - Google Patents

Info

Publication number
KR20190118993A
Authority
KR
South Korea
Prior art keywords
interview
face
photograph
deep learning
blind
Prior art date
Application number
KR1020190121441A
Other languages
Korean (ko)
Other versions
KR102145132B1 (en)
Inventor
전성대
김정환
이정일
유광석
이예솔
김미성
전준호
김현창
이현주
Original Assignee
(주)진학어플라이
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)진학어플라이
Priority to KR1020190121441A
Publication of KR20190118993A
Application granted
Publication of KR102145132B1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/105Human resources
    • G06Q10/1053Employment or hiring
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/06009Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking
    • G06K19/06037Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking multi-dimensional coding
    • G06K9/00221
    • G06K9/481
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/469Contour-based spatial representations, e.g. vector-coding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G07C9/00071
    • G07C9/00111
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00Individual registration on entry or exit
    • G07C9/20Individual registration on entry or exit involving the use of a pass
    • G07C9/22Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder
    • G07C9/25Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00Individual registration on entry or exit
    • G07C9/20Individual registration on entry or exit involving the use of a pass
    • G07C9/28Individual registration on entry or exit involving the use of a pass the pass enabling tracking or indicating presence

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Economics (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • Evolutionary Computation (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Collating Specific Patterns (AREA)
  • Biomedical Technology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)

Abstract

A deep learning-based solution is provided for the side effects that arise as blind interviews gradually expand in college entrance examinations. During a blind interview, applicants are individually identified in an interview waiting room and assigned a temporary number before entering the interview room, and the interview assessors conduct the interview based on screening data from which information that could impair fairness in evaluating the applicants has been blinded. As a result, the assessors cannot confirm whether the applicant in front of them is the same person as the applicant in the screening data under review. An objective of the present invention is to provide a solution that assists blind interviews by solving this problem.

Description

딥러닝을 이용한 대리 면접 예방 방법 및 처리기술 {Surrogate Interview Prevention Method and Processing Technology Using Deep Learning}

The present invention relates to the prevention of surrogate (proxy) interviews using deep learning during blind interviews, and an embodiment relates to blind interview evaluation management technology.

Recently, deep learning algorithms based on artificial intelligence technology have been applied across various industries. Deep learning is defined as a set of machine learning algorithms that attempt high-level abstraction through a combination of several nonlinear transformations; broadly speaking, it is a field of machine learning that teaches computers to emulate human ways of thinking.

Using this technology, the problems that appear as blind interviews gradually expand in college admissions can be solved.

During a blind interview, each applicant is identified individually in the interview waiting room and enters the interview room having been assigned a temporary number.

The interview assessors conduct the interview based on screening data in which information that could compromise fairness in evaluating the applicant has been blinded.

Consequently, the assessors have difficulty confirming that the applicant before them is the same person as in the screening data under review; the present invention aims to solve these problems.

In college admission interviews, blind interview evaluations are conducted to secure fairness by masking information that could introduce bias against an applicant, and the scope of the blinding includes the applicant's school of origin.

Because application photographs often show school uniforms, from which the school of origin could be inferred, applicant photographs cannot be provided to the assessors.

For this reason, several problems can arise.

First, because the interview room matches applicants to screening data using only the temporary number, an applicant can occasionally be confused with another applicant.

Second, if an applicant deliberately attempts misconduct such as a surrogate interview, the assessors cannot immediately detect it.

Third, after the interview evaluation ends, no evidentiary record remains of the applicant who actually sat for the interview.

The present invention seeks to solve the above problems.

The present invention studies deep learning models for face recognition systems so that face recognition can be used in practice to solve the problems of blind interviews.

The present invention focuses on face detection and feature extraction, the core stages among the four stages of face recognition. By applying a cascade model design and the inception model, it was confirmed that the number of model parameters and the amount of computation can be reduced, and objective-function settings were derived for applying these models in the image processing domain.

In an evaluation system according to an embodiment of the present invention, a QR code is assigned to the applicant's examination ticket; on the day of the interview the applicant brings the ticket and an ID card; in the waiting room a supervisor terminal scans the QR code from the ticket to process attendance. The process includes transmitting identification and photograph information from the supervisor terminal to a manager terminal, and transmitting the identification result from the manager terminal to the interview assessor's terminal.

The interview assessor's terminal detects the face in the application photograph and blinds the school-uniform information by applying image processing to the area below the face.
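As an illustration of this blinding step, the following is a minimal sketch assuming OpenCV and a face bounding box already obtained from a detector; the blur kernel size is an arbitrary illustrative choice, not a value fixed by the present invention.

```python
import cv2
import numpy as np

def blind_uniform(image: np.ndarray, face_box: tuple) -> np.ndarray:
    """Blur everything below the detected face so the school uniform
    cannot be recognized; face_box is (x, y, w, h) from any detector."""
    x, y, w, h = face_box
    out = image.copy()
    top = min(y + h, out.shape[0])          # first row below the face
    if top < out.shape[0]:
        out[top:, :] = cv2.GaussianBlur(out[top:, :], (51, 51), 0)
    return out
```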

The interview assessor's terminal can compute the similarity between the student's application photograph and the photograph transmitted from the supervisor's terminal; if the similarity falls below a reference value, the assessor can be notified of the similarity result.
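A minimal sketch of this similarity check, assuming the two photographs have already been converted to feature vectors; the threshold value and the notify callback are hypothetical stand-ins.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.6   # hypothetical reference value; must be tuned on real data

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_applicant(application_vec, attendance_vec, notify) -> float:
    """Warn the assessor when the two face embeddings are too dissimilar."""
    sim = cosine_similarity(application_vec, attendance_vec)
    if sim < SIMILARITY_THRESHOLD:
        notify(f"Low face similarity ({sim:.2f}): possible surrogate interview.")
    return sim
```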

This process will be a solution to the problems identified above.

A face recognition system based on a deep learning model plays an important role in solving the various problems of blind interviews.

First, because the interview room previously matched applicants to screening data using only the temporary number, confusion with other applicants was occasionally possible; if a photograph of the applicant with the uniform blinded is provided, the assessors can intuitively verify, from the face of the applicant present in the evaluation room, that the applicant and the screening data match.

Second, if an applicant deliberately attempts misconduct such as a surrogate interview, suspicious activity can be flagged in advance based on the similarity between the application photograph and the face photograph taken in the interview waiting room by the attendance management system, so the assessors can notice it immediately.

Third, whereas previously no material remained after the interview evaluation to prove that the person who actually sat for the interview was the applicant, the uniform-blinded photograph submitted with the application, the applicant photograph taken on site by the attendance management system, and the similarity analysis result are all recorded in the system, so the photograph and information of the person who sat for the interview are retained even after the screening ends.

Fig. 1: Structure of the face recognition system
Fig. 2: Structure of the face detection system
Fig. 3: Layer structure of each stage
Fig. 4: Structure of the inception model
Fig. 5: Structure of the feature extraction network
Fig. 6: Conceptual diagram of the triplet loss
Fig. 7: Example of school-uniform blinding
Fig. 8: Example of cosine distance measurement between facial feature vectors
Fig. 9: System configuration diagram

The advantages and features of the present invention, and the methods of achieving them, are fully explained through the examples described in detail below in conjunction with the accompanying drawings. The present invention is not, however, limited to the embodiments disclosed below and may be implemented in various different forms. Unless otherwise defined, all terms used in this specification have the meanings commonly understood by those of ordinary skill in the art to which the present invention pertains. Terms defined in commonly used dictionaries are not to be interpreted ideally or excessively unless so defined.

Hereinafter, the technical features of the present invention will be described in detail with reference to the accompanying drawings.

The following describes the face recognition system.

Referring to Fig. 1, a typical face recognition system consists of the following four stages; a minimal pipeline skeleton is sketched in the code after this list.

1. Detecting faces in a given image (Face Detection)

2. Aligning the size, orientation, etc. of the detected face (Face Alignment)

3. Extracting features from the aligned face (Feature Extraction)

4. Matching the extracted features against a database to identify the person (Feature Matching)
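A minimal sketch of the four-stage pipeline follows; the stage implementations (detector, aligner, embedder) are passed in by the caller, since the specification does not fix particular ones at this point, and the function names are illustrative.

```python
import numpy as np

def recognize(image, detect_faces, align_face, extract_features, gallery):
    """gallery maps a person identifier to a stored feature vector."""
    boxes = detect_faces(image)              # 1. Face Detection
    if not boxes:
        return None
    face = align_face(image, boxes[0])       # 2. Face Alignment
    vec = extract_features(face)             # 3. Feature Extraction
    # 4. Feature Matching: nearest gallery entry by cosine similarity
    def sim(person_id):
        g = gallery[person_id]
        return np.dot(vec, g) / (np.linalg.norm(vec) * np.linalg.norm(g))
    return max(gallery, key=sim) if gallery else None
```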

Face detection and feature extraction, the main stages of face recognition, have been widely studied in classical image processing using methods such as Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), and Scale-Invariant Feature Transform (SIFT). Rather than using these classical image processing methods, the present invention analyzes papers on deep-learning-based face detection and feature extraction, and applies the deep learning models used in each paper to face detection and feature extraction.

Referring to Fig. 2, the face detection system proposed in the referenced paper connects three deep convolutional networks in stages; the input image is converted into several scales and used as input to the first stage. Each stage estimates candidate face locations in the input image, and heavily overlapping candidate regions are merged using non-maximum suppression. The accepted regions are extracted and used as input to the next stage, and each successive stage predicts the regions more precisely. The layer structure of each stage is shown in Fig. 3.
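As an illustration of the non-maximum suppression step that merges heavily overlapping candidate regions, a minimal greedy implementation might look as follows (boxes in x1, y1, x2, y2 form; the IoU threshold is an illustrative value).

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5):
    """Greedy non-maximum suppression over an (N, 4) array of boxes."""
    order = scores.argsort()[::-1]           # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # drop boxes that overlap too much
    return keep
```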

Referring to Fig. 3, all three stages consist of convolutional layers, max-pooling layers, and a fully connected layer, and the final output is divided into three heads: face classification, region (bounding box) prediction, and landmark coordinate prediction. Later stages are deeper and have more filters, enabling finer and more accurate prediction. Moreover, by first passing the whole image through a simple layer structure and passing only progressively narrowed regions through the more complex structures, the amount of computation is reduced compared with passing the entire image through the complex layers from the start.

The following describes how the three objective functions (face classification, region prediction, and landmark coordinate prediction) are trained.

Face classification can be viewed as a two-class problem: whether a region contains a face or not. Cross-entropy is therefore used as the objective function:

L = -(y log p + (1 - y) log(1 - p)),

where y is the ground-truth label and p is the probability predicted by the network.
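The same objective, written as a minimal numpy function:

```python
import numpy as np

def face_cross_entropy(y: np.ndarray, p: np.ndarray, eps: float = 1e-7) -> float:
    """Cross-entropy over face / non-face labels y in {0, 1} and
    predicted probabilities p."""
    p = np.clip(p, eps, 1.0 - eps)          # guard against log(0)
    return float(np.mean(-(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))))
```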

Region prediction estimates the rectangular area containing a face; the rectangle can be expressed by the x, y coordinates of its left vertex together with its height and width. Region prediction is therefore a regression problem over four values, and the Euclidean distance is used as the objective function.

Landmark coordinate prediction estimates the x, y coordinates of five facial feature points (left eye, right eye, nose, left mouth corner, right mouth corner), a regression problem over ten values. As with region prediction, the Euclidean distance is used as the objective function.
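Both regression heads can share one objective function; a minimal sketch:

```python
import numpy as np

def euclidean_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Shared regression objective: pred/target carry 4 values for the
    box (x, y, height, width) or 10 values for the five landmark
    (x, y) pairs."""
    return float(np.sum((pred - target) ** 2))
```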

The following describes how facial features are extracted.

In a typical convolutional neural network, performance improves as the network becomes deeper. However, repeated convolution has the disadvantage of lowering the resolution of the feature maps, and as the number of channels multiplies, the number of model parameters also grows sharply.

The model structure used in the present invention is based on the inception model published with GoogLeNet, whose structure is shown in Fig. 4. Referring to Fig. 4, the inception model was proposed to solve this problem and takes the approach of widening the network instead.

The overall network structure built from inception layers is shown in Fig. 5.

Referring to Fig. 5, the overall network stacks inception layers several times, adopting a structure that grows in both depth and width. This structure extracts features of various scales effectively and reduces the amount of computation compared with simply stacking the same number of convolution filters deeply. At the end, a fully connected layer produces the embedding, and L2 normalization equalizes the length of the embedding vectors.
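A minimal PyTorch sketch of one inception layer and the final L2-normalized embedding follows; the branch channel counts are illustrative, not the configuration actually used in the present invention.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel 1x1, 3x3, and 5x5 convolutions plus pooling, concatenated
    along the channel axis, in the spirit of Fig. 4."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 64, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 96, 1),        # 1x1 reduction
                                nn.Conv2d(96, 128, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 16, 1),
                                nn.Conv2d(16, 32, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 64 + 128 + 32 + 32 = 256 output channels in this configuration
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

def embed(features: torch.Tensor, fc: nn.Linear) -> torch.Tensor:
    """Final step of Fig. 5: fully connected projection, then L2
    normalization so every embedding vector has unit length."""
    v = fc(features)
    return v / v.norm(dim=1, keepdim=True)
```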

The following describes how the feature extraction model is trained.

The final output of the network is a 128-dimensional feature vector, and the triplet loss is introduced for face recognition. As shown in Fig. 6, training must pull the feature vectors of the same person together and push the feature vectors of different people apart.

Referring to Fig. 6, this can be expressed with the standard triplet loss:

L = Σ max(0, ||f(x_a) - f(x_p)||² - ||f(x_a) - f(x_n)||² + α),

where f(x_a) is the anchor's feature vector, f(x_p) is a feature vector of the same person as the anchor, f(x_n) is a feature vector of another person, and α is a margin enforcing the separation.
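A minimal numpy sketch of this loss for a single triplet, with an illustrative margin value:

```python
import numpy as np

def triplet_loss(f_a: np.ndarray, f_p: np.ndarray, f_n: np.ndarray,
                 alpha: float = 0.2) -> float:
    """Loss for one (anchor, positive, negative) triplet of embeddings;
    alpha = 0.2 is an illustrative margin, not a value fixed here."""
    d_pos = np.sum((f_a - f_p) ** 2)   # distance to the same person
    d_neg = np.sum((f_a - f_n) ** 2)   # distance to another person
    return float(max(0.0, d_pos - d_neg + alpha))
```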

The present invention analyzes deep learning models and applies them to face recognition to solve the problems that can occur in current blind interviews.

Referring to Figs. 7 and 8, when these models are applied to a blind interview system for tasks such as removing the school uniform from the applicant's photograph and checking the similarity between the applicant and the person being interviewed, they play an important role in solving the problems of the blind process.

Fig. 9 is a block diagram illustrating in detail an embodiment of surrogate interview prevention using the deep learning of the present invention.

Referring to Fig. 9, the present invention includes an examinee terminal 100, a supervisor terminal 200, a management server 300, an electronic authentication server 400, and an interview assessor terminal 500. Multiple terminals of each kind may be used per examination room; the terminals are capable of wired and wireless communication with one another and exchange information in real time.

The examinee terminal 100 is issued an electronic examination ticket through the management server 300 and uses it to request personal electronic authentication from the electronic authentication server 400.

The electronic authentication server 400 returns the result of the requested personal electronic authentication to the examinee terminal 100.

Using the authentication result from the electronic authentication server 400, the examinee terminal 100 temporarily issues an examinee identification code and an electronic-authentication encryption token, stores them on the management server 300, and then generates a QR code on the examinee terminal 100 and displays it on the screen.
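A minimal sketch of this examinee-side step, assuming the third-party qrcode package; the payload layout and token format are hypothetical stand-ins for the identification code and encryption token described above.

```python
import json
import secrets

import qrcode  # third-party package assumed: pip install qrcode[pil]

def make_examinee_qr(examinee_id: str, path: str = "ticket_qr.png") -> str:
    """Issue a one-time token, encode it with the examinee identification
    code into a QR image shown on the examinee terminal, and return the
    token so it can also be stored on the management server 300."""
    token = secrets.token_urlsafe(32)        # stand-in for the encryption token
    qrcode.make(json.dumps({"id": examinee_id, "token": token})).save(path)
    return token
```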

The supervisor terminal 200 scans the QR code displayed on the examinee terminal 100 with a camera recognition module, extracts the examinee identification code and the electronic-authentication encryption token from the QR code, and asks the management server 300 to verify their authenticity. Once authenticity is confirmed, the supervisor terminal photographs the applicant's attendance picture and transmits it to the management server 300.

The management server 300 temporarily stores the electronic-authentication encryption token generated by the examinee terminal 100; when it receives a verification request from the supervisor terminal 200 carrying the examinee identification code and the token, it checks whether the token is valid and reports the result to the supervisor terminal 200. At this point, the management server 300 destroys the token so that it can no longer be used. If the token received from the supervisor terminal 200 is valid, the management server 300 updates the examinee to the attendance-processed state; if not, it directs that the applicant's ID card be checked offline.
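Server-side, the one-time behavior of the token might be sketched as follows, assuming an in-memory store in place of a real database; the function names are hypothetical.

```python
import secrets

_pending_tokens: dict[str, str] = {}   # examinee identification code -> token

def register_token(examinee_id: str, token: str) -> None:
    _pending_tokens[examinee_id] = token

def verify_and_burn(examinee_id: str, token: str) -> bool:
    """The token is valid exactly once: it is removed from the store as
    part of the check, mirroring the server destroying it after use."""
    issued = _pending_tokens.pop(examinee_id, None)
    return issued is not None and secrets.compare_digest(issued, token)
```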

When the applicant is updated to the attendance-processed state, the management server 300 computes the cosine similarity between the feature vectors extracted from the attendance photograph received from the supervisor terminal 200 and from the photograph the applicant submitted with the application, and transmits the result to the interview assessor terminal 500.

In doing so, the management server 300 transmits the application photograph as a version in which the school uniform was blinded in advance.

100: examinee terminal
200: supervisor terminal
300: management server
400: electronic authentication server
500: interview assessor terminal

Claims (3)

1. A surrogate interview prevention method using deep learning, comprising:
scanning, by a supervisor terminal 200, a QR code displayed on an examinee terminal 100 with a camera recognition module, extracting an examinee identification code and an electronic-authentication encryption token from the QR code, and verifying their authenticity with a management server 300;
photographing, by the supervisor terminal, an attendance picture of the applicant once authenticity is confirmed, and transmitting it to the management server 300; and
computing, by the management server 300, when the applicant is updated to the attendance-processed state, the cosine similarity between feature vectors extracted from the attendance photograph received from the supervisor terminal 200 and from the photograph submitted by the applicant at application time, and transmitting it to an interview assessor terminal 500.

2. The method of claim 1, further comprising:
detecting a face (Face Detection) in the photograph submitted at the time of application; and
blinding the area below the detected face in the application photograph.

3. The method of claim 1, further comprising:
detecting a face (Face Detection) in the photograph taken by the attendance management system;
aligning the size, orientation, etc. of the detected face (Face Alignment);
extracting features from the aligned face (Feature Extraction); and
measuring the cosine similarity between the feature vectors extracted from the application photograph and the taken photograph.
KR1020190121441A 2019-10-01 2019-10-01 Surrogate Interview Prevention Method Using Deep Learning KR102145132B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020190121441A KR102145132B1 (en) 2019-10-01 2019-10-01 Surrogate Interview Prevention Method Using Deep Learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020190121441A KR102145132B1 (en) 2019-10-01 2019-10-01 Surrogate Interview Prevention Method Using Deep Learning

Publications (2)

Publication Number Publication Date
KR20190118993A true KR20190118993A (en) 2019-10-21
KR102145132B1 KR102145132B1 (en) 2020-08-14

Family

ID=68460659

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020190121441A KR102145132B1 (en) 2019-10-01 2019-10-01 Surrogate Interview Prevention Method Using Deep Learning

Country Status (1)

Country Link
KR (1) KR102145132B1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102557293B1 (en) * 2020-11-11 2023-07-19 위드로봇 주식회사 Camera device and method performed by the camera device


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020032010A (en) 2000-10-25 2002-05-03 이재찬 A user certifying system using a certification key and the certifying method
KR100983346B1 (en) 2009-08-11 2010-09-20 (주) 픽셀플러스 System and method for recognition faces using a infra red light
KR20140053504A (en) * 2012-10-26 2014-05-08 삼성전자주식회사 Face recognition method, machine-readable storage medium and face recognition device
KR20170106736A (en) * 2016-03-14 2017-09-22 이기곤 Smart Exam and Supervisor system
KR20190038203A (en) * 2017-09-29 2019-04-08 이인규 Facial expression recognition system and method using machine learning

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110929256A (en) * 2019-11-20 2020-03-27 秒针信息技术有限公司 Method and device for identifying abnormal access equipment
CN111080853A (en) * 2019-12-20 2020-04-28 珠海格力电器股份有限公司 Intelligent door lock system, unlocking method, device, equipment and medium
CN116863576A (en) * 2023-09-04 2023-10-10 民航成都电子技术有限责任公司 Method, system and medium for synchronizing passage information of aircrew
CN116863576B (en) * 2023-09-04 2023-12-22 民航成都电子技术有限责任公司 Method, system and medium for synchronizing passage information of aircrew

Also Published As

Publication number Publication date
KR102145132B1 (en) 2020-08-14

Similar Documents

Publication Publication Date Title
US10650261B2 (en) System and method for identifying re-photographed images
KR102596897B1 (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
KR20190118993A (en) Surrogate Interview Prevention Method and Processing Technology Using Deep Learning
Shetty et al. Facial recognition using Haar cascade and LBP classifiers
JP5076563B2 (en) Face matching device
CN111191568B (en) Method, device, equipment and medium for identifying flip image
Bhatti et al. Smart attendance management system using face recognition
Shrivastava et al. Conceptual model for proficient automated attendance system based on face recognition and gender classification using Haar-Cascade, LBPH algorithm along with LDA model
CN115601807A (en) Face recognition method suitable for online examination system and working method thereof
Gill et al. Attendance Management System Using Facial Recognition and Image Augmentation Technique
Kuang et al. A real-time attendance system using deep-learning face recognition
JP2022048464A (en) Nose print collation device, method and program
Lavanya et al. LBPH-Based Face Recognition System for Attendance Management
Sai et al. Student Attendance Monitoring System Using Face Recognition
Hosen et al. Face recognition-based attendance system with anti-spoofing, system alert, and email automation
Kaur et al. Automatic Attendance System Using AI and Raspberry Pi Controller
Joshi et al. Face Recognition Based Attendance System
Charishma et al. Smart Attendance System with and Without Mask using Face Recognition
Sriman et al. Robust Smart Face Recognition System Based on Integration of Local Binary Pattern (LBP), CNN and MTCNN for Attendance Registration
Neves et al. A Robust Approach to Detect Occlusions During Camera-Based Document Scanning
CN114760484B (en) Live video identification method, live video identification device, computer equipment and storage medium
Kapoor et al. Video Based Attendance System
Florence et al. Smart attendance marking system using face recognition
Pagare et al. Image Forgery Detection Model Analysis using Statistical Splicing Method for Architecture Learning and Feature Extraction
Hossain Applicability and Adaptability of Gait-based Biometric Security System in GCC

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant