WO2021145634A1 - Speaker authentication method - Google Patents

Speaker authentication method

Info

Publication number
WO2021145634A1
WO2021145634A1 (PCT/KR2021/000369)
Authority
WO
WIPO (PCT)
Prior art keywords
data
noise
voiceprint
user
authentication server
Prior art date
2020-01-13
Application number
PCT/KR2021/000369
Other languages
French (fr)
Korean (ko)
Inventor
최수택
진승범
Original Assignee
주식회사 인에이블
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2020-01-13
Filing date
2021-01-12
Publication date
Application filed by 주식회사 인에이블
Publication of WO2021145634A1

Classifications

    • G10L 17/00: Speaker identification or verification techniques
    • G10L 17/20: Pattern transformations or operations aimed at increasing system robustness, e.g. against channel noise or different working conditions
    • G06F 21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G10L 15/26: Speech-to-text systems
    • G10L 21/0208: Noise filtering
    • H04L 9/0861: Generation of secret information including derivation or calculation of cryptographic keys or passwords
    • H04L 9/3231: Authentication of a user of the system using biological data, e.g. fingerprint, voice or retina

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Biomedical Technology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Disclosed is a speaker authentication method that uses a voiceprint. The method comprises: a voiceprint data preparation step in which a user terminal separates a voiceprint and noise from a user's voice input, encrypts the separated voiceprint data (first voiceprint data), and transmits it to an authentication server, which stores the first voiceprint data received from the user terminal; and a speaker authentication step in which, at authentication time, the authentication server generates an arbitrary text and transmits it to the user terminal; the user terminal displays the text received from the authentication server, receives the user's voice input, separates a voiceprint and noise, encrypts the separated voiceprint data (second voiceprint data) using the noise as a key factor, and transmits it to the authentication server; and the authentication server compares the second voiceprint data received from the user terminal with the first voiceprint data stored in the preparation step and transmits the comparison result to the user terminal.

Description

Speaker authentication method
The present invention relates to a speaker authentication method using a voiceprint and, more particularly, to an improved speaker authentication method that exploits the noise environment around the speaker.
The market is currently shifting toward vehicle control through integration between vehicles and smartphones; the smartphone itself has become a useful means of proving an individual's identity, and smartphone payment is now commonplace. As a consequence, theft of a smartphone can also lead to theft of the assets its owner controls.
In particular, in systems that authenticate a user through a password or a one-time password (OTP) entered on a smartphone, there is no way to prevent someone other than the owner from observing the characters entered on the owner's smartphone and misusing them.
The present invention was conceived to solve the above problems, and one object of the present invention is to provide a method for effectively performing identity authentication by combining communication with speaker authentication.
Another object of the present invention is to provide a method for performing speaker authentication effectively by using the noise environment around the speaker.
A speaker authentication method according to the present invention for achieving the above objects is
an authentication method between an authentication server and a user terminal, and comprises:
a voiceprint data preparation process in which the user terminal separates a voiceprint and noise from the user's voice input, encrypts the separated voiceprint data (first voiceprint data) and noise data, and transmits them to the authentication server, and the authentication server stores the first voiceprint data and noise data received from the user terminal; and
a speaker authentication process in which, at authentication time, the authentication server generates an arbitrary text and transmits it to the user terminal; the user terminal displays the text received from the authentication server, receives the user's voice input, separates the voiceprint and noise, encrypts the separated voiceprint data (second voiceprint data) using the noise as a key factor, and transmits it to the authentication server; and the authentication server compares the second voiceprint data received from the user terminal with the first voiceprint data stored in the preparation process and transmits the comparison result to the user terminal.
Here, in the voiceprint data preparation process,
the authentication server generates an arbitrary text and transmits it to the user terminal; the user terminal displays the text received from the authentication server, receives the user's voice input, separates the voiceprint and noise, and encrypts the separated voiceprint data (first voiceprint data) and noise data before transmitting them to the authentication server.
Here, encryption in the voiceprint data preparation process uses the following formula:
Ciphertext = f_enc(Plaintext1, f_key1(v_server, device_id, v_user))
where
f_enc: encryption function
Plaintext1 = f(voiceprint data, noise data)
f_key1: first encryption key
V_server: a variable generated by the authentication server for each terminal; unique to each user terminal
Device_id: a constant unique to each terminal
V_user: an identifier for the user
Here, encryption in the speaker authentication process uses the following formula:
Ciphertext = f_enc(Plaintext2, f_key2(v_server, device_id, v_user, v_noise)) ---(Formula 2)
where
f_enc: encryption function
Plaintext2: voiceprint data
f_key2: second encryption key
V_server: a variable generated by the authentication server for each terminal; unique to each user terminal
Device_id: a constant unique to each terminal
V_user: an identifier for the user
V_noise: a constant generated from the noise data prepared in the voiceprint data preparation process
Here, at authentication time, the authentication server generates an arbitrary text and transmits it to the user terminal; the user terminal displays the text received from the authentication server, receives the user's voice input, separates the voiceprint and noise, encrypts the separated voiceprint data (second voiceprint data) using the noise as a key factor, and transmits it to the authentication server; the authentication server generates a third encryption key using the noise data stored during the voiceprint data preparation process, decrypts the second voiceprint data with the generated third encryption key, compares the decrypted second voiceprint data with the first voiceprint data stored during the voiceprint data preparation process, and transmits the comparison result to the user terminal.
The speaker authentication method according to the present invention can authenticate a user effectively by jointly considering the user's voiceprint and the ambient noise around the user.
The speaker authentication method according to the present invention can also respond effectively to attacks by exploiting the unpredictability of noise.
FIG. 1 is a flowchart illustrating how the speaker authentication method according to the present invention is performed.
FIG. 2 shows the voiceprint data preparation process of FIG. 1 in detail.
FIG. 3 shows the separation of the voiceprint and noise from the user's voice.
FIG. 4 shows the speaker authentication process of FIG. 1 in detail.
Hereinafter, the configuration and operation of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a flowchart illustrating how the speaker authentication method according to the present invention is performed.
Referring to FIG. 1, the speaker authentication method according to the present invention includes a voiceprint data preparation process (S100) and a speaker authentication process (S200).
The voiceprint data preparation process (S100) separates the voiceprint and noise from the user's voice and stores the separated voiceprint data (first voiceprint data) and noise data.
In the voiceprint data preparation process (S100), when the user terminal sends the voiceprint data (first voiceprint data) to the authentication server, it generates an encryption key (first encryption key) without using noise and encrypts the data with that key.
In the speaker authentication process (S200), the user is presented with an arbitrary text to read aloud; voiceprint data (second voiceprint data) and noise are separated from the voice of the user reading it; the separated voiceprint data (second voiceprint data) is encrypted with an encryption key that takes the noise as a factor and transmitted; and the authentication server performs authentication by comparing the voiceprint data (second voiceprint data) with the voiceprint data (first voiceprint data) stored during the voiceprint data preparation process.
In the speaker authentication process (S200), when the user terminal sends the voiceprint data (second voiceprint data) to the authentication server, it generates an encryption key (second encryption key) using the noise data prepared during the voiceprint data preparation process and encrypts the data with that key.
The authentication server, in turn, generates a third encryption key for decryption using the noise data stored during the voiceprint data preparation process (S100).
FIG. 2 shows the voiceprint data preparation process in detail.
Referring to FIG. 2, in the voiceprint data preparation process (S100), the user terminal displays an arbitrary text, for example a word, a sentence, or a combination of numbers and letters, for the user to read aloud; it then extracts the voiceprint and noise from the captured voice signal, encrypts them, and transmits them to the authentication server.
The arbitrary text may be one transmitted by the authentication server to the user terminal. The user terminal displays the text sent by the authentication server, records the user's voice as the text is read, and analyzes the recording to extract the voiceprint data (first voiceprint data) and noise data.
At this point, encryption is performed as shown in Formula 1 below:
Ciphertext = f_enc(Plaintext1, f_key1(v_server, device_id, v_user)) ---(Formula 1)
where
f_enc: encryption function
Plaintext1 = f(voiceprint data, noise data)
f_key1: first encryption key
V_server: a variable generated by the authentication server for each terminal; unique to each user terminal
Device_id: a constant unique to each terminal
V_user: an identifier for the user
Here, Plaintext1 = f(voiceprint data, noise data) means that the voiceprint data and the noise data are concatenated to form a single data sequence.
Device_id is generated from a combination of the user terminal's hardware (H/W) elements and device information.
The distinguishing feature of Formula 1 is that the encryption key is generated without using noise.
This voiceprint data preparation process (S100) may be performed at the time of membership registration.
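By way of illustration only, the following minimal Python sketch shows one way Formula 1 could be realized. The patent specifies neither the cipher nor the key-derivation function, so AES-256-GCM (from the cryptography package), SHA-256 key derivation, and all example values below are assumptions, not the patented implementation.

    import os
    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def f_key1(v_server: bytes, device_id: bytes, v_user: bytes) -> bytes:
        # First encryption key: derived from the server value, device id and
        # user id only; per Formula 1, noise is NOT a key factor here.
        return hashlib.sha256(b"|".join([v_server, device_id, v_user])).digest()

    def f(voiceprint_data: bytes, noise_data: bytes) -> bytes:
        # Plaintext1 = f(voiceprint data, noise data): the two byte sequences
        # are concatenated into one data sequence, as the text describes.
        # (A real implementation would need a length prefix or delimiter so
        # the server can split them apart again.)
        return voiceprint_data + noise_data

    def f_enc(plaintext: bytes, key: bytes) -> bytes:
        # Encrypt with AES-256-GCM; the random nonce is prepended to the
        # ciphertext so the holder of the key can decrypt later.
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    # All values below are hypothetical placeholders.
    v_server = os.urandom(16)          # per-terminal variable from the server
    device_id = b"example-device-id"   # constant unique to the terminal
    v_user = b"user-0001"              # user identifier
    voiceprint_data = b"first voiceprint features"
    noise_data = b"ambient noise features"

    ciphertext = f_enc(f(voiceprint_data, noise_data),
                       f_key1(v_server, device_id, v_user))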
FIG. 3 shows the separation of the voiceprint and noise from the user's voice during the voiceprint data preparation process.
Extracting the voiceprint data typically involves two steps.
The first is to decompose the recorded data into its constituent frequency components through a mathematical analysis known as the Fast Fourier Transform (FFT).
Through this operation, the human voice and the noise can be separated from each other.
Analyzing the waveform and amplitude of the extracted voice reveals a distribution characteristic of each person; by extracting these features, voiceprint data unique to that person, distinguishable from everyone else's, is constructed.
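As a rough illustration of the FFT step, the sketch below splits a recording into a voice-band component and a residual noise component. The patent names only the FFT; the fixed 300-3400 Hz voice band used here is an assumption made purely for illustration, not the patented separation method.

    import numpy as np

    def split_voice_and_noise(signal, sample_rate, lo=300.0, hi=3400.0):
        # Decompose the recording into frequency components (FFT), keep the
        # assumed voice band as "voice" and everything else as "noise",
        # then transform each part back to the time domain.
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        voice_band = (freqs >= lo) & (freqs <= hi)
        voice = np.fft.irfft(np.where(voice_band, spectrum, 0), n=len(signal))
        noise = np.fft.irfft(np.where(voice_band, 0, spectrum), n=len(signal))
        return voice, noise

    # Example: one second of a synthetic 440 Hz tone standing in for the
    # voice, plus wideband noise.
    rate = 16000
    t = np.arange(rate) / rate
    recording = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(rate)
    voice, noise = split_voice_and_noise(recording, rate)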
FIG. 4 shows the speaker authentication process (S200) of FIG. 1 in detail.
Referring to FIG. 4, in the speaker authentication process (S200), the authentication server generates an arbitrary sentence, word, number, or combination of characters and transmits it to the user terminal; the user terminal displays the transmitted content to the user, extracts the voiceprint and noise from the recording of the user reading that content, and transmits them to the authentication server.
Here, encryption is performed using Formula 2 below:
Ciphertext = f_enc(Plaintext2, f_key2(v_server, device_id, v_user, v_noise)) ---(Formula 2)
where
f_enc: encryption function
Plaintext2: voiceprint data
f_key2: second encryption key
V_server: a variable generated by the authentication server for each terminal; unique to each user terminal
Device_id: a constant unique to each terminal
V_user: an identifier for the user
V_noise: a constant generated from the noise data prepared during the voiceprint data preparation process
The distinguishing feature of Formula 2 is that the encryption key is generated using noise.
The noise captured at authentication time is not used. Using the authentication-time noise would require transmitting and storing the noise on every authentication attempt; instead, it suffices to use only the noise captured at enrollment, in combination with existing, commercially available techniques.
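Continuing the sketch given for Formula 1, Formula 2 could be realized as below. Deriving v_noise by hashing the enrollment noise data with SHA-256 is an assumption; the patent says only that v_noise is a constant generated from the noise data stored during the voiceprint data preparation process.

    import hashlib

    def v_noise_from(noise_data: bytes) -> bytes:
        # Constant derived from the noise stored at enrollment (assumed to
        # be a SHA-256 digest; the patent does not specify the derivation).
        return hashlib.sha256(noise_data).digest()

    def f_key2(v_server, device_id, v_user, v_noise):
        # Second encryption key: the same inputs as f_key1 plus v_noise,
        # per Formula 2.
        return hashlib.sha256(b"|".join([v_server, device_id,
                                         v_user, v_noise])).digest()

    # Plaintext2 is the second voiceprint data alone; f_enc and the other
    # values are those of the Formula 1 sketch above.
    second_voiceprint = b"second voiceprint features"
    key2 = f_key2(v_server, device_id, v_user, v_noise_from(noise_data))
    ciphertext2 = f_enc(second_voiceprint, key2)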
The authentication server compares the voiceprint data (second voiceprint data) transmitted by the user terminal with the voiceprint data (first voiceprint data) stored during the voiceprint data preparation process and transmits the result to the user terminal.
The authentication server generates an encryption key (third encryption key) for decryption using the noise data stored during the voiceprint data preparation process (S100). Using the generated third encryption key, it decrypts the voiceprint data (second voiceprint data) transmitted by the user terminal and checks whether the decrypted second voiceprint data matches the first voiceprint data stored during the voiceprint data preparation process.
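On the server side this could look as follows, continuing the sketches above. Because the server stored the noise data at enrollment, its third key is derived from exactly the same inputs as the terminal's second key and therefore matches it. The cosine-similarity comparison and its 0.8 threshold are assumptions; the patent does not specify how the two voiceprints are compared.

    import numpy as np
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def f_dec(ciphertext: bytes, key: bytes) -> bytes:
        # Inverse of the f_enc sketch: split off the prepended nonce.
        nonce, body = ciphertext[:12], ciphertext[12:]
        return AESGCM(key).decrypt(nonce, body, None)

    def voiceprints_match(a: bytes, b: bytes, threshold: float = 0.8) -> bool:
        # Assumed comparison: cosine similarity over the raw feature bytes.
        x = np.frombuffer(a, dtype=np.uint8).astype(float)
        y = np.frombuffer(b, dtype=np.uint8).astype(float)
        n = min(len(x), len(y))
        x, y = x[:n], y[:n]
        sim = float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
        return sim >= threshold

    # What the server saved during the preparation process (S100).
    stored_noise = noise_data
    stored_first_voiceprint = voiceprint_data

    # The third key equals the second key: both hash the same inputs.
    key3 = f_key2(v_server, device_id, v_user, v_noise_from(stored_noise))
    received_voiceprint = f_dec(ciphertext2, key3)
    authenticated = voiceprints_match(received_voiceprint, stored_first_voiceprint)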
Reliable encryption relies on a block cipher algorithm, and a block cipher algorithm always requires a key. Attacks on such encryption schemes therefore concentrate on recovering the key.
The main feature of the present invention is that the noise data prepared in the voiceprint data preparation process (S100) is used to generate the encryption key in the speaker authentication process (S200).
During the voiceprint data preparation process (S100), no noise data has yet been stored on the server, so the server cannot know the noise data collected by the terminal; in this case, therefore, noise data cannot be used to generate the encryption key.
That is, in the voiceprint data preparation process (S100), encryption is performed as shown in Formula 1 without taking noise as a key factor, whereas in the speaker authentication process (S200), as shown in Formula 2, the second encryption key is generated with the noise data prepared during the voiceprint data preparation process as a factor.
Noise data is highly varied and no two captures are identical, so it provides excellent randomness. An encryption key generated with noise mixed in is therefore very difficult to recover.
The speaker authentication method according to the present invention is strong in two respects.
First, the user can be authenticated effectively by jointly considering the user's voiceprint and the ambient noise around the user.
Second, the unpredictability of noise can be exploited to build an authentication method that is difficult to attack.
The terms and words used in this specification and claims should not be construed as limited to their ordinary or dictionary meanings; on the principle that an inventor may appropriately define terms in order to describe his or her own invention in the best way, they should be interpreted with meanings and concepts consistent with the technical idea of the present invention.
Accordingly, the embodiments described in this specification and the configurations shown in the drawings are merely the most preferred embodiments of the present invention and do not represent the entire technical idea of the invention; it should be understood that, at the time of this application, various equivalents and modifications capable of replacing them may exist.

Claims (3)

  1. An authentication method between an authentication server and a user terminal, comprising:
    a voiceprint data preparation process in which the user terminal separates a voiceprint and noise from the user's voice input, encrypts the separated voiceprint data (first voiceprint data) and noise data, and transmits them to the authentication server, and the authentication server stores the first voiceprint data and noise data received from the user terminal; and
    a speaker authentication process in which, at authentication time, the authentication server generates an arbitrary text and transmits it to the user terminal; the user terminal displays the text received from the authentication server, receives the user's voice input, separates the voiceprint and noise, encrypts the separated voiceprint data (second voiceprint data) using the noise as a key factor, and transmits it to the authentication server; and the authentication server compares the second voiceprint data received from the user terminal with the first voiceprint data stored in the preparation process and transmits the comparison result to the user terminal,
    wherein, in the voiceprint data preparation process,
    the authentication server generates an arbitrary text and transmits it to the user terminal, and the user terminal displays the text received from the authentication server, receives the user's voice input, separates the voiceprint and noise, and encrypts the separated voiceprint data (first voiceprint data) before transmitting it to the authentication server, and
    wherein encryption in the voiceprint data preparation process uses the following formula:
    Ciphertext = f_enc(Plaintext1, f_key1(v_server, device_id, v_user))
    where
    f_enc: encryption function
    Plaintext1 = f(voiceprint data, noise data)
    f_key1: first encryption key
    V_server: a variable generated by the authentication server for each terminal; unique to each user terminal
    Device_id: a constant unique to each terminal
    V_user: an identifier for the user.
  2. The speaker authentication method of claim 1, wherein encryption in the speaker authentication process uses the following formula:
    Ciphertext = f_enc(Plaintext2, f_key2(v_server, device_id, v_user, v_noise))
    where
    f_enc: encryption function
    Plaintext2: voiceprint data
    f_key2: second encryption key
    V_server: a variable generated by the authentication server for each terminal; unique to each user terminal
    Device_id: a constant unique to each terminal
    V_user: an identifier for the user
    V_noise: a constant generated from the noise data prepared in the voiceprint data preparation process.
  3. The speaker authentication method of claim 2, wherein, in the speaker authentication process, the authentication server generates an arbitrary text and transmits it to the user terminal; the user terminal displays the text received from the authentication server, receives the user's voice input, separates the voiceprint and noise, encrypts the separated voiceprint data (second voiceprint data) using the noise as a key factor, and transmits it to the authentication server; and the authentication server generates a third encryption key using the noise data stored during the voiceprint data preparation process, decrypts the second voiceprint data with the generated third encryption key, compares the decrypted second voiceprint data with the first voiceprint data stored during the voiceprint data preparation process, and transmits the comparison result to the user terminal.
PCT/KR2021/000369 2020-01-13 2021-01-12 Speaker authentication method WO2021145634A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0004159 2020-01-13
KR1020200004159A KR102227418B1 (en) 2020-01-13 2020-01-13 Method for certificating of speaker

Publications (1)

Publication Number Publication Date
WO2021145634A1 (en)

Family

ID=75177121

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/000369 WO2021145634A1 (en) 2020-01-13 2021-01-12 Speaker authentication method

Country Status (2)

Country Link
KR (1) KR102227418B1 (en)
WO (1) WO2021145634A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102356069B1 (en) * 2021-12-21 2022-02-09 (주)리인터내셔널 Forneigner exclusive casino game system and method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100276846B1 (en) * 1997-06-11 2001-01-15 포만 제프리 엘 Portable acoustic interface for remote access to automatic speech/speaker recognition server
KR100672341B1 (en) * 2006-01-20 2007-01-24 엘지전자 주식회사 Method for data encryption, and terminal for the same
KR101325867B1 (en) * 2012-02-24 2013-11-05 주식회사 팬택 Method for authenticating user using voice recognition, device and system for the same
KR101908711B1 (en) * 2015-03-20 2018-10-16 바이두 온라인 네트웍 테크놀러지 (베이징) 캄파니 리미티드 Artificial intelligence based voiceprint login method and device
KR20190138085A (en) * 2018-06-04 2019-12-12 김지헌 Identity authentication earphone device with a unique voice recognition

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101395805B1 (en) 2013-09-06 2014-05-20 주식회사 엔에스에이치씨 User authorization method using voice print and random number sequence

Also Published As

Publication number Publication date
KR102227418B1 (en) 2021-03-12

Similar Documents

Publication Publication Date Title
WO2020204444A2 (en) Secret key security method of distributing and storing key in blockchain node and/or possession device having wallet app installed therein
WO2017111383A1 (en) Biometric data-based authentication device, control server linked to same, and biometric data-based login method for same
WO2021150032A1 (en) Method for providing authentication service by using decentralized identity and server using the same
WO1999016031A3 (en) Method and apparatus for asymmetric key management in a cryptographic system
CN102664739A (en) PKI (Public Key Infrastructure) implementation method based on safety certificate
WO2020117020A1 (en) Method for generating pki key based on biometric information and device for generating key by using same method
WO2014003362A1 (en) Otp-based authentication system and method
CN108600213A (en) The compound identity authorization system of compound identity identifying method and application this method
WO2021145634A1 (en) Speaker authentication method
WO2021182683A1 (en) Voice authentication system into which watermark is inserted, and method therefor
CN112329519A (en) Safe online fingerprint matching method
WO2019125041A1 (en) Authentication system using separation, then distributed storage of personal information using blockchain
WO2020235942A9 (en) System for restoring lost private key
CN109039643B (en) A kind of sustainable method for authenticating user identity and system based on electromagnetic radiation
CN108667801A (en) A kind of Internet of Things access identity safety certifying method and system
CN104639528A (en) DBA (database administrator) mobile client counterattack method and DBA mobile client counterattack device
WO2020111403A1 (en) Stream cipher-based image security method using zero-watermarking, server, and computer readable recording medium
Wu et al. Attacks and countermeasures on privacy-preserving biometric authentication schemes
JP2017530636A (en) Authentication stick
JP2002269047A (en) Sound user authentication system
CN111698253A (en) Computer network safety system
WO2015160190A1 (en) Device and method for generating virtual keyboard for user authentication
CN115941176A (en) PUF-based bidirectional authentication and key agreement method
WO2021025403A2 (en) Security key management method and security key management server
Mishra et al. Pseudo-biometric identity framework: Achieving self-sovereignity for biometrics on blockchain

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21741007

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.12.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 21741007

Country of ref document: EP

Kind code of ref document: A1