WO2022160691A1 - Reliable user authentication method and system based on mandibular biological features - Google Patents

Reliable user authentication method and system based on mandibular biological features

Info

Publication number
WO2022160691A1
WO2022160691A1 · PCT/CN2021/114402
Authority
WO
WIPO (PCT)
Prior art keywords
mandibular
biometrics
vibration
user authentication
user
Prior art date
Application number
PCT/CN2021/114402
Other languages
French (fr)
Chinese (zh)
Inventor
刘建伟
宋文帆
沈乐明
韩劲松
任奎
Original Assignee
浙江大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江大学
Publication of WO2022160691A1 publication Critical patent/WO2022160691A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 - Complex mathematical operations
    • G06F 17/14 - Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/08 - Feature extraction

Definitions

  • The invention belongs to the field of user authentication, and in particular relates to a reliable user authentication method that uses an inertial measurement unit in an earphone to collect vibration signals containing mandibular biometric features and a deep neural network to extract those features.
  • User authentication technology is widely deployed in security- and privacy-related applications in people's daily life, for example the door lock of a user's home, smartphone unlocking, and identity checks at transportation hubs such as high-speed rail stations and airports.
  • Existing user authentication technologies can be divided into biometric-based user authentication technologies and non-biometrics-based identity authentication technologies according to whether or not biometrics are used. Both of the existing authentication technologies have their own drawbacks.
  • Non-biometric based authentication techniques often use knowledge or equipment as an authentication credential.
  • the password or pattern unlocking of a mobile phone is a knowledge credential.
  • A user's national ID card and badge-style ID cards are device credentials; both of these authentication methods are simple and easy to use.
  • the knowledge-based authentication method requires the user to remember sufficiently complex knowledge for sufficient security, which brings inconvenience to the user.
  • Device-based authentication has a big security risk if the device is lost, because anyone who has the device can be authenticated.
  • In order to solve the inconvenience and insecurity of non-biometric authentication technology, biometric-based authentication technology has been proposed and widely studied.
  • Existing biometric-based user authentication technologies include in vivo biometric-based and in vitro biometric-based authentication technologies.
  • Fingerprints and facial features, for example, are in vitro features because they can be captured from the body surface.
  • Brain wave and heartbeat signatures are in vivo biosignatures because they are extracted from organ dynamics in the body.
  • Although authentication based on in vitro features makes feature collection very convenient, fingerprints and facial features are easily stolen and copied by attackers, so its security falls short of authentication based on in vivo biometrics.
  • However, authentication based on in vivo biometrics is very inconvenient at collection time, and the features themselves are not stable enough.
  • brain wave acquisition devices are cumbersome and not suitable for long-term wear.
  • Heartbeat characteristics are easily affected by factors such as mood and physical movement. Therefore, there is an urgent need for a reliable user authentication technology that can capture stable biometrics with a simple acquisition method.
  • inertial measurement units can already be deployed in existing headset devices. This makes it possible to acquire the vibration signal of the mandible with the inertial measurement unit.
  • the headset will become an important computing platform for the next generation, and there are already headsets that deploy deep neural networks for real-time language translation. Therefore, it is feasible to extract mandibular biometrics (in vivo features) from vibration signals using deep neural networks in headphones.
  • the invention utilizes the inertial measurement unit in the earphone to collect the vibration signal of the mandible and uses the deep neural network to extract the biometrics of the mandible in the body, and proposes a reliable authentication method based on the biometrics of the mandible.
  • the present invention proposes a user authentication method that uses a simple inertial measurement unit in a headset to collect in-vivo vibration signals, and uses a deep neural network to extract stable and reliable in-vivo biometrics.
  • a method for reliable user authentication based on mandibular biometrics comprising the following steps:
  • An inertial measurement unit, comprising an accelerometer and a gyroscope, is used to obtain the six-axis vibration signals generated when the user vocalizes with the throat. By closing the mouth and humming an 'emm' sound, the user causes the mandible to vibrate; the vibration travels along the mandible to the ear, where it is picked up by the inertial measurement unit in the earphone.
  • For each axis, the vibration signal has outliers removed, is filtered and min-max (dispersion) normalized, and gradient values are computed and split into positive and negative vibration signals, which are finally concatenated into a gradient array.
  • the gradient array is input to the mandibular biometric extractor, the mandibular biometrics are obtained, and the user is registered and authenticated according to the mandibular biometrics.
  • the vibration signals of the six axes are N sampling points intercepted from the vibration starting point, and N is not less than 60.
  • vibration starting point is determined by the following method:
  • The original signal is divided into windows of ten values each, and the standard deviation of the signal values within each window is calculated. If a window's standard deviation is greater than 250, and the standard deviations of the following two windows are both greater than 100, the first signal value of that window is taken as the vibration starting point.
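As a concrete illustration, the windowed standard-deviation detection described above can be sketched as follows. The window size of ten and the 250/100 thresholds come from the text; the return convention (-1 when no vibration is found) is an assumption.

```python
import numpy as np

def find_vibration_start(signal, window=10, main_thresh=250.0, follow_thresh=100.0):
    """Return the index of the first sample of the first window whose standard
    deviation exceeds main_thresh, provided the next two windows each exceed
    follow_thresh; return -1 if no such window exists."""
    n_windows = len(signal) // window
    stds = [np.std(signal[i * window:(i + 1) * window]) for i in range(n_windows)]
    for i in range(n_windows - 2):
        if stds[i] > main_thresh and stds[i + 1] > follow_thresh and stds[i + 2] > follow_thresh:
            return i * window
    return -1
```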
  • Outlier removal proceeds as follows: the MAD detection algorithm identifies outliers, and each outlier is replaced with the mean of the two normal values before it and the two normal values after it, achieving noise reduction.
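A sketch of this outlier-replacement step, assuming a standard median-absolute-deviation rule with a hypothetical cutoff of 3 MADs (the text does not specify the cutoff):

```python
import numpy as np

def replace_outliers_mad(x, thresh=3.0):
    """Flag samples whose distance from the median exceeds thresh * MAD, then
    replace each flagged sample with the mean of the two preceding and two
    following non-flagged samples, as described in the text."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:
        return x.copy()
    is_outlier = np.abs(x - med) > thresh * mad
    cleaned = x.copy()
    normal_idx = np.flatnonzero(~is_outlier)
    for i in np.flatnonzero(is_outlier):
        prev_two = normal_idx[normal_idx < i][-2:]
        next_two = normal_idx[normal_idx > i][:2]
        neighbours = np.concatenate([prev_two, next_two])
        if neighbours.size:
            cleaned[i] = x[neighbours].mean()
    return cleaned
```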
  • High-pass filtering uses a Butterworth filter with a cutoff frequency of 20 Hz, because human vocal frequencies are generally above 150 Hz while frequencies caused by body motion are generally below 10 Hz. This filtering removes low-frequency components unrelated to the mandibular biometrics.
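With SciPy, the described filter might look like the sketch below. Only the 20 Hz cutoff and the 350 Hz sampling rate are stated in the document; the filter order of 4 and the use of zero-phase `filtfilt` are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def highpass_20hz(x, fs=350.0, order=4):
    """Butterworth high-pass at 20 Hz; fs = 350 Hz is the IMU sampling rate
    given in the text. filtfilt applies the filter forward and backward for
    zero-phase output."""
    b, a = butter(order, 20.0 / (fs / 2.0), btype="highpass")
    return filtfilt(b, a, x)
```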
  • it also includes performing interpolation processing on the positive and negative vibration signals to keep the dimensions of the gradient array consistent.
  • The mandibular biometric feature extractor is a trained neural network, classifier, or the like.
  • Preferably, the mandibular biometric extractor is a dual-branch deep neural network: feeding the positive gradient features and the negative gradient features into the network yields a biometric vector of dimension (1, 512). Training is performed by the system manufacturer.
  • Denoting the Gaussian matrix as G and the biometric vector output by the feature extractor as M1, the converted revocable mandibular biometric vector is computed as M = M1 × G.
  • If the revocable biometric template in the earphone is stolen, the user can switch to a different Gaussian matrix G and generate a new revocable biometric template.
  • Because the similarity between the old template and the new template is low, the attack is rejected.
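The template-revocation idea can be sketched as follows, assuming a square 512 × 512 Gaussian matrix generated from a user-held seed; the seed mechanism is illustrative, not from the document.

```python
import numpy as np

def make_revocable_template(feature, seed):
    """Multiply the 512-dimensional mandibular feature vector by a
    user-specific Gaussian matrix G (M = M1 @ G). Changing the seed
    produces a new G, revoking the old template."""
    rng = np.random.default_rng(seed)
    dim = feature.shape[-1]
    G = rng.standard_normal((dim, dim))
    return feature @ G
```

Templates derived from the same feature but different seeds are nearly orthogonal in high dimension, which is why a stolen template becomes useless after revocation.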
  • the registration is completed by saving the obtained biometric features of the registered user's mandible.
  • The process of authenticating the user according to the mandibular biometrics is as follows: the similarity between the authenticating user's mandibular biometrics and the biometrics saved at registration is calculated; if the similarity is greater than a threshold, the authentication is accepted, otherwise it is rejected.
  • the cosine algorithm is used to calculate the similarity, that is, the cosine similarity between the new revocable mandible biometric vector and the revocable biometric vector template is calculated. If the obtained similarity is greater than the acceptance threshold set in advance, the authentication will be accepted. Otherwise, the authentication is rejected.
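A minimal sketch of the cosine-similarity decision; the default threshold of 0.57 used here is the value given in the embodiment later in the document.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(new_vec, template, threshold=0.57):
    """Accept if the new revocable vector is similar enough to the stored
    revocable template."""
    return cosine_similarity(new_vec, template) > threshold
```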
  • The user only needs to vocalize for 0.2 seconds to provide 60 vibration sampling points, since the inertial measurement unit can sample at 350 Hz.
  • Each axis's data (amplitudes) are processed with min-max (dispersion) normalization: v'_i = (v_i − v_min) / (v_max − v_min), where v'_i, v_i, v_min, v_max are the normalized vibration data value, the vibration data value before normalization, and the maximum and minimum values in that axis, respectively.
  • The gradient of each axis is computed as g_i = (v'_{i+1} − v'_i) / |t_{i+1} − t_i|, where g_i, v'_i, and |t_{i+1} − t_i| are the i-th gradient value, the i-th normalized vibration data value, and the time difference between the i-th and (i+1)-th data, respectively.
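The normalization and gradient formulas above translate directly to code; a minimal sketch for a single axis:

```python
import numpy as np

def minmax_normalize(v):
    """Dispersion (min-max) normalization: v' = (v - v_min) / (v_max - v_min)."""
    v = np.asarray(v, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

def gradients(v_norm, t):
    """g_i = (v'_{i+1} - v'_i) / |t_{i+1} - t_i| for one axis."""
    v_norm = np.asarray(v_norm, dtype=float)
    t = np.asarray(t, dtype=float)
    return np.diff(v_norm) / np.abs(np.diff(t))
```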
  • the present invention also provides an authentication system for the above-mentioned reliable user authentication method based on mandibular biometric features, including:
  • the inertial measurement unit is used to obtain the six-axis vibration signal generated by the user's vocalization in the throat.
  • the signal processing module is used for removing outliers, filtering, normalizing the dispersion and calculating the gradient value of the vibration signal of each axis, dividing it into positive and negative vibration signals, and finally splicing to form a gradient array.
  • Mandible biometric extractor for extracting mandibular biometrics based on gradient arrays.
  • The registration and authentication module is used to store the mandibular biometrics generated at registration, and to perform authentication against them during user authentication.
  • The above authentication system can be a standalone system or can be built into an existing device containing an inertial measurement unit, such as an earphone: the earphone manufacturer embeds the trained mandibular biometric extraction network and the registration and authentication module into the earphone system during manufacturing, so the system can be used for device authentication.
  • Compared with existing user authentication technology, the present invention generates a vibration signal through the user's throat, which drives the mandible to vibrate, and collects the vibration signal containing the biometric features with an earphone.
  • The present invention captures the vibration features of the mandible using the inertial measurement unit, separates positive and negative vibration features by the sign of the gradient, and extracts mandibular biometric features from the vibration signal with a dual-branch deep neural network. Biometric vectors are transformed into revocable biometric vectors with a Gaussian matrix to prevent replay attacks.
  • Fig. 1 is the authentication flow chart of the present invention
  • Fig. 2 is a schematic diagram of a mandibular vibration model
  • Fig. 3 is a structure diagram of a dual-branch neural network
  • Figure 4 is a schematic diagram of authentication performance
  • The present invention proposes a method that uses an inertial measurement unit on a wearable device, such as an earphone, to capture safe, reliable and highly discriminative biometrics for authentication.
  • Figure 2 shows the vibration model of the mandible. Assuming that the force generating the positive vibration signal is F_P, the mass of the mandible is m, and the damping and elastic coefficients of the body components on the two sides of the mandible are (c_1, c_2) and (k_1, k_2) respectively, then the motion of the mandible follows from Newton's second law.
  • Here w, X_P(w), i, and F_P(0) represent the frequency components of the vibration signal, the spectrum of the vibration signal, the imaginary unit, and the instantaneous force that produces the positive vibration, respectively.
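The equations themselves are omitted from this extract. Under the stated assumptions (a mass-damper-spring model of the mandible driven by F_P, with an impulsive force so that the spectrum of F_P is the constant F_P(0)), a plausible reconstruction consistent with the symbols defined above is:

```latex
% Newton's second law for the mandible (mass m, combined damping c_1 + c_2,
% combined stiffness k_1 + k_2) driven by the positive-vibration force:
m\,\ddot{x}_P(t) + (c_1 + c_2)\,\dot{x}_P(t) + (k_1 + k_2)\,x_P(t) = F_P(t)

% Fourier transform with an impulsive force F_P(t) = F_P(0)\,\delta(t)
% gives the spectrum of the vibration signal:
X_P(w) = \frac{F_P(0)}{(k_1 + k_2) - m\,w^2 + i\,w\,(c_1 + c_2)}
```

This frequency response depends on the user-specific parameters m, c_1, c_2, k_1, k_2, which is the physical basis for the mandibular biometric.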
  • Step 1) The user makes a sound for 0.2 seconds with the throat.
  • the vibration of the throat vibrates the user's jawbone.
  • the vibration signal is captured by an inertial measurement unit in the headset as it travels along the mandible to the headset.
  • the vibration signal contains the biometric features of the user's mandible.
  • Step 2) Preprocess the original vibration signal collected by the inertial measurement unit to remove noise and environment-independent components in the signal data, and finally obtain a signal array.
  • The vibration is considered to start from the first data value of the window whose standard deviation exceeds 250, and 60 consecutive data values are selected from the vibration signal of each axis.
  • The data of each axis are high-pass filtered with a Butterworth filter with a 20 Hz cutoff frequency, eliminating the low-frequency components irrelevant to the mandibular biometrics and making the data cleaner.
  • the normalized six-axis data are spliced to form a signal array of dimension (6, 60).
  • Step 3) Calculate the positive and negative gradient features.
  • In the signal array obtained from steps 1) and 2), positive and negative vibrations are hard to distinguish, so the gradient of each axis is computed and the positive and negative vibrations are separated by the sign of the gradient: gradients greater than or equal to 0 are positive vibrations, and the remaining gradients are negative vibrations.
  • the obtained positive and negative gradients are linearly interpolated separately, so that the gradients in each direction of each axis contain 30 values. Concatenate the gradients of all axes to form a gradient array of dimension (2, 6, 30).
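The sign split and interpolation described above can be sketched as follows. The (2, 6, 30) shape and the >= 0 sign convention are from the text; padding an empty sign class with zeros is an assumption.

```python
import numpy as np

def split_and_resample(grads, out_len=30):
    """Split one axis's gradient sequence by sign (>= 0 is positive vibration),
    then linearly interpolate each part to out_len values."""
    grads = np.asarray(grads, dtype=float)

    def resample(part):
        if part.size == 0:
            return np.zeros(out_len)  # assumption: pad an empty class with zeros
        return np.interp(np.linspace(0, part.size - 1, out_len),
                         np.arange(part.size), part)

    return resample(grads[grads >= 0]), resample(grads[grads < 0])

def build_gradient_array(axes_grads, out_len=30):
    """Stack per-axis positive/negative gradients into a (2, 6, 30) array."""
    pos_rows, neg_rows = zip(*(split_and_resample(g, out_len) for g in axes_grads))
    return np.stack([np.stack(pos_rows), np.stack(neg_rows)])
```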
  • Step 4) Input the gradient matrix obtained in step 3) into the mandible biometric extractor to obtain a mandible biometric vector.
  • This vector is multiplied by a Gaussian matrix and stored in the headset as a revocable authentication template.
  • the mandibular biometric feature extractor adopts a trained neural network, and its structure is shown in FIG. 3 , including two convolution branches inputting positive gradient and negative gradient respectively.
  • Each convolutional branch consists of three convolutional layers.
  • Each convolutional layer is followed by a batch normalization function and a ReLU activation function.
  • the outputs of the convolutional layers are concatenated and fed into two fully connected layers.
  • the output of the first fully connected layer is the mandible biometric vector extracted from the gradient feature array.
  • Step 5) The user has the earphone capture the mandibular vibration signal via 0.2 seconds of throat vibration.
  • the vibration signal is processed through steps 2) and 3) and then input to the feature extractor to obtain a new mandibular biological feature vector.
  • the similarity calculation is performed with the template in the headset. If the similarity is greater than 0.57, the authentication is accepted; otherwise, the authentication is rejected.
  • the present invention proposes a safe and reliable user authentication method that can be implemented on earphones, aiming at the situation that the security and ease of use cannot be guaranteed at the same time in the existing user authentication technology.
  • The invention requires no physical interaction between the user and the authentication device: the user only needs to produce throat vibration, the inertial measurement unit in the earphone captures the vibration signal containing the mandibular biometric features, multi-step preprocessing and a deep network extract highly discriminative mandibular biometrics, and a Gaussian matrix combined with similarity calculation yields a replay-resistant, secure authentication method.
  • the false rejection rate and false acceptance rate in the experimental data of 30 individuals are shown in Figure 4, and the equal error rate is less than 2%.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Analysis (AREA)
  • Computing Systems (AREA)
  • Pure & Applied Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Signal Processing (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Disclosed is a reliable user authentication method based on mandibular biological features. The method comprises: acquiring, by means of an inertial measurement unit, the six-axis vibration signals generated when a user makes a sound with his/her throat; preprocessing the vibration signals of each axis into a gradient array; and inputting the gradient array into a mandibular biological feature extractor to obtain mandibular biological features, and registering and authenticating the user according to those features. In the present invention, vibration signals are generated by the user making a sound with his/her throat, driving the mandible to vibrate, and the vibration signals containing the biological features are collected by means of an earphone. An inertial measurement unit is used to capture the vibration features of the mandible; positive and negative vibration features are separated by the sign of the gradient; and mandibular biological features are extracted from the vibration signals by a dual-branch deep neural network. Biological feature vectors are converted into cancelable (revocable) biological feature vectors using a Gaussian matrix, so as to resist replay attacks.

Description

A reliable user authentication method and system based on mandibular biometrics
TECHNICAL FIELD
The invention belongs to the field of user authentication, and in particular relates to a reliable user authentication method that uses an inertial measurement unit in an earphone to collect vibration signals containing mandibular biometric features and a deep neural network to extract those features.
BACKGROUND
User authentication technology is widely deployed in security- and privacy-related applications in people's daily life, for example the door lock of a user's home, smartphone unlocking, and identity checks at transportation hubs such as high-speed rail stations and airports. Existing user authentication technologies can be divided, according to whether biometrics are used, into biometric-based user authentication and non-biometric identity authentication. Both categories have their own drawbacks.
Non-biometric authentication typically uses knowledge or a device as the credential. For example, a phone's password or pattern unlock is a knowledge credential, while a user's national ID card or badge-style ID card is a device credential; both methods are simple and easy to use. However, knowledge-based authentication requires the user to remember sufficiently complex knowledge to be sufficiently secure, which is inconvenient, and device-based authentication carries a serious security risk once the device is lost, because anyone holding the device can pass authentication.
To address the inconvenience and insecurity of non-biometric authentication, biometric authentication has been proposed and widely studied. Existing biometric user authentication includes technologies based on in vivo biometrics and on in vitro biometrics. Fingerprints and facial features, for example, are in vitro features because they can be captured from the body surface, whereas brain-wave and heartbeat signatures are in vivo biometrics because they are extracted from the dynamics of organs inside the body. Although in vitro features are very convenient to collect, fingerprints and facial features are easily stolen and copied by attackers, so their security falls short of in vivo biometrics. In vivo biometrics, however, are inconvenient to collect and not stable enough: brain-wave acquisition devices are cumbersome and unsuitable for long-term wear, and heartbeat characteristics are easily affected by factors such as mood and physical movement. There is therefore an urgent need for a reliable user authentication technology that captures stable biometrics with a simple acquisition method.
With the rapid development of hardware, inertial measurement units can already be deployed in existing earphone devices, which makes it possible to acquire mandibular vibration signals with an inertial measurement unit. Moreover, earphones are expected to become an important next-generation computing platform, and some earphones already deploy deep neural networks for real-time language translation. It is therefore feasible to extract mandibular biometrics (in vivo features) from vibration signals using deep neural networks in earphones. The invention uses the inertial measurement unit in an earphone to collect mandibular vibration signals, uses a deep neural network to extract in vivo mandibular biometric features, and proposes a reliable authentication method based on these features.
SUMMARY OF THE INVENTION
To solve the problems in the prior art, the present invention proposes a user authentication method that uses a simple inertial measurement unit in an earphone to collect in-vivo vibration signals and a deep neural network to extract stable and reliable in-vivo biometric features.
To achieve the above purpose, the technical solution adopted by the present invention is:
A reliable user authentication method based on mandibular biometrics, comprising the following steps:
An inertial measurement unit, comprising an accelerometer and a gyroscope, acquires the six-axis vibration signals generated when the user vocalizes with the throat. By closing the mouth and humming an 'emm' sound, the user causes the mandible to vibrate; the vibration travels along the mandible to the ear and is received by the inertial measurement unit in the earphone.
For each axis, the vibration signal has outliers removed, is filtered and min-max (dispersion) normalized, and gradient values are computed and split into positive and negative vibration signals, which are finally concatenated into a gradient array.
The gradient array is input to the mandibular biometric extractor to obtain the mandibular biometric features, and the user is registered and authenticated according to these features.
Further, the six-axis vibration signals are N sampling points intercepted after the vibration starting point, with N not less than 60.
Further, the vibration starting point is determined as follows:
The original signal is divided into windows of ten values each, and the standard deviation of all signal values within each window is calculated. If a window's standard deviation is greater than 250 and the standard deviations of the following two windows are both greater than 100, the first signal value of that window is taken as the vibration starting point.
Further, outlier removal proceeds as follows: the MAD detection algorithm identifies outliers, and each outlier is replaced with the mean of the two normal values before it and the two normal values after it, achieving noise reduction. Further, high-pass filtering uses a Butterworth filter with a 20 Hz cutoff frequency, because human vocal frequencies are generally above 150 Hz while frequencies caused by body motion are generally below 10 Hz; this filtering removes low-frequency components unrelated to the mandibular biometrics.
Further, the method also includes interpolating the positive and negative vibration signals so that the dimensions of the gradient array remain consistent.
Further, the mandibular biometric extractor is a trained neural network, classifier, or the like. Preferably, the mandibular biometric extractor is a dual-branch deep neural network: feeding the positive gradient features and the negative gradient features into the network yields a biometric vector of dimension (1, 512). Training is performed by the system manufacturer.
Further, after the mandibular biometric features are obtained, the method also comprises:
Multiplying the mandibular biometric features by a Gaussian matrix to convert them into revocable mandibular biometric features. Denoting the Gaussian matrix as G and the biometric vector output by the feature extractor as M_1, the converted revocable mandibular biometric vector is computed as:
M = M_1 × G.
Once the revocable biometric template in the earphone is stolen, the user can switch to a different Gaussian matrix G and generate a new revocable biometric template. Because the similarity between the old template and the new template is low, the attack is rejected.
Further, the process of registering the user according to the mandibular biometric features is:
Saving the obtained mandibular biometric features of the registering user completes the registration.
The process of authenticating the user according to the mandibular biometric features is: the similarity between the authenticating user's mandibular biometric features and the features saved at registration is calculated; if the similarity is greater than a threshold, the authentication is accepted, otherwise it is rejected. The similarity is computed with the cosine measure, i.e. the cosine similarity between the new revocable mandibular biometric vector and the revocable biometric vector template; if the similarity exceeds the preset acceptance threshold, the authentication is accepted, otherwise it is rejected.
The user only needs to vocalize for 0.2 seconds to provide 60 vibration sampling points, since the inertial measurement unit can sample at 350 Hz.
其中,利用离差归一化处理每个轴的数据(幅值)的公式如下:Among them, the formula for using dispersion normalization to process the data (amplitude) of each axis is as follows:
Figure PCTCN2021114402-appb-000001
Figure PCTCN2021114402-appb-000001
其中v’ i,v i,v min,v max分别是归一化后的振动数据值,归一化前的振动数据值,该轴中的最大值,以及最小值。 Wherein v' i , v i , v min , v max are the normalized vibration data value, the vibration data value before normalization, the maximum value in the axis, and the minimum value, respectively.
The formula for computing the gradient of each axis is as follows:
g_i = (v′_(i+1) - v′_i) / |t_(i+1) - t_i|
where g_i, v′_i, and |t_(i+1) - t_i| are the i-th gradient value, the i-th normalized vibration data value, and the time difference between the i-th and (i+1)-th data, respectively.
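The gradient formula, applied to one axis of normalized values with their timestamps, can be sketched as:

```python
def gradients(values, times):
    # g_i = (v'_{i+1} - v'_i) / |t_{i+1} - t_i| for consecutive samples;
    # the result has one fewer element than the input.
    return [(values[i + 1] - values[i]) / abs(times[i + 1] - times[i])
            for i in range(len(values) - 1)]
```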
The present invention also provides an authentication system for the above reliable user authentication method based on mandibular biometrics, comprising:
an inertial measurement unit, configured to acquire the vibration signals of six axes generated when the user vocalizes in the throat;
a signal processing module, configured to, for the vibration signal of each axis, remove outliers, filter, apply dispersion normalization, compute gradient values divided into positive and negative vibration signals, and finally splice them into a gradient array;
a mandibular biometric extractor, configured to extract mandibular biometrics from the gradient array;
a registration and authentication module, configured to store the mandibular biometric template generated during registration and to perform authentication according to the mandibular biometrics during user authentication.
The above authentication system can be a stand-alone system or be built into an existing device containing an inertial measurement unit, such as an earphone: the earphone manufacturer embeds the trained mandibular biometric extraction network and the registration and authentication module into the earphone system during manufacture, so that the system can be used for device authentication.
Compared with existing user authentication technologies, in the present invention the user's throat vocalization generates a vibration signal that drives the mandible to vibrate, and the earphone collects the vibration signal carrying biometric information. The present invention captures the vibration features of the mandible with an inertial measurement unit, separates the positive and negative vibration features by the sign of the gradient, and extracts the mandibular biometrics from the vibration signal with a dual-branch deep neural network. A Gaussian matrix transforms the biometric vector into a revocable biometric vector to prevent replay attacks.
Description of Drawings
Fig. 1 is the authentication flow chart of the present invention;
Fig. 2 is a schematic diagram of the mandibular vibration model;
Fig. 3 is a structural diagram of the dual-branch neural network;
Fig. 4 is a schematic diagram of the authentication performance.
Detailed Description of Embodiments
In view of the fact that existing user authentication technologies cannot satisfy ease of use and security at the same time, the present invention proposes an authentication method that uses an inertial measurement unit on a wearable device, such as an earphone, to capture secure, reliable, and highly discriminative biometrics. Fig. 2 shows the vibration model of the mandible. Assume that the force generating the positive vibration signal is F_P, the mass of the mandible is m, and the damping and elastic coefficients of the body components on the two sides of the mandible are (c_1, c_2) and (k_1, k_2), respectively. According to Newton's second law:
F_P(t) = m x″(t) + c_1 x′(t) + (k_1 + k_2) x(t),
where x(t) is the positive vibration displacement of the mandible. Applying the Fourier transform to this equation yields:
X_P(w) = F_P(0) / ((k_1 + k_2) - m w^2 + i c_1 w)
where w, X_P(w), i, and F_P(0) denote the frequency component of the vibration signal, the spectrum of the vibration signal, the imaginary unit, and the instantaneous force producing the positive vibration, respectively. After the vibration is transmitted to the earphone, the spectrum of the vibration signal can be expressed as:
X_P^e(w) = e^(-a d) X_P(w) = e^(-a d) F_P(0) / ((k_1 + k_2) - m w^2 + i c_1 w)
where a and d are the attenuation coefficient and the propagation distance of the vibration signal, respectively. Similar to the propagation law of the positive vibration signal (with F_N(0) the instantaneous force producing the negative vibration and c_2 the corresponding damping coefficient), the spectrum of the negative vibration signal when it reaches the earphone can be expressed as:
X_N^e(w) = e^(-a d) F_N(0) / ((k_1 + k_2) - m w^2 + i c_2 w)
Therefore, one vibration period (the mandible moving from the center position to the positive edge and from the center position to the negative edge) can be expressed as:
X(w) = e^(-a d) [ F_P(0) / ((k_1 + k_2) - m w^2 + i c_1 w) + F_N(0) / ((k_1 + k_2) - m w^2 + i c_2 w) ]
It can be seen that m, c_1, c_2, k_1, and k_2 in this formula are all biometric characteristics of the mandible and differ between individuals. Therefore, capturing vibration signals containing mandibular biometrics from the earphone's inertial measurement unit is a feasible authentication approach. The method of the present invention is further described below with reference to the accompanying drawings and a specific embodiment:
A reliable user authentication method based on mandibular biometrics, whose brief flow is shown in Fig. 1, is completed in the following five steps:
Step 1) The user vocalizes with the throat for 0.2 seconds. The vibration of the throat drives the user's mandible to vibrate. The vibration signal is captured by the inertial measurement unit in the earphone as it travels along the mandible to the earphone. This vibration signal contains the user's mandibular biometrics.
Step 2) The raw vibration signal collected by the inertial measurement unit is preprocessed to remove noise and irrelevant components from the signal data, finally yielding a signal array.
The raw signal is divided into windows of ten values each, and the standard deviation of all signal values within each window is calculated. If the standard deviation of a window is greater than 250 and the standard deviations of the following two windows are greater than 100, the vibration is considered to start at the first data point of the window whose standard deviation exceeds 250. Sixty consecutive data values are then selected for the vibration signal of each axis.
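The window-based onset detection described above can be sketched as follows (the patent does not state whether the population or sample standard deviation is used; the population standard deviation is assumed here):

```python
import statistics

def find_vibration_start(signal, win=10, hi=250.0, lo=100.0):
    # Scan non-overlapping 10-sample windows. The vibration is taken to start
    # at the first sample of a window whose std exceeds `hi` when the next two
    # windows both have std above `lo`. Returns None if no onset is found.
    stds = [statistics.pstdev(signal[i:i + win])
            for i in range(0, len(signal) - win + 1, win)]
    for w in range(len(stds) - 2):
        if stds[w] > hi and stds[w + 1] > lo and stds[w + 2] > lo:
            return w * win  # index of the triggering window's first sample
    return None
```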
The MAD algorithm is used to find, in each axis, the outliers caused by hardware imperfections or body motion. Each outlier is then replaced by the average of the two normal values before it and the two normal values after it.
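A sketch of this outlier handling (the MAD decision threshold k = 3 is a common choice but is an assumption here, as the patent does not specify it):

```python
import statistics

def mad_outlier_indices(values, k=3.0):
    # Flag points whose deviation from the median exceeds k times the median
    # absolute deviation (MAD).
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return [i for i, v in enumerate(values) if abs(v - med) > k * mad]

def replace_outliers(values, outlier_idx):
    # Replace each outlier with the mean of the two normal values before it
    # and the two normal values after it (skipping other flagged points and
    # positions outside the array near the boundaries).
    out = list(values)
    bad = set(outlier_idx)
    for i in outlier_idx:
        neighbors = [values[j] for j in (i - 2, i - 1, i + 1, i + 2)
                     if 0 <= j < len(values) and j not in bad]
        if neighbors:
            out[i] = sum(neighbors) / len(neighbors)
    return out
```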
The data of each axis are high-pass filtered with a Butterworth filter with a cut-off frequency of 20 Hz, to eliminate low-frequency components irrelevant to the mandibular biometrics and obtain cleaner data.
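A SciPy-based sketch of this filtering stage (the filter order of 4 and the use of zero-phase `filtfilt` are illustrative assumptions; the patent only specifies a Butterworth high-pass with a 20 Hz cut-off):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 350.0      # IMU sampling rate from the embodiment
CUTOFF = 20.0   # high-pass cut-off frequency from the description

# Design the high-pass Butterworth filter once.
b_hp, a_hp = butter(4, CUTOFF, btype="highpass", fs=FS)

def highpass(axis_data):
    # Zero-phase filtering avoids shifting the vibration onset in time.
    return filtfilt(b_hp, a_hp, axis_data)
```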
Since the initial vibration values of the axes differ, dispersion normalization is applied to each axis; otherwise the contribution of an axis with small initial values would be masked in subsequent processing by an axis with large initial values. The normalized data of the six axes are spliced into a signal array of dimension (6, 60).
Step 3) Compute the positive and negative gradient features.
It is difficult to distinguish positive from negative vibration in the signal array obtained in steps 1) and 2); therefore, the gradient of each axis is computed, and the positive and negative vibrations are separated by the sign of the gradient: gradients greater than or equal to 0 belong to the positive vibration, and the other gradients belong to the negative vibration.
The computed positive and negative gradients are linearly interpolated separately so that the gradient in each direction of each axis contains 30 values. The gradients of all axes are concatenated to form a gradient array of dimension (2, 6, 30).
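The sign separation and resampling can be sketched with NumPy as follows (padding an empty side with zeros is an illustrative assumption for the degenerate case the patent does not discuss):

```python
import numpy as np

def split_and_resample(grads, n_points=30):
    # Separate one axis's gradients by sign (>= 0 is positive vibration),
    # then linearly interpolate each part to a fixed length of n_points.
    grads = np.asarray(grads, dtype=float)
    pos = grads[grads >= 0]
    neg = grads[grads < 0]

    def resample(part):
        if part.size == 0:
            return np.zeros(n_points)         # assumption: zero-pad empty side
        if part.size == 1:
            return np.full(n_points, part[0])
        x_old = np.linspace(0.0, 1.0, num=part.size)
        return np.interp(np.linspace(0.0, 1.0, num=n_points), x_old, part)

    return np.stack([resample(pos), resample(neg)])  # shape (2, n_points)

def gradient_array(axes_grads):
    # Stack six axes into the (2, 6, 30) array: [direction, axis, sample].
    per_axis = [split_and_resample(g) for g in axes_grads]
    return np.stack(per_axis, axis=1)
```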
Step 4) The gradient array obtained in step 3) is input into the mandibular biometric extractor to obtain a mandibular biometric vector. This vector is multiplied by a Gaussian matrix and stored in the earphone as a revocable authentication template.
In this embodiment, the mandibular biometric extractor is a trained neural network whose structure is shown in Fig. 3. It contains two convolutional branches that take the positive and negative gradients as input, respectively. Each convolutional branch contains three convolutional layers, each followed by batch normalization and a ReLU activation. The outputs of the two branches are concatenated and fed into two fully connected layers; the output of the first fully connected layer is the mandibular biometric vector extracted from the gradient feature array.
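A PyTorch sketch of such a dual-branch extractor (channel counts, kernel sizes, the 128-dimensional feature size, and the size of the second fully connected layer are all illustrative assumptions; only the overall topology follows the description above):

```python
import torch
import torch.nn as nn

class DualBranchExtractor(nn.Module):
    def __init__(self, feat_dim=128, n_classes=30):
        super().__init__()

        def branch():
            # Three Conv1d stages, each followed by BatchNorm and ReLU.
            return nn.Sequential(
                nn.Conv1d(6, 16, kernel_size=3, padding=1), nn.BatchNorm1d(16), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.BatchNorm1d(32), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.BatchNorm1d(64), nn.ReLU(),
            )

        self.pos_branch = branch()   # positive-gradient branch
        self.neg_branch = branch()   # negative-gradient branch
        self.fc1 = nn.Linear(2 * 64 * 30, feat_dim)  # output = biometric vector
        self.fc2 = nn.Linear(feat_dim, n_classes)    # training head (assumed)

    def forward(self, grad_array):
        # grad_array: (batch, 2, 6, 30); channel 0/1 = positive/negative.
        pos = self.pos_branch(grad_array[:, 0])      # (batch, 64, 30)
        neg = self.neg_branch(grad_array[:, 1])
        flat = torch.cat([pos, neg], dim=1).flatten(1)
        feature = self.fc1(flat)                     # mandibular biometric vector
        logits = self.fc2(torch.relu(feature))
        return feature, logits

model = DualBranchExtractor()
model.eval()
feature, logits = model(torch.zeros(1, 2, 6, 30))
```

At authentication time only `feature` is used; the second fully connected layer would serve as a training-time classification head.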
Step 5) The user vocalizes for 0.2 seconds so that the earphone captures the vibration signal of the mandible. The vibration signal is processed by steps 2) and 3) and input into the feature extractor to obtain a new mandibular biometric vector. This biometric vector is converted into a revocable biometric vector, and its similarity to the template stored in the earphone is calculated. If the similarity is greater than 0.57, the authentication is accepted; otherwise, it is rejected.
In view of the fact that existing user authentication technologies cannot guarantee security and ease of use at the same time, the present invention proposes a safe and reliable user authentication method that can be implemented on an earphone. No physical interaction between the user and the authentication device is required: the user only needs to produce a throat vibration, the inertial measurement unit in the earphone captures the vibration signal containing the mandibular biometrics, highly discriminative mandibular biometrics are extracted by multi-step preprocessing and a deep network, and a replay-resistant secure authentication method is realized with a Gaussian matrix and similarity computation. The false rejection rate and false acceptance rate on experimental data from 30 subjects are shown in Fig. 4; the equal error rate is less than 2%.
Obviously, the above embodiment is merely an example for clear description and does not limit the implementations. For those of ordinary skill in the art, changes or modifications in other different forms can be made on the basis of the above description. It is neither necessary nor possible to enumerate all implementations here; obvious changes or modifications derived therefrom still fall within the protection scope of the present invention.

Claims (10)

  1. A reliable user authentication method based on mandibular biometrics, characterized by comprising the following steps:
    acquiring, by an inertial measurement unit, the vibration signals of six axes generated when a user vocalizes in the throat;
    for the vibration signal of each axis, removing outliers, filtering, applying dispersion normalization, computing gradient values divided into positive and negative vibration signals, and finally splicing them into a gradient array;
    inputting the gradient array into a mandibular biometric extractor to obtain mandibular biometrics, and registering and authenticating the user according to the mandibular biometrics.
  2. The reliable user authentication method based on mandibular biometrics according to claim 1, characterized in that the vibration signals of the six axes are N sampling points taken after the vibration starting point, N being not less than 60.
  3. The reliable user authentication method based on mandibular biometrics according to claim 2, characterized in that the vibration starting point is determined as follows:
    dividing the raw signal into windows of ten values each and calculating the standard deviation of all signal values within each window; if the standard deviation of a window is greater than 250 and the standard deviations of the following two windows are greater than 100, taking the first signal value of the window whose standard deviation is greater than 250 as the vibration starting point.
  4. The reliable user authentication method based on mandibular biometrics according to claim 1, characterized in that removing outliers specifically comprises: finding outliers by a MAD detection algorithm, and replacing each outlier with the average of the two normal values before it and the two normal values after it.
  5. The reliable user authentication method based on mandibular biometrics according to claim 1, characterized in that the cut-off frequency of the filtering is 20 Hz.
  6. The reliable user authentication method based on mandibular biometrics according to claim 1, characterized by further comprising interpolating the positive and negative vibration signals so that the dimensions of the gradient array remain consistent.
  7. The reliable user authentication method based on mandibular biometrics according to claim 1, characterized in that the mandibular biometric extractor is a trained neural network or classifier.
  8. The reliable user authentication method based on mandibular biometrics according to claim 1, characterized in that, after the mandibular biometrics are obtained, the method further comprises:
    multiplying the mandibular biometrics by a Gaussian matrix to convert them into revocable mandibular biometrics.
  9. The reliable user authentication method based on mandibular biometrics according to claim 1, characterized in that the process of registering the user according to the mandibular biometrics is:
    saving the obtained mandibular biometrics of the registering user to complete registration;
    and the process of authenticating the user according to the mandibular biometrics is: calculating the similarity between the mandibular biometrics of the user to be authenticated and the mandibular biometrics saved during registration, accepting the authentication if the similarity is greater than a threshold, and rejecting it otherwise.
  10. An authentication system for the reliable user authentication method based on mandibular biometrics according to any one of claims 1-9, characterized by comprising:
    an inertial measurement unit, configured to acquire the vibration signals of six axes generated when a user vocalizes in the throat;
    a signal processing module, configured to, for the vibration signal of each axis, remove outliers, filter, apply dispersion normalization, compute gradient values divided into positive and negative vibration signals, and finally splice them into a gradient array;
    a mandibular biometric extractor, configured to extract mandibular biometrics from the gradient array;
    a registration and authentication module, configured to store the mandibular biometric template generated during registration and to authenticate according to the mandibular biometrics during user authentication.
PCT/CN2021/114402 2021-02-01 2021-08-24 Reliable user authentication method and system based on mandibular biological features WO2022160691A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110137465.1 2021-02-01
CN202110137465.1A CN112949403B (en) 2021-02-01 2021-02-01 Reliable user authentication method and system based on biological characteristics of mandible

Publications (1)

Publication Number Publication Date
WO2022160691A1 true WO2022160691A1 (en) 2022-08-04

Family

ID=76240896

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114402 WO2022160691A1 (en) 2021-02-01 2021-08-24 Reliable user authentication method and system based on mandibular biological features

Country Status (2)

Country Link
CN (1) CN112949403B (en)
WO (1) WO2022160691A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218841A (en) * 2013-04-26 2013-07-24 中国科学技术大学 Three-dimensional vocal organ animation method combining physiological model and data driving model
US20170220786A1 (en) * 2016-02-02 2017-08-03 Qualcomm Incorporated Liveness determination based on sensor signals
CN109711350A (en) * 2018-12-28 2019-05-03 武汉大学 A kind of identity identifying method merged based on lip movement and voice
US20200074058A1 (en) * 2018-08-28 2020-03-05 Samsung Electronics Co., Ltd. Method and apparatus for training user terminal
CN112949403A (en) * 2021-02-01 2021-06-11 浙江大学 Reliable user authentication method and system based on biological characteristics of mandible

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101301063B1 (en) * 2013-07-05 2013-08-28 (주)드림텍 Method of manufacturing fingerprint recognition home key using high dielectric constant material and fingerprint recognition home key structure thereof
US10051112B2 (en) * 2016-12-23 2018-08-14 Google Llc Non-intrusive user authentication system
CN108574701B (en) * 2017-03-08 2022-10-04 理查德.A.罗思柴尔德 System and method for determining user status
CN110363120B (en) * 2019-07-01 2020-07-10 上海交通大学 Intelligent terminal touch authentication method and system based on vibration signal
CN111371951B (en) * 2020-03-03 2021-04-23 北京航空航天大学 Smart phone user authentication method and system based on electromyographic signals and twin neural network
CN112149638B (en) * 2020-10-23 2022-07-01 贵州电网有限责任公司 Personnel identity recognition system construction and use method based on multi-modal biological characteristics


Also Published As

Publication number Publication date
CN112949403A (en) 2021-06-11
CN112949403B (en) 2022-08-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21922286

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21922286

Country of ref document: EP

Kind code of ref document: A1