WO2017069118A1 - Personal authentication device, personal authentication method, and personal authentication program - Google Patents
Personal authentication device, personal authentication method, and personal authentication program
- Publication number
- WO2017069118A1 (PCT/JP2016/080833)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- acoustic signal
- acoustic
- user
- personal authentication
- signal
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/117—Identification of persons
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/12—Audiometering
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6801—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
- A61B5/6813—Specially adapted to be attached to a specific body part
- A61B5/6814—Head
- A61B5/6815—Ear
- A61B5/6817—Ear canal
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/04—Analysing solids
- G01N29/11—Analysing solids by measuring attenuation of acoustic waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/44—Processing the detected response signal, e.g. electronic circuits specially adapted therefor
- G01N29/46—Processing the detected response signal, e.g. electronic circuits specially adapted therefor by spectral analysis, e.g. Fourier analysis or wavelet analysis
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/02—Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
- H04R1/1041—Mechanical or electronic switches, or control elements
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2291/00—Indexing codes associated with group G01N29/00
- G01N2291/01—Indexing codes associated with the measuring variable
- G01N2291/015—Attenuation, scattering
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2291/00—Indexing codes associated with group G01N29/00
- G01N2291/04—Wave modes and trajectories
- G01N2291/044—Internal reflections (echoes), e.g. on walls or defects
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2291/00—Indexing codes associated with group G01N29/00
- G01N2291/10—Number of transducers
- G01N2291/102—Number of transducers one emitter, one receiver
Definitions
- the present invention relates to a personal authentication device, a personal authentication method, and a personal authentication program for authenticating an individual.
- Biometric authentication based on individual differences between living bodies carries a lower risk of leakage or theft than passwords. For this reason, personal authentication based on such individual differences is increasingly being introduced for identifying individuals, confirming rights, and protecting security.
- Known personal authentication technologies based on individual differences between living bodies use fingerprints, veins, faces, irises, voices, and the like. Among these, personal authentication using voice can be performed not with a special device but with an inexpensive, general-purpose device such as a telephone or a microphone.
- As an example of personal authentication using voice, Patent Document 1 discloses a method in which voice data to be authenticated is converted into a feature amount, the similarity with the feature amount of a user registered in advance is measured, and authentication is performed based on the result.
- Patent Document 2 discloses a method of performing personal authentication using bone conduction sound received by a bone conduction microphone instead of sound propagating in the air.
- The problem is that, in methods that acquire biometric information and perform personal authentication, the user must perform some operation for authentication to take place. For example, personal authentication using a fingerprint or a vein requires the user to place a finger on a dedicated scanner. Personal authentication using a face or an iris requires a user action such as turning the face toward a camera. Personal authentication using voice or bone conduction sound requires a user action such as uttering a password. Users who are forced to perform such actions bear a psychological and physical burden.
- An object of the present invention is to provide a personal authentication device, a personal authentication method, and a personal authentication program that impose less psychological and/or physical burden on the user to be authenticated.
- The personal authentication device according to the present invention comprises: acoustic signal sending means for sending a first acoustic signal to a part of a user's head; acoustic signal observation means for observing a second acoustic signal, which is the acoustic signal after the first acoustic signal has propagated through the part of the head; acoustic characteristic calculation means for calculating an acoustic characteristic from the first acoustic signal and the second acoustic signal; and user identification means for identifying the user based on the acoustic characteristic or a feature amount relating to the user extracted from the acoustic characteristic.
- The personal authentication method according to the present invention sends a first acoustic signal to a part of a user's head, observes a second acoustic signal, which is the acoustic signal after the first acoustic signal has propagated through the part of the head, calculates an acoustic characteristic from the first acoustic signal and the second acoustic signal, and identifies the user based on the acoustic characteristic or a feature amount relating to the user extracted from the acoustic characteristic.
- The personal authentication program according to the present invention causes a computer to execute a process of calculating an acoustic characteristic from a first acoustic signal sent to a part of a user's head and a second acoustic signal, which is the acoustic signal after the first acoustic signal has propagated through the part of the head, and a process of identifying the user based on the acoustic characteristic or a feature amount relating to the user extracted from the acoustic characteristic.
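- As a concrete illustration of this flow, the following minimal sketch (assuming NumPy; the FFT size, feature dimension, threshold, and function names are illustrative choices, not values from this disclosure) shows how the transmitted and observed signals could be turned into an acoustic characteristic, a feature amount, and an identification decision.

```python
import numpy as np

def acoustic_characteristic(x, y, n_fft=8192, eps=1e-6):
    """Acoustic characteristic: deconvolve the observed signal y (second
    acoustic signal) by the transmitted signal x (first acoustic signal)."""
    H = np.fft.rfft(y, n_fft) / (np.fft.rfft(x, n_fft) + eps)   # transfer function
    return np.fft.irfft(H, n_fft)                               # impulse response

def feature_vector(impulse_response, n_dims=64):
    """Feature amount: log-magnitude of the transfer function."""
    spec = np.abs(np.fft.rfft(impulse_response)) + 1e-12
    return np.log(spec)[:n_dims]

def identify(features, enrolled, threshold=1.0):
    """1-to-N identification: nearest enrolled template by Euclidean distance;
    reject if even the best match is farther than the (assumed) threshold."""
    scores = {uid: np.linalg.norm(features - tmpl) for uid, tmpl in enrolled.items()}
    best = min(scores, key=scores.get)
    return best if scores[best] < threshold else None
```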
- FIG. 1 is a block diagram illustrating a configuration example of the personal authentication device according to the first embodiment.
- The personal authentication apparatus shown in FIG. 1 includes an acoustic signal transmission unit 101, an acoustic signal observation unit 102, an acoustic characteristic calculation unit 103, a feature extraction unit 104, a user identification unit 105, and a feature amount storage unit 106.
- the acoustic signal sending means 101 sends an acoustic signal to a part of the head of the first user.
- The part of the head to which the acoustic signal is transmitted is a region of the head where a cavity is formed, and may be at least a part of a region to which an accessory or a device that produces an acoustic effect can be attached or brought close.
- The acoustic signal observation unit 102 observes the acoustic signal after the acoustic signal transmitted from the acoustic signal transmission unit 101 has propagated through a part of the head of the first user. More specifically, the part of the head used as the propagation path of the acoustic signal may be at least a part of the skull, brain, and sensory organs constituting the head, and the cavities between them.
- The acoustic characteristic calculation unit 103 calculates the acoustic characteristic of the acoustic signal propagating through a part of the user's head, based on the acoustic signal transmitted from the acoustic signal transmission unit 101 and the acoustic signal observed by the acoustic signal observation unit 102.
- The feature extraction unit 104 calculates, from the calculated acoustic characteristic, a feature amount relating to the user through whom the acoustic signal propagated.
- Feature amount storage means 106 stores the extracted feature amount for a predetermined user in advance.
- a user whose feature value is stored in the feature value storage unit 106 may be referred to as a registered user.
- The feature amount storage unit 106 may store feature amounts extracted in advance from a plurality of users by the acoustic signal transmission unit 101, the acoustic signal observation unit 102, the acoustic characteristic calculation unit 103, and the feature extraction unit 104, or by an equivalent configuration.
- The user identification unit 105 compares the feature amount obtained by the feature extraction unit 104 with the feature amounts of registered users stored in the feature amount storage unit 106, and determines whether the first user corresponds to a registered user.
- FIG. 2 is a configuration diagram showing a specific configuration example of the personal authentication device of the present embodiment.
- The personal authentication device shown in FIG. 2 includes a personal computer (PC) 11, a sound processor 12, a microphone amplifier 13, an earphone 14, and a microphone 15.
- Reference numeral 16 represents a user (Subject) to be recognized.
- the earphone 14 corresponds to the acoustic signal sending means 101 described above.
- the microphone 15 corresponds to the acoustic signal observation unit 102 described above.
- The microphone 15 and the earphone 14 are preferably integrated so that their relative positional relationship does not change. However, they need not be integrated as long as the relative positional relationship between the two does not change significantly.
- In this example, a microphone-integrated earphone inserted into the ear canal entrance is described.
- The acoustic signal transmitting unit 101 and the acoustic signal observing unit 102 may also be realized by headphones that cover the auricle and are provided with a microphone (auricle-type microphone-integrated headphones). Further, the acoustic signal sending means 101 and the acoustic signal observation means 102 may be realized by a telephone provided with a microphone in the receiver portion. In such a case, the acoustic signal transmitted from an earphone located at, for example, the left ear canal entrance may be observed with a microphone located at, for example, the right ear canal entrance, or vice versa.
- the acoustic characteristic calculation means 103, the feature extraction means 104, and the user identification means 105 are each realized by a CPU and a memory (all not shown) provided in the PC 11 and operating according to a program.
- the feature amount storage unit 106 is realized by a storage medium (not shown) such as a hard disk included in the PC 11.
- FIG. 3 is a flowchart showing an example of the operation of the personal authentication device of this embodiment.
- The acoustic signal transmission unit 101 transmits an acoustic signal toward a part of the head of the user to be authenticated (step S101).
- the acoustic signal transmission unit 101 may transmit an acoustic signal from the ear canal entrance to the ear canal, for example.
- As the transmitted acoustic signal, a signal suitable for measuring an impulse response may be used, for example an M-sequence (maximal length sequence) signal or a TSP (Time-Stretched Pulse) signal.
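- As an illustration only, an M-sequence probe can be generated with a linear-feedback shift register as sketched below; the 16-bit register length, tap set, and playback level are assumptions rather than parameters specified here.

```python
import numpy as np

def mls(n_bits=16, taps=(16, 15, 13, 4)):
    """Maximal length sequence via a Fibonacci LFSR. The tap set corresponds to
    a standard primitive polynomial for 16 bits; the length is illustrative."""
    state = np.ones(n_bits, dtype=np.int64)
    length = 2 ** n_bits - 1
    out = np.empty(length)
    for i in range(length):
        out[i] = 2.0 * state[-1] - 1.0       # map bits {0,1} to levels {-1,+1}
        feedback = 0
        for t in taps:
            feedback ^= int(state[t - 1])
        state[1:] = state[:-1]               # shift the register
        state[0] = feedback
    return out

probe = 0.1 * mls()                          # keep the playback level low
```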
- FIG. 4A is a graph showing an example of an acoustic signal sent out by the acoustic signal sending means 101.
- In FIG. 4A, the horizontal axis indicates time t, and the vertical axis indicates the signal value x(t) of the transmitted acoustic signal at time t.
- the acoustic signal transmitted by the acoustic signal transmitting unit 101 may be referred to as a transmitted acoustic signal.
- the acoustic signal observation unit 102 observes the acoustic signal after the transmitted acoustic signal has propagated through a part of the head of the user to be authenticated (step S102).
- FIG. 4B is a graph showing an example of an acoustic signal observed by the acoustic signal observation means 102.
- In FIG. 4B, the horizontal axis indicates time t, and the vertical axis indicates the signal value y(t) of the observed acoustic signal at time t.
- the acoustic signal observed by the acoustic signal observation unit 102 may be referred to as an observed acoustic signal.
- The acoustic characteristic calculation means 103 compares the transmitted acoustic signal and the observed acoustic signal, and calculates, from the change between them, the acoustic characteristic of the acoustic signal when it propagates through a part of the user's head (step S103).
- Possible acoustic characteristics include an impulse response, or a transfer function obtained by Fourier transform or Laplace transform of the impulse response.
- the acoustic characteristics preferably include information on how the acoustic signal is reflected and / or attenuated in the living body.
- The acoustic characteristics may be an ear canal impulse response (Ear Canal Impulse Response) or an ear canal transfer function (Ear Canal Transfer Function).
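- The deconvolution itself can be made robust with a small regularization term; the following sketch (an assumed implementation, not the one prescribed here) estimates an ear canal transfer function and the corresponding impulse response in the frequency domain, refining the plain spectral division shown earlier.

```python
import numpy as np

def ear_canal_transfer_function(x, y, n_fft=4096, reg=1e-3):
    """Regularized frequency-domain deconvolution: H = Y X* / (|X|^2 + reg)."""
    X = np.fft.rfft(x, n_fft)
    Y = np.fft.rfft(y, n_fft)
    return Y * np.conj(X) / (np.abs(X) ** 2 + reg)

def ear_canal_impulse_response(x, y, n_fft=4096, reg=1e-3):
    """Time-domain acoustic characteristic g(t) obtained from the transfer function."""
    return np.fft.irfft(ear_canal_transfer_function(x, y, n_fft, reg), n_fft)
```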
- FIG. 5 is a graph showing an example of an impulse response as an acoustic characteristic calculated by the acoustic characteristic calculation means 103.
- In FIG. 5, the horizontal axis indicates time t, and the vertical axis indicates the value g(t) of the impulse response of the observed acoustic signal at time t.
- the feature extraction unit 104 calculates a feature amount from the acoustic characteristic calculated by the acoustic characteristic calculation unit 103 (step S104).
- As the feature quantity, the impulse response or transfer function calculated as the acoustic characteristic may be used as it is. That is, the feature extraction unit 104 may use the time-domain values of the impulse response or the frequency-domain values of the transfer function as the feature amount. Further, the feature extraction unit 104 may perform principal component analysis on the impulse response or transfer function and apply dimension compression, or may calculate mel-frequency cepstrum coefficients (MFCC) as described in Non-Patent Document 1.
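- For illustration, the following sketch shows one possible (assumed) realization of this feature extraction step: coarse band log-energies as a crude stand-in for mel filterbank processing, and principal component analysis for dimension compression.

```python
import numpy as np

def band_log_energies(transfer_fn, n_bands=24):
    """MFCC-like features: log energy in equal-width frequency bands of the
    transfer function magnitude (a simplification of true mel filterbanks)."""
    mag2 = np.abs(transfer_fn) ** 2
    bands = np.array_split(mag2, n_bands)
    return np.log(np.array([b.sum() for b in bands]) + 1e-12)

def pca_compress(feature_matrix, n_components=8):
    """Dimension compression by principal component analysis (via SVD).
    feature_matrix: one feature vector per row (e.g., repeated measurements)."""
    centered = feature_matrix - feature_matrix.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```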
- the user identification unit 105 compares the feature amount obtained by the feature extraction unit 104 with the feature amount of the registered user stored in advance in the feature amount storage unit 106, and the user to be authenticated It is determined whether it corresponds to a registered user (step S105).
- the user identification unit 105 may use one-to-one authentication or one-to-N authentication as a determination method.
- N is an integer of 1 or more.
- When one-to-one authentication is used, the user identification unit 105 compares the feature amount of the user to be authenticated (the feature amount obtained by the feature extraction unit 104) with the feature amount of a registered user one-to-one. At this time, the administrator of the personal authentication device may specify in advance, for example by a user ID, which registered user the comparison is to be performed against.
- The user identification unit 105 calculates, for example, the distance between the feature amount of the user to be authenticated and the feature amount of the designated registered user, and may determine that the two are the same person when the distance is less than a threshold value. On the other hand, when the calculated distance is greater than the threshold, the user identification unit 105 may determine that the two are different people.
- the user identification unit 105 compares the user to be authenticated with N registered users when using 1-to-N authentication.
- The user identification unit 105 calculates the distance between the feature amount of the user to be authenticated and the feature amount of each of the N registered users, and determines that the user to be authenticated is the registered user with the shortest distance.
- The user identification unit 105 can also use a combination of one-to-one authentication and one-to-N authentication. In this case, the user identification unit 105 may first perform one-to-N authentication to extract the registered user with the shortest distance, and then perform one-to-one authentication with the extracted registered user as the comparison target. As the distance measure, the Euclidean distance, the cosine distance, and the like can be considered, but the measure is not limited to these.
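- The matching logic described above can be sketched as follows; the distance measures and the combined 1-to-N followed by 1-to-1 check mirror the description, while the thresholds remain application-dependent assumptions.

```python
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def cosine_distance(a, b):
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify_one_to_one(feature, template, threshold, dist=euclidean):
    """1-to-1 authentication: accept when the distance to the claimed
    registered user's template is below the threshold."""
    return dist(feature, template) < threshold

def identify_one_to_n(feature, templates, threshold=None, dist=euclidean):
    """1-to-N identification; optionally follow with a 1-to-1 threshold check
    on the nearest registered user, as in the combined scheme."""
    best = min(templates, key=lambda uid: dist(feature, templates[uid]))
    if threshold is not None and not verify_one_to_one(feature, templates[best], threshold, dist):
        return None
    return best
```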
- the feature amount storage unit 106 may store a statistical model instead of the feature amount.
- the statistical model may be, for example, an average value and a variance value obtained by acquiring feature quantities a plurality of times for each user, or a relational expression calculated using them.
- the statistical model may be a model using GMM (Gaussian Mixture Model), SVM (Support Vector Machine), a neural network, or the like as disclosed in Patent Document 1.
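- As one hypothetical realization of this statistical-model variant, a Gaussian mixture model per registered user could be trained and scored as sketched below (scikit-learn is assumed here purely for illustration; the disclosure names GMM, SVM, and neural networks without prescribing a library).

```python
from sklearn.mixture import GaussianMixture

def enroll_gmm(feature_vectors, n_components=2):
    """feature_vectors: array of shape (n_measurements, n_dims) for one user."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(feature_vectors)
    return gmm

def gmm_identify(feature, user_models, min_log_likelihood=-50.0):
    """Pick the enrolled model with the highest average log-likelihood for the
    given feature vector; the rejection threshold is an illustrative value."""
    scores = {uid: m.score(feature.reshape(1, -1)) for uid, m in user_models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > min_log_likelihood else None
```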
- the present embodiment performs personal authentication using the feature that the acoustic characteristics of the acoustic signal propagating through a part of the user's head are different for each individual.
- The acoustic characteristic of a signal propagating through a part of the user's head is a characteristic inside the living body. Unlike characteristics that can be observed from the outside, such as the face or a fingerprint, the risk of leakage is low and theft is difficult.
- In addition, both the transmitted acoustic signal and the observed acoustic signal are necessary. For this reason, compared with methods using only one signal, there is less risk of forgery by eavesdropping or the like.
- The only operation the user to be authenticated must perform is to wear headphones or earphones with an embedded microphone, or to hold a mobile phone or similar device with a microphone embedded in the earpiece. Therefore, according to this embodiment, the user's psychological and physical burden can be reduced. For example, if the personal authentication method of this embodiment is used in combination with an information distribution device that uses sound, such as music distribution, transceivers, or telephone calls, personal authentication can be provided without additional physical or mental burden on the user.
- FIG. 6 is a block diagram illustrating a configuration example of the personal authentication device according to the second embodiment of this invention.
- the personal authentication device shown in FIG. 6 is different from the configuration of the first embodiment shown in FIG. 1 in that it further includes a noise / apparatus characteristic removing unit 201.
- The observed acoustic signal includes not only acoustic characteristics derived from the user's living body but also various noises such as ambient environmental noise, heart sounds, breathing sounds, voices, sounds emitted from joints, and vibrations. In addition, the acoustic signal sending unit 101 and the acoustic signal observation unit 102 themselves have acoustic characteristics, which appear as device characteristics, i.e., individual differences between devices.
- the noise / device characteristic removing unit 201 removes such noise and device characteristics from the observed acoustic signal.
- the acoustic characteristic calculation unit 103 can use the observed acoustic signal from which noise and equipment characteristics have been removed.
- As one method of removing noise and device characteristics, a transmitted acoustic signal can be sent to, and observed at, two places on the user's head. The acoustic signal may be observed at the two locations either by having one acoustic signal observation means 102 perform the observation twice to obtain two observed acoustic signals, or by using two acoustic signal observation means 102 to obtain two observed acoustic signals in a single observation.
- the acoustic signal sending unit 101 and the acoustic signal observation unit 102 may send and observe acoustic signals to the left and right external auditory canals and / or pinna to obtain two observed acoustic signals.
- the noise / device characteristic removing unit 201 may remove noise and device characteristics using the two observed acoustic signals observed in this manner.
- As another method of removing noise and device characteristics, the noise/apparatus characteristic removing unit 201 may, for example, take the average of two or more observed acoustic signals obtained by having the acoustic signal observation unit 102 observe the acoustic signal transmitted from the acoustic signal transmission unit 101 a plurality of times. Such a method can reduce the influence of instantaneous noise.
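- A minimal sketch of this averaging step, assuming the repeated recordings are already time-aligned to the probe:

```python
import numpy as np

def average_observations(observations):
    """Average several time-aligned recordings of the same probe to suppress
    instantaneous noise (breathing, heartbeat, momentary ambient sounds)."""
    obs = np.stack(observations)          # shape: (n_repeats, n_samples)
    return obs.mean(axis=0)
```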
- As yet another method, the average of observed acoustic signals observed from a plurality of people may be stored in advance and subtracted from the observed acoustic signal observed from the user (first user) to be authenticated at the time of authentication.
- FIGS. 7 and 8 are block diagrams showing another configuration example of the personal authentication device of the present embodiment.
- the position where the noise / apparatus characteristic removing unit is arranged is not limited to the subsequent stage of the acoustic signal observation unit 102.
- the noise / apparatus characteristic removing unit may be provided in the subsequent stage of the acoustic characteristic calculating unit 103.
- the noise / apparatus characteristic removing unit may be provided after the feature extracting unit 104.
- FIG. 7 shows a noise / equipment characteristic removing unit 202 as an example of the noise / equipment characteristic removing unit provided in the subsequent stage of the acoustic characteristic calculating unit 103.
- FIG. 8 shows a noise / equipment characteristic removing unit 203 as an example of the noise / apparatus characteristic removing unit provided in the subsequent stage of the feature extracting unit 104.
- the noise / apparatus characteristic removing unit 202 removes noise and apparatus characteristics from the acoustic characteristic calculated by the acoustic characteristic calculating unit 103.
- the noise / device characteristic removing unit 203 removes noise and device characteristics from the feature amount calculated by the feature extracting unit 104.
- As a method of removing noise and device characteristics from the acoustic characteristics, there is a method of subtracting the commonly obtained acoustic characteristic from the acoustic characteristics obtained from observed acoustic signals at two locations on the user's head. Another example is a method of averaging the acoustic characteristics obtained from observed acoustic signals observed a plurality of times. As yet another method, the acoustic characteristic obtained from the average of observed acoustic signals observed from a plurality of persons may be subtracted from the acoustic characteristic obtained from the observed acoustic signal of the user to be authenticated.
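- The subtraction-based removal described above can be sketched as follows; how the "common" and "population average" components are formed here is an illustrative assumption.

```python
import numpy as np

def remove_common_component(char_left, char_right):
    """Subtract the component common to two measurement points (e.g. left and
    right ear) so that device- and environment-related parts largely cancel."""
    common = 0.5 * (char_left + char_right)
    return char_left - common, char_right - common

def remove_population_average(char_user, population_chars):
    """Subtract the average characteristic observed over many people, leaving
    the user-specific deviation."""
    return char_user - np.mean(np.stack(population_chars), axis=0)
```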
- As a method of removing noise and device characteristics from the feature amount, methods similar to those used for the acoustic characteristics, that is, subtraction and averaging (with "acoustic characteristic" read as "feature amount"), can be used. In addition, the standard deviation of the acoustic characteristics may be measured and used as a divisor to normalize the feature amount.
- The noise/apparatus characteristic removing means 201, 202, and 203 may also remove device characteristics by subtracting, from the observed acoustic signal obtained via the head (or the acoustic characteristic or feature amount obtained from it), an acoustic signal (or the acoustic characteristic or feature amount obtained from it) measured by propagating the transmitted signal only through the earphone and the microphone, without passing through the head.
- According to this embodiment, various noises such as ambient environmental noise, heart sounds, breathing sounds, utterances, and sounds generated from joints, as well as device characteristics that are individual differences between devices, can be removed from the observed acoustic signal, its acoustic characteristics, or its feature amounts. Therefore, personal authentication with higher accuracy than in the first embodiment can be realized.
- FIG. 9 is a block diagram illustrating a configuration example of the personal authentication device according to the third exemplary embodiment of the present invention.
- In the personal authentication apparatus shown in FIG. 9, the acoustic signal transmission unit 101 and the acoustic signal observation unit 102 of the first embodiment shown in FIG. 1 are replaced by a bone conduction signal transmission unit 301 and a bone conduction signal observation unit 302.
- Bone conduction refers to sound propagation using bone as a medium.
- Individuality also exists in the propagation characteristics of bone conduction. For this reason, an observed acoustic signal obtained through bone conduction can be used for personal authentication.
- The bone conduction signal sending means 301 sends a bone conduction signal, which is an acoustic signal for bone conduction, to a part of the head as the transmitted acoustic signal.
- The part of the head to which the bone conduction signal is transmitted is, for example, a region of the head where bone is present, and may be at least a part of a region to which an accessory or a device that produces an acoustic effect can be attached or brought close.
- the bone conduction signal observation unit 302 observes the bone conduction signal after the bone conduction signal transmitted from the bone conduction signal transmission unit 301 propagates through a part of the user's head as an observation acoustic signal.
- the part of the head used as the propagation path of the bone conduction signal may be at least part of the skull, teeth, brain, sensory organs, and the cavity between them. It is assumed that the propagation path includes at least bone.
- the bone conduction signal observation means 302 may be realized by, for example, a bone conduction microphone. At this time, the bone conduction signal observation means 302 may observe the bone conduction signal from a part different from an arbitrary part of the head sent by the bone conduction signal sending means 301. Other points are the same as those in the first embodiment.
- According to this embodiment, personal authentication can be performed using the transmission characteristics of an acoustic signal by bone conduction, so that personal authentication can be performed without blocking a hearing organ such as the user's ear.
- FIG. 10 is a block diagram illustrating a configuration example of the personal authentication device according to the fourth embodiment of the present invention.
- the personal authentication device shown in FIG. 10 has a configuration further including a noise signal estimation means 401 and a transmission acoustic signal selection means 402 in addition to the configuration of the first embodiment shown in FIG.
- the noise signal estimation means 401 estimates a noise signal from the observed acoustic signal. For example, the noise signal estimation unit 401 estimates a noise signal from an observed acoustic signal observed in a part of the user's head or in the vicinity thereof in a state where no authentication acoustic signal is transmitted.
- the transmission acoustic signal selection means 402 selects an acoustic signal having a frequency band different from the estimated frequency band of the noise signal as the transmission acoustic signal.
- FIG. 11 is a flowchart showing an example of the operation of the personal authentication device of this embodiment. Note that the same operations as those in the first embodiment are denoted by the same reference numerals, and description thereof is omitted.
- the personal authentication device observes the acoustic signal using the acoustic signal observation means 102 before sending the acoustic signal toward a part of the user's head (step S201).
- the noise signal estimation means 401 estimates a noise signal using the observed acoustic signal (step S202).
- the transmission acoustic signal selection unit 402 selects a transmission acoustic signal based on the estimated noise signal (step S203). Subsequent operations may be the same as those in the first embodiment.
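- A minimal sketch of this noise estimation and probe-band selection, with crude single-peak noise detection and candidate bands that are illustrative assumptions rather than specified values:

```python
import numpy as np

def estimate_noise_peak(silence_recording, fs, n_fft=4096):
    """Estimate the dominant noise frequency from a recording taken before
    any probe is transmitted (a deliberately crude single-peak estimate)."""
    spec = np.abs(np.fft.rfft(silence_recording, n_fft)) ** 2
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    return freqs[np.argmax(spec)]

def select_probe_band(noise_peak_hz,
                      candidate_bands=((1000, 4000), (4000, 8000), (8000, 16000))):
    """Pick the first candidate band that does not contain the noise peak."""
    for lo, hi in candidate_bands:
        if not (lo <= noise_peak_hz <= hi):
            return lo, hi
    return candidate_bands[0]
```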
- Embodiment 5. Next, a fifth embodiment of the present invention will be described.
- the configuration of this embodiment may be the same as that of the first embodiment.
- the acoustic signal sending means 101 of this embodiment changes the sending acoustic signal regularly or irregularly.
- Examples of irregular changes include changing the transmitted acoustic signal each time authentication is performed at random; changing it each time authentication is performed when the number or frequency of authentications is varied according to the reliability of authentication; changing it each time the user is authenticated upon making a service request; and changing it each time authentication is performed at a break in the service, such as when the music changes.
- The acoustic signal observing unit 102 observes the same acoustic signal as long as the user and the user's position do not change. A malicious person might therefore eavesdrop on the observed acoustic signal by some means and succeed in spoofing. To prevent such an attack, it is preferable to change the transmitted acoustic signal when the user and the user's position remain unchanged for some time.
- In addition, by separating the signal path through which the acoustic signal transmitting unit 101 sends the transmitted acoustic signal from the signal path through which the acoustic signal observing unit 102 receives the observed acoustic signal, the personal authentication apparatus can avoid the risk of both acoustic signals (the transmitted acoustic signal and the observed acoustic signal) being wiretapped at the same time.
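- One hypothetical way to change the transmitted signal per authentication attempt is to draw a fresh random-phase probe each time, as sketched below; the probe length and playback level are assumptions.

```python
import numpy as np

def fresh_probe(n_samples=16384, rng=None):
    """Generate a new random-phase, flat-magnitude probe for each attempt so a
    replayed recording of a previous session does not match the current probe."""
    rng = np.random.default_rng() if rng is None else rng
    mags = np.ones(n_samples // 2 + 1)
    phases = rng.uniform(0, 2 * np.pi, n_samples // 2 + 1)
    probe = np.fft.irfft(mags * np.exp(1j * phases), n_samples)
    return 0.1 * probe / np.abs(probe).max()   # normalize to a low playback level
```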
- Embodiment 6. Next, a sixth embodiment of the present invention will be described.
- the configuration of this embodiment may be the same as that of the first embodiment.
- the sound signal sending means 101 of the present embodiment uses a musical sound as a sending sound signal. This further reduces the psychological and physical burden on the user.
- The acoustic signal transmitting means 101 may also superimpose another acoustic signal (such as white noise) on the music and transmit the result as the transmitted acoustic signal, for the purpose of increasing the amplitude of low-amplitude components of the musical sound or of compensating for low-power components among its frequency components.
- the acoustic signal transmitting unit 101 may transmit the acoustic signal used in the service as a transmission acoustic signal.
- FIG. 12 is an explanatory diagram showing an example of a transmission sound signal in which another sound signal is superimposed on the sound signal of a music piece for the purpose of improving the amplitude of a low-amplitude component in a musical sound.
- FIG. 12(a) shows the audio signal of the original music, with time on the horizontal axis and signal value on the vertical axis.
- FIG. 12B shows a transmission acoustic signal obtained by adding white noise to the original music, with time on the horizontal axis and signal values on the vertical axis.
- FIG. 12 shows that by adding white noise or the like to the original music, the amplitude of the low amplitude component is improved.
- the absolute value of the signal value corresponds to the amplitude.
- FIG. 13 is an explanatory diagram showing an example of a transmission sound signal in which another sound signal is superimposed on the sound signal of a music piece for the purpose of compensating for a low-power component among the frequency components in the musical sound.
- FIG. 13 (a) shows the spectrum power of the original music
- FIG. 13 (b) shows the spectrum power of the transmitted acoustic signal obtained by adding white noise to the original music.
- FIGS. 13(a) and 13(b) both show values obtained by Fourier transform, with the horizontal axis representing frequency and the vertical axis representing spectrum power on a logarithmic axis. As shown in FIG. 13, portions with low power are compensated by adding white noise or the like to the original music.
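- A minimal sketch of superimposing low-level white noise on a music signal for this purpose (the 2% noise level and the [-1, 1] playback range are illustrative assumptions):

```python
import numpy as np

def probe_from_music(music, noise_level=0.02, rng=None):
    """Mix low-level white noise into a music signal so that low-amplitude
    passages and low-power frequency bands still carry enough probe energy."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(len(music))
    mixed = music + noise_level * noise * np.max(np.abs(music))
    return np.clip(mixed, -1.0, 1.0)      # keep within playback range
```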
- the above-described embodiments can be combined with each other.
- the example in which the personal authentication device operates alone has been described.
- the personal authentication device may be combined with another device.
- the system may include service providing means (not shown) that provides different services for each individual based on the specific result output from the personal authentication device.
- For example, the system may include means for determining, based on the identification result output from the personal authentication device, whether or not the user holds the right to the content, and means for providing a service, such as providing the content, only when a legitimate right is held.
- As another example, a voice communication control system may perform voice communication only with a specific individual based on the result of identifying the individual with any of the above personal authentication devices. That is, the system may include control means that controls voice communication, for example permitting voice communication only with the identified individual, based on the identification result output from the personal authentication device.
- Voice communication without encryption may be wiretapped. Even when encryption is used, the key may leak or be stolen and the communication may still be eavesdropped. For example, by encrypting voice using the acoustic characteristics of each person extracted by the personal authentication device, voice communication with a low possibility of leakage or theft can be realized.
- voice communication by a secret key / public key cryptosystem may be realized using the acoustic characteristics of each individual extracted by the above personal authentication device as a secret key.
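- As a deliberately naive illustration of using the extracted characteristics as key material, the sketch below quantizes the feature vector and hashes it into a symmetric key; a practical system would need an error-tolerant scheme such as a fuzzy extractor, which is not shown here and is not described in this disclosure.

```python
import hashlib
import numpy as np

def key_from_features(features, step=0.5):
    """Coarsely quantize the feature vector, then hash it to 32 bytes of key
    material. The quantization step is an illustrative assumption."""
    quantized = np.round(np.asarray(features) / step).astype(np.int64)
    return hashlib.sha256(quantized.tobytes()).digest()
```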
- the communication provided by the system is not limited to voice communication, and may be data communication, for example.
- FIG. 13 is a block diagram showing an outline of a personal authentication device according to the present invention.
- the personal authentication apparatus shown in FIG. 13 includes an acoustic signal transmission unit 701, an acoustic signal observation unit 702, an acoustic characteristic calculation unit 703, and a user identification unit 704.
- the acoustic signal sending means 701 sends a first acoustic signal to a part of the user's head.
- The acoustic signal sending unit 701 may be realized in the form of an accessory or a device that produces an acoustic effect, worn in or near a region of the user's head where a cavity or bone is formed, in order to send the first acoustic signal.
- the acoustic signal observation means 702 observes a second acoustic signal that is an acoustic signal after the first acoustic signal propagates through a part of the head.
- the part of the head used as the propagation path of the acoustic signal only needs to include at least a part of the skull, brain, sensory organ, and cavity between them.
- the acoustic characteristic calculation means 703 calculates an acoustic characteristic from the first acoustic signal and the second acoustic signal.
- the user identification unit 704 identifies the user based on the calculated acoustic characteristic or the feature amount related to the user extracted from the acoustic characteristic.
- the personal authentication device may further include a feature extraction unit that extracts a feature quantity related to the user from the acoustic characteristics.
- the user identification unit may identify the user based on the feature amount related to the user extracted by the feature extraction unit.
- (Supplementary note 1) A personal authentication device comprising: acoustic signal sending means for sending a first acoustic signal to a part of a user's head; acoustic signal observation means for observing a second acoustic signal, which is the acoustic signal after the first acoustic signal has propagated through the part of the head; acoustic characteristic calculation means for calculating an acoustic characteristic from the first acoustic signal and the second acoustic signal; and user identification means for identifying the user based on the acoustic characteristic or a feature amount relating to the user extracted from the acoustic characteristic.
- (Supplementary note 2) The personal authentication device according to Supplementary note 1, further comprising storage means for storing, for one or more persons, the acoustic characteristic at the time of propagation of an acoustic signal through a predetermined part of the head or a feature amount extracted from the acoustic characteristic, wherein the user identification means identifies the user by comparing the acoustic characteristic calculated from the second acoustic signal with the acoustic characteristic stored in the storage means, or by comparing the feature amount extracted from the acoustic characteristic calculated from the second acoustic signal with the feature amount stored in the storage means.
- (Supplementary note 3) The personal authentication device according to Supplementary note 1 or 2, wherein the part of the head serving as the propagation path of the first acoustic signal includes at least one ear canal and/or pinna.
- (Supplementary note 4) The personal authentication device according to any one of Supplementary notes 1 to 3, wherein the acoustic signal sending means is realized by an earphone, headphone, or telephone receiver, and the acoustic signal observation means is realized by a microphone mounted on the earphone, headphone, or telephone receiver.
- (Supplementary note 5) The personal authentication device according to any one of Supplementary notes 1 to 4, further comprising removing means for removing ambient noise and/or device characteristics from the second acoustic signal based on acoustic signals observed at two locations on the user's head, acoustic signals observed multiple times from the same user, or acoustic signals observed from multiple people.
- The removing means may send the acoustic signals to the left and right ear canals and/or pinnae and use the resulting observed acoustic signals to remove ambient noise and/or device characteristics from at least one of them.
- (Supplementary note 6) The personal authentication device according to any one of Supplementary notes 1 to 5, wherein the acoustic signal sending means sends an acoustic signal for bone conduction as the first acoustic signal, and the acoustic signal observation means observes, as the second acoustic signal, the acoustic signal after the first acoustic signal has propagated through a part of the head by bone conduction.
- the acoustic signal observation means may be realized by a bone conduction microphone. Further, the acoustic signal observation means may observe the second acoustic signal from a part different from the part of the head from which the acoustic signal sending means sent the first acoustic signal.
- (Supplementary note 7) The personal authentication device according to any one of Supplementary notes 1 to 6, further comprising noise estimation means for estimating a noise signal based on the acoustic signal observed by the acoustic signal observation means, wherein an acoustic signal in a frequency band different from the estimated noise signal is transmitted as the first acoustic signal from the next time onward.
- (Supplementary note 10) The personal authentication device according to Supplementary note 9, wherein the acoustic signal sending means sends, as the first acoustic signal, an acoustic signal obtained by superimposing another acoustic signal on the acoustic signal of the music, for the purpose of increasing the amplitude of low-amplitude components of the musical sound or of compensating for low-power components among the frequency components of the musical sound.
- a personal authentication method characterized in that an acoustic characteristic is calculated from a first acoustic signal and a second acoustic signal, and a user is identified based on the acoustic characteristic or a feature amount relating to the user extracted from the acoustic characteristic.
- storage means which memorize
- A personal authentication program for causing a computer to execute: a process of calculating an acoustic characteristic from a first acoustic signal sent to a part of a user's head and a second acoustic signal, which is the acoustic signal after the first acoustic signal has propagated through the part of the head; and a process of identifying the user based on the acoustic characteristic or a feature amount relating to the user extracted from the acoustic characteristic.
- The personal authentication program may cause the computer to store in advance, for the process of identifying the user, the acoustic characteristic at the time of propagation of an acoustic signal through a predetermined part of the head, or a feature amount extracted from the acoustic characteristic.
- A personalization system comprising service providing means for providing a different service for each individual based on the user identification result of any one of the personal authentication devices of Supplementary notes 1 to 10.
- A content rights management system comprising determination means for determining whether or not the user holds a legitimate right to the managed content based on the user identification result of any one of the personal authentication devices of Supplementary notes 1 to 10.
- A communication control system comprising control means for controlling voice communication or data communication based on the user identification result of any one of the personal authentication devices of Supplementary notes 1 to 10.
- the present invention can be suitably applied to, for example, a personal authentication device, a personal authentication method, and a personal authentication program for authenticating an individual using an audio device.
- the present invention can also be applied to such a personal authentication device, a personal authentication method, a personalization system using a personal authentication program, a content right management system, a communication control system, and the like.
Abstract
Description
Hereinafter, embodiments of the present invention will be described with reference to the drawings. FIG. 1 is a block diagram showing a configuration example of the personal authentication device of the first embodiment. The personal authentication device shown in FIG. 1 includes acoustic signal sending means 101, acoustic signal observation means 102, acoustic characteristic calculation means 103, feature extraction means 104, user identification means 105, and feature amount storage means 106.
Next, a second embodiment of the present invention will be described. FIG. 6 is a block diagram showing a configuration example of the personal authentication device of the second embodiment of the present invention. The personal authentication device shown in FIG. 6 differs from the configuration of the first embodiment shown in FIG. 1 in that it further includes noise/device characteristic removal means 201.
Next, a third embodiment of the present invention will be described. FIG. 9 is a block diagram showing a configuration example of the personal authentication device of the third embodiment of the present invention. In the personal authentication device shown in FIG. 9, the acoustic signal sending means 101 and the acoustic signal observation means 102 of the first embodiment shown in FIG. 1 are replaced by bone conduction signal sending means 301 and bone conduction signal observation means 302.
Next, a fourth embodiment of the present invention will be described. FIG. 10 is a block diagram showing a configuration example of the personal authentication device of the fourth embodiment of the present invention. The personal authentication device shown in FIG. 10 further includes noise signal estimation means 401 and transmitted acoustic signal selection means 402 in addition to the configuration of the first embodiment shown in FIG. 1.
Next, a fifth embodiment of the present invention will be described. The configuration of this embodiment may be the same as that of the first embodiment. However, the acoustic signal sending means 101 of this embodiment changes the transmitted acoustic signal regularly or irregularly. As an irregular example, when authentication is performed at random, the transmitted acoustic signal may be changed each time authentication is performed. Also, for example, when the number or frequency of authentications is varied according to the reliability of authentication, the transmitted acoustic signal may be changed each time authentication is performed. Also, for example, when the user is authenticated every time the user makes some service request, the transmitted acoustic signal may be changed each time authentication is performed. Also, for example, when authentication is performed at a break in the service, such as when the music changes, the transmitted acoustic signal may be changed each time authentication is performed.
Next, a sixth embodiment of the present invention will be described. The configuration of this embodiment may be the same as that of the first embodiment. However, the acoustic signal sending means 101 of this embodiment uses a musical sound as the transmitted acoustic signal. This further reduces the psychological and physical burden on the user.
12 Sound processor
13 Microphone amplifier
14 Earphone
15 Microphone
16 User
101 Acoustic signal sending means
102 Acoustic signal observation means
103 Acoustic characteristic calculation means
104 Feature extraction means
105 User identification means
106 Feature amount storage means
201, 202, 203 Noise/device characteristic removal means
301 Bone conduction signal sending means
302 Bone conduction signal observation means
401 Noise signal estimation means
402 Transmitted acoustic signal selection means
701 Acoustic signal sending means
702 Acoustic signal observation means
703 Acoustic characteristic calculation means
704 User identification means
Claims (10)
- 1. A personal authentication device comprising: acoustic signal sending means for sending a first acoustic signal to a part of a user's head; acoustic signal observation means for observing a second acoustic signal, which is the acoustic signal after the first acoustic signal has propagated through the part of the head; acoustic characteristic calculation means for calculating an acoustic characteristic from the first acoustic signal and the second acoustic signal; and user identification means for identifying the user based on the acoustic characteristic or a feature amount relating to the user extracted from the acoustic characteristic.
- 2. The personal authentication device according to claim 1, further comprising storage means for storing in advance, for one or more persons, the acoustic characteristic at the time of propagation of an acoustic signal through a predetermined part of the head or a feature amount extracted from the acoustic characteristic, wherein the user identification means identifies the user by comparing the acoustic characteristic calculated from the second acoustic signal with the acoustic characteristic stored in the storage means, or by comparing the feature amount extracted from the acoustic characteristic calculated from the second acoustic signal with the feature amount stored in the storage means.
- 3. The personal authentication device according to claim 1 or 2, wherein the part of the head serving as the propagation path of the first acoustic signal includes at least one ear canal and/or pinna.
- 4. The personal authentication device according to any one of claims 1 to 3, wherein the acoustic signal sending means is realized by an earphone, headphone, or the receiver of a telephone, and the acoustic signal observation means is realized by a microphone mounted on the earphone, headphone, or telephone receiver.
- 5. The personal authentication device according to any one of claims 1 to 4, further comprising removal means for removing ambient noise and/or device characteristics from the second acoustic signal based on acoustic signals observed at two locations on the user's head, acoustic signals observed multiple times from the same user, or acoustic signals observed from multiple persons.
- 6. The personal authentication device according to any one of claims 1 to 5, wherein the acoustic signal sending means sends an acoustic signal for bone conduction as the first acoustic signal, and the acoustic signal observation means observes, as the second acoustic signal, the acoustic signal after the first acoustic signal has propagated through a part of the head by bone conduction.
- 7. The personal authentication device according to any one of claims 1 to 6, further comprising noise estimation means for estimating a noise signal based on the acoustic signal observed by the acoustic signal observation means, wherein the acoustic signal sending means sends an acoustic signal in a frequency band different from the noise signal estimated by the noise estimation means as the first acoustic signal from the next time onward.
- 8. The personal authentication device according to any one of claims 1 to 7, wherein the acoustic signal sending means changes the first acoustic signal regularly or irregularly.
- 9. A personal authentication method comprising: sending a first acoustic signal to a part of a user's head; observing a second acoustic signal, which is the acoustic signal after the first acoustic signal has propagated through the part of the head; calculating an acoustic characteristic from the first acoustic signal and the second acoustic signal; and identifying the user based on the acoustic characteristic or a feature amount relating to the user extracted from the acoustic characteristic.
- 10. A personal authentication program for causing a computer to execute: a process of calculating an acoustic characteristic from a first acoustic signal sent to a part of a user's head and a second acoustic signal, which is the acoustic signal after the first acoustic signal has propagated through the part of the head; and a process of identifying the user based on the acoustic characteristic or a feature amount relating to the user extracted from the acoustic characteristic.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017546551A JP6855381B2 (ja) | 2015-10-21 | 2016-10-18 | 個人認証装置、個人認証方法および個人認証プログラム |
US15/769,967 US10867019B2 (en) | 2015-10-21 | 2016-10-18 | Personal authentication device, personal authentication method, and personal authentication program using acoustic signal propagation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015206857 | 2015-10-21 | ||
JP2015-206857 | 2015-10-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017069118A1 true WO2017069118A1 (ja) | 2017-04-27 |
Family
ID=58557034
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/080833 WO2017069118A1 (ja) | 2015-10-21 | 2016-10-18 | 個人認証装置、個人認証方法および個人認証プログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US10867019B2 (ja) |
JP (1) | JP6855381B2 (ja) |
WO (1) | WO2017069118A1 (ja) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019062377A (ja) * | 2017-09-26 | 2019-04-18 | カシオ計算機株式会社 | 電子機器、音響機器、電子機器の制御方法及び制御プログラム |
WO2019143210A1 (en) | 2018-01-22 | 2019-07-25 | Samsung Electronics Co., Ltd. | Electronic device for authenticating user by using audio signal and method thereof |
WO2020045204A1 (ja) * | 2018-08-31 | 2020-03-05 | 日本電気株式会社 | 生体認証装置、生体認証方法および記録媒体 |
WO2020089983A1 (en) * | 2018-10-29 | 2020-05-07 | Nec Corporation | Recognition apparatus, recognition method, and computer-readable recording medium |
WO2020149175A1 (ja) * | 2019-01-15 | 2020-07-23 | 日本電気株式会社 | 情報処理装置、装着型機器、情報処理方法及び記憶媒体 |
EP3702945A4 (en) * | 2017-10-25 | 2020-12-09 | Nec Corporation | BIOMETRIC AUTHENTICATION DEVICE, BIOMETRIC AUTHENTICATION SYSTEM, BIOMETRIC AUTHENTICATION PROCESS AND REGISTRATION MEDIA |
JP2022069467A (ja) * | 2018-12-19 | 2022-05-11 | 日本電気株式会社 | 情報処理装置、装着型機器、情報処理方法及び記憶媒体 |
US20230008680A1 (en) * | 2019-12-26 | 2023-01-12 | Nec Corporation | In-ear acoustic authentication device, in-ear acoustic authentication method, and recording medium |
US11775972B2 (en) | 2018-09-28 | 2023-10-03 | Nec Corporation | Server, processing apparatus, and processing method |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10460095B2 (en) * | 2016-09-30 | 2019-10-29 | Bragi GmbH | Earpiece with biometric identifiers |
US11586716B2 (en) * | 2017-04-28 | 2023-02-21 | Nec Corporation | Personal authentication device, personal authentication method, and recording medium |
EP3625718B1 (en) * | 2017-05-19 | 2021-09-08 | Plantronics, Inc. | Headset for acoustic authentication of a user |
EP3900630A4 (en) * | 2018-12-19 | 2021-12-22 | NEC Corporation | INFORMATION PROCESSING DEVICE, WEARABLE DEVICE, INFORMATION PROCESSING METHODS AND STORAGE MEDIUM |
US11200572B2 (en) * | 2019-02-27 | 2021-12-14 | Mastercard International Incorporated | Encoding one-time passwords as audio transmissions including security artifacts |
US11937040B2 (en) * | 2019-09-12 | 2024-03-19 | Nec Corporation | Information processing device, information processing method, and storage medium |
CN113536282A (zh) * | 2021-06-17 | 2021-10-22 | 南京大学 | 基于耳声学传感的安全防护系统 |
TWI797880B (zh) | 2021-12-08 | 2023-04-01 | 仁寶電腦工業股份有限公司 | 應用於入耳式耳機的偵測系統及偵測方法 |
JP2023117921A (ja) * | 2022-02-14 | 2023-08-24 | 株式会社東芝 | 診断装置及び診断方法 |
US11847200B2 (en) | 2022-03-29 | 2023-12-19 | Cirrus Logic Inc. | Methods and apparatus for system identification |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5787187A (en) * | 1996-04-01 | 1998-07-28 | Sandia Corporation | Systems and methods for biometric identification using the acoustic properties of the ear canal |
US6231521B1 (en) * | 1998-12-17 | 2001-05-15 | Peter Zoth | Audiological screening method and apparatus |
JP2003058190A (ja) | 2001-08-09 | 2003-02-28 | Mitsubishi Heavy Ind Ltd | Personal authentication system |
US7922671B2 (en) * | 2002-01-30 | 2011-04-12 | Natus Medical Incorporated | Method and apparatus for automatic non-cooperative frequency specific assessment of hearing impairment and fitting of hearing aids |
US7333618B2 (en) * | 2003-09-24 | 2008-02-19 | Harman International Industries, Incorporated | Ambient noise sound level compensation |
US7596231B2 (en) * | 2005-05-23 | 2009-09-29 | Hewlett-Packard Development Company, L.P. | Reducing noise in an audio signal |
US7806833B2 (en) * | 2006-04-27 | 2010-10-05 | Hd Medical Group Limited | Systems and methods for analysis and display of heart sounds |
JP5229124B2 (ja) | 2009-06-12 | 2013-07-03 | 日本電気株式会社 | Speaker verification device, speaker verification method, and program |
US20130163781A1 (en) * | 2011-12-22 | 2013-06-27 | Broadcom Corporation | Breathing noise suppression for audio signals |
US9565497B2 (en) * | 2013-08-01 | 2017-02-07 | Caavo Inc. | Enhancing audio using a mobile device |
KR102223278B1 (ko) * | 2014-05-22 | 2021-03-05 | 엘지전자 주식회사 | Glass-type terminal and control method thereof |
2016
- 2016-10-18 JP JP2017546551A patent/JP6855381B2/ja active Active
- 2016-10-18 US US15/769,967 patent/US10867019B2/en active Active
- 2016-10-18 WO PCT/JP2016/080833 patent/WO2017069118A1/ja active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004065363A (ja) * | 2002-08-02 | 2004-03-04 | Sony Corp | Personal authentication device, personal authentication method, and signal transmission device |
JP2007116373A (ja) * | 2005-10-19 | 2007-05-10 | Sony Corp | Measurement device, measurement method, and audio signal processing device |
WO2009104437A1 (ja) * | 2008-02-22 | 2009-08-27 | 日本電気株式会社 | Biometric authentication device, biometric authentication method, and biometric authentication program |
JP2010086328A (ja) * | 2008-09-30 | 2010-04-15 | Yamaha Corp | Authentication device and mobile phone |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7380775B2 (ja) | 2017-09-26 | 2023-11-15 | カシオ計算機株式会社 | Audio device, audio device control method, and control program |
US11501028B2 (en) | 2017-09-26 | 2022-11-15 | Casio Computer Co., Ltd. | Electronic device, audio device, electronic device control method and storage medium |
JP2022145772A (ja) | 2017-09-26 | 2022-10-04 | カシオ計算機株式会社 | Audio device, audio device control method, and control program |
JP7121330B2 (ja) | 2017-09-26 | 2022-08-18 | カシオ計算機株式会社 | Electronic device, audio device, electronic device control method, and control program |
JP2019062377A (ja) * | 2017-09-26 | 2019-04-18 | カシオ計算機株式会社 | Electronic device, audio device, electronic device control method, and control program |
EP3702945A4 (en) * | 2017-10-25 | 2020-12-09 | Nec Corporation | BIOMETRIC AUTHENTICATION DEVICE, BIOMETRIC AUTHENTICATION SYSTEM, BIOMETRIC AUTHENTICATION PROCESS AND REGISTRATION MEDIA |
US11405388B2 (en) | 2017-10-25 | 2022-08-02 | Nec Corporation | Biometric authentication device, biometric authentication system, biometric authentication method and recording medium |
CN111465933A (zh) | 2018-01-22 | 2020-07-28 | 三星电子株式会社 | Electronic device for authenticating a user by using an audio signal and method thereof |
US11159868B2 (en) | 2018-01-22 | 2021-10-26 | Samsung Electronics Co., Ltd | Electronic device for authenticating user by using audio signal and method thereof |
CN111465933B (zh) | 2018-01-22 | 2024-04-23 | 三星电子株式会社 | Electronic device for authenticating a user by using an audio signal and method thereof |
WO2019143210A1 (en) | 2018-01-22 | 2019-07-25 | Samsung Electronics Co., Ltd. | Electronic device for authenticating user by using audio signal and method thereof |
KR102488001B1 (ko) * | 2018-01-22 | 2023-01-13 | 삼성전자주식회사 | Electronic device for authenticating a user by using an audio signal and method thereof |
EP3707628A4 (en) * | 2018-01-22 | 2020-10-07 | Samsung Electronics Co., Ltd. | ELECTRONIC DEVICE FOR AUTHENTICATING A USER USING AN AUDIO SIGNAL AND METHOD FOR DOING THIS |
KR20190089422A (ko) | 2018-01-22 | 2019-07-31 | 삼성전자주식회사 | Electronic device for authenticating a user by using an audio signal and method thereof |
WO2020045204A1 (ja) * | 2018-08-31 | 2020-03-05 | 日本電気株式会社 | Biometric authentication device, biometric authentication method, and recording medium |
JPWO2020045204A1 (ja) | 2018-08-31 | 2021-09-24 | 日本電気株式会社 | Biometric authentication device, biometric authentication method, and program |
JP7120313B2 (ja) | 2018-08-31 | 2022-08-17 | 日本電気株式会社 | Biometric authentication device, biometric authentication method, and program |
US11775972B2 (en) | 2018-09-28 | 2023-10-03 | Nec Corporation | Server, processing apparatus, and processing method |
WO2020089983A1 (en) * | 2018-10-29 | 2020-05-07 | Nec Corporation | Recognition apparatus, recognition method, and computer-readable recording medium |
JP7192982B2 (ja) | 2018-10-29 | 2022-12-20 | 日本電気株式会社 | Recognition device, recognition method, and program |
JP2022505984A (ja) | 2018-10-29 | 2022-01-14 | 日本電気株式会社 | Recognition device, recognition method, and program |
JP7315045B2 (ja) | 2018-12-19 | 2023-07-26 | 日本電気株式会社 | Information processing device, wearable device, information processing method, and storage medium |
JP2022069467A (ja) * | 2018-12-19 | 2022-05-11 | 日本電気株式会社 | Information processing device, wearable device, information processing method, and storage medium |
JP7131636B2 (ja) | 2019-01-15 | 2022-09-06 | 日本電気株式会社 | Information processing device, wearable device, information processing method, and storage medium |
WO2020149175A1 (ja) * | 2019-01-15 | 2020-07-23 | 日本電気株式会社 | Information processing device, wearable device, information processing method, and storage medium |
JPWO2020149175A1 (ja) | 2019-01-15 | 2021-10-28 | 日本電気株式会社 | Information processing device, wearable device, information processing method, and storage medium |
US20230008680A1 (en) * | 2019-12-26 | 2023-01-12 | Nec Corporation | In-ear acoustic authentication device, in-ear acoustic authentication method, and recording medium |
Also Published As
Publication number | Publication date |
---|---|
US10867019B2 (en) | 2020-12-15 |
JPWO2017069118A1 (ja) | 2018-09-06 |
JP6855381B2 (ja) | 2021-04-07 |
US20180307818A1 (en) | 2018-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017069118A1 (ja) | Personal authentication device, personal authentication method, and personal authentication program | |
JP7259902B2 (ja) | Personal authentication device, personal authentication method, and personal authentication program | |
JP6900955B2 (ja) | Personal authentication device, personal authentication method, and personal authentication program | |
CN112585676A (zh) | Biometric authentication | |
US11699449B2 (en) | In-ear liveness detection for voice user interfaces | |
US20230143028A1 (en) | Personal authentication device, personal authentication method, and recording medium | |
JP7375855B2 (ja) | Personal authentication device, personal authentication method, and personal authentication program | |
Huang et al. | Pcr-auth: Solving authentication puzzle challenge with encoded palm contact response | |
CN110100278B (zh) | Speaker recognition system, speaker recognition method, and in-ear device | |
WO2018101317A1 (ja) | Authentication system, authentication management server, method, and program | |
JP7244683B2 (ja) | Personal authentication device, personal authentication method, and personal authentication program | |
JP2021002357A (ja) | Personal authentication device, personal authentication method, and personal authentication program | |
WO2021130949A1 (ja) | Ear acoustic authentication device, ear acoustic authentication method, and recording medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16857432; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2017546551; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 15769967; Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16857432; Country of ref document: EP; Kind code of ref document: A1 |