WO2023089822A1 - Wearer identification device, wearer identification system, wearer identification method, and wearer identification program - Google Patents


Info

Publication number
WO2023089822A1
WO2023089822A1 (PCT/JP2021/042778)
Authority
WO
WIPO (PCT)
Prior art keywords
user, feature amount, unit, signal, vibration
Prior art date
Application number
PCT/JP2021/042778
Other languages
English (en)
Japanese (ja)
Inventor
勇貴 久保
幸生 小池
Original Assignee
日本電信電話株式会社
Priority date
Filing date
Publication date
Application filed by 日本電信電話株式会社
Priority to PCT/JP2021/042778 (WO2023089822A1)
Priority to JP2023562093A (JPWO2023089822A1)
Publication of WO2023089822A1


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/117: Identification of persons

Definitions

  • Embodiments of the present invention relate to a wearer's identification device, a wearer's identification system, a wearer's identification method, and a wearer's identification program.
  • In a known technique, a pair of piezoelectric elements (for example, piezo elements), one used as a speaker and the other as a microphone, measures the vibration characteristics of the target on which they are installed, and the state of the target is recognized based on the measured vibration characteristics.
  • This technique is called active acoustic sensing and is used, for example, to estimate the grip state of an object (see, for example, Non-Patent Document 1).
  • In active acoustic sensing, sound waves in the inaudible range are transmitted from the speaker into the target, the vibration that propagates through the target is received by the microphone, and the frequency characteristics of the received signal are analyzed. The technique exploits the fact that the vibration characteristics of the target change as its internal structure and boundary conditions change.
  • Unlike a static object, however, the part of the body to which the speaker and microphone are attached may move.
  • When the wearing site moves, the internal structure of the body part changes, which changes the measured vibration-characteristic values. This noise may cause personal identification of the user to fail.
  • the present invention seeks to provide a technology that can reduce erroneous determinations when active acoustic sensing is used for personal identification of users.
  • a wearing user identification device includes a feature generation unit, a determination unit, and an identification unit.
  • the feature amount generation unit receives, from a sensor attached to a body part of the user to be identified, a measurement signal corresponding to the vibration characteristics of that body part as measured by the sensor, and generates a feature amount representing the vibration characteristics from the measurement signal.
  • the determination unit determines whether the state of the site where the sensor is attached to the user is stable, based on the magnitude of variation in the feature amount generated by the feature amount generation unit.
  • the identification unit performs personal identification of the user based on the feature amount generated by the feature amount generation unit when the determination unit determines that the state of the wearing site is stable.
  • personal identification of the user is performed only in a stable state in which the vibration-characteristic values do not change greatly, thereby providing a technology that prevents erroneous determinations when active acoustic sensing is used for personal identification of the user.
  • FIG. 1 is a block diagram showing an example configuration of a wearable user identification system including a wearable user identification device according to an embodiment of the present invention.
  • FIG. 2 is a plan view showing the configuration of the measurement unit worn by the user.
  • FIG. 3 is a schematic diagram showing a state in which a user wears the measurement unit.
  • FIG. 4 is a diagram showing an example of a spectrogram.
  • FIG. 5 is a block diagram showing an example of the hardware configuration of the wearing user identification device.
  • FIG. 6 is a flow chart showing an example of a learning processing operation related to learning of a classification model in the wearing user identification device.
  • FIG. 7 is a flow chart showing an example of an identification processing operation related to personal identification of a user in the wearable user identification device.
  • FIG. 1 is a block diagram showing an example of the configuration of a wearable user identification system 1 according to one embodiment of the present invention.
  • the wearing user identification system 1 includes a wearing user identification device 10 according to one embodiment of the present invention, a measurement section 20 and an audio interface section 30 .
  • the wearing user identification device 10 includes a signal generation unit 11 , a signal storage unit 12 , a feature amount generation unit 13 , a model learning unit 14 , a model storage unit 15 , an identification execution determination unit 16 and a user identification unit 17 .
  • the measurement unit 20 is a sensor for measuring vibration characteristics of a measurement target and a part for attaching the sensor to the body of a target user, and includes a signal generation unit 21 and a signal reception unit 22 .
  • the audio interface section 30 is an interface between the wearing user identification device 10 and the measurement section 20 and has a signal control section 31 and a signal amplification section 32 .
  • Wire connection or wireless connection can be established between the signal generation unit 11 and the signal control unit 31 and between the signal control unit 31 and the signal generation unit 21 as long as they have a function of transmitting and receiving signals.
  • the connection form does not matter.
  • the signal generation unit 11 of the wearing user identification device 10 generates an acoustic signal based on arbitrarily set parameters.
  • the acoustic signal is ultrasound sweeping from 20 kHz to 40 kHz.
  • the settings of the acoustic signal such as whether or not to sweep, use of other frequency bands, etc., do not matter.
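The swept ultrasonic signal described above can be sketched as a simple linear chirp. This is an illustrative sketch, not the patented implementation; the sampling rate (96 kHz) and the 0.1-second duration are assumed values chosen so that the 20-40 kHz band is representable.

```python
import numpy as np

def linear_sweep(f_start=20_000.0, f_end=40_000.0, duration=0.1, fs=96_000):
    """Generate a linear frequency sweep (chirp) from f_start to f_end [Hz].

    The instantaneous frequency rises linearly, so the phase is its
    integral: phi(t) = 2*pi*(f_start*t + (f_end - f_start)*t^2 / (2*duration)).
    fs must exceed 2*f_end (Nyquist) for the sweep to be representable.
    """
    t = np.arange(int(duration * fs)) / fs
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) * t**2 / (2 * duration))
    return np.sin(phase)

drive_signal = linear_sweep()
```

Whether the signal sweeps, and which band it covers, is left open by the text; the same function shape accommodates other choices by changing the parameters.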
  • the signal control unit 31 of the audio interface unit 30 generates a drive signal based on the acoustic signal generated by the signal generation unit 11 based on the preset parameters, and vibrates the target through the signal generation unit 21 of the measurement unit 20.
  • the vibration at this time may contain other frequencies, as long as it includes the frequencies contained in the vibration used when the registered feature amounts were generated.
  • the signal generator 21 and the signal receiver 22 of the measurement unit 20 are composed of two piezoelectric elements that do not contact each other.
  • a piezoelectric element can be realized by, for example, a piezo element.
  • One piezoelectric element serves as the signal generating section 21 that generates vibration having the same frequency characteristics as the drive signal generated by the signal control section 31 of the audio interface section 30 .
  • the other piezoelectric element serves as a signal receiving section 22 that receives vibration.
  • the signal receiving unit 22 acquires vibrations propagating inside and on the surface of the object in which the signal receiving unit 22 is installed.
  • the user's living body, which is the object of measurement, functions as a propagation path, so the frequency characteristics of the acquired vibration change according to that path.
  • the signal receiver 22 transmits the received vibration signal (hereinafter referred to as a reaction signal) to the signal amplifier 32 .
  • the signal generating unit 21 and the signal receiving unit 22 may be of any form and material as long as they are mechanisms capable of propagating vibrations while being in contact with the target living body.
  • FIG. 2 is a plan view showing the configuration of the measurement unit 20.
  • FIG. 3 is a schematic diagram showing a state in which the measurement unit 20 is worn by a user who is a measurement target.
  • the measurement unit 20 is configured as a band-type sensor; however, as long as the signal generation unit 21 and the signal reception unit 22 can be fixed to the user's skin while maintaining a constant distance from each other, other implementations, such as adhesive tape for living bodies, may be used.
  • the two piezoelectric elements, which serve as the signal generating section 21 and the signal receiving section 22, are attached to the fixing section 23 at a constant distance so that they do not come into contact with each other.
  • the fixing section 23 also functions as a reinforcing member that strengthens the signal generating section 21 and the signal receiving section 22 so that they can withstand continuous use.
  • a band 24 and a square ring 25 are attached at positions facing the fixing section 23, with the signal generating section 21 and the signal receiving section 22 interposed between them.
  • a hook-and-loop fastener 26 is provided on the back surface of the band 24 .
  • the measurement unit 20 configured in this way is worn by adjusting the length of the band 24, wrapping it around the user's wrist, and fixing it with the hook-and-loop fastener 26. The wearing position does not matter as long as it is kept consistent for each individual when performing personal authentication.
  • the signal amplification section 32 of the audio interface section 30 amplifies the reaction signal acquired by the signal reception section 22 of the measurement section 20 and transmits it to the wearing user identification device 10 .
  • the reaction signal is amplified by the signal amplification section 32 because the vibration is attenuated as it passes through the object being measured and must be raised to a level that allows processing.
  • the reaction signal transmitted from the signal amplification section 32 of the audio interface section 30 is stored in the signal storage section 12 .
  • the feature amount generation unit 13 extracts the reaction signal stored in the signal storage unit 12 at fixed time intervals and applies, for example, an FFT (Fast Fourier Transform) to the extracted reaction signal to generate a spectrogram, which is a feature amount representing the acoustic frequency characteristics of the living body being measured.
  • FIG. 4 is a diagram showing an example of this spectrogram.
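As a rough sketch of how such a spectrogram could be computed from a reaction signal, the following windowed-FFT routine splits the signal into fixed intervals. The frame length, hop size, and sampling rate are illustrative assumptions; the text only specifies fixed-interval extraction followed by an FFT.

```python
import numpy as np

def spectrogram_db(signal, frame_len=1024, hop=512):
    """Split the reaction signal into fixed-length frames, apply a Hann
    window and an FFT to each frame, and return the magnitude in dB.

    Rows are frames (time); columns are frequency bins up to fs/2.
    """
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return 20 * np.log10(mag + 1e-12)  # small offset avoids log(0)

# Example: a 30 kHz tone sampled at 96 kHz lands in FFT bin 320 (= 30000/93.75 Hz).
sig = np.sin(2 * np.pi * 30_000 * np.arange(9600) / 96_000)
spec = spectrogram_db(sig)
```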
  • when executing the learning processing operation related to learning of the classification model, the feature amount generation unit 13 generates teacher data comprising pairs of the generated spectrogram and the user ID of the user being measured, and outputs the teacher data to the model learning unit 14. Note that the teacher data may also be generated by extraction from a registration database created in advance. When executing the identification processing operation related to personal identification of the user, the feature amount generation unit 13 outputs the generated spectrogram to the identification execution determination unit 16.
  • the model learning unit 14 generates and learns a classification model whose input is the teacher data obtained from the feature amount generation unit 13 and whose output is the user ID.
  • the model learning unit 14 registers the model itself or the parameters of the model obtained by this learning process in the model storage unit 15, which is a model database.
  • any classification model and any library may be used for this learning. For example, using a generally known machine-learning library, a classification-model algorithm such as an SVM (Support Vector Machine) or a neural network may be trained, with parameter tuning on the teacher data, so that it produces the optimal output.
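The patent deliberately leaves the classifier open (SVM, neural network, etc.). As a self-contained illustration of the fit/score interface implied by the text, the sketch below substitutes a toy nearest-centroid model for a real learning library; the class name and its API are assumptions for illustration only.

```python
import numpy as np

class CentroidClassifier:
    """Toy stand-in for the classification model: stores one mean
    spectrogram (centroid) per user ID and scores a new feature amount
    by distance to each centroid. A real system would use e.g. an SVM."""

    def fit(self, features, user_ids):
        self.centroids = {
            uid: np.mean([f for f, u in zip(features, user_ids) if u == uid], axis=0)
            for uid in set(user_ids)
        }
        return self

    def reference_values(self, feature):
        # Smaller value = more similar, matching the text's convention.
        return {uid: float(np.linalg.norm(feature - c))
                for uid, c in self.centroids.items()}

# Teacher data: (spectrogram feature, user ID) pairs from two users.
rng = np.random.default_rng(0)
feats = [rng.normal(0, 1, 8) for _ in range(10)] + [rng.normal(5, 1, 8) for _ in range(10)]
ids = ["user_a"] * 10 + ["user_b"] * 10
model = CentroidClassifier().fit(feats, ids)
```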
  • the identification execution determination unit 16 determines whether or not the user identification processing by the user identification unit 17 is to be executed.
  • the identification execution determination unit 16 determines whether the state of the site where the user wears the measurement unit 20 is stable, based on whether the standard deviation at each frequency of the spectrogram exceeds an arbitrarily set threshold. That is, when the standard deviation at each frequency is not higher than the threshold, the identification execution determination unit 16 considers the state of the wearing site to be stable and causes the user identification unit 17 to execute its processing; specifically, it outputs the spectrogram acquired from the feature amount generation unit 13 to the user identification unit 17. Conversely, when the standard deviation at any frequency is higher than the threshold, the identification execution determination unit 16 determines that the user wearing the measurement unit 20 is moving and that stability is low, and does not execute the processing.
  • In that case, the identification execution determination unit 16 does not output the spectrogram to the user identification unit 17. Note that this determination may be limited to frequency components whose level is at or above a certain value (for example, -50 dB).
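The stability determination above can be sketched as follows. The 3 dB threshold is an illustrative assumption (the text leaves the threshold arbitrary); the -50 dB level floor mirrors the optional limitation mentioned in the text.

```python
import numpy as np

STD_THRESHOLD_DB = 3.0   # illustrative; the text leaves the threshold arbitrary
LEVEL_FLOOR_DB = -50.0   # optionally ignore bins quieter than this

def is_stable(spec_db, threshold=STD_THRESHOLD_DB, floor=LEVEL_FLOOR_DB):
    """spec_db: (frames, freq_bins) spectrogram in dB over the window.
    The wearing site is considered stable when the standard deviation
    over time is at or below the threshold at every sufficiently loud
    frequency bin."""
    std_per_freq = spec_db.std(axis=0)
    loud = spec_db.mean(axis=0) >= floor
    return bool(np.all(std_per_freq[loud] <= threshold))

steady = np.tile(np.linspace(-40, -10, 16), (8, 1))   # constant over time
moving = steady + np.linspace(0, 20, 8)[:, None]      # large drift over time
```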
  • the user identification unit 17 inputs the spectrogram, which is the feature amount acquired from the identification execution determination unit 16, into the classification model registered in the model storage unit 15, and obtains the numerical output of the classification model for determining the individual user.
  • the classification model outputs, as reference values, scores indicating the degree of similarity between the input spectrogram and each label (user ID) of the classification model.
  • the score can be output as '1-(similarity)', for example, if the similarity is normalized and expressed between '0' and '1'.
  • when Random Forest is used as the algorithm in the model learning unit 14, data are randomly sampled from the teacher data to generate a plurality of decision trees, each of which outputs a determination result. Since a larger number of matching determination results indicates a better match, the classification model outputs "(the number of decision trees) - (the number of matching determination results)" as the reference value.
  • the reference value may be obtained by subtracting the normalized similarity from "1", or by converting the similarity using its reciprocal.
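The two reference-value conversions mentioned above, subtracting a normalized similarity from 1 and counting non-matching decision trees, can be written directly; the function names are illustrative:

```python
def reference_from_similarity(similarity):
    """Similarity normalized to [0, 1]; smaller reference = more similar."""
    return 1.0 - similarity

def reference_from_forest(n_trees, matching_results):
    """Random-forest style: '(number of trees) - (matching results)'.
    More trees voting for a label means a smaller (better) reference."""
    return n_trees - matching_results
```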
  • the user identification unit 17 uses the obtained list of reference values to determine the user ID with the smallest (that is, most similar) reference value.
  • a threshold for determination may be set for the degree of similarity, and determination may be made only when the reference value is smaller than the threshold.
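The final decision step, picking the user ID with the smallest reference value and optionally rejecting weak matches against a threshold, might look like this sketch (names are illustrative):

```python
def identify(reference_values, reject_threshold=None):
    """reference_values: {user_id: reference value}; smaller = more similar.
    Returns the best-matching user ID, or None when a rejection threshold
    is set and even the best value is not below it."""
    best = min(reference_values, key=reference_values.get)
    if reject_threshold is not None and reference_values[best] >= reject_threshold:
        return None
    return best
```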
  • FIG. 5 is a diagram showing an example of the hardware configuration of the wearing user identification device 10.
  • the wearing user identification device 10 is configured by a computer such as a microcomputer or a personal computer, and has a hardware processor 101 such as a CPU (Central Processing Unit).
  • the processor 101 may include multiple CPUs.
  • a program memory 102 a data memory 103 , a communication interface 104 and an input/output interface 105 are connected to the processor 101 via a bus 106 .
  • "interface" is abbreviated as "IF”.
  • the communication interface 104 can include, for example, one or more wired or wireless communication modules.
  • the example shown in FIG. 5 shows two communication modules 1041 and 1042 .
  • the communication module 1041 is a communication module using short-range wireless technology such as Bluetooth (registered trademark), and transmits and receives signals to and from the signal control section 31 and the signal amplification section 32 of the audio interface section 30 .
  • the communication module 1041 can transmit and receive signals to and from the signal control section 31 and the signal amplification section 32 of the remote audio interface section 30 via the network NW.
  • a network consists of an IP network including the Internet and an access network for accessing this IP network.
  • the wearing user identification device 10 can also acquire reaction signals from the plurality of measuring units 20 via the plurality of audio interface units 30 and identify the plurality of wearing users who wear the respective measuring units 20 .
  • an input unit 107 and a display unit 108 are connected to the input/output interface 105 .
  • for the input unit 107 and the display unit 108, a so-called tablet-type input/display device can be used, in which an input detection sheet employing a capacitive or pressure-sensitive method is arranged over the display screen of a display device using, for example, liquid crystal or organic EL (Electro Luminescence). Note that the input unit 107 and the display unit 108 may also be configured as independent devices.
  • the input/output interface 105 inputs operation information input from the input unit 107 to the processor 101 and displays display information generated by the processor 101 on the display unit 108 .
  • the input unit 107 and the display unit 108 do not have to be connected to the input/output interface 105 .
  • the input unit 107 and the display unit 108 are provided with a communication unit for connecting directly to the communication interface 104 or via the network NW, so that information can be exchanged with the processor 101 .
  • the input/output interface 105 may have a read/write function for a recording medium such as semiconductor memory (for example, flash memory), or a function for connecting to a reader/writer that can read from and write to such a recording medium. This allows a recording medium detachable from the wearing user identification device 10 to be used as the model database holding the classification models.
  • the input/output interface 105 may further have a connection function with other devices.
  • the program memory 102 uses, as a non-transitory tangible computer-readable storage medium, a combination of, for example, a non-volatile memory that can be written to and read from at any time, such as an HDD (Hard Disk Drive) or SSD (Solid State Drive), and a non-volatile memory such as a ROM (Read Only Memory).
  • the program memory 102 stores the programs necessary for the processor 101 to execute the various control processes according to the embodiment. That is, each of the processing function units, namely the signal generation unit 11, the feature amount generation unit 13, the model learning unit 14, the identification execution determination unit 16, and the user identification unit 17, can be implemented by the processor 101 reading and executing the programs stored in the program memory 102. Some or all of these processing functions may instead be implemented in various other forms, including integrated circuits such as ASICs (Application Specific Integrated Circuits) or FPGAs (field-programmable gate arrays).
  • the data memory 103 is a tangible computer-readable storage medium, for example, a combination of the above nonvolatile memory and a volatile memory such as RAM (Random Access Memory).
  • This data memory 103 is used to store various data acquired and created in the process of performing various processes. That is, in the data memory 103, an area for storing various data is appropriately secured in the process of performing various processes.
  • the data memory 103 can be provided with, for example, a signal storage section 1031, a model storage section 1032, an identification result storage section 1033, and a temporary storage section 1034.
  • the signal storage section 1031 stores the reaction signal transmitted from the signal amplification section 32 of the audio interface section 30 . That is, the signal storage section 12 can be configured in this signal storage section 1031 .
  • the model storage unit 1032 stores the classification model learned by the model learning unit 14. That is, the model storage unit 15 can be configured in this model storage unit 1032 .
  • the identification result storage unit 1033 stores output information obtained when the processor 101 operates as the user identification unit 17 .
  • the temporary storage unit 1034 stores spectrograms, teacher data, classification models, reference values, and other data acquired or generated when the processor 101 operates as the feature amount generation unit 13, the model learning unit 14, the identification execution determination unit 16, and the user identification unit 17.
  • prior to user identification, the wearable user identification device 10 first generates a classification model associated with each user ID, using the sensor to measure the state of each user to be identified, and stores the generated classification model in the model storage unit 15 as registration data.
  • the signal generator 21 and the signal receiver 22 of the measurement unit 20 are worn on the wrist of the user who is the person to be identified using the band 24 .
  • the vibration generated by the signal generator 21 may be of any form and type as long as it has frequency characteristics like an acoustic signal.
  • an acoustic signal will be described as an example.
  • FIG. 6 is a flowchart showing an example of the learning processing operation related to learning of the classification model in the wearing user identification device 10.
  • This flowchart shows the processing operations of the processor 101 of the computer functioning as part of the wearing user identification device 10, specifically as the signal generation unit 11, the feature amount generation unit 13, and the model learning unit 14.
  • when the communication interface 104 receives a predetermined installation completion notification transmitted via the network NW from an information processing device, such as a smartphone, operated by the user, the processor 101 starts the operation shown in this flowchart.
  • the processor 101 functions as the signal generator 11 and generates an acoustic signal (driving signal) based on arbitrarily set parameters (step S101).
  • the driving signal is, for example, an ultrasonic wave sweeping from 20 kHz to 40 kHz.
  • the settings of the acoustic signal such as whether or not to sweep, whether or not to use other frequency bands, etc., do not matter.
  • the generated drive signal is transmitted to the audio interface section 30 via the communication interface 104 .
  • a signal generation module that generates a drive signal under the control of the processor 101 may be prepared separately, and the drive signal generated there may be transmitted to the audio interface section 30 via the communication interface 104 .
  • the audio interface section 30 transmits this drive signal to the signal generation section 21 of the measurement section 20 .
  • This driving signal causes the body of the user to be registered to vibrate through the signal generator 21 .
  • the signal receiving unit 22 of the measuring unit 20 acquires the vibration that is given to the living body of the user to be registered by the signal generating unit 21 and propagates inside and on the surface of the living body.
  • as the vibration applied by the piezoelectric element of the signal generation unit 21 propagates to the piezoelectric element of the signal reception unit 22, the living body of the user to be registered functions as a propagation path, and the frequency characteristics of the vibration change according to this propagation path. These frequency characteristics differ from person to person.
  • the signal receiving section 22 detects the propagated vibration and transmits a reaction signal indicated by the detected vibration to the audio interface section 30 .
  • the signal amplifying section 32 of the audio interface section 30 amplifies the reaction signal transmitted from the signal receiving section 22 of the measuring section 20 and transmits it to the wearing user identification device 10 .
  • the reaction signal transmitted from the audio interface unit 30 is received by the communication interface 104.
  • the processor 101 stores the received reaction signal in the signal storage section 1031 of the data memory 103 (step S102).
  • the processor 101 functions as the feature generator 13 and performs the following processing operations.
  • the processor 101 extracts the reaction signal stored in the signal storage unit 1031 for each fixed time interval. The number of samples of the signal does not matter.
  • the extracted reaction signal is stored in temporary storage section 1034 of data memory 103 .
  • the processor 101 generates a spectrogram, which is a feature quantity representing the acoustic frequency characteristics of the living body, from the extracted reaction signal stored in the temporary storage unit 1034 (step S103).
  • the generated spectrogram is stored in temporary storage section 1034 of data memory 103 .
  • the processor 101 assigns a user ID, which is a unique identifier, to the generated spectrogram, and generates teacher data that combines these spectrograms and the user ID (step S104).
  • the generated teacher data is stored in temporary storage section 1034 of data memory 103 .
  • the processor 101 may extract registration data created in advance from the model storage unit 15 configured in the model storage unit 1032 of the data memory 103, and use it to generate teacher data.
  • the processor 101 functions as the model learning unit 14 and performs the following processing operations.
  • the processor 101 generates and trains a classification model that takes the spectrograms in the teacher data as input and outputs the user IDs in the teacher data as labels, together with reference values representing the difference from the input (step S105).
  • the processor 101 registers the classification model obtained by this learning process, either the model itself or its parameters, in the model storage unit 15 configured in the model storage unit 1032 of the data memory 103 (step S106).
  • the processor 101 stops generating the drive signal and ends its transmission to the audio interface unit 30 via the communication interface 104 (step S107). The learning processing operation shown in this flowchart then ends.
  • the wearing user identification device 10 inputs a spectrogram, which is a feature amount obtained from the user to be identified wearing the measurement unit 20, into the classification model registered in the model storage unit 15, and thereby performs personal identification of the user.
  • FIG. 7 is a flowchart showing an example of the identification processing operation related to personal identification of the user in the wearable user identification device 10.
  • This flowchart shows the processing operations of the processor 101 of the computer functioning as part of the wearable user identification device 10, specifically as the signal generation unit 11, the feature amount generation unit 13, the identification execution determination unit 16, and the user identification unit 17.
  • when the communication interface 104 receives a predetermined identification start notification transmitted via the network NW from an information processing device, such as a smartphone, operated by the remote user to be identified, the processor 101 starts the operation shown in this flowchart.
  • the processor 101 functions as the signal generator 11 and generates a drive signal based on arbitrarily set parameters (step S201).
  • the generated drive signal is transmitted to the audio interface section 30 via the communication interface 104 .
  • a signal generation module that generates a drive signal under the control of the processor 101 may be prepared separately, and the drive signal generated there may be transmitted to the audio interface section 30 via the communication interface 104 .
  • the audio interface section 30 transmits this drive signal to the signal generation section 21 of the measurement section 20 .
  • This drive signal causes the living body of the user to be identified to vibrate through the signal generation unit 21.
  • the vibration at this time may contain other frequencies, as long as it includes the frequencies contained in the vibration used to generate the feature amounts included in the registration data of the learned users registered in the model storage unit 15 (hereinafter referred to as registered users).
  • the signal receiving unit 22 of the measuring unit 20 acquires the vibration that is given to the living body of the user to be identified by the signal generating unit 21 and propagates inside and on the surface of the living body.
  • the living body of the user to be identified functions as a propagation path, and the frequency characteristics of the applied vibration change according to this propagation path.
  • the signal receiving section 22 detects the propagated vibration and transmits a reaction signal indicated by the detected vibration to the audio interface section 30 .
  • the signal amplifying section 32 of the audio interface section 30 amplifies the reaction signal transmitted from the signal receiving section 22 of the measuring section 20 and transmits it to the wearing user identification device 10 .
  • the reaction signal transmitted from the audio interface unit 30 is received by the communication interface 104.
  • the processor 101 stores the received reaction signal in the signal storage section 1031 of the data memory 103 (step S202).
  • the processor 101 functions as the feature amount generation unit 13 and extracts the reaction signal stored in the signal storage unit 1031 for each fixed time interval. The number of samples of the signal does not matter.
  • the extracted reaction signal is stored in temporary storage section 1034 of data memory 103 .
  • the processor 101 performs, for example, FFT on the extracted reaction signal stored in the temporary storage unit 1034 to generate a spectrogram, which is a feature quantity representing the acoustic frequency characteristics of the living body (step S203).
  • the generated spectrogram is stored in temporary storage section 1034 of data memory 103 .
  • the processor 101 functions as the identification execution determination unit 16 and performs the following processing operations.
  • the processor 101 calculates the stability of the spectrogram within a set fixed time from the spectrogram stored in the temporary storage unit 1034 as follows.
  • the processor 101 determines whether spectrograms for a certain period of time, for example, 2 seconds, have been generated (step S204). When determining that the spectrogram for 2 seconds has not yet been generated (NO in step S204), the processor 101 proceeds to the process of step S201 and repeats the above-described processing operations.
  • the processor 101 calculates the degree of stability (step S205). For example, the processor 101 obtains the average dB value for each frequency over the 2 seconds of spectrograms stored in the temporary storage unit 1034, and uses this average to compute the standard deviation at each frequency of the spectrogram, which it takes as the degree of stability.
  • the processor 101 determines whether the state of the site where the user wears the measurement unit 20 is stable, based on whether the standard deviation at each frequency of these spectrograms is equal to or less than an arbitrarily set threshold value (step S206).
  • when the standard deviation exceeds the threshold, the processor 101 determines that the user wearing the measurement unit 20 is moving and that the state is not stable (NO in step S206). In this case, the processor 101 deletes the oldest of the spectrograms for the fixed period stored in the temporary storage unit 1034 (step S207). After that, the processor 101 returns to the process of step S201 and repeats the above-described processing operations.
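The stability check of steps S205 and S206, taking the per-frequency standard deviation of the dB spectrogram over the fixed window, can be sketched as follows; the 3 dB threshold is an assumed illustrative value:

```python
import numpy as np

def is_stable(spectrograms_db, threshold_db=3.0):
    """Sketch of steps S205-S206: decide whether the wearing site is stable.

    spectrograms_db: array of shape (n_frames, n_freq_bins) covering the
    fixed window (e.g. 2 seconds). threshold_db is an assumed value.
    """
    # Standard deviation at each frequency, computed around the
    # per-frequency mean dB over the window
    std_per_freq = spectrograms_db.std(axis=0)
    # Stable only if every frequency varies no more than the threshold
    return bool(np.all(std_per_freq <= threshold_db))

# A window whose frames barely vary is judged stable...
steady = np.tile(np.linspace(-40, -10, 64), (20, 1))
# ...while frames with large per-frequency swings are not.
moving = steady + np.random.default_rng(0).normal(0, 10, steady.shape)
```

When `is_stable` returns false, the oldest frame would be dropped and measurement would continue, mirroring step S207.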
  • otherwise, the processor 101 determines that the state of the site where the user wears the measurement unit 20 is stable (YES in step S206). In this case, the processor 101 functions as the user identification unit 17 and performs the following processing operations. First, the processor 101 performs personal identification of the user (step S208). That is, the processor 101 inputs one of the spectrograms generated in step S203 and stored in the temporary storage unit 1034, for example the latest spectrogram, to the classification model registered in the model storage unit 15 configured in the model storage unit 1032 of the data memory 103, and obtains a list of reference values from the classification model.
  • a list of the acquired reference values is stored in the temporary storage unit 1034 of the data memory 103 .
  • processor 101 identifies the smallest reference value from the list of reference values stored in temporary storage unit 1034 .
  • the processor 101 determines that the registered user associated, in the model storage unit 15, with the feature amount that yielded the specified reference value is the similar user.
  • processor 101 stores the determined user ID of the registered user in identification result storage section 1033 of data memory 103 as the personal identification result of the user to be identified. Note that in this determination process, a threshold for determination may be set for the degree of similarity, and similar users may be determined only when the specified reference value is smaller than this threshold.
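A minimal sketch of the selection in step S208 follows, assuming a Euclidean distance between the query and each stored spectrogram as the difference-based reference value (the embodiment does not specify the classification model at this level of detail); the user IDs and threshold are hypothetical:

```python
import numpy as np

def identify_user(query_spectrogram, registered, threshold=None):
    """Sketch of step S208: pick the registered user whose stored feature
    amount yields the smallest reference value for the query.

    registered: dict mapping user_id -> stored feature amount (same shape
    as the query). threshold is the optional similarity cutoff.
    """
    # One reference value per registered user (assumed: Euclidean distance)
    ref_values = {uid: float(np.linalg.norm(query_spectrogram - feat))
                  for uid, feat in registered.items()}
    best_uid = min(ref_values, key=ref_values.get)  # smallest reference value
    if threshold is not None and ref_values[best_uid] >= threshold:
        return None  # no registered user is similar enough
    return best_uid

registered = {
    "user_A": np.array([0.0, 1.0, 2.0]),
    "user_B": np.array([5.0, 5.0, 5.0]),
}
result = identify_user(np.array([0.1, 1.1, 2.1]), registered)
```

The optional `threshold` branch mirrors the note above: a similar user is reported only when the smallest reference value falls below the cutoff.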
  • the processor 101 outputs the user ID, which is the personal identification result stored in the identification result storage unit 1033 (step S209).
  • the processor 101 displays the user ID on the display unit 108 via the input/output interface 105 .
  • the processor 101 can also provide the user ID to applications and the like that require personal authentication of the user.
  • the processor 101 stops generating the drive signal and ends transmission of the drive signal to the audio interface unit 30 by the communication interface 104 (step S210). Then, the identification processing operation shown in this flow chart ends.
  • in this way, according to one embodiment, the feature amount generation unit 13 receives from the measurement unit 20 a reaction signal, which is a measurement signal corresponding to the vibration characteristics of a part of the user's body, and generates from that reaction signal a feature quantity representing the vibration characteristics. The identification execution determination unit 16, as a determination unit, determines whether the state of the site where the user wears the measurement unit 20 is stable, based on the magnitude of variation in the feature amount generated by the feature amount generation unit 13.
  • when the state is determined to be stable, the user identification unit 17 performs personal identification of the user based on the feature amount generated by the feature amount generation unit 13.
  • the wearable user identification device 10 can reduce erroneous determinations when active acoustic sensing is used for personal identification of the user.
  • the reaction signal received from the measurement unit 20 is a vibration signal obtained by detecting vibration propagating inside the user's body at the site where the measurement unit 20 is worn, and the feature quantity can be a spectrogram representing the frequency characteristics of this vibration signal, generated by, for example, performing an FFT (Fast Fourier Transform) on the signal. In this way, a spectrogram can be generated as the feature quantity.
  • the identification execution determination unit 16 obtains the standard deviation at each frequency of the spectrogram from the spectrograms for a certain period of time, using the average dB value for each frequency over that period, and determines that the state of the wearing site is stable when the standard deviation at each frequency is equal to or less than the set threshold. In this manner, the stability can be calculated easily, and a stable state in which the vibration characteristic values do not change significantly can be determined based on that stability.
  • the wearing user identification device 10 further includes a model storage unit 15, which is a database in which feature amounts for each of a plurality of users to be registered are registered in advance.
  • the user identification unit 17 identifies, from among the plurality of registration target users registered in the model storage unit 15, the user having a feature amount corresponding to the feature amount generated by the feature amount generation unit 13 from the reaction signal received from the measurement unit 20, as the user to be identified. In this way, by registering the feature amounts of a plurality of registration target users in the model storage unit 15, the user to be identified can be easily identified based on the feature amount.
  • the model storage unit 15 stores a model that receives a feature amount as input and outputs, for at least one user to be registered, a value based on the difference between that user's feature amount and the input feature amount, in association with an identifier uniquely assigned to the user to be registered.
  • this model is learned from the feature amounts generated by the feature amount generation unit 13 from the reaction signals received from the measurement unit 20 for each of a plurality of registered users.
  • the user identification unit 17 inputs the spectrogram, which is the feature amount generated for the user to be identified, to the model stored in the model storage unit 15, and determines the identifier associated with the smallest of the values output from the model as the user ID of the user to be identified, thereby identifying that user. Therefore, the user to be identified can be appropriately identified using the spectrograms, which are the feature amounts of the registered users.
  • the wearable user identification system 1 according to one embodiment includes the wearable user identification device 10 and a measurement unit 20 whose piezoelectric element generates a first vibration, applies it to the wearing site of the user's body, and acquires, as a measurement signal, a vibration signal corresponding to a second vibration, namely the part of the applied first vibration that has propagated inside the body. Therefore, by having each user to be identified wear the measurement unit 20, each user can be individually identified.
  • the user's individual identification is performed based on the feature quantity only in a stable state where the vibration characteristic value does not change significantly.
  • the model learning unit 14 may likewise perform learning based on feature amounts only in a stable state, not only in the case of personal identification of the user. As a result, only stable data are used for learning.
  • the audio interface unit 30 is arranged between the wearing user identification device 10 and the measuring unit 20, but the audio interface unit 30 may instead be incorporated in either the wearing user identification device 10 or the measuring unit 20.
  • the processing function unit of the wearable user identification device 10 has been described as being composed of one computer, but it may be composed of a plurality of computers by arbitrary division.
  • the model learning unit 14 and the model storage unit 15 may be configured in a computer or server device, separate from the computer constituting the wearing user identification device 10, that can communicate with it via the network NW through the communication interface 104.
  • the method described in the above embodiment can be stored, as a program (software means) executable by a computer, in a recording medium such as a magnetic disk (floppy (registered trademark) disk, hard disk, etc.), an optical disk (CD-ROM, DVD, MO, etc.), or a semiconductor memory (ROM, RAM, flash memory, etc.), or can be transmitted and distributed via a communication medium.
  • the programs stored on the medium also include a setting program for configuring software means (including not only execution programs but also tables and data structures) to be executed by the computer.
  • a computer that realizes this apparatus reads the program recorded on the recording medium and, in some cases, constructs the software means by means of the setting program; its operation is controlled by this software means to execute the above-described processes.
  • the term "recording medium” as used herein is not limited to those for distribution, and includes storage media such as magnetic disks, semiconductor memories, etc. provided in computers or devices connected via a network.
  • the present invention is not limited to the above embodiments, and can be modified in various ways without departing from the gist of the invention at the implementation stage.
  • each embodiment may be implemented in combination as much as possible, and in that case, the effect of the combination can be obtained.
  • the above-described embodiments include inventions at various stages, and various inventions can be extracted by appropriately combining a plurality of disclosed constituent elements.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A wearer identification device according to one embodiment of the present invention comprises a feature quantity generation unit, a determination unit, and an identification unit. The feature quantity generation unit receives, from a sensor worn on a body part of a user to be identified, a measurement signal corresponding to the vibration characteristics of the user's body part measured by the sensor, and generates a feature quantity representing the vibration characteristics from the measurement signal. The determination unit determines, based on the magnitude of fluctuations in the feature quantity generated by the feature quantity generation unit, whether the state of the part on which the sensor is worn by the user is stable. The identification unit performs personal identification of the user based on the feature quantity generated by the feature quantity generation unit when the determination unit determines that the state of the part on which the sensor is worn is stable.
PCT/JP2021/042778 2021-11-22 2021-11-22 Dispositif d'identification de porteur, système d'identification de porteur, procédé d'identification de porteur et programme d'identification de porteur WO2023089822A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2021/042778 WO2023089822A1 (fr) 2021-11-22 2021-11-22 Dispositif d'identification de porteur, système d'identification de porteur, procédé d'identification de porteur et programme d'identification de porteur
JP2023562093A JPWO2023089822A1 (fr) 2021-11-22 2021-11-22

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/042778 WO2023089822A1 (fr) 2021-11-22 2021-11-22 Dispositif d'identification de porteur, système d'identification de porteur, procédé d'identification de porteur et programme d'identification de porteur

Publications (1)

Publication Number Publication Date
WO2023089822A1 true WO2023089822A1 (fr) 2023-05-25

Family

ID=86396547

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/042778 WO2023089822A1 (fr) 2021-11-22 2021-11-22 Dispositif d'identification de porteur, système d'identification de porteur, procédé d'identification de porteur et programme d'identification de porteur

Country Status (2)

Country Link
JP (1) JPWO2023089822A1 (fr)
WO (1) WO2023089822A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009211370A (ja) * 2008-03-04 2009-09-17 Oki Electric Ind Co Ltd Iris authentication device
WO2019082988A1 (fr) * 2017-10-25 2019-05-02 NEC Corporation Biometric authentication device, biometric authentication system, biometric authentication method, and recording medium
WO2021048974A1 (fr) * 2019-09-12 2021-03-18 NEC Corporation Information processing device, information processing method, and storage medium


Also Published As

Publication number Publication date
JPWO2023089822A1 (fr) 2023-05-25

Similar Documents

Publication Publication Date Title
Ferlini et al. EarGate: gait-based user identification with in-ear microphones
KR101497644B1 (ko) Voice and position localization
JP6943248B2 (ja) Personal authentication system, personal authentication device, personal authentication method, and personal authentication program
EP2915165B1 (fr) Système et procédé de détection de signaux acoustiques liés à la parole par l'utilisation d'un microphone laser
CN103344959B (zh) 一种超声定位系统和具有定位功能的电子装置
US20150215723A1 (en) Wireless speaker system with distributed low (bass) frequency
US10932714B2 (en) Frequency analysis feedback systems and methods
US11076243B2 (en) Terminal with hearing aid setting, and setting method for hearing aid
US10418965B2 (en) Positioning method and apparatus
US10625670B2 (en) Notification device and notification method
KR20180099721A (ko) 소리 식별을 위한 크라우드 소스 데이터베이스
JP6767322B2 (ja) Output control device, output control method, and output control program
US20230230599A1 (en) Data augmentation system and method for multi-microphone systems
WO2023089822A1 (fr) Dispositif d'identification de porteur, système d'identification de porteur, procédé d'identification de porteur et programme d'identification de porteur
AU2018322409B2 (en) System and method for determining a location of a mobile device based on audio localization techniques
WO2020209337A1 (fr) Dispositif d'identification, procédé d'identification, programme de traitement d'identification, dispositif de génération, procédé de génération et programme de traitement de génération
Diaconita et al. Do you hear what i hear? using acoustic probing to detect smartphone locations
JP4944219B2 (ja) Sound output device
US20160125711A1 (en) Haptic microphone
JP7035525B2 (ja) Alert system, information processing device, information processing method, and program
JP7501619B2 (ja) Identification device, identification method, and identification program
US11237669B2 (en) Method and apparatus for improving the measurement of the timing of touches of a touch screen
Zhou et al. Acoustic emission source localization using coupled piezoelectric film strain sensors
Campeiro et al. Damage detection in noisy environments based on EMI and Lamb waves: A comparative study
US9532155B1 (en) Real time monitoring of acoustic environments using ultrasound

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21964837

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023562093

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE