WO2023089822A1 - Wearer identification device, wearer identification system, wearer identification method, and wearer identification program - Google Patents
- Publication number
- WO2023089822A1 (PCT/JP2021/042778)
- Authority
- WO
- WIPO (PCT)
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/117—Identification of persons
Definitions
- Embodiments of the present invention relate to a wearer identification device, a wearer identification system, a wearer identification method, and a wearer identification program.
- a known technique uses a pair of piezoelectric elements, such as piezo elements, one serving as a speaker and the other as a microphone, to measure the vibration characteristics of the target on which they are installed and to recognize the state of the target from the measured vibration characteristics.
- this technique, called active acoustic sensing, can estimate, for example, the gripped state of an object (see, for example, Non-Patent Document 1).
- in active acoustic sensing, sound waves in the inaudible range are transmitted from the speaker to the target, the vibration propagated through the target is received by the microphone, and the frequency characteristics of the received signal are analyzed. The technique exploits the fact that the vibration characteristics of the object on which the speaker and microphone are installed change with changes in the object's internal structure and boundary conditions.
- however, unlike static objects, the part of the body to which the speaker and microphone are attached may move.
- when the wearing part moves, the internal structure of the body part changes, which changes the measured vibration characteristic values. This noise may cause personal identification of the user to fail.
- the present invention seeks to provide a technology that can reduce erroneous determinations when active acoustic sensing is used for personal identification of users.
- a wearing user identification device includes a feature amount generation unit, a determination unit, and an identification unit.
- the feature amount generation unit receives, from a sensor attached to the body part of the user to be identified, a measurement signal corresponding to the vibration characteristics of that body part as measured by the sensor, and generates from the measurement signal a feature amount representing the vibration characteristics.
- the determination unit determines whether the state of the site where the sensor is attached to the user is stable, based on the magnitude of variation in the feature amount generated by the feature amount generation unit.
- the identification unit performs personal identification of the user based on the feature amount generated by the feature amount generation unit when the determination unit determines that the state of the wearing site is stable.
- according to this configuration, personal identification of the user is performed only in a stable state in which the vibration characteristic values do not change greatly, so that erroneous determinations can be reduced when active acoustic sensing is used for personal identification of the user.
- FIG. 1 is a block diagram showing an example configuration of a wearable user identification system including a wearable user identification device according to an embodiment of the present invention.
- FIG. 2 is a plan view showing the configuration of the measurement unit worn by the user.
- FIG. 3 is a schematic diagram showing a state in which a user wears the measurement unit.
- FIG. 4 is a diagram showing an example of a spectrogram.
- FIG. 5 is a block diagram showing an example of the hardware configuration of the wearing user identification device.
- FIG. 6 is a flow chart showing an example of a learning processing operation related to learning of a classification model in the wearing user identification device.
- FIG. 7 is a flow chart showing an example of an identification processing operation related to personal identification of a user in the wearable user identification device.
- FIG. 1 is a block diagram showing an example of the configuration of a wearable user identification system 1 according to one embodiment of the present invention.
- the wearing user identification system 1 includes a wearing user identification device 10 according to one embodiment of the present invention, a measurement section 20 and an audio interface section 30 .
- the wearing user identification device 10 includes a signal generation unit 11 , a signal storage unit 12 , a feature amount generation unit 13 , a model learning unit 14 , a model storage unit 15 , an identification execution determination unit 16 and a user identification unit 17 .
- the measurement unit 20 is a sensor for measuring vibration characteristics of a measurement target and a part for attaching the sensor to the body of a target user, and includes a signal generation unit 21 and a signal reception unit 22 .
- the audio interface section 30 is an interface between the wearing user identification device 10 and the measurement section 20 and has a signal control section 31 and a signal amplification section 32 .
- the connection between the signal generation unit 11 and the signal control unit 31, and between the signal control unit 31 and the signal generation unit 21, may be wired or wireless; any connection form that can transmit and receive signals may be used.
- the signal generation unit 11 of the wearing user identification device 10 generates an acoustic signal based on arbitrarily set parameters.
- the acoustic signal is, for example, an ultrasonic wave swept from 20 kHz to 40 kHz.
- the settings of the acoustic signal, such as whether or not to sweep and whether to use other frequency bands, do not matter.
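To make the swept acoustic signal concrete, the following Python sketch generates a 20 kHz to 40 kHz linear chirp with NumPy. The sampling rate and sweep duration are hypothetical choices; the patent leaves these parameters arbitrary.

```python
import numpy as np

# Hypothetical parameters: the embodiment leaves sampling rate and
# sweep duration arbitrary.
FS = 96_000     # sampling rate [Hz]; must exceed twice the 40 kHz top frequency
DURATION = 0.1  # one sweep lasts 100 ms

def make_drive_signal(f0=20_000.0, f1=40_000.0, fs=FS, duration=DURATION):
    """Linear frequency sweep (chirp) from f0 to f1 over `duration` seconds."""
    t = np.arange(int(fs * duration)) / fs
    # Instantaneous phase of a linear chirp:
    # 2*pi*(f0*t + (f1 - f0)/(2*duration) * t**2)
    phase = 2 * np.pi * (f0 * t + (f1 - f0) / (2 * duration) * t ** 2)
    return np.sin(phase)

drive = make_drive_signal()
```

A non-swept variant would simply fix `f1 = f0`; as the text notes, the exact settings do not matter.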
- the signal control unit 31 of the audio interface unit 30 generates a drive signal from the acoustic signal generated by the signal generation unit 11 based on the preset parameters, and vibrates the target through the signal generation unit 21 of the measurement unit 20.
- the vibration at this time may be mixed with other frequencies, as long as it includes the frequencies contained in the generated acoustic signal.
- the signal generator 21 and the signal receiver 22 of the measurement unit 20 are composed of two piezoelectric elements that do not contact each other.
- a piezoelectric element can be realized by, for example, a piezo element.
- One piezoelectric element serves as the signal generating section 21 that generates vibration having the same frequency characteristics as the drive signal generated by the signal control section 31 of the audio interface section 30 .
- the other piezoelectric element serves as a signal receiving section 22 that receives vibration.
- the signal receiving unit 22 acquires vibrations propagating inside and on the surface of the object on which it is installed.
- at this time, the user's living body, which is the object of measurement, functions as a propagation path, and the frequency characteristics of the acquired vibration change according to this propagation path.
- the signal receiver 22 transmits the received vibration signal (hereinafter referred to as a reaction signal) to the signal amplifier 32 .
- the signal generating unit 21 and the signal receiving unit 22 may be of any form and material as long as they are mechanisms capable of propagating vibrations while being in contact with the target living body.
- FIG. 2 is a plan view showing the configuration of the measurement unit 20
- FIG. 3 is a schematic diagram showing a state in which the measurement unit 20 is worn by a user who is a measurement target.
- in this embodiment, the measurement unit 20 is configured as a band-type sensor; however, other attachment methods, such as adhesive tape for living bodies, may be used as long as the signal generation unit 21 and the signal reception unit 22 can be fixed to the user's skin while maintaining a fixed distance between them.
- the two piezoelectric elements, which serve as the signal generating section 21 and the signal receiving section 22, are attached to the fixing portion 23 at a fixed distance from each other so that they do not come into contact.
- the fixing portion 23 also functions as a reinforcing member that reinforces the signal generating portion 21 and the signal receiving portion 22 so that they can withstand continued use.
- a band 24 and a square ring 25 are attached at positions facing the fixing portion 23, with the signal generating portion 21 and the signal receiving portion 22 interposed between them.
- a hook-and-loop fastener 26 is provided on the back surface of the band 24 .
- the measurement unit 20 having such a configuration is worn by adjusting the length of the band 24, wrapping it around the user's wrist, and fixing it with the hook-and-loop fastener 26. When performing personal authentication, the wearing position does not matter as long as it is consistent for each individual.
- the signal amplification section 32 of the audio interface section 30 amplifies the reaction signal acquired by the signal reception section 22 of the measurement section 20 and transmits it to the wearing user identification device 10 .
- the signal is amplified by the signal amplifier 32 because the vibration is attenuated while passing through the object to be measured and must be amplified to a level that allows processing.
- the reaction signal transmitted from the signal amplification section 32 of the audio interface section 30 is stored in the signal storage section 12 .
- the feature amount generation unit 13 extracts the reaction signal stored in the signal storage unit 12 at fixed time intervals and applies, for example, an FFT (Fast Fourier Transform) to the extracted reaction signal to generate a spectrogram, which is a feature quantity representing the acoustic frequency characteristics of the living body to be measured.
- FIG. 4 is a diagram showing an example of this spectrogram.
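The FFT-based feature extraction can be sketched as follows, assuming NumPy; the frame length, hop size, and Hann window are illustrative assumptions rather than values given in the embodiment.

```python
import numpy as np

def spectrogram(reaction_signal, frame_len=1024, hop=512):
    """Split the reaction signal into overlapping frames and FFT each frame.
    Returns magnitudes in dB with shape (n_freq_bins, n_frames)."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(reaction_signal) - frame_len + 1, hop):
        frame = reaction_signal[start:start + frame_len] * window
        mag = np.abs(np.fft.rfft(frame))          # one-sided spectrum
        frames.append(20 * np.log10(mag + 1e-12))  # to dB, avoiding log(0)
    return np.array(frames).T

# Synthetic stand-in for a received reaction signal (30 kHz tone, 96 kHz rate).
x = np.sin(2 * np.pi * 30_000 * np.arange(9600) / 96_000)
S = spectrogram(x)
print(S.shape)  # -> (513, 17): frame_len // 2 + 1 bins, 17 frames
```

Each column of `S` corresponds to one fixed time interval, matching the per-interval extraction described above.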
- when executing the learning processing operation related to learning of the classification model, the feature amount generation unit 13 generates teacher data consisting of pairs of the generated spectrogram and the user ID corresponding to the user who is the measurement target, and outputs the teacher data to the model learning unit 14. Note that the teacher data may also be generated by extraction from a registration database created in advance. When executing the identification processing operation related to personal identification of the user, the feature amount generation unit 13 outputs the generated spectrogram to the identification execution determination unit 16.
- the model learning unit 14 generates and trains a classification model that takes the spectrogram in the teacher data obtained from the feature amount generation unit 13 as input and outputs the user ID.
- the model learning unit 14 registers the model itself or the parameters of the model obtained by this learning process in the model storage unit 15, which is a model database.
- any type of classification model and learning library may be used. For example, using a generally known machine learning library, a classification algorithm such as an SVM (Support Vector Machine) or a neural network may be trained on the teacher data, with parameter tuning and the like, so as to obtain optimal output.
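As a hedged illustration of such a classifier, the sketch below trains an SVM on synthetic stand-in data using the scikit-learn library; the feature dimensionality, user IDs, and kernel choice are assumptions made only for the example.

```python
import numpy as np
from sklearn.svm import SVC

# Toy teacher data: each row stands in for a flattened spectrogram feature
# vector, labelled with a user ID. Shapes and values are illustrative only.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=uid, scale=0.3, size=(20, 8)) for uid in range(3)])
y = np.repeat(["user-0", "user-1", "user-2"], 20)

# Train an RBF-kernel SVM mapping spectrogram features to user IDs.
clf = SVC(kernel="rbf")
clf.fit(X, y)

# A probe feature vector resembling user 1 is classified accordingly.
probe = rng.normal(loc=1, scale=0.3, size=(1, 8))
print(clf.predict(probe)[0])  # -> user-1
```

In practice the rows of `X` would come from the spectrograms generated by the feature amount generation unit 13, and hyperparameters would be tuned as the text suggests.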
- the identification execution determination unit 16 determines whether or not the user identification processing by the user identification unit 17 is to be executed.
- the identification execution determination unit 16 determines whether the state of the site where the user wears the measurement unit 20 is stable, based on whether the standard deviation at each frequency of the spectrogram exceeds an arbitrarily set threshold. That is, when the standard deviation at each frequency of the spectrogram does not exceed the threshold, the identification execution determination unit 16 considers the state of the wearing site stable and causes the user identification unit 17 to perform its processing; specifically, it outputs the spectrogram acquired from the feature amount generation unit 13 to the user identification unit 17. Conversely, when the standard deviation at a frequency of the spectrogram exceeds the threshold, the identification execution determination unit 16 determines that the user wearing the measurement unit 20 is moving and that stability is low, and does not cause the user identification unit 17 to perform its processing.
- in that case, the identification execution determination section 16 does not output the spectrogram to the user identification section 17. Note that this determination may be limited to frequency components whose values are at or above a certain level (for example, -50 dB or more).
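The standard-deviation test might look like the following sketch; the 3 dB threshold is a hypothetical choice (the text says only that the threshold is arbitrarily set), while the -50 dB floor follows the example given above.

```python
import numpy as np

# Both constants are illustrative assumptions; the embodiment only says the
# threshold is arbitrarily set and gives -50 dB as an example floor.
STD_THRESHOLD_DB = 3.0
MIN_LEVEL_DB = -50.0

def wearing_site_is_stable(spectrogram_db):
    """spectrogram_db: (n_freq_bins, n_frames) array of levels in dB.
    Stable when the standard deviation over time at every sufficiently
    strong frequency stays at or below the threshold."""
    strong = spectrogram_db.max(axis=1) >= MIN_LEVEL_DB  # -50 dB rule
    stds = spectrogram_db[strong].std(axis=1)
    return bool(np.all(stds <= STD_THRESHOLD_DB))

# A steady spectrogram (constant over time) is judged stable; adding large
# frame-to-frame variation models a moving wearing site.
steady = np.tile(np.linspace(-40.0, -10.0, 4)[:, None], (1, 10))
moving = steady + np.random.default_rng(1).normal(0.0, 10.0, steady.shape)
```

Only when this gate passes would the spectrogram be handed on for classification.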
- the user identification unit 17 inputs the spectrogram, which is the feature amount acquired from the identification execution determination unit 16, to the classification model registered in the model storage unit 15, and obtains the numerical output of the classification model used to determine the individual user.
- for the input spectrogram, the classification model outputs, as a reference value, a score indicating the degree of similarity to each label (user ID) of the classification model.
- for example, if the similarity is normalized to a value between '0' and '1', the score can be output as '1 - (similarity)'.
- when Random Forest is used as the algorithm in the model learning unit 14, data are randomly extracted from the teacher data to generate a plurality of decision trees, and each decision tree outputs a determination result. Since a larger number of matching determination results is better, the classification model outputs "(the number of determinations) - (the number of matching determination results)" as a reference value.
- the reference value may be obtained by subtracting the normalized similarity from "1", or may be obtained by converting the similarity using the reciprocal of the similarity.
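A rough sketch of this vote-counting scheme, using scikit-learn's `RandomForestClassifier` on synthetic stand-in data (the dataset shapes, user count, and tree count are assumptions for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in teacher data: 3 users, 8-dimensional feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(uid, 0.3, size=(20, 8)) for uid in range(3)])
y = np.repeat([0, 1, 2], 20)  # user IDs as labels

forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
probe = rng.normal(2, 0.3, size=(1, 8))  # sample resembling user 2

# Count the votes each label receives across the individual decision trees,
# then express "(number of determinations) - (matching results)" per label.
votes = np.bincount([int(t.predict(probe)[0]) for t in forest.estimators_],
                    minlength=3)
reference_values = forest.n_estimators - votes  # smaller means more similar
best = int(np.argmin(reference_values))
```

The label with the smallest reference value (most tree votes) is the candidate user.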
- the user identification unit 17 uses the obtained list of reference values to determine the user ID with the smallest reference value, that is, the one most similar to the input.
- a threshold for determination may be set for the degree of similarity, and determination may be made only when the reference value is smaller than the threshold.
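The final decision step, picking the smallest reference value with an optional rejection threshold, can be sketched as follows; the threshold value and user IDs are hypothetical.

```python
def identify_user(reference_values, threshold=0.5):
    """reference_values: dict mapping user ID -> reference value, where
    smaller means more similar (e.g. 1 - normalized similarity).
    Returns the best-matching user ID, or None when no value beats the
    rejection threshold."""
    best_id = min(reference_values, key=reference_values.get)
    if reference_values[best_id] < threshold:
        return best_id
    return None  # no registered user is similar enough

scores = {"user-A": 0.12, "user-B": 0.47, "user-C": 0.81}
print(identify_user(scores))       # -> user-A
print(identify_user(scores, 0.1))  # -> None (rejected)
```

Rejecting weak matches in this way prevents an unregistered wearer from being forced onto the nearest registered user ID.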
- FIG. 5 is a diagram showing an example of the hardware configuration of the wearing user identification device 10.
- the wearing user identification device 10 is configured by a computer such as a microcomputer or a personal computer, and has a hardware processor 101 such as a CPU (Central Processing Unit).
- the processor 101 may include multiple CPUs.
- a program memory 102 a data memory 103 , a communication interface 104 and an input/output interface 105 are connected to the processor 101 via a bus 106 .
- "interface" is abbreviated as "IF".
- the communication interface 104 can include, for example, one or more wired or wireless communication modules.
- the example shown in FIG. 5 shows two communication modules 1041 and 1042 .
- the communication module 1041 is a communication module using short-range wireless technology such as Bluetooth (registered trademark), and transmits and receives signals to and from the signal control section 31 and the signal amplification section 32 of the audio interface section 30 .
- the communication module 1042 can transmit and receive signals to and from the signal control section 31 and the signal amplification section 32 of a remote audio interface section 30 via the network NW.
- the network NW consists of an IP network, including the Internet, and an access network for accessing this IP network.
- the wearing user identification device 10 can also acquire reaction signals from the plurality of measuring units 20 via the plurality of audio interface units 30 and identify the plurality of wearing users who wear the respective measuring units 20 .
- an input unit 107 and a display unit 108 are connected to the input/output interface 105 .
- for the input unit 107 and the display unit 108, a so-called tablet-type input/display device can be used, in which an input detection sheet using a capacitive or pressure-sensitive method is arranged on the display screen of a display device using, for example, liquid crystal or organic EL (Electro Luminescence). Note that the input unit 107 and the display unit 108 may be configured as independent devices.
- the input/output interface 105 inputs operation information input from the input unit 107 to the processor 101 and displays display information generated by the processor 101 on the display unit 108 .
- the input unit 107 and the display unit 108 need not be connected to the input/output interface 105; if they are provided with a communication unit that connects to the communication interface 104 directly or via the network NW, they can exchange information with the processor 101.
- the input/output interface 105 may have a read/write function for a recording medium such as a semiconductor memory (for example, a flash memory), or a function to connect to a reader/writer that reads from and writes to such a recording medium. As a result, a recording medium detachable from the wearing user identification device 10 can be used as a model database holding classification models.
- the input/output interface 105 may further have a connection function with other devices.
- the program memory 102 uses, as a non-transitory tangible computer-readable storage medium, for example, a combination of a non-volatile memory that can be written and read at any time, such as an HDD (Hard Disk Drive) or SSD (Solid State Drive), and a non-volatile memory such as a ROM (Read Only Memory).
- the program memory 102 stores programs necessary for the processor 101 to execute various control processes according to one embodiment. That is, each of the processing function units of the signal generation unit 11, the feature amount generation unit 13, the model learning unit 14, the identification execution determination unit 16, and the user identification unit 17 can be implemented by the processor 101 reading and executing the programs stored in the program memory 102. Some or all of these processing function units may be implemented in various other forms, including integrated circuits such as ASICs (Application Specific Integrated Circuits) or FPGAs (field-programmable gate arrays).
- the data memory 103 is a tangible computer-readable storage medium, for example, a combination of the above nonvolatile memory and a volatile memory such as RAM (Random Access Memory).
- This data memory 103 is used to store various data acquired and created in the process of performing various processes. That is, in the data memory 103, an area for storing various data is appropriately secured in the process of performing various processes.
- the data memory 103 can be provided with, for example, a signal storage section 1031, a model storage section 1032, an identification result storage section 1033, and a temporary storage section 1034.
- the signal storage section 1031 stores the reaction signal transmitted from the signal amplification section 32 of the audio interface section 30 . That is, the signal storage section 12 can be configured in this signal storage section 1031 .
- the model storage unit 1032 stores the classification model learned by the model learning unit 14. That is, the model storage unit 15 can be configured in this model storage unit 1032 .
- the identification result storage unit 1033 stores output information obtained when the processor 101 operates as the user identification unit 17 .
- the temporary storage unit 1034 stores spectrograms, teacher data, classification models, reference values, and other data acquired or generated when the processor 101 operates as the feature amount generation unit 13, the model learning unit 14, the identification execution determination unit 16, and the user identification unit 17.
- prior to user identification, the wearable user identification device 10 first generates a classification model associated with the user IDs, using a sensor capable of measuring the state of each user to be identified, and stores the generated classification model in the model storage unit 15 as registration data.
- the signal generator 21 and the signal receiver 22 of the measurement unit 20 are worn on the wrist of the user who is the person to be identified using the band 24 .
- the vibration generated by the signal generator 21 may be of any form and type as long as it has frequency characteristics, as an acoustic signal does.
- here, an acoustic signal will be described as an example.
- FIG. 6 is a flowchart showing an example of a learning processing operation related to learning of a classification model in the wearing user identification device 10.
- this flowchart shows the processing operation of the processor 101 of the computer functioning as part of the wearing user identification device 10, specifically as the signal generator 11, the feature amount generator 13, and the model learning unit 14.
- for example, when the communication interface 104 receives a predetermined installation completion notification transmitted via the network NW from an information processing device such as a smartphone operated by the user, the processor 101 starts the operation shown in this flowchart.
- the processor 101 functions as the signal generator 11 and generates an acoustic signal (driving signal) based on arbitrarily set parameters (step S101).
- the driving signal is, for example, an ultrasonic wave sweeping from 20 kHz to 40 kHz.
- the settings of the acoustic signal such as whether or not to sweep, whether or not to use other frequency bands, etc., do not matter.
- the generated drive signal is transmitted to the audio interface section 30 via the communication interface 104 .
- a signal generation module that generates a drive signal under the control of the processor 101 may be prepared separately, and the drive signal generated there may be transmitted to the audio interface section 30 via the communication interface 104 .
- the audio interface section 30 transmits this drive signal to the signal generation section 21 of the measurement section 20 .
- This driving signal causes the body of the user to be registered to vibrate through the signal generator 21 .
- the signal receiving unit 22 of the measuring unit 20 acquires the vibration that is given to the living body of the user to be registered by the signal generating unit 21 and propagates inside and on the surface of the living body.
- when the vibration applied by the piezoelectric element of the signal generating section 21 propagates to the piezoelectric element of the signal receiving section 22, the living body of the user to be registered functions as a propagation path, and the frequency characteristics of the vibration change according to this propagation path. These frequency characteristics differ from person to person.
- the signal receiving section 22 detects the propagated vibration and transmits a reaction signal indicated by the detected vibration to the audio interface section 30 .
- the signal amplifying section 32 of the audio interface section 30 amplifies the reaction signal transmitted from the signal receiving section 22 of the measuring section 20 and transmits it to the wearing user identification device 10 .
- the reaction signal transmitted from the audio interface unit 30 is received by the communication interface 104.
- the processor 101 stores the received reaction signal in the signal storage section 1031 of the data memory 103 (step S102).
- the processor 101 functions as the feature generator 13 and performs the following processing operations.
- the processor 101 extracts the reaction signal stored in the signal storage unit 1031 at fixed time intervals; the number of signal samples extracted does not matter.
- the extracted reaction signal is stored in temporary storage section 1034 of data memory 103 .
- the processor 101 generates a spectrogram, which is a feature quantity representing the acoustic frequency characteristics of the living body, from the extracted reaction signal stored in the temporary storage unit 1034 (step S103).
- the generated spectrogram is stored in temporary storage section 1034 of data memory 103 .
- the processor 101 assigns a user ID, which is a unique identifier, to the generated spectrogram, and generates teacher data that combines these spectrograms and the user ID (step S104).
- the generated teacher data is stored in temporary storage section 1034 of data memory 103 .
- the processor 101 may extract registration data created in advance from the model storage unit 15 configured in the model storage unit 1032 of the data memory 103, and use it to generate teacher data.
- the processor 101 functions as the model learning unit 14 and performs the following processing operations.
- the processor 101 generates and trains a classification model that takes the spectrogram in the teacher data as input and outputs the user ID in the teacher data as a label, together with a reference value indicating the difference from the input (step S105).
- the processor 101 registers the classification model itself, or the parameters of the classification model, obtained by this learning process in the model storage unit 15 configured in the model storage unit 1032 of the data memory 103 (step S106).
- thereafter, the processor 101 stops generating the drive signal and ends the transmission of the drive signal to the audio interface unit 30 by the communication interface 104 (step S107). The learning processing operation shown in this flowchart then ends.
- the wearing user identification device 10 inputs a spectrogram, which is a feature amount obtained from a user to be identified who wears the measurement unit 20, into a classification model registered in the model storage unit 15 or the like, and identifies an individual user.
- FIG. 7 is a flowchart showing an example of an identification processing operation related to personal identification of the user in the wearable user identification device 10.
- this flowchart shows the processing operation of the processor 101 of the computer functioning as part of the wearable user identification device 10, specifically as the signal generation unit 11, the feature amount generation unit 13, the identification execution determination unit 16, and the user identification unit 17.
- for example, when the communication interface 104 receives a predetermined identification start notification transmitted via the network NW from an information processing device such as a smartphone operated by a remote user to be identified, the processor 101 starts the operation shown in this flowchart.
- the processor 101 functions as the signal generator 11 and generates a drive signal based on arbitrarily set parameters (step S201).
- the generated drive signal is transmitted to the audio interface section 30 via the communication interface 104 .
- a signal generation module that generates a drive signal under the control of the processor 101 may be prepared separately, and the drive signal generated there may be transmitted to the audio interface section 30 via the communication interface 104 .
- the audio interface section 30 transmits this drive signal to the signal generation section 21 of the measurement section 20 .
- This driving signal causes the body of the user to be identified to vibrate through the signal generator 21.
- the vibration at this time may include other frequencies, as long as it includes the frequencies contained in the vibration used to generate the feature amounts in the registration data of the learned users registered in the model storage unit 15 (hereinafter referred to as registered users).
- the signal receiving unit 22 of the measuring unit 20 acquires the vibration that is given to the living body of the user to be identified by the signal generating unit 21 and propagates inside and on the surface of the living body.
- here, the body of the user to be identified functions as a propagation path, and the frequency characteristics of the vibration change according to this propagation path.
- the signal reception unit 22 detects the propagated vibration and transmits a reaction signal representing the detected vibration to the audio interface unit 30.
- the signal amplification unit 32 of the audio interface unit 30 amplifies the reaction signal transmitted from the signal reception unit 22 of the measurement unit 20 and transmits it to the wearing user identification device 10.
- the reaction signal transmitted from the audio interface unit 30 is received by the communication interface 104.
- the processor 101 stores the received reaction signal in the signal storage unit 1031 of the data memory 103 (step S202).
- the processor 101 functions as the feature amount generation unit 13 and extracts the reaction signal stored in the signal storage unit 1031 for each fixed time interval; the number of samples in each extracted segment does not matter.
- the extracted reaction signal is stored in the temporary storage unit 1034 of the data memory 103.
- the processor 101 performs, for example, an FFT on the extracted reaction signal stored in the temporary storage unit 1034 to generate a spectrogram, which is a feature amount representing the acoustic frequency characteristics of the living body (step S203).
- the generated spectrogram is stored in the temporary storage unit 1034 of the data memory 103.
- the processor 101 functions as the identification execution determination unit 16 and performs the following processing operations.
- the processor 101 calculates the stability of the spectrogram within a set fixed time from the spectrogram stored in the temporary storage unit 1034 as follows.
- the processor 101 determines whether spectrograms for a certain period of time, for example, 2 seconds, have been generated (step S204). When determining that the spectrogram for 2 seconds has not yet been generated (NO in step S204), the processor 101 proceeds to the process of step S201 and repeats the above-described processing operations.
- when determining that the spectrograms for 2 seconds have been generated (YES in step S204), the processor 101 calculates the degree of stability (step S205). For example, the processor 101 obtains the average dB value for each frequency over the 2 seconds from the 2-second spectrogram stored in the temporary storage unit 1034, and uses these average values to calculate the standard deviation at each frequency of the spectrogram as the degree of stability.
- the processor 101 determines whether the state of the site where the user wears the measurement unit 20 is stable, based on whether the standard deviation at each frequency of the spectrogram is equal to or less than the set arbitrary threshold value (step S206).
- when determining that the state is not stable, that is, that the user wearing the measurement unit 20 is moving (NO in step S206), the processor 101 deletes the oldest spectrogram from among the spectrograms within the certain period of time stored in the temporary storage unit 1034 (step S207). After that, the processor 101 returns to the process of step S201 and repeats the above-described processing operations.
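- The stability check of steps S204 to S207 can be sketched as follows; the window length in frames, the dB-domain spectrogram layout (n_frames × n_freqs), and the threshold value are illustrative assumptions, not values fixed by the embodiment:

```python
import numpy as np

def is_stable(window_db, threshold):
    """window_db: spectrogram frames in dB, shape (n_frames, n_freqs).

    The average dB per frequency over the window is computed first, then
    the standard deviation at each frequency; the wearing site is judged
    stable only if every per-frequency deviation is <= threshold.
    """
    mean_per_freq = window_db.mean(axis=0)  # average dB for each frequency
    std_per_freq = np.sqrt(((window_db - mean_per_freq) ** 2).mean(axis=0))
    return bool((std_per_freq <= threshold).all())

def push_frame(frame_db, window, max_frames, threshold):
    """Sliding window over spectrogram frames, mirroring the flowchart."""
    window.append(frame_db)
    if len(window) < max_frames:
        return None                # not enough frames yet (step S204: NO)
    if is_stable(np.stack(window), threshold):
        return True                # proceed to identification (step S208)
    window.pop(0)                  # step S207: discard the oldest spectrogram
    return False
```

Returning `None`/`False`/`True` separates the three flowchart branches (keep measuring, discard and retry, identify).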
- when determining that the state of the site where the user wears the measurement unit 20 is stable (YES in step S206), the processor 101 functions as the user identification unit 17 and performs the following processing operations. First, the processor 101 performs personal identification of the user (step S208). That is, the processor 101 inputs one of the spectrograms generated in step S203 and stored in the temporary storage unit 1034, for example the latest spectrogram, into the classification model registered in the model storage unit 15 configured in the model storage unit 1032 of the data memory 103, and obtains a list of reference values from the classification model.
- the acquired list of reference values is stored in the temporary storage unit 1034 of the data memory 103.
- the processor 101 identifies the smallest reference value from the list of reference values stored in the temporary storage unit 1034.
- the processor 101 determines that the registered user associated, in the model storage unit 15, with the feature amount corresponding to the identified smallest reference value is the similar user.
- the processor 101 stores the user ID of the determined registered user in the identification result storage unit 1033 of the data memory 103 as the personal identification result for the user to be identified. Note that in this determination process, a determination threshold may be set for the degree of similarity, and a similar user may be determined only when the identified reference value is smaller than this threshold.
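- The selection of the similar user from the reference-value list might look like the sketch below; "smaller reference value = more similar" follows the description, while the optional rejection threshold value itself is hypothetical:

```python
def identify_user(reference_values, threshold=None):
    """reference_values: dict mapping user ID -> reference value
    (smaller = more similar).

    Returns the user ID with the smallest reference value, or None when a
    rejection threshold is set and even the best value is not below it.
    """
    if not reference_values:
        return None
    best_id = min(reference_values, key=reference_values.get)
    if threshold is not None and reference_values[best_id] >= threshold:
        return None  # no registered user is similar enough
    return best_id

print(identify_user({"user_a": 0.42, "user_b": 0.07, "user_c": 0.80}))  # prints: user_b
```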
- the processor 101 outputs the user ID, which is the personal identification result stored in the identification result storage unit 1033 (step S209).
- the processor 101 displays the user ID on the display unit 108 via the input/output interface 105 .
- the processor 101 can also provide the user ID to applications and the like that require personal authentication of the user.
- the processor 101 stops generating the drive signal and ends transmission of the drive signal to the audio interface unit 30 by the communication interface 104 (step S210). Then, the identification processing operation shown in this flow chart ends.
- as described above, in one embodiment, the feature amount generation unit 13 receives, from the measurement unit 20, a reaction signal, which is a measurement signal corresponding to the vibration characteristics of the user's body part, and generates a feature amount representing the vibration characteristics from the reaction signal. The identification execution determination unit 16, serving as a determination unit, determines, based on the magnitude of variation in the feature amount generated by the feature amount generation unit 13, whether the state of the site where the user wears the measurement unit 20 is stable.
- when the determination unit determines that the state of the wearing site is stable, the user identification unit 17 performs personal identification of the user based on the feature amount generated by the feature amount generation unit 13.
- the wearable user identification device 10 can reduce erroneous determinations when active acoustic sensing is used for personal identification of the user.
- here, the reaction signal received from the measurement unit 20 is a vibration signal obtained by detecting vibration propagating inside the part of the user's body where the measurement unit 20 is attached, and the feature amount can be a spectrogram representing the frequency characteristics of this vibration signal, generated by, for example, performing an FFT (Fast Fourier Transform) on the signal. In this way, a spectrogram can be generated as the feature amount.
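- A non-authoritative sketch of this spectrogram generation (segmenting the reaction signal into fixed-length frames and applying an FFT) follows; the frame length, hop size, and Hann window are assumptions, since the specification does not prescribe them:

```python
import numpy as np

def spectrogram_db(signal, frame_len=1024, hop=512, eps=1e-12):
    """Split the reaction signal into fixed-length frames, FFT each frame,
    and return magnitudes in dB, shape (n_frames, frame_len // 2 + 1)."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        window = np.hanning(frame_len)          # reduce spectral leakage
        spectrum = np.fft.rfft(frame * window)  # one-sided FFT of the frame
        frames.append(20 * np.log10(np.abs(spectrum) + eps))
    return np.array(frames)

sg = spectrogram_db(np.random.default_rng(0).standard_normal(4096))
print(sg.shape)  # (7, 513)
```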
- the identification execution determination unit 16 obtains the standard deviation at each frequency of the spectrogram from the spectrograms over a certain period of time, using the average dB value for each frequency over that period, and determines that the state of the wearing site is stable when the standard deviation at each frequency is equal to or less than a set threshold. In this manner, the stability can be calculated easily, and a stable state in which the vibration characteristic values do not change significantly can be determined based on it.
- the wearing user identification device 10 further includes the model storage unit 15, which is a database in which a feature amount is registered in advance for each of a plurality of users to be registered.
- the user identification unit 17 identifies, as the user to be identified, the user, among the plurality of registered users, whose registered feature amount corresponds to the feature amount generated by the feature amount generation unit 13 from the reaction signal received from the measurement unit 20. In this way, by registering a feature amount for each of a plurality of users in the model storage unit 15, the user to be identified can be easily identified based on the feature amount.
- furthermore, the model storage unit 15 stores a model that takes a feature amount as input and outputs, for each registered user, a value based on the difference between the input feature amount and that user's feature amount, in association with an identifier (user ID) uniquely assigned to the registered user.
- this model is learned using the feature amounts generated by the feature amount generation unit 13 from the reaction signals received from the measurement unit 20 for each of the plurality of registered users.
- the user identification unit 17 inputs the spectrogram, which is the feature amount generated for the user to be identified, into the model stored in the model storage unit 15, and determines the user ID associated with the smallest of the values output from the model as the user ID of the user to be identified, thereby identifying the user. Therefore, the user to be identified can be appropriately identified using the spectrograms, which are the feature amounts of the registered users.
- the wearing user identification system 1 according to one embodiment includes the wearing user identification device 10 and a measurement unit 20 in which a piezoelectric element generates a first vibration applied to the part of the user's body where the unit is worn, and which acquires, as a measurement signal, a vibration signal corresponding to a second vibration that has propagated inside the body out of the applied first vibration. Therefore, by having each user to be identified wear the measurement unit 20, each user can be individually identified.
- as described above, personal identification of the user is performed based on the feature amount only in a stable state in which the vibration characteristic values do not change significantly.
- not limited to personal identification of the user, the model learning unit 14 may similarly perform learning based on the feature amount only in such a stable state. As a result, only stable data are used as the data for learning.
- in the above embodiment, the audio interface unit 30 is arranged between the wearing user identification device 10 and the measurement unit 20, but the audio interface unit 30 may instead be incorporated in either the wearing user identification device 10 or the measurement unit 20.
- the processing function units of the wearing user identification device 10 have been described as being implemented on one computer, but they may be divided arbitrarily among a plurality of computers.
- for example, the model learning unit 14 and the model storage unit 15 may be configured in a computer or server device that is separate from the computer constituting the wearing user identification device 10 and that can communicate with it via the communication interface 104 over the network NW.
- the method described in the above embodiment can be stored, as a program (software means) executable by a computer, in a recording medium such as a magnetic disk (floppy (registered trademark) disk, hard disk, etc.), an optical disk (CD-ROM, DVD, MO, etc.), or a semiconductor memory (ROM, RAM, flash memory, etc.), or can be transmitted and distributed via a communication medium.
- the programs stored on the medium also include a setting program for configuring software means (including not only execution programs but also tables and data structures) to be executed by the computer.
- a computer that realizes this apparatus reads a program recorded on a recording medium, and optionally constructs software means by a setting program. The operation is controlled by this software means to execute the above-described processes.
- the term "recording medium" as used herein is not limited to a medium for distribution, and includes storage media such as magnetic disks and semiconductor memories provided in computers or in devices connected via a network.
- the present invention is not limited to the above embodiments, and can be modified in various ways without departing from the gist of the invention at the implementation stage.
- each embodiment may be implemented in combination as much as possible, and in that case, the effect of the combination can be obtained.
- the above-described embodiments include inventions at various stages, and various inventions can be extracted by appropriately combining a plurality of disclosed constituent elements.
Abstract
A wearer identification device according to an embodiment comprises a feature amount generation unit, a determining unit, and an identification unit. The feature amount generation unit receives, from a sensor worn on a part of the body of a user to be identified, a measurement signal corresponding to the vibration characteristics of the body part of the user measured by the sensor, and generates a feature amount representing the vibration characteristics from the measurement signal. The determining unit determines, on the basis of the magnitude of fluctuations in the feature amount generated by the feature amount generation unit, whether the state of the part on which the sensor is worn by the user is stable. The identification unit performs personal identification of the user on the basis of the feature amount generated by the feature amount generation unit, if the determining unit determines that the state of the part on which the sensor is worn is stable.
Description
Embodiments of the present invention relate to a wearer identification device, a wearer identification system, a wearer identification method, and a wearer identification program.
There is a technique called active acoustic sensing in which a pair of piezoelectric elements, such as piezo elements, is used, one as a speaker and the other as a microphone, to measure the vibration characteristics of an object on which the speaker and microphone are installed, and to recognize the state of the object or estimate how the object is being gripped based on the measured vibration characteristics (see, for example, Non-Patent Document 1). In active acoustic sensing, a sound wave in the inaudible range is emitted from the speaker toward the object, the vibration propagated through the object is received by the microphone, and the frequency characteristics of the received signal are analyzed. This exploits the fact that the vibration characteristics of the object on which the speaker and microphone are installed change with changes in its internal structure and boundary conditions.
By using this active acoustic sensing technique, it is also possible to identify individual objects.
When active acoustic sensing is used not for identifying static objects but for personal identification of a user, unlike with a static object, the body part on which the speaker and microphone are worn may move. When the wearing site moves, the internal structure of the body part changes and the vibration characteristic values change accordingly; this noise may cause personal identification of the user to fail.
The present invention seeks to provide a technology capable of reducing erroneous determinations when active acoustic sensing is used for personal identification of a user.
In order to solve the above problem, a wearer identification device according to one aspect of the present invention includes a feature amount generation unit, a determination unit, and an identification unit. The feature amount generation unit receives, from a sensor worn on a body part of a user to be identified, a measurement signal corresponding to the vibration characteristics of the user's body part measured by the sensor, and generates a feature amount representing the vibration characteristics from the measurement signal. The determination unit determines whether the state of the site where the sensor is worn by the user is stable, based on the magnitude of variation in the feature amount generated by the feature amount generation unit. When the determination unit determines that the state of the wearing site is stable, the identification unit performs personal identification of the user based on the feature amount generated by the feature amount generation unit.
According to one aspect of the present invention, by performing personal identification of the user only in a stable state in which the vibration characteristic values do not change significantly, it is possible to provide a technology capable of reducing erroneous determinations when active acoustic sensing is used for personal identification of the user.
An embodiment of the present invention will be described below with reference to the drawings.
FIG. 1 is a block diagram showing an example of the configuration of a wearing user identification system 1 according to one embodiment of the present invention. The wearing user identification system 1 includes a wearing user identification device 10 according to one embodiment of the present invention, a measurement unit 20, and an audio interface unit 30. The wearing user identification device 10 includes a signal generation unit 11, a signal storage unit 12, a feature amount generation unit 13, a model learning unit 14, a model storage unit 15, an identification execution determination unit 16, and a user identification unit 17. The measurement unit 20 is a sensor that measures the vibration characteristics of a measurement target together with a part for attaching the sensor to the body of the target user, and includes a signal generation unit 21 and a signal reception unit 22. The audio interface unit 30 is an interface between the wearing user identification device 10 and the measurement unit 20, and includes a signal control unit 31 and a signal amplification unit 32. The connections between the signal generation unit 11 and the signal control unit 31, and between the signal control unit 31 and the signal generation unit 21, may be wired or wireless as long as signals can be transmitted and received; the form of connection does not matter.
The signal generation unit 11 of the wearing user identification device 10 generates an acoustic signal based on arbitrarily set parameters. As an example, the acoustic signal is an ultrasonic wave swept from 20 kHz to 40 kHz. However, the settings of the acoustic signal, such as whether to sweep or whether to use other frequency bands, do not matter.
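As an illustration only, a drive waveform of the kind mentioned (an ultrasonic sweep from 20 kHz to 40 kHz) could be generated as follows; the sample rate, duration, and linear sweep shape are assumptions, since the embodiment leaves these settings open:

```python
import numpy as np

def make_sweep(f_start=20_000.0, f_end=40_000.0, duration=0.1, fs=96_000):
    """Generate a linear chirp sweeping f_start -> f_end Hz over `duration` s.

    fs must exceed twice the highest frequency (Nyquist); 96 kHz is an
    assumed value, not one taken from the embodiment.
    """
    t = np.arange(int(duration * fs)) / fs
    # Instantaneous phase of a linear chirp: 2*pi*(f_start*t + (k/2)*t^2)
    k = (f_end - f_start) / duration
    phase = 2 * np.pi * (f_start * t + 0.5 * k * t ** 2)
    return np.sin(phase).astype(np.float32)

sweep = make_sweep()
print(len(sweep))  # 0.1 s at 96 kHz -> 9600 samples
```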
The signal control unit 31 of the audio interface unit 30 generates a drive signal based on the acoustic signal generated by the signal generation unit 11 according to the preset parameters, and applies vibration to the target through the signal generation unit 21 of the measurement unit 20. In a configuration in which feature amounts included in the teacher data for the individuals to be identified are generated in advance and registered in a registration database (not shown), the vibration at this time may contain other frequencies as long as it includes the frequencies contained in the vibration used at that time.
The signal generation unit 21 and the signal reception unit 22 of the measurement unit 20 are composed of two piezoelectric elements that do not contact each other. The piezoelectric elements can be realized by, for example, piezo elements. One piezoelectric element serves as the signal generation unit 21, which generates vibration having the same frequency characteristics as the drive signal generated by the signal control unit 31 of the audio interface unit 30. The other piezoelectric element serves as the signal reception unit 22, which receives the vibration. The signal reception unit 22 acquires vibrations that have propagated inside and on the surface of the object on which it is installed. Here, when the vibration applied by the signal generation unit 21 propagates to the signal reception unit 22, the body of the user being measured functions as a propagation path, and the frequency characteristics of the acquired vibration change according to the material, boundary conditions, and other properties of this propagation path. The signal reception unit 22 transmits the received vibration signal (hereinafter referred to as a reaction signal) to the signal amplification unit 32. The signal generation unit 21 and the signal reception unit 22 may be of any form and material as long as they are mechanisms that contact the target's body and can propagate vibration.
FIG. 2 is a plan view showing the configuration of the measurement unit 20, and FIG. 3 is a schematic diagram showing the measurement unit 20 worn by a user who is a measurement target. Here, the measurement unit 20 is configured as a band-type sensor, but other implementations, such as biocompatible adhesive tape, may be used as long as the signal generation unit 21 and the signal reception unit 22 can be fixed to the user's skin while kept a certain distance apart.
In the measurement unit 20, the two piezo elements serving as the signal generation unit 21 and the signal reception unit 22 are attached to a fixing portion 23 so as to be kept a certain distance apart without contacting each other. The fixing portion 23 also functions as a reinforcing member that strengthens the signal generation unit 21 and the signal reception unit 22 so that they can be used continuously. A band 24 and a rectangular ring 25 are attached to opposing positions of the fixing portion 23 across the signal generation unit 21 and the signal reception unit 22. A hook-and-loop fastener 26 is provided on the back surface of the band 24. The measurement unit 20 configured in this way is worn by the user by adjusting the length of the band 24, wrapping it around the user's wrist, and fixing it with the hook-and-loop fastener 26. The wearing location does not matter as long as it is consistent for the individual each time personal authentication is performed.
The signal amplification unit 32 of the audio interface unit 30 amplifies the reaction signal acquired by the signal reception unit 22 of the measurement unit 20 and transmits it to the wearing user identification device 10. The signal is amplified by the signal amplification unit 32 because the vibration attenuates while passing through the measurement target and therefore needs to be amplified to a level at which it can be processed.
In the wearing user identification device 10, the reaction signal transmitted from the signal amplification unit 32 of the audio interface unit 30 is stored in the signal storage unit 12.
The feature amount generation unit 13 extracts the reaction signal stored in the signal storage unit 12 for each fixed time interval and performs, for example, an FFT (Fast Fourier Transform) on the extracted reaction signal to generate a spectrogram, which is a feature amount representing the acoustic frequency characteristics and the like of the measured body. FIG. 4 is a diagram showing an example of this spectrogram. When executing the learning processing operation for learning the classification model, the feature amount generation unit 13 generates teacher data that pairs the generated spectrogram with the user ID corresponding to the user being measured, and outputs it to the model learning unit 14. The teacher data may instead be generated by extraction from the registration database created in advance. When executing the identification processing operation for personal identification of the user, the feature amount generation unit 13 outputs the generated spectrogram to the identification execution determination unit 16.
The model learning unit 14 generates and learns a classification model whose input is the teacher data obtained from the feature amount generation unit 13 and whose output is a user ID. The model learning unit 14 registers the model itself, or its parameters, obtained by this learning process in the model storage unit 15, which is a model database. Any classification model and any library used for its learning may be employed as long as it can be trained, for example by parameter tuning on the teacher data, to produce an optimal output. For example, using a generally known machine learning library, an algorithm for generating a classification model, such as an SVM (Support Vector Machine) or a neural network, may be trained by performing parameter tuning or the like on the teacher data so as to obtain an optimal output.
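As one concrete possibility consistent with this paragraph, a scikit-learn SVM could serve as the classification model; the library choice and the synthetic stand-in data below are assumptions, since the embodiment only requires some generally known machine learning library:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for teacher data: flattened spectrogram vectors per user.
X = np.vstack([rng.normal(loc=uid, scale=0.3, size=(20, 16)) for uid in range(3)])
y = np.repeat(["user_0", "user_1", "user_2"], 20)

model = SVC(probability=True).fit(X, y)  # learn the classification model

# At identification time, a similarity-like probability per user ID is obtained
# and converted to a reference value (1 - similarity, smaller = more similar).
probs = model.predict_proba(rng.normal(loc=1, scale=0.3, size=(1, 16)))[0]
reference = {uid: 1.0 - p for uid, p in zip(model.classes_, probs)}
best = min(reference, key=reference.get)
print(best)
```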
The identification execution determination unit 16 determines whether to execute the user identification processing by the user identification unit 17. The identification execution determination unit 16 obtains the stability of the spectrogram within a set fixed time from the spectrogram acquired from the feature amount generation unit 13. For example, to obtain the stability of a 2-second spectrogram, the average dB value for each frequency over the 2 seconds is obtained (e.g., average at 20 kHz = 0 [dB], ..., average at 40 kHz = -5 [dB]). The identification execution determination unit 16 then uses these average values to calculate the standard deviation at each frequency of the spectrogram as the degree of stability. The identification execution determination unit 16 determines whether the state of the site where the user wears the measurement unit 20 is stable depending on whether the standard deviation at each frequency is higher than a set arbitrary threshold. That is, when the standard deviation at each frequency of the spectrogram is not higher than the threshold, the identification execution determination unit 16 regards the state of the wearing site as stable and causes the user identification unit 17 to perform its processing; specifically, it outputs the spectrogram acquired from the feature amount generation unit 13 to the user identification unit 17. Conversely, when the standard deviation at a frequency of the spectrogram is higher than the threshold, the identification execution determination unit 16 determines that the user wearing the measurement unit 20 is moving and the stability is low, and does not cause the user identification unit 17 to perform its processing; that is, it does not output the spectrogram to the user identification unit 17. This processing may be limited to values above a certain level in each frequency characteristic (for example, -50 [dB] or higher).
The user identification unit 17 inputs the spectrogram, which is the feature amount acquired from the identification execution determination unit 16, into the classification model registered in the model storage unit 15, and obtains, as the output of the classification model, a numerical value for identifying the individual user.
Here, when an SVM is used as the algorithm of the model learning unit 14, the classification model outputs, as a reference value, a score indicating the degree of similarity between the input spectrogram and each label (user ID) held by the classification model. For example, when the similarity is normalized and expressed between "0" and "1", the score can be output as "1 - (similarity)".
When Random Forest is used as the algorithm of the model learning unit 14, data is randomly extracted from the teacher data to generate a plurality of decision trees, and the number of decision trees whose determination result matches each label for the input data is output. Since a higher number of matching determination results is better, the classification model outputs "(number of determinations) - (number of determination results)" as the reference value.
Other classification algorithms, such as a DNN (Deep Neural Network), may also be used. In that case, the reference value may be obtained by subtracting the normalized similarity from "1", or by converting the similarity, for example by taking its reciprocal.
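The similarity-to-reference-value conversions mentioned above can be sketched as follows. The function names and the `mode` switch are illustrative, not part of the patent; only the formulas ("1 - similarity", reciprocal, and "(number of determinations) - (number of determination results)") come from the text.

```python
def reference_value(similarity, mode="one_minus"):
    """Convert a normalized similarity in [0, 1] to a reference value,
    where a smaller value means a closer match (names are illustrative)."""
    if mode == "one_minus":
        return 1.0 - similarity
    if mode == "reciprocal":
        # Guard against division by zero for a similarity of 0.
        return 1.0 / similarity if similarity > 0 else float("inf")
    raise ValueError(mode)

def rf_reference_value(n_trees, votes):
    """Random-Forest style reference value: '(number of determinations)
    - (number of determination results)'; smaller = more trees agreed."""
    return n_trees - votes

print(round(reference_value(0.8), 3))            # 0.2
print(reference_value(0.5, mode="reciprocal"))   # 2.0
print(rf_reference_value(100, 90))               # 10
```

All three conversions share the same orientation: the most similar candidate yields the smallest reference value, which is what the selection step below relies on.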
The user identification unit 17 uses the obtained list of reference values to determine the user ID with the smallest (i.e., most similar) reference value. In the determination process, a threshold may be set for the degree of similarity, and the determination may be made only when the reference value is smaller than the threshold.
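The selection logic just described can be sketched as follows. The threshold value and the dictionary representation of the reference value list are illustrative assumptions.

```python
def identify_user(reference_values, threshold=0.5):
    """Pick the user ID with the smallest reference value; return None when
    even the best candidate does not fall below the decision threshold.

    reference_values: dict mapping user ID -> reference value
                      (smaller = more similar).
    """
    user_id = min(reference_values, key=reference_values.get)
    if reference_values[user_id] < threshold:
        return user_id
    return None  # no registered user is similar enough

scores = {"user_A": 0.12, "user_B": 0.40, "user_C": 0.75}
print(identify_user(scores))             # user_A
print(identify_user({"user_A": 0.9}))    # None
```

Returning `None` when the threshold test fails corresponds to making no determination, rather than forcing the nearest registered user as the answer.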
FIG. 5 is a diagram showing an example of the hardware configuration of the wearer identification device 10. As shown in FIG. 5, the wearer identification device 10 is configured by a computer such as a microcomputer or a personal computer, and has a hardware processor 101 such as a CPU (Central Processing Unit). By using a multi-core, multi-threaded CPU, a plurality of information processes can be executed simultaneously. The processor 101 may also include a plurality of CPUs. In the wearer identification device 10, a program memory 102, a data memory 103, a communication interface 104, and an input/output interface 105 are connected to the processor 101 via a bus 106. In FIG. 5, "interface" is abbreviated as "IF".
The communication interface 104 can include, for example, one or more wired or wireless communication modules. The example shown in FIG. 5 includes two communication modules 1041 and 1042. The communication module 1041 is a communication module using a short-range wireless technology such as Bluetooth (registered trademark), and transmits and receives signals to and from the signal control unit 31 and the signal amplification unit 32 of the audio interface unit 30. The communication module 1041 can also transmit and receive signals to and from the signal control unit 31 and the signal amplification unit 32 of a remote audio interface unit 30 via the network NW. The network consists of an IP network including the Internet and an access network for accessing this IP network. As the access network, for example, a public wired network, a mobile phone network, a wired LAN (Local Area Network), a wireless LAN, CATV (Cable Television), or the like is used. Thus, the wearer identification device 10 can also acquire reaction signals from a plurality of measurement units 20 via a plurality of audio interface units 30 and identify the plurality of wearing users, each wearing one of the measurement units 20.
An input unit 107 and a display unit 108 are also connected to the input/output interface 105. For the input unit 107 and the display unit 108, a so-called tablet-type input/display device can be used, in which an input detection sheet employing an electrostatic or pressure method is arranged on the display screen of a display device using, for example, liquid crystal or organic EL (Electro Luminescence). The input unit 107 and the display unit 108 may also be configured as independent devices. The input/output interface 105 inputs the operation information entered through the input unit 107 to the processor 101, and causes the display unit 108 to display the display information generated by the processor 101.
Note that the input unit 107 and the display unit 108 do not have to be connected to the input/output interface 105. By providing a communication unit for connecting to the communication interface 104 directly or via the network NW, the input unit 107 and the display unit 108 can exchange information with the processor 101.
The input/output interface 105 may have a read/write function for a recording medium such as a semiconductor memory, for example a flash memory, or may have a function for connecting to a reader/writer that has a read/write function for such a recording medium. This allows a recording medium detachable from the wearer identification device 10 to serve as a model database holding the classification models. The input/output interface 105 may further have a function for connecting to other devices.
The program memory 102 is used as a non-transitory tangible computer-readable storage medium, for example a combination of a nonvolatile memory that can be written to and read from at any time, such as an HDD (Hard Disk Drive) or SSD (Solid State Drive), and a nonvolatile memory such as a ROM (Read Only Memory). The program memory 102 stores the programs necessary for the processor 101 to execute the various control processes according to the embodiment. That is, the processing function units of the signal generation unit 11, the feature amount generation unit 13, the model learning unit 14, the identification execution determination unit 16, and the user identification unit 17 can each be realized by causing the processor 101 to read and execute a program stored in the program memory 102. Some or all of these processing function units may be realized in a variety of other forms, including integrated circuits such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field-Programmable Gate Array).
The data memory 103 is used as a tangible computer-readable storage medium, for example a combination of the above nonvolatile memory and a volatile memory such as a RAM (Random Access Memory). The data memory 103 is used to store various data acquired and created in the course of the various processes. That is, areas for storing various data are secured in the data memory 103 as appropriate while the various processes are performed. As such areas, the data memory 103 can be provided with, for example, a signal storage unit 1031, a model storage unit 1032, an identification result storage unit 1033, and a temporary storage unit 1034.
The signal storage unit 1031 stores the reaction signal transmitted from the signal amplification unit 32 of the audio interface unit 30. That is, the signal storage unit 12 can be configured in this signal storage unit 1031.
The model storage unit 1032 stores the classification model learned by the model learning unit 14. That is, the model storage unit 15 can be configured in this model storage unit 1032.
The identification result storage unit 1033 stores the output information obtained when the processor 101 operates as the user identification unit 17.
The temporary storage unit 1034 stores data such as spectrograms, teacher data, classification models, and reference values acquired or generated when the processor 101 operates as the feature amount generation unit 13, the model learning unit 14, the identification execution determination unit 16, and the user identification unit 17.
Next, the operation of the wearer identification device 10 will be described.
In the present embodiment, prior to user identification, the wearer identification device 10 first generates, using a sensor capable of measuring the state of each user to be identified, a classification model associated with a user ID, and stores the generated classification model in the model storage unit 15 as registration data.
To this end, the signal generator 21 and the signal receiver 22 of the measurement unit 20 are first attached, using the band 24, to the wrist of the user to be identified. The vibration generated by the signal generator 21 may be of any form and type as long as it has frequency characteristics, like an acoustic signal. In the present embodiment, an acoustic signal is described as an example.
FIG. 6 is a flowchart showing an example of the learning processing operation related to the learning of a classification model in the wearer identification device 10. This flowchart shows the processing operations in the processor 101 of the computer functioning as a part of the wearer identification device 10, specifically as the signal generation unit 11, the feature amount generation unit 13, and the model learning unit 14. After the measurement unit 20 is worn on the wrist of the user to be registered, when the start of learning is instructed from the input unit 107 via the input/output interface 105, the processor 101 starts the operations shown in this flowchart. For a remote user to be registered, when the communication interface 104 receives a predetermined wearing completion notification transmitted via the network NW from an information processing device, such as a smartphone, operated by that user, the processor 101 starts the operations shown in this flowchart.
First, the processor 101 functions as the signal generation unit 11 and generates an acoustic signal (drive signal) based on arbitrarily set parameters (step S101). The drive signal is, for example, an ultrasonic wave swept from 20 kHz to 40 kHz; however, any settings of the acoustic signal, such as whether or not to sweep and whether or not to use other frequency bands, may be used. The generated drive signal is transmitted to the audio interface unit 30 via the communication interface 104. Alternatively, a signal generation module that generates the drive signal under the control of the processor 101 may be prepared separately, and the drive signal generated there may be transmitted to the audio interface unit 30 via the communication interface 104.
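A drive signal of the kind described in step S101 can be sketched as a linear frequency sweep (chirp). The sample rate, duration, and function name are illustrative assumptions; only the 20 kHz to 40 kHz sweep range comes from the text.

```python
import math

def sweep_signal(f_start=20_000.0, f_end=40_000.0, duration=1.0, rate=96_000):
    """Generate a linear sweep from f_start to f_end [Hz] as a list of samples.
    The rate must exceed twice f_end (Nyquist) for the sweep to be representable."""
    n = int(duration * rate)
    k = (f_end - f_start) / duration  # sweep rate [Hz/s]
    samples = []
    for i in range(n):
        t = i / rate
        # Instantaneous phase of a linear chirp: 2*pi*(f_start*t + k*t^2/2)
        phase = 2.0 * math.pi * (f_start * t + 0.5 * k * t * t)
        samples.append(math.sin(phase))
    return samples

drive = sweep_signal(duration=0.01)
print(len(drive))  # 960 samples for a 10 ms sweep at 96 kHz
```

In the described system this waveform would be handed to the audio interface unit for playback through the piezoelectric element, rather than printed.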
The audio interface unit 30 transmits this drive signal to the signal generator 21 of the measurement unit 20. The drive signal causes vibration to be applied to the living body of the user to be registered through the signal generator 21. The signal receiver 22 of the measurement unit 20 acquires the vibration that was applied to the living body of the user to be registered by the signal generator 21 and that has propagated through the interior and along the surface of the body. When the vibration applied by the piezoelectric element of the signal generator 21 propagates to the piezoelectric element of the signal receiver 22, the living body of the user to be registered functions as a propagation path, and the frequency characteristics of the applied vibration change according to this propagation path. These frequency characteristics differ from person to person. The signal receiver 22 detects the propagated vibration and transmits a reaction signal indicated by the detected vibration to the audio interface unit 30. The signal amplification unit 32 of the audio interface unit 30 amplifies the reaction signal transmitted from the signal receiver 22 of the measurement unit 20 and transmits it to the wearer identification device 10.
The reaction signal transmitted from the audio interface unit 30 is received by the communication interface 104. The processor 101 stores the received reaction signal in the signal storage unit 1031 of the data memory 103 (step S102).
Next, the processor 101 functions as the feature amount generation unit 13 and performs the following processing operations.
First, the processor 101 extracts the reaction signal stored in the signal storage unit 1031 for each fixed time interval; the number of signal samples does not matter. The extracted reaction signal is stored in the temporary storage unit 1034 of the data memory 103. The processor 101 then generates, from the extracted reaction signal stored in the temporary storage unit 1034, a spectrogram, which is a feature amount representing the acoustic frequency characteristics of the living body (step S103). The generated spectrogram is stored in the temporary storage unit 1034 of the data memory 103.
The processor 101 then assigns a user ID, which is a unique identifier, to the generated spectrogram, and generates teacher data pairing the spectrogram with the user ID (step S104). The generated teacher data is stored in the temporary storage unit 1034 of the data memory 103. Furthermore, the processor 101 may extract registration data created in advance from the model storage unit 15 configured in the model storage unit 1032 of the data memory 103 and use it to generate teacher data.
Next, the processor 101 functions as the model learning unit 14 and performs the following processing operations.
First, the processor 101 generates and trains a classification model that takes the spectrogram in the teacher data as its input and outputs the user ID in the teacher data as a label, together with a reference value representing the difference from the input (step S105).
The processor 101 then registers the classification model obtained by this learning process, either the model itself or the model's parameters, in the model storage unit 15 configured in the model storage unit 1032 of the data memory 103 (step S106).
When learning for one user to be registered is completed in this way, the processor 101 stops generating the drive signal and ends the transmission of the drive signal to the audio interface unit 30 by the communication interface 104 (step S107). The learning processing operation shown in this flowchart then ends.
Thereafter, learning can be performed for other users in the same manner.
Next, the operation of the wearer identification device 10 when identifying an individual user to be identified will be described. The wearer identification device 10 inputs a spectrogram, which is a feature amount acquired from the user to be identified wearing the measurement unit 20, into the classification model registered in the model storage unit 15 or the like, and identifies the individual user. These specific processes are described below.
FIG. 7 is a flowchart showing an example of the identification processing operation related to the personal identification of a user in the wearer identification device 10. This flowchart shows the processing operations in the processor 101 of the computer functioning as a part of the wearer identification device 10, specifically as the signal generation unit 11, the feature amount generation unit 13, the identification execution determination unit 16, and the user identification unit 17. After the measurement unit 20 is worn on the wrist of the user to be identified, when the start of personal identification is instructed from the input unit 107 via the input/output interface 105, the processor 101 starts the operations shown in this flowchart. For a remote user to be identified, when the communication interface 104 receives a predetermined identification start notification transmitted via the network NW from an information processing device, such as a smartphone, operated by that user, the processor 101 starts the operations shown in this flowchart.
First, the processor 101 functions as the signal generation unit 11 and generates a drive signal based on arbitrarily set parameters (step S201). The generated drive signal is transmitted to the audio interface unit 30 via the communication interface 104. Alternatively, a signal generation module that generates the drive signal under the control of the processor 101 may be prepared separately, and the drive signal generated there may be transmitted to the audio interface unit 30 via the communication interface 104.
The audio interface unit 30 transmits this drive signal to the signal generator 21 of the measurement unit 20. The drive signal causes vibration to be applied to the living body of the user to be identified through the signal generator 21. The vibration at this time may contain other frequencies, as long as it includes the frequencies contained in the vibration used to generate the feature amounts included in the registration data of the learned users registered in the model storage unit 15 (hereinafter referred to as registered users). The signal receiver 22 of the measurement unit 20 acquires the vibration that was applied to the living body of the user to be identified by the signal generator 21 and that has propagated through the interior and along the surface of the body. When the vibration applied by the piezoelectric element of the signal generator 21 propagates to the piezoelectric element of the signal receiver 22, the living body of the user to be identified functions as a propagation path, and the frequency characteristics of the applied vibration change according to this propagation path. The signal receiver 22 detects the propagated vibration and transmits a reaction signal indicated by the detected vibration to the audio interface unit 30. The signal amplification unit 32 of the audio interface unit 30 amplifies the reaction signal transmitted from the signal receiver 22 of the measurement unit 20 and transmits it to the wearer identification device 10.
The reaction signal transmitted from the audio interface unit 30 is received by the communication interface 104. The processor 101 stores the received reaction signal in the signal storage unit 1031 of the data memory 103 (step S202).
Next, the processor 101 functions as the feature amount generation unit 13 and extracts the reaction signal stored in the signal storage unit 1031 for each fixed time interval; the number of signal samples does not matter. The extracted reaction signal is stored in the temporary storage unit 1034 of the data memory 103. The processor 101 then generates, for example by performing an FFT on the extracted reaction signal stored in the temporary storage unit 1034, a spectrogram, which is a feature amount representing the acoustic frequency characteristics of the living body (step S203). The generated spectrogram is stored in the temporary storage unit 1034 of the data memory 103.
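The frame-by-frame spectrogram generation of step S203 can be sketched as follows. For self-containment this uses a naive DFT instead of an FFT library, and the frame length, sample rate, and function names are illustrative assumptions; a real implementation would use an FFT routine on the ultrasonic-band reaction signal.

```python
import cmath
import math

def frame_spectrum(samples, rate):
    """Magnitude spectrum of one fixed-length frame via a naive DFT.
    Returns a dict mapping frequency [Hz] -> level [dB]."""
    n = len(samples)
    spectrum = {}
    for k in range(n // 2):
        s = sum(samples[i] * cmath.exp(-2j * math.pi * k * i / n)
                for i in range(n))
        mag = abs(s) / n
        level = 20.0 * math.log10(mag) if mag > 0 else -120.0
        spectrum[k * rate / n] = level
    return spectrum

def spectrogram(signal, rate, frame_len):
    """Split the reaction signal into fixed time intervals and compute one
    spectrum per interval, as in steps S103/S203."""
    return [frame_spectrum(signal[i:i + frame_len], rate)
            for i in range(0, len(signal) - frame_len + 1, frame_len)]

# A pure 1 kHz tone sampled at 8 kHz should peak at the 1 kHz bin.
rate, frame = 8000, 64
tone = [math.sin(2 * math.pi * 1000 * i / rate) for i in range(128)]
spec = spectrogram(tone, rate, frame)
peak = max(spec[0], key=spec[0].get)
print(peak)  # 1000.0
```

Each frame's dict of per-frequency dB levels is the shape of data the stability check and the classification model consume.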
Next, the processor 101 functions as the identification execution determination unit 16 and performs the following processing operations.
First, the processor 101 calculates, as follows, the stability of the spectrogram within a set fixed time from the spectrograms stored in the temporary storage unit 1034. The processor 101 first determines whether spectrograms for a fixed period, for example 2 seconds, have been generated (step S204). When it determines that 2 seconds' worth of spectrograms have not yet been generated (NO in step S204), the processor 101 returns to the processing of step S201 and repeats the processing operations described above.
On the other hand, when it determines that 2 seconds' worth of spectrograms have been generated (YES in step S204), the processor 101 calculates the degree of stability (step S205). For example, the processor 101 obtains, from the 2 seconds' worth of spectrograms stored in the temporary storage unit 1034, the average value in dB for each frequency over those 2 seconds, and uses these average values to obtain the standard deviation at each frequency of the spectrogram as the degree of stability.
The processor 101 then determines whether the state of the site where the user wears the measurement unit 20 is stable, based on whether the standard deviation at each frequency of the spectrogram is equal to or less than an arbitrarily set threshold (step S206).
When the standard deviation at any frequency of the spectrogram exceeds the threshold, the processor 101 determines that the user wearing the measurement unit 20 is moving and is not stable (NO in step S206). In this case, the processor 101 deletes the oldest of the spectrograms within the fixed time stored in the temporary storage unit 1034 (step S207). The processor 101 then returns to the processing of step S201 and repeats the processing operations described above.
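The sliding-window behavior of steps S204 to S207, in which the oldest spectrogram is discarded when the window is judged unstable, can be sketched as follows. For brevity each spectrogram frame is simplified to a single scalar level, and the window size and stability test are illustrative assumptions.

```python
from collections import deque

def stability_loop(frames, window=4,
                   is_stable=lambda w: max(w) - min(w) < 1.0):
    """Collect frames into a fixed-size window; while the full window is
    unstable, drop the oldest frame and keep collecting (steps S204-S207)."""
    buf = deque(maxlen=window)
    for frame in frames:
        buf.append(frame)
        if len(buf) < window:
            continue  # step S204 NO: not yet a full window's worth of frames
        if is_stable(buf):
            return list(buf)  # step S206 YES: hand the window to step S208
        buf.popleft()  # step S207: delete the oldest frame and continue
    return None  # the window never became stable

levels = [0.0, 5.0, 5.1, 5.2, 5.3, 5.4]
print(stability_loop(levels))  # [5.0, 5.1, 5.2, 5.3]
```

Dropping only the oldest frame, rather than clearing the whole buffer, lets the device recover quickly once the wearer stops moving, since most of the window is reused.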
これに対して、スペクトログラムの各周波数における標準偏差が閾値以下である場合には、プロセッサ101は、計測部20のユーザ装着部位の状態が安定していると判定する(ステップS206のYES)。この場合、プロセッサ101は、ユーザ識別部17として機能して、以下の処理動作を行う。
On the other hand, when the standard deviation at each frequency of the spectrogram is equal to or less than the threshold, the processor 101 determines that the state of the site where the measurement unit 20 is worn by the user is stable (YES in step S206). In this case, the processor 101 functions as the user identification unit 17 and performs the following processing operations.
First, the processor 101 performs personal identification of the user (step S208). That is, the processor 101 inputs one of the spectrograms generated in step S203 and stored in the temporary storage unit 1034 as feature amounts, for example the latest spectrogram, into the classification model registered in the model storage unit 15 configured in the model storage unit 1032 of the data memory 103, and obtains a list of reference values from the classification model. The acquired list of reference values is stored in the temporary storage unit 1034 of the data memory 103. Next, the processor 101 identifies the smallest reference value in the list of reference values stored in the temporary storage unit 1034. The processor 101 determines that the registered user associated in the model storage unit 15 with the same feature amount as the identified reference value is a similar user. The processor 101 then stores the user ID of that registered user in the identification result storage unit 1033 of the data memory 103 as the personal identification result for the user to be identified. Note that this determination process may set a threshold on the degree of similarity, so that a similar user is determined only when the identified reference value is smaller than that threshold.
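A minimal sketch of this identification step, assuming the classification model's reference values behave like distances between the probe spectrogram and per-user enrolled feature amounts. The specification does not fix the metric; the Euclidean distance, the dict-of-templates "model", and the user names below are illustrative assumptions:

```python
import numpy as np

def identify(spec, registered, threshold=None):
    # Compare the probe spectrogram against each registered user's
    # enrolled feature amount and treat the distance as the model's
    # "reference value"; the smallest value wins.
    # NOTE: Euclidean distance is an assumption, not the metric the
    # specification prescribes.
    ref_values = {uid: float(np.linalg.norm(spec - template))
                  for uid, template in registered.items()}
    best_uid = min(ref_values, key=ref_values.get)
    if threshold is not None and ref_values[best_uid] >= threshold:
        return None  # no registered user is similar enough
    return best_uid

# Hypothetical enrolled templates keyed by user ID
registered = {"user_a": np.zeros((4, 3)), "user_b": np.full((4, 3), 5.0)}
probe = np.full((4, 3), 4.8)  # closest to user_b's template
```

Passing a `threshold` mirrors the optional similarity threshold in the paragraph above: when even the smallest reference value is not below it, no similar user is reported.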
The processor 101 then outputs the user ID, which is the personal identification result stored in the identification result storage unit 1033 (step S209). For example, the processor 101 displays the user ID on the display unit 108 via the input/output interface 105. The processor 101 can also provide the user ID to applications and the like that require personal authentication of the user.
When personal identification for one user to be identified has been completed in this manner, the processor 101 stops generating the drive signal and ends transmission of the drive signal to the audio interface unit 30 by the communication interface 104 (step S210). The identification processing operation shown in this flowchart then ends.
In the wearable user identification device 10 according to the embodiment described above, the feature amount generation unit 13 receives, from the measurement unit 20, a sensor attached to a body part of the user to be identified, a response signal, which is a measurement signal corresponding to the vibration characteristics of the user's body part measured by the measurement unit 20, and generates from the response signal a feature amount representing the vibration characteristics. The identification execution determination unit 16, serving as a determination unit, determines, based on the magnitude of variation of the feature amount generated by the feature amount generation unit 13, whether the state of the site where the measurement unit 20 is worn by the user is stable. When the wearing site is determined to be stable, the user identification unit 17 performs personal identification of the user based on the feature amount generated by the feature amount generation unit 13. By performing personal identification only in a stable state in which the vibration characteristic values do not change significantly, the device becomes less susceptible to disturbances such as movement of the wearing site or external stimuli applied to it, and erroneous determinations in personal identification can be reduced. That is, the wearable user identification device 10 according to this embodiment can reduce erroneous determinations when active acoustic sensing is used for personal identification of the user.
Note that the response signal received from the measurement unit 20 is a vibration signal obtained by detecting vibration that has propagated inside the user's body at the site where the measurement unit 20 is attached, and the feature amount generated by the feature amount generation unit 13 can be a spectrogram representing the frequency characteristics of the vibration signal, generated by applying, for example, an FFT (Fast Fourier Transform) to the response signal. In this way, a spectrogram can be generated as the feature amount.
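As a rough sketch of this feature generation, the response signal can be framed, windowed, and transformed with an FFT to obtain a dB-scaled spectrogram. The frame length, hop size, and Hann window below are illustrative assumptions; the specification does not fix these parameters:

```python
import numpy as np

def spectrogram_db(signal, frame_len=256, hop=128):
    # Frame the response signal, window each frame, take an FFT, and
    # express the magnitudes in dB. frame_len, hop, and the Hann
    # window are assumed values, not ones given in the specification.
    rows = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        mag = np.abs(np.fft.rfft(frame))
        rows.append(20.0 * np.log10(mag + 1e-12))  # small offset avoids log(0)
    return np.array(rows)  # shape: (num_frames, num_frequency_bins)

# A synthetic stand-in for the measured vibration response signal
t = np.arange(4000) / 8000.0
spec = spectrogram_db(np.sin(2 * np.pi * 440.0 * t))
```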
The identification execution determination unit 16 then obtains, from the spectrogram over a fixed period of time, the standard deviation at each frequency of the spectrogram using the average dB value for each frequency over that period, and determines that the state of the wearing site is stable when the standard deviation at each frequency is equal to or less than a set threshold. In this manner, the degree of stability can be easily calculated, and based on it, a stable state in which the vibration characteristic values do not change significantly can be determined.
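The stability check described above can be sketched as follows: the standard deviation at each frequency is computed around that frequency's mean dB value over the fixed time window and compared against a threshold. The 3 dB threshold and array shapes are assumptions for illustration, not values from the specification:

```python
import numpy as np

def is_stable(spec_db, threshold_db=3.0):
    # spec_db: (num_frames, num_bins) spectrogram in dB covering the
    # fixed time window. np.std computes the deviation around the
    # per-frequency mean; the site is judged stable only when every
    # frequency stays at or below the threshold.
    # NOTE: threshold_db=3.0 is an assumed value.
    std_per_freq = spec_db.std(axis=0)
    return bool((std_per_freq <= threshold_db).all())

steady = np.tile([10.0, 20.0, 30.0], (5, 1))  # no variation over time
moving = steady + np.array([[0.0], [5.0], [-5.0], [10.0], [-10.0]])
```

For `moving`, each frequency's dB values vary with a standard deviation of about 7.1 dB, so the check fails; `steady` passes because every frequency's deviation is zero.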
The wearable user identification device 10 according to the embodiment further includes the model storage unit 15, a database in which the feature amount of each of a plurality of users to be registered is registered in advance. The user identification unit 17 identifies, from among the plurality of registered users in the model storage unit 15, a user having a feature amount corresponding to the feature amount generated by the feature amount generation unit 13 from the response signal received from the measurement unit 20, as the user to be identified. By registering a feature amount for each user to be registered in the model storage unit 15 in this way, the user to be identified can easily be identified based on that feature amount.
Here, the model storage unit 15 stores a model that takes a feature amount as input and outputs a value based on the difference between the feature amount of at least one registered user and the input feature amount, in association with a user ID, an identifier uniquely assigned to that registered user. The model is trained, for each of the plurality of users to be registered, on the spectrogram that the feature amount generation unit 13 generated as a feature amount from the response signal received from the measurement unit 20. The user identification unit 17 inputs the spectrogram generated as a feature amount for the user to be identified into the model stored in the model storage unit 15, and identifies the user by determining the user ID output in association with the value, among the values output from the model, indicating the highest relevance to the user's spectrogram, as the user ID of the user to be identified. The user to be identified can therefore be appropriately identified using the spectrograms, which are the feature amounts of the registered users.
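A minimal sketch of building such a per-user reference database. Averaging each user's enrolled spectrograms stands in for the trained model (the description's metadata references support vector machines and neural networks as candidate classifiers), so the averaging and the user IDs below are illustrative assumptions:

```python
import numpy as np

def enroll(samples_by_user):
    # Build the per-user reference feature amounts that the "model"
    # later compares against: here, simply the mean of each user's
    # enrolled spectrograms, keyed by the user ID uniquely assigned
    # to that registered user. A trained classifier could replace
    # this averaging; the choice here is an assumption.
    return {uid: np.mean(np.stack(samples), axis=0)
            for uid, samples in samples_by_user.items()}

# Hypothetical enrollment data: two spectrograms for u1, one for u2
templates = enroll({
    "u1": [np.zeros((2, 2)), np.full((2, 2), 2.0)],
    "u2": [np.full((2, 2), 5.0)],
})
```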
The wearable user identification system 1 according to the embodiment includes the wearable user identification device 10 according to the embodiment and the measurement unit 20, which uses a piezoelectric element to generate a first vibration applied to the wearing site of the user's body and acquires, as the measurement signal, a vibration signal corresponding to a second vibration that has propagated inside the body out of the first vibration applied to the user's body. Accordingly, by having each user to be identified wear a measurement unit 20, each user can be identified individually.
[Other Embodiments]
In the embodiment described above, personal identification of the user based on the feature amount is performed only in a stable state in which the vibration characteristic values do not change significantly. However, this need not be limited to personal identification; the model learning unit 14 may likewise perform learning based on the feature amount only in a stable state. As a result, only stable learning data are used for training.
Although the audio interface unit 30 is arranged between the wearable user identification device 10 and the measurement unit 20, the audio interface unit 30 may instead be incorporated into either the wearable user identification device 10 or the measurement unit 20.
In the above embodiment, the processing function units of the wearable user identification device 10 are described as being implemented on a single computer, but they may be divided arbitrarily across a plurality of computers. For example, the model learning unit 14 and the model storage unit 15 may be implemented on a computer or server device separate from the computer constituting the wearable user identification device 10, communicating via the network NW through the communication interface 104.
The methods described in the above embodiment can be stored, as a program (software means) executable by a computer, on a recording medium such as a magnetic disk (floppy (registered trademark) disk, hard disk, etc.), optical disc (CD-ROM, DVD, MO, etc.), or semiconductor memory (ROM, RAM, flash memory, etc.), and can also be transmitted and distributed via a communication medium. The programs stored on the medium side include a setting program for configuring, within the computer, the software means (including not only execution programs but also tables and data structures) to be executed by the computer. A computer realizing this device reads the program recorded on the recording medium, constructs the software means by the setting program where applicable, and executes the above-described processing with its operation controlled by those software means. The recording medium referred to in this specification is not limited to media for distribution, and includes storage media such as magnetic disks and semiconductor memories provided inside the computer or in devices connected via a network.
In short, the present invention is not limited to the above embodiment and can be modified in various ways at the implementation stage without departing from its gist. The embodiments may also be combined as appropriate wherever possible, in which case the combined effects are obtained. Furthermore, the above embodiment includes inventions at various stages, and various inventions can be extracted by appropriately combining the disclosed constituent elements.
Reference Signs List
1 … Wearable user identification system
10 … Wearable user identification device
11 … Signal generation unit
12 … Signal storage unit
13 … Feature amount generation unit
14 … Model learning unit
15 … Model storage unit
16 … Identification execution determination unit
17 … User identification unit
20 … Measurement unit
21 … Signal generating unit
22 … Signal receiving unit
23 … Fixing unit
24 … Band
25 … Square ring
26 … Hook-and-loop fastener
30 … Audio interface unit
31 … Signal control unit
32 … Signal amplification unit
101 … Processor
102 … Program memory
103 … Data memory
1031 … Signal storage unit
1032 … Model storage unit
1033 … Identification result storage unit
1034 … Temporary storage unit
104 … Communication interface
1041, 1042 … Communication modules
105 … Input/output interface
106 … Bus
107 … Input unit
108 … Display unit
NW … Network
Claims (8)
- A wearable user identification device comprising: a feature amount generation unit that receives, from a sensor attached to a body part of a user to be identified, a measurement signal corresponding to vibration characteristics of the body part of the user measured by the sensor, and generates, from the measurement signal, a feature amount representing the vibration characteristics; a determination unit that determines whether a state of a site where the sensor is worn on the user is stable, based on a magnitude of variation of the feature amount generated by the feature amount generation unit; and an identification unit that performs personal identification of the user based on the feature amount generated by the feature amount generation unit when the determination unit determines that the state of the wearing site is stable.
- The wearable user identification device according to claim 1, wherein the measurement signal received from the sensor is a vibration signal obtained by detecting vibration that has propagated inside the body of the user at the wearing site of the sensor, and the feature amount generated by the feature amount generation unit is a spectrogram representing frequency characteristics of the vibration signal.
- The wearable user identification device according to claim 2, wherein the determination unit obtains, from the spectrogram over a fixed period of time, a standard deviation at each frequency of the spectrogram using an average dB value for each frequency over the fixed period, and determines that the state of the wearing site is stable when the standard deviation at each frequency is equal to or less than a set threshold.
- The wearable user identification device according to any one of claims 1 to 3, further comprising a database in which the feature amount of each of a plurality of users to be registered is registered in advance, wherein the identification unit identifies, from among the plurality of users to be registered who are registered in the database, a user having a feature amount corresponding to the feature amount generated by the feature amount generation unit from the measurement signal received from the sensor, as the user to be identified.
- The wearable user identification device according to claim 4, wherein the database stores a model that takes the feature amount as input and outputs a value based on a difference between the feature amount of at least one user to be registered and the input feature amount, in association with an identifier uniquely assigned to the user to be registered; the model is trained, for each of the plurality of users to be registered, based on the feature amount generated by the feature amount generation unit from the measurement signal received from the sensor; and the identification unit identifies the user by inputting the feature amount generated for the user to be identified into the model and determining, among the values output from the model, the identifier output in association with the value indicating the highest relevance to the feature amount of the user to be identified as the identifier of the user to be identified.
- A wearable user identification system comprising: the wearable user identification device according to any one of claims 1 to 5; and the sensor, which generates, by a piezoelectric element, a first vibration applied to the wearing site of the body of the user and acquires, as the measurement signal, a vibration signal corresponding to a second vibration that has propagated inside the body out of the first vibration applied to the body of the user.
- A wearable user identification method in a wearable user identification device that comprises a processor and identifies a user to be identified who wears a sensor on a body part, the method comprising: generating, by the processor, a feature amount representing vibration characteristics from a measurement signal corresponding to the vibration characteristics of the body part of the user measured by the sensor; determining, by the processor, whether a state of a site where the sensor is worn on the user is stable, based on a magnitude of variation of the generated feature amount; and performing, by the processor, personal identification of the user based on the generated feature amount when the state of the wearing site is determined to be stable.
- A wearable user identification program that causes a processor to function as each of the units of the wearable user identification device according to any one of claims 1 to 5.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/042778 WO2023089822A1 (en) | 2021-11-22 | 2021-11-22 | Wearer identification device, wearer identification system, wearer identification method, and wearer identification program |
JP2023562093A | 2021-11-22 | 2021-11-22 | JPWO2023089822A1 (en) |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/042778 WO2023089822A1 (en) | 2021-11-22 | 2021-11-22 | Wearer identification device, wearer identification system, wearer identification method, and wearer identification program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023089822A1 true WO2023089822A1 (en) | 2023-05-25 |
Family
ID=86396547
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/042778 WO2023089822A1 (en) | 2021-11-22 | 2021-11-22 | Wearer identification device, wearer identification system, wearer identification method, and wearer identification program |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPWO2023089822A1 (en) |
WO (1) | WO2023089822A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009211370A (en) * | 2008-03-04 | 2009-09-17 | Oki Electric Ind Co Ltd | Iris authentication apparatus |
WO2019082988A1 (en) * | 2017-10-25 | 2019-05-02 | 日本電気株式会社 | Biometric authentication device, biometric authentication system, biometric authentication method and recording medium |
WO2021048974A1 (en) * | 2019-09-12 | 2021-03-18 | 日本電気株式会社 | Information processing device, information processing method, and storage medium |
- 2021-11-22: WO PCT/JP2021/042778, patent/WO2023089822A1/en, active (Application Filing)
- 2021-11-22: JP JP2023562093A, patent/JPWO2023089822A1/ja, active (Pending)
Also Published As
Publication number | Publication date |
---|---|
JPWO2023089822A1 (en) | 2023-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Ferlini et al. | EarGate: gait-based user identification with in-ear microphones | |
KR101497644B1 (en) | Voice and position localization | |
JP6943248B2 (en) | Personal authentication system, personal authentication device, personal authentication method and personal authentication program | |
EP2915165B1 (en) | System and method for detection of speech related acoustic signals by using a laser microphone | |
CN103344959B (en) | A kind of ultrasound positioning system and the electronic installation with positioning function | |
US20150215723A1 (en) | Wireless speaker system with distributed low (bass) frequency | |
US10932714B2 (en) | Frequency analysis feedback systems and methods | |
US11076243B2 (en) | Terminal with hearing aid setting, and setting method for hearing aid | |
US10418965B2 (en) | Positioning method and apparatus | |
US10625670B2 (en) | Notification device and notification method | |
KR20180099721A (en) | Crowd source database for sound identification | |
JP6767322B2 (en) | Output control device, output control method and output control program | |
US20230230599A1 (en) | Data augmentation system and method for multi-microphone systems | |
WO2023089822A1 (en) | Wearer identification device, wearer identification system, wearer identification method, and wearer identification program | |
AU2018322409B2 (en) | System and method for determining a location of a mobile device based on audio localization techniques | |
WO2020209337A1 (en) | Identification device, identification method, identification processing program, generation device, generation method, and generation processing program | |
Diaconita et al. | Do you hear what i hear? using acoustic probing to detect smartphone locations | |
JP4944219B2 (en) | Sound output device | |
US20160125711A1 (en) | Haptic microphone | |
JP7035525B2 (en) | Attention system, information processing device, information processing method, and program | |
JP7501619B2 (en) | Identification device, identification method, and identification program | |
US11237669B2 (en) | Method and apparatus for improving the measurement of the timing of touches of a touch screen | |
Zhou et al. | Acoustic emission source localization using coupled piezoelectric film strain sensors | |
Campeiro et al. | Damage detection in noisy environments based on EMI and Lamb waves: A comparative study | |
US9532155B1 (en) | Real time monitoring of acoustic environments using ultrasound |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21964837; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 2023562093; Country of ref document: JP |
| NENP | Non-entry into the national phase | Ref country code: DE |