CN110368005A - A kind of intelligent earphone and mood and physiological health monitoring method based on intelligent earphone - Google Patents
- Publication number
- CN110368005A (application number CN201910677898.9A)
- Authority
- CN
- China
- Prior art keywords
- heartbeat
- mood
- ear canal
- intelligent earphone
- sound
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6801—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
- A61B5/6813—Specially adapted to be attached to a specific body part
- A61B5/6814—Head
- A61B5/6815—Ear
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/0205—Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/024—Detecting, measuring or recording pulse rate or heart rate
- A61B5/02438—Detecting, measuring or recording pulse rate or heart rate with portable devices, e.g. worn by the patient
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6801—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
- A61B5/6802—Sensor mounted on worn items
- A61B5/6803—Head-worn items, e.g. helmets, masks, headphones or goggles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7253—Details of waveform analysis characterised by using transforms
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7253—Details of waveform analysis characterised by using transforms
- A61B5/7257—Details of waveform analysis characterised by using transforms using Fourier transforms
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
- A61M21/02—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/45—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of analysis window
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/70—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
- H04R1/1091—Details not provided for in groups H04R1/1008 - H04R1/1083
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
- A61M2021/0005—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
- A61M2021/0027—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the hearing sense
Abstract
The invention discloses an intelligent earphone and a mood and physiological health monitoring method based on the intelligent earphone. A microphone on an in-ear earphone collects the sound of the human ear canal; the weak ear-canal sound signal is amplified; the collected data are transmitted to an intelligent terminal for data processing; and the intelligent monitoring terminal processes and analyzes the received sound data, identifies the wearer's mood, obtains physiological health information, and feeds the results back to the user. The earphone is a wearable intelligent earphone whose heartbeat-sound detection uses the microphone to extract ear-canal sound. The hardware cost of the invention is low, the device is convenient to carry and use, mood and physiological health can be monitored anytime and anywhere without drawing the user's attention, and the system is suitable for daily, long-term use.
Description
Technical field
The invention belongs to the field of earphones and health equipment, and in particular relates to an intelligent earphone and a mood and physiological health monitoring method based on the intelligent earphone.
Background art
Nowadays, as the stresses of life keep increasing, more and more people suffer from emotional instability and prolonged low mood, and some go on to develop depression, anxiety disorders, and similar conditions. The ability to monitor mood promptly and continuously, and to provide mood-soothing feedback, has therefore become particularly important.
To realize mood monitoring, the prior art mainly includes the following approaches:
(1) Emotion recognition based on facial expressions. This method uses a camera to continuously track changes in facial expression; it is expensive, requires the user's active cooperation, raises privacy concerns, and is easy to fake, so it cannot measure the user's true inner mood.
(2) Emotion recognition based on speech signals. This method analyzes either the semantic content of speech or the speaker's prosody. It likewise risks leaking the content of the user's speech, is strongly affected by individual differences in how people express emotion, is just as easy to fake and therefore cannot measure true inner mood, and can only monitor while the user is speaking, so it requires the user's cooperation.
(3) Emotion recognition based on physiological signals, for example electroencephalogram (EEG), electromyogram (EMG), galvanic skin response, electrocardiogram (ECG), pulse, and respiration (RSP) signals. This method correlates more closely with a person's inner emotional state, because physiological signals are governed only by the autonomic nervous system and the endocrine system. However, the equipment needed to measure physiological signals precisely is usually bulky and inconvenient to carry, which hinders the user's daily activities.
(4) Multimodal emotion recognition, which combines two or more of the signal types above. Although it offers higher accuracy, it also inherits the drawbacks of all the combined methods.
In conclusion the equipment of real-time monitoring human feelings thread is primarily present following disadvantage in the prior art:
Equipment is inconvenient to carry, is difficult to accomplish real-time monitoring;
Privacy, the information security for being easy leakage user are poor;
Monitoring result is inaccurate.
Summary of the invention
The technical problem to be solved by the present invention is to provide an intelligent earphone that overcomes the inconvenient portability of prior-art health monitoring equipment.
The present invention adopts the following technical scheme to solve the above technical problem:
An intelligent earphone comprises an earphone body, an in-line control body, and an intelligent terminal. Arranged in the earphone body are a microphone for capturing sound in the ear canal, an ear-canal sound signal amplifying circuit, a communication module, and an intelligent earphone controller. The sound signal captured in the ear canal by the microphone is amplified by the amplifying circuit and output to the intelligent earphone controller; the controller directs the communication module to transmit the acquired sound data to an external monitoring terminal, which processes the ear-canal sound signal, extracts the heartbeat sound signal, and obtains emotional features and physiological health features from the heartbeat sound. A button for starting the in-ear sound detection microphone is arranged on the in-line control body.
The in-line control body comprises a control housing and, arranged on it, volume keys, a power button, a call microphone, and the ear-canal sound microphone button.
The ear-canal sound microphone button is arranged on the front of the control housing.
A Type-C charging port is arranged on the side of the control housing.
The present invention also provides a mood and physiological health monitoring method based on the intelligent earphone, which solves the poor real-time performance, poor security, and inaccurate monitoring results of prior-art health monitoring equipment.
The present invention adopts the following technical scheme to solve the above technical problem:
A mood and physiological health monitoring method based on an intelligent earphone comprises the following steps:
Step 1: collect the original sound signal in the ear canal using the microphone in the earphone body;
Step 2: amplify the sound signal from the human ear canal through the signal amplification circuit;
Step 3: send the amplified ear-canal sound signal to the external monitoring terminal through the communication module;
Step 4: after the external monitoring terminal receives the data, process the ear-canal sound signal and extract heartbeat sound features; compare the heartbeat sound features against the data in a pre-stored heart-pattern classifier to obtain the current physiological health features, and against the data in a pre-stored mood classifier to obtain the current emotional features;
Step 5: display and archive the emotional features and physiological health features from Step 4, and generate a health report.
In Step 4, the sound in the ear canal is processed as follows:
Step 4-1: frame the amplified ear-canal sound using a Hamming window function;
Step 4-2: filter the sound signal in each window to obtain the heartbeat sound signal;
Step 4-3: perform feature extraction on the heartbeat sound signal to obtain characteristic parameters corresponding to the data in the heart-pattern classifier and in the mood classifier.
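As an illustrative sketch (not part of the patent's disclosure), Step 4-1 can be expressed in a few lines of Python; the 1-second frame length and 50% overlap are assumed values, since the patent does not fix them:

```python
import numpy as np

def frame_signal(x, sample_rate=44100, frame_len_s=1.0, hop_s=0.5):
    """Split the amplified ear-canal recording into overlapping
    Hamming-windowed frames (Step 4-1).  Frame and hop lengths are
    illustrative choices, not values fixed by the patent."""
    frame_len = int(frame_len_s * sample_rate)
    hop = int(hop_s * sample_rate)
    window = np.hamming(frame_len)  # tapers frame edges to reduce spectral leakage
    frames = [x[s:s + frame_len] * window
              for s in range(0, len(x) - frame_len + 1, hop)]
    return np.array(frames)

# A 3-second recording yields 5 half-overlapping 1-second frames.
signal = np.random.randn(3 * 44100)
frames = frame_signal(signal)
print(frames.shape)  # (5, 44100)
```

Each frame then goes to the filtering of Step 4-2 independently.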
The heart-pattern classifier is established as follows:
Step a: collect in advance a certain quantity of heartbeat sound data under different heart patterns;
Step b: attach the corresponding heart-pattern label to each set of heartbeat sound data;
Step c: extract heartbeat sound features from the heartbeat sound data;
Step d: train the heart-pattern classifier using machine learning or deep learning methods.
The heart-pattern labels include fast heartbeat (tachycardia), slow heartbeat (bradycardia), irregular heartbeat, and heartbeat pauses.
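The patent leaves the choice of model open ("machine learning or deep learning"). As a minimal stand-in for steps a-d, the sketch below trains a nearest-centroid classifier on hypothetical two-dimensional heartbeat features (beats per minute, beat-interval variance); both the features and the tiny data set are invented for illustration:

```python
import numpy as np

def train_centroids(X, y):
    """Minimal stand-in for step d: learn one mean feature vector per
    heart-pattern label.  A real system would use a stronger model."""
    return {label: X[y == label].mean(axis=0) for label in set(y)}

def predict(centroids, x):
    """Assign the label whose centroid is nearest in feature space."""
    return min(centroids, key=lambda lab: np.linalg.norm(x - centroids[lab]))

# Hypothetical 2-D features: (beats per minute, beat-interval variance).
X = np.array([[150, 5], [155, 6],    # fast heartbeat
              [45, 4],  [50, 5],     # slow heartbeat
              [80, 40], [85, 45]])   # irregular heartbeat
y = np.array(["tachycardia", "tachycardia",
              "bradycardia", "bradycardia",
              "irregular", "irregular"])

model = train_centroids(X, y)
print(predict(model, np.array([152, 5])))  # tachycardia
```

The trained mapping from features to labels is exactly the "classifier" that step d produces.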
The mood classifier is established as follows:
Step A: collect heartbeat sound data under a certain quantity of different moods;
Step B: attach the corresponding mood label to the heartbeat sound data for each mood;
Step C: extract heartbeat sound features from the heartbeat sound data;
Step D: train the mood classifier using machine learning or deep learning methods.
The mood labels include at least one mood from Plutchik's wheel of emotions.
Compared with the prior art, the invention has the following advantages:
1. In addition to providing ordinary earphone functions, the earphone system includes a microphone for capturing sound in the ear canal, so that mood monitoring and physiological health monitoring faithful to the user's inner mood and physiological characteristics can be carried out from the ear-canal sound.
2. The hardware cost of the invention is low and the device is convenient to carry and use. As long as the user wears the earphone as usual, the system can continuously track the user's inner mood and physiological characteristics in real time while protecting the user's privacy, and can provide mood-soothing measures such as voice adjustment and song recommendation, making it suitable for routine use.
3. The heart-pattern classifier and the mood classifier are trained by machine learning and deep learning methods, so that monitoring results can be obtained quickly and accurately at application time.
4. The sound data are framed with a Hamming window function and the sound signal in each window is then filtered, so the signal corresponding to the heartbeat sound can be extracted accurately, enhancing the accuracy of the method.
Detailed description of the invention
Fig. 1 is a schematic structural diagram of the in-ear earphone body of the intelligent earphone of the present invention.
Fig. 2 is a schematic structural diagram of the in-line control part of the intelligent earphone of the present invention.
Fig. 3 is a flow chart of the monitoring method of the present invention.
Fig. 4 is a flow chart of the sound signal processing in the monitoring method of the present invention.
Reference numerals in the figures: 1 - in-ear earplug; 2 - sound detection microphone; 3 - earphone speaker; 4 - ear-canal sound signal amplifying circuit board; 5 - earphone housing; 6 - in-line control body; 7 - Type-C charging port; 8 - volume up key; 9 - play/pause key; 10 - volume down key; 11 - button for starting the sound monitoring microphone; 12 - call microphone.
Specific embodiment
The structure and working process of the invention are described further below with reference to the accompanying drawings.
An intelligent earphone comprises an earphone body, an in-line control body, and an intelligent terminal. Arranged in the earphone body are a microphone for capturing sound in the ear canal, an ear-canal sound signal amplifying circuit, a communication module, and an intelligent earphone controller. The sound signal captured in the ear canal by the microphone is amplified by the amplifying circuit and output to the intelligent earphone controller; the controller directs the communication module to transmit the acquired sound data to an external monitoring terminal, which processes the ear-canal sound signal, extracts the heartbeat sound signal, and obtains emotional features and physiological health features from the heartbeat sound. A button for starting the in-ear sound detection microphone is arranged on the in-line control body.
Embodiment 1, as shown in Fig. 1 and Fig. 2:
An intelligent earphone comprises an earphone body, an in-line control body, and an intelligent terminal. The earphone body comprises an earphone housing 5 and, arranged on the housing 5, an in-ear earplug 1, a sound detection microphone 2 for capturing sound in the ear canal, an earphone speaker 3, an ear-canal sound signal amplifying circuit board 4, a communication module, and an intelligent earphone controller. The sound signal captured in the ear canal by the sound detection microphone 2 is amplified by the amplifying circuit and output to the intelligent earphone controller; the controller directs the communication module to send the acquired sound data to the external monitoring terminal, which processes the ear-canal sound signal, extracts the heartbeat sound signal, and obtains emotional features and physiological health features from the heartbeat sound. The in-line control body 6 comprises a control housing on whose front are arranged a volume up key 8, a play/pause key 9, a volume down key 10, a button 11 for starting the sound monitoring microphone, and a call microphone 12; a Type-C charging port 7 is arranged on the side of the control housing.
The intelligent earphone controller, a power module, and the communication module are arranged inside the control housing.
The working principle and working process of the intelligent earphone are as follows:
The in-ear earphone is inserted into the ear canal and the sound-monitoring microphone button on the in-line control body is pressed. The microphone in the earphone body collects the original sound signal in the ear canal; while collecting ear-canal sound, the earphone can also play music and other audio normally. The ear-canal sound signal captured by the in-ear microphone is amplified by the amplifying circuit and output to the earphone controller, which directs the communication module to send the acquired sound data to the external monitoring terminal. The external monitoring terminal processes the ear-canal sound signal, including framing and filtering, extracts the heartbeat sound signal, and obtains mood data features and physiological health data features from the heartbeat sound data. The data features are then input into the pre-trained heart-pattern classifier and mood classifier, which infer from the input features the heart-beat status category, related information such as heart rate, and the emotional state. The resulting emotional state and physiological health state are displayed and archived, a health report is generated and archived, and mood-soothing measures such as voice adjustment and song recommendation are provided.
A mood and physiological health monitoring method based on an intelligent earphone comprises the following steps:
Step 1: collect the original sound signal in the ear canal using the microphone in the earphone body;
Step 2: amplify the sound signal from the human ear canal through the signal amplification circuit;
Step 3: send the amplified ear-canal sound signal to the external monitoring terminal through the communication module;
Step 4: after the external monitoring terminal receives the data, process the ear-canal sound signal and extract heartbeat sound features; compare the heartbeat sound features against the data in a pre-stored heart-pattern classifier to obtain the current physiological health features, and against the data in a pre-stored mood classifier to obtain the current emotional features;
Step 5: display and archive the emotional features and physiological health features from Step 4, and generate a health report.
Embodiment 2, as shown in Fig. 3 and Fig. 4:
A mood and physiological health monitoring method based on an intelligent earphone comprises the following steps:
Step 1: insert the in-ear earphone into the ear canal, press the sound-monitoring microphone button on the in-line control body, and collect the original sound signal in the ear canal using the microphone in the earphone body;
Step 2: amplify the weak sound signal from the human ear canal through the signal amplification circuit of the in-ear earphone;
Step 3: send the amplified ear-canal sound signal to the external monitoring terminal through the communication module; in this embodiment, a monitoring APP is installed on the external monitoring terminal in advance;
Step 4: after the external monitoring terminal receives the data, process the ear-canal sound signal and extract heartbeat sound features; compare the heartbeat sound features against the data in the pre-stored heart-pattern classifier to obtain the current physiological health features, and against the data in the pre-stored mood classifier to obtain the current emotional features;
Step 5: display and archive the emotional features and physiological health features from Step 4 in the APP interface, generate a health report, and optionally play prompts in the earphone in other forms such as voice or songs.
In Step 1, besides starting the mood monitoring and physiological health monitoring function manually via the button on the in-line control, the microphone for heartbeat sound detection can also be started automatically on a timer to realize the mood monitoring and physiological health monitoring functions.
In this embodiment, the sound in the ear canal in Step 4 is processed as follows:
Step 4-1: frame the amplified ear-canal sound using a Hamming window function, dividing the amplified ear-canal sound into multiple small windows;
Step 4-2: filter the sound signal in each window to obtain the heartbeat sound signal. Because the user may be playing music at the same time, the collected original sound signal may be a mixture of music, heartbeat sounds, speech, and ambient noise, and it must be filtered to extract the signal corresponding to the heartbeat sound. Since an adult's heart rate lies between 40 and 100 BPM at rest and can reach up to 220 BPM during exercise, and the microphone's sample rate is 44.1 kHz, a low-pass filter with a normalized cutoff frequency of 1.66 × 10⁻⁴ is applied to filter out noise; in addition, common techniques such as wavelet filtering and mean filtering are used to extract the signal corresponding to the heartbeat sound;
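The 1.66 × 10⁻⁴ figure follows from the numbers in the text: 220 BPM is about 3.67 Hz, and dividing by the Nyquist frequency of a 44.1 kHz microphone gives the normalized cutoff. A short sketch reproducing that arithmetic, with a simple moving-average (mean) filter as one of the smoothing techniques mentioned; the window size is an assumed parameter:

```python
import numpy as np

# Normalized low-pass cutoff implied by the text: 220 BPM -> ~3.67 Hz,
# divided by the Nyquist frequency (half the 44.1 kHz sample rate).
max_bpm = 220
sample_rate = 44_100
cutoff_hz = max_bpm / 60                      # ~3.67 Hz
normalized_cutoff = cutoff_hz / (sample_rate / 2)
print(round(normalized_cutoff, 6))            # 0.000166

def mean_filter(x, k):
    """Moving-average (mean) filter, one of the smoothing techniques
    named above; window size k is an illustrative choice."""
    return np.convolve(x, np.ones(k) / k, mode="same")

smoothed = mean_filter(np.random.randn(1000), k=101)  # same length as input
```

A real implementation would combine this with a proper low-pass design (e.g. a Butterworth filter at the cutoff above) and wavelet filtering, as the text indicates.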
Step 4-3: perform feature extraction on the heartbeat sound signal to obtain characteristic parameters, such as time-domain, frequency-domain, and energy features, corresponding to the data in the heart-pattern classifier and in the mood classifier.
For the acquired heartbeat sound signal, time-frequency transformation techniques, including but not limited to the fast Fourier transform (FFT), the short-time Fourier transform, the Wigner-Ville distribution (WVD), and the wavelet transform, are used to extract time-domain and frequency-domain features, including but not limited to the time-frequency spectrogram, Mel-frequency spectral coefficients, Mel-frequency cepstral coefficients, root mean square, zero-crossing rate, and spectral entropy, as well as time-domain waveform features such as the P wave, R wave, S wave, T wave, and the original sound signal.
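A sketch of a few of the listed features (root mean square, zero-crossing rate, dominant FFT frequency); spectrograms, Mel-cepstral coefficients, and spectral entropy would be computed along the same lines. The synthetic 2 Hz test tone is an invented stand-in for a 120 BPM heartbeat signal:

```python
import numpy as np

def heartbeat_features(frame, sample_rate=44100):
    """Extract a few of the features listed above from one filtered frame:
    root mean square, zero-crossing rate, and the dominant FFT frequency."""
    rms = np.sqrt(np.mean(frame ** 2))
    # Zero-crossing rate: fraction of samples where the sign flips.
    zero_crossings = np.sum(np.abs(np.diff(np.signbit(frame).astype(int))))
    zcr = zero_crossings / len(frame)
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1 / sample_rate)
    dominant_hz = freqs[np.argmax(spectrum)]
    return rms, zcr, dominant_hz

# A pure 2 Hz tone (120 BPM equivalent) has a dominant frequency of 2 Hz.
t = np.arange(0, 2, 1 / 44100)
rms, zcr, dominant = heartbeat_features(np.sin(2 * np.pi * 2 * t))
print(round(dominant, 1))  # 2.0
```

These per-frame feature vectors are what the classifiers below consume.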
The heart-pattern classifier is established as follows:
Step a: in the establishment stage of the heart-pattern classifier, collect in advance heartbeat sound data under a certain quantity of different heart patterns;
Step b: attach the corresponding heart-pattern label to each set of heartbeat sound data; the heart-pattern labels include but are not limited to fast heartbeat, slow heartbeat, irregular heartbeat, and heartbeat pauses;
Step c: extract heartbeat sound features from the heartbeat sound data;
Step d: train the heart-pattern classifier using machine learning or deep learning methods; the heart-pattern classifier is a mapping from heartbeat sound features to heart-pattern labels.
Heartbeat sound characteristic can be three-dimensional time-frequency figure, embodiment of the different heart patterns on map in the embodiment
It is different, therefore, is compared by multiple maps to distinguish, furthermore, it is possible to which there are also other feature such as P waves, R wave, S
The time domain waveforms features such as wave, T wave, original sound signal are classified by multiple feature collective effects.
When being predicted, then the feature extracted is input to by the heartbeat data that will acquire by feature extraction
Established heart pattern classifier, heart pattern classifier infer classification results according to trained mapping relations.
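The disclosure leaves the learning algorithm open ("machine learning or deep learning"). As a minimal self-contained stand-in for the feature-to-label mapping of steps a–d, here is a nearest-centroid classifier; the label names come from the text, but the feature vectors and the centroid method itself are assumptions for illustration.

```python
import numpy as np


class HeartPatternClassifier:
    """Toy stand-in: maps heartbeat feature vectors to heart-pattern labels
    (e.g. 'tachycardia', 'bradycardia', 'irregular', 'pause') by nearest centroid."""

    def fit(self, X, y):
        # Step d: learn one centroid per heart-pattern label
        X = np.asarray(X, dtype=float)
        y = np.asarray(y)
        self.centroids_ = {label: X[y == label].mean(axis=0) for label in set(y)}
        return self

    def predict(self, X):
        # Prediction: assign each feature vector to the closest centroid
        return [
            min(self.centroids_,
                key=lambda lab: np.linalg.norm(np.asarray(x, float) - self.centroids_[lab]))
            for x in X
        ]
```

A real embodiment would substitute a stronger learner (e.g. a convolutional network over the three-dimensional time-frequency maps), but the fit/predict mapping structure is the same.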
The mood classifier is established as follows:
Step A: in the establishment stage of the mood classifier, heartbeat data under a certain number of different moods are collected;
Step B: because different moods lead to different heartbeat patterns, the extracted ear-canal sound can reveal and distinguish different moods, such as happiness, sadness, anger, fear and no emotion (but is not limited to these moods); the heartbeat data under each mood is labeled with the corresponding mood label; mood labels include but are not limited to happiness, sadness, fear and anger (cf. Plutchik's model of emotions), and each mood label includes at least one mood from Plutchik's model;
Step C: heartbeat features are extracted from the heartbeat data;
Step D: the mood classifier is trained by machine learning or deep learning; the classifier is a mapping between heartbeat sound features and mood labels.
At prediction time, features are extracted from the acquired heartbeat data and input into the established mood classifier, which infers the corresponding mood from the trained mapping.
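Steps A–D and the prediction step can be sketched analogously, here as a 1-nearest-neighbor mapping with labels drawn from Plutchik's eight primary emotions; the toy feature vectors and the 1-NN choice are illustrative assumptions, not the patent's method.

```python
import numpy as np

# Plutchik's eight primary emotions, from which the text says mood labels are drawn
PLUTCHIK = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]


def train_mood_classifier(features, labels):
    """Steps A-D collapsed into a 1-NN mapping from heartbeat features to mood labels."""
    assert all(lab in PLUTCHIK for lab in labels)
    X = np.asarray(features, dtype=float)

    def predict(x):
        # Prediction: return the mood label of the closest training example
        dists = np.linalg.norm(X - np.asarray(x, dtype=float), axis=1)
        return labels[int(np.argmin(dists))]

    return predict
```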
The emotion-recognition and physiological-health results are displayed and archived on the APP interface, establishing long-term monitoring and tracking of mood and physiological health and generating a health report; optionally, a voice prompt can be broadcast, or songs can be adjusted and recommended according to the mood.
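The prompt-broadcast and song-recommendation behavior could be a simple mood-to-action lookup on the APP side; every name below (moods, prompts, playlists) is a hypothetical placeholder, not from the patent.

```python
# Hypothetical mood-to-action table for the APP-side behavior described above.
MOOD_ACTIONS = {
    "sadness": {"prompt": "You seem low; take a short break.", "playlist": "uplifting"},
    "anger":   {"prompt": "Try a breathing exercise.",         "playlist": "calm"},
    "joy":     {"prompt": None,                                "playlist": "energetic"},
}


def act_on_mood(mood):
    """Return the voice prompt and recommended playlist for a recognized mood."""
    return MOOD_ACTIONS.get(mood, {"prompt": None, "playlist": "neutral"})
```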
Claims (10)
1. An intelligent earphone, comprising an earphone body, an in-line control body and an intelligent terminal, characterized in that: the earphone body is provided with a microphone for capturing sound inside the ear canal, an ear-canal sound-signal amplifying circuit, a communication module and an intelligent-earphone controller; wherein the sound signal captured by the in-ear microphone is amplified by the amplifying circuit and output to the intelligent-earphone controller, which controls the communication module to send the captured sound signal to an external monitoring terminal; the external monitoring terminal processes the ear-canal sound signal, extracts the heartbeat signal, and obtains emotional features and physiological-health features from the heartbeat signal; and the in-line control body is provided with a microphone button for starting the in-ear sound-detection microphone.
2. The intelligent earphone according to claim 1, characterized in that: the in-line control body comprises an in-line control housing and, arranged on the housing, a tuning key, a power button, a call microphone and the ear-canal microphone button.
3. The intelligent earphone according to claim 2, characterized in that: the ear-canal microphone button is arranged on the front of the in-line control housing.
4. The intelligent earphone according to claim 2, characterized in that: a Type-C charging port is arranged on the side of the in-line control housing.
5. A mood and physiological-health monitoring method based on an intelligent earphone, characterized by comprising the following steps:
Step 1: the microphone in the earphone body captures the raw sound signal inside the ear canal;
Step 2: the signal amplifying circuit amplifies the ear-canal sound signal;
Step 3: the communication module sends the amplified ear-canal sound signal to the external monitoring terminal;
Step 4: after receiving the data, the external monitoring terminal processes the ear-canal sound signal, extracts heartbeat features, compares them with the data in the pre-stored heart-pattern classifier to obtain the current physiological-health features, and compares them with the data in the pre-stored mood classifier to obtain the current emotional features;
Step 5: the emotional features and physiological-health features from step 4 are displayed and archived, and a health report is generated.
6. The intelligent-earphone-based mood and physiological-health monitoring method according to claim 5, characterized in that the processing of the ear-canal sound in step 4 comprises the following steps:
Step 4-1: the amplified ear-canal sound is divided into frames using a Hamming window function;
Step 4-2: the sound signal in each window is filtered to obtain the heartbeat signal;
Step 4-3: feature extraction is performed on the heartbeat signal to obtain the characteristic parameters corresponding to the data in the heart-pattern classifier and the mood classifier.
7. The intelligent-earphone-based mood and physiological-health monitoring method according to claim 5, characterized in that the heart-pattern classifier is established as follows:
Step a: heartbeat data under a certain number of different heart patterns are collected in advance;
Step b: each set of heartbeat data is labeled with its corresponding heart-pattern label;
Step c: heartbeat features are extracted from the heartbeat data;
Step d: the heart-pattern classifier is trained by machine learning or deep learning.
8. The intelligent-earphone-based mood and physiological-health monitoring method according to claim 7, characterized in that: the heart-pattern labels include racing heartbeat (tachycardia), slow heartbeat (bradycardia), irregular heartbeat and heartbeat pause.
9. The intelligent-earphone-based mood and physiological-health monitoring method according to claim 5, characterized in that the mood classifier is established as follows:
Step A: heartbeat data under a certain number of different moods are collected;
Step B: the heartbeat data under each mood is labeled with the corresponding mood label;
Step C: heartbeat features are extracted from the heartbeat data;
Step D: the mood classifier is trained by machine learning or deep learning.
10. The intelligent-earphone-based mood and physiological-health monitoring method according to claim 9, characterized in that: the mood labels include at least one mood from Plutchik's model of emotions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910677898.9A CN110368005A (en) | 2019-07-25 | 2019-07-25 | A kind of intelligent earphone and mood and physiological health monitoring method based on intelligent earphone |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110368005A true CN110368005A (en) | 2019-10-25 |
Family
ID=68256057
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910677898.9A Pending CN110368005A (en) | 2019-07-25 | 2019-07-25 | A kind of intelligent earphone and mood and physiological health monitoring method based on intelligent earphone |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110368005A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111666549A (en) * | 2020-06-12 | 2020-09-15 | 深圳大学 | Intelligent earphone and user identification method thereof |
CN111696538A (en) * | 2020-06-05 | 2020-09-22 | 北京搜狗科技发展有限公司 | Voice processing method, apparatus and medium |
CN112351360A (en) * | 2020-10-28 | 2021-02-09 | 深圳市捌爪鱼科技有限公司 | Intelligent earphone and emotion monitoring method based on intelligent earphone |
CN112820286A (en) * | 2020-12-29 | 2021-05-18 | 北京搜狗科技发展有限公司 | Interaction method and earphone equipment |
CN113630681A (en) * | 2021-08-05 | 2021-11-09 | 北京安声浩朗科技有限公司 | Active noise reduction earphone |
CN113907756A (en) * | 2021-09-18 | 2022-01-11 | 深圳大学 | Wearable system of physiological data based on multiple modalities |
WO2023179484A1 (en) * | 2022-03-21 | 2023-09-28 | 华为技术有限公司 | Earphone |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102512138A (en) * | 2011-11-24 | 2012-06-27 | 胡建斌 | Cardiac sound monitoring and early warning method |
WO2014057921A1 (en) * | 2012-10-09 | 2014-04-17 | Necカシオモバイルコミュニケーションズ株式会社 | Electronic apparatus and sound reproduction method |
CN206596164U (en) * | 2017-01-20 | 2017-10-27 | 歌尔股份有限公司 | Portable earphone device |
CN107997751A (en) * | 2018-01-16 | 2018-05-08 | 华南理工大学 | A kind of intelligent earphone system based on biofeedback |
CN207382517U (en) * | 2017-10-13 | 2018-05-18 | 深圳市耳海声学技术有限公司 | A kind of wire controlled bluetooth headset |
CN108132995A (en) * | 2017-12-20 | 2018-06-08 | 北京百度网讯科技有限公司 | For handling the method and apparatus of audio-frequency information |
CN108391207A (en) * | 2018-03-30 | 2018-08-10 | 广东欧珀移动通信有限公司 | Data processing method, device, terminal, earphone and readable storage medium storing program for executing |
CN109171644A (en) * | 2018-06-22 | 2019-01-11 | 平安科技(深圳)有限公司 | Health control method, device, computer equipment and storage medium based on voice recognition |
US20190022348A1 (en) * | 2017-07-20 | 2019-01-24 | Bose Corporation | Earphones for Measuring and Entraining Respiration |
CN109416729A (en) * | 2016-04-18 | 2019-03-01 | 麻省理工学院 | Feature is extracted from physiological signal |
WO2019079909A1 (en) * | 2017-10-27 | 2019-05-02 | Ecole De Technologie Superieure | In-ear nonverbal audio events classification system and method |
CN109819366A (en) * | 2019-01-31 | 2019-05-28 | 华为技术有限公司 | Wired control box and wireless headset for wireless headset |
CN109843179A (en) * | 2016-09-07 | 2019-06-04 | 皇家飞利浦有限公司 | For detecting the combining classifiers of abnormal heart sound |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110368005A (en) | A kind of intelligent earphone and mood and physiological health monitoring method based on intelligent earphone | |
US6647368B2 (en) | Sensor pair for detecting changes within a human ear and producing a signal corresponding to thought, movement, biological function and/or speech | |
WO2020186651A1 (en) | Smart sports earphones based on eeg thoughts and implementation method and system thereof | |
AU2002307038B2 (en) | Ear microphone apparatus and method | |
WO2021139327A1 (en) | Audio signal processing method, model training method, and related apparatus | |
CN102973277B (en) | Frequency following response signal test system | |
US20220188392A1 (en) | Method for user recognition and emotion monitoring based on smart headset | |
CN110367934B (en) | Health monitoring method and system based on non-voice body sounds | |
Patil et al. | The physiological microphone (PMIC): A competitive alternative for speaker assessment in stress detection and speaker verification | |
JPWO2011135789A1 (en) | EEG measurement apparatus, electrical noise estimation method, and computer program for executing electrical noise estimation method | |
CN109246515A (en) | A kind of intelligent earphone and method promoting personalized sound quality function | |
EP3954278A1 (en) | Apnea monitoring method and device | |
CN111105796A (en) | Wireless earphone control device and control method, and voice control setting method and system | |
CN110742603A (en) | Brain wave audible mental state detection method and system for realizing same | |
CN106030707A (en) | System for audio analysis and perception enhancement | |
TWI749663B (en) | Method for monitoring phonation and system thereof | |
CN206007247U (en) | A kind of bone conduction feedback device based on the anti-fatigue monitoring of brain wave | |
CN113143289A (en) | Intelligent brain wave music earphone capable of being interconnected and interacted | |
CN108392201A (en) | Brain training method and relevant device | |
CN105405447B (en) | One kind sending words respiratory noise screen method | |
US10488831B2 (en) | Biopotential wakeup word | |
CN101310695A (en) | Artificial electronic cochlea based on mobile telephone and voice tone information | |
CN108784692A (en) | A kind of Feeling control training system and method based on individual brain electricity difference | |
EP3319337A1 (en) | Method for beat detection by means of a hearing aid | |
CN107393539A (en) | A kind of sound cipher control method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||
Inventors after: Lin Jiawei; Wang Dan; Zou Yongpan; Wu Kaishun. Inventors before: Zou Yongpan; Wang Dan; Wu Kaishun; Lin Jiawei |
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2019-10-25 |