CN114707132A - Brain wave encryption and decryption method and system based on emotional voice - Google Patents
- Publication number
- CN114707132A (application number CN202210501444.8A)
- Authority
- CN
- China
- Prior art keywords
- brain wave
- data
- user
- decryption
- word
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
Abstract
The invention discloses a brain wave encryption and decryption method based on emotional voice, comprising the following steps: the brain wave sensor is attached to the user's head, and the central processing unit displays a passage of content on a display interface; the user reads the passage aloud, and the central processing unit collects the user's brain wave input from the brain wave sensor; the central processing unit performs voice recognition on the reading captured by the microphone, and only when the recognized text matches the displayed content, confirming that the user actually read it, does it proceed with the brain wave processing operation; otherwise the operation is not performed. Because the brain wave signals produced when different users read aloud with emotional voice differ, the invention distinguishes users by these differing biological characteristics and thereby realizes user identity authentication.
Description
Technical Field
The invention relates to the technical field of brain wave signal encryption, in particular to a brain wave encryption and decryption method and system based on emotional voice.
Background
In recent years, authentication schemes based on biometric features have multiplied. Earlier schemes relied on external biometrics, which carry a higher risk of forgery than internal biometrics, so most schemes have shifted from external to internal features. Brain waves are one such internal biometric: they are difficult to capture and unique to each person, which effectively avoids the forgery problem, and their continuity allows continuous verification of a user. In particular, different people produce different brain waves when reading aloud, yet no existing scheme encrypts brain waves on the basis of emotional voice.
An existing patent proposes a "trick lock based on brain-computer switching technology and an encryption and decryption method thereof" (application number 201410101482.X). Its main idea is to read a password out of the brain waves and use that password to encrypt and decrypt. The final encryption and decryption decision rests on the password, so anyone who knows the password can decrypt. The brain wave is merely a medium for conveying the password; the brain waves of different people play no distinguishing role, and the biological characteristics of different people cannot be told apart. The prior art therefore has the following disadvantage: it cannot distinguish different people, and decryption requires only knowledge of the password.
Disclosure of Invention
Therefore, there is a need for a brain wave encryption and decryption method and system based on emotional voice that solves the problem that the prior art cannot distinguish different people and can be decrypted by anyone who knows the password.
In order to achieve the above object, the present invention provides a brain wave encryption and decryption method based on emotional voice, used in a brain wave encryption and decryption system comprising a brain wave sensor, a central processing unit and a display interface connected in sequence, together with a microphone connected to the central processing unit. The method comprises the following steps: the brain wave sensor is attached to the user's head, and the central processing unit displays a passage of content on the display interface; the user reads the passage aloud while the central processing unit collects the user's brain wave input from the brain wave sensor; the central processing unit performs voice recognition on the reading captured by the microphone and, only if the recognized text matches the displayed content, thereby confirming that the user actually read it, proceeds with the brain wave processing operation; otherwise the operation is not performed. The brain wave processing operation comprises: taking the brain wave data of consecutive stages; finding, in the displayed passage, all word breaks that rank among the ten most frequent and contain two or more characters; recording the time section in which each word break was read aloud; extracting the brain wave data recorded in the same time sections; storing these data as a verification data source; and performing an encryption locking operation associated with that verification data source. Decryption enters an identity verification stage: the central processing unit displays another passage on the display interface and collects the user's brain wave input from the brain wave sensor; it captures the user's reading through the microphone, performs voice recognition, and checks the recognized text against the displayed passage, carrying out the brain wave decryption processing operation only when they match. The brain wave decryption processing operation comprises: finding, in the second passage, all word breaks that rank among the ten most frequent and contain two or more characters; recording the time section in which each was read aloud; extracting the brain wave data recorded in those sections; feeding the data into a classification verification model to compare them against the verification data source; and outputting the comparison result. If the comparison identifies the same user, decryption succeeds; otherwise decryption fails and the system remains locked.
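The word-break selection rule above (rank among the ten most frequent, at least two characters) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the sample segment list are hypothetical:

```python
from collections import Counter

def select_word_breaks(segments, top_n=10, min_len=2):
    """Pick the word breaks used as the verification basis: among the
    top_n most frequent segments, keep those with at least min_len
    characters (the patent uses the top ten and two or more characters)."""
    counts = Counter(segments)
    top = [w for w, _ in counts.most_common(top_n)]
    return [w for w in top if len(w) >= min_len]

# Hypothetical segmented passage; single characters are screened out.
segs = ["何處", "不知", "萬里", "春", "春", "春", "何處", "不知", "月", "月"]
print(select_word_breaks(segs, top_n=4))
```

Note that `Counter.most_common` breaks frequency ties in first-seen order, so the result is deterministic for a fixed passage.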
Further, the construction of the classification verification model comprises the following steps: a characteristic value extraction stage, which processes the data: the collected brain wave data are processed with the 3-Gram method and with removal of outliers lying beyond one standard deviation on either side of the mean, and the training and test data required for a preset number of cross-validation rounds are generated; and a classifier construction stage: the ensemble learning method Bagging generates a plurality of training subsets from the training and test data, each subset is trained independently into its own model with an OC-SVM, the models are tested with the same word-break test data, and a majority-decision merging rule is applied to the classification results to obtain the final classification verification model result.
Further, the characteristic value extraction stage comprises the following steps: perform word segmentation analysis on a specified Chinese article, select the available word breaks of the article with a preset screening condition, and decode and store the selected word breaks as Chinese text; collect the brain wave data produced while the user reads the article aloud, match them, after voice recognition, against the word breaks in the article, and obtain the start time and end time corresponding to each word break; extract the brain wave data of the time section delimited by each word break's start and end times; and repeat the brain wave acquisition process a preset number of times to obtain the training and test data.
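The per-word-break extraction step can be sketched as below, assuming (as the later description states) that the brain wave stream delivers one feature row per second; the function name and the toy stream are hypothetical:

```python
def eeg_for_segment(eeg_rows, start_s, end_s):
    """Return the per-second EEG feature rows whose timestamp (in whole
    seconds) falls inside [start_s, end_s], i.e. the brain wave data of
    the time section delimited by a word break's start and end times."""
    return [row for t, row in eeg_rows if start_s <= t <= end_s]

# Hypothetical stream of (second-of-day timestamp, feature vector) pairs.
stream = [(100, [0.1]), (101, [0.2]), (102, [0.3]), (103, [0.4])]
print(eeg_for_segment(stream, 101, 102))
```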
Further, the brain wave sensor is connected to the central processing unit through Bluetooth, and the step of obtaining the brain wave data of the time section corresponding to each word break comprises: adding a delay time to the start time and end time corresponding to each word break, and taking the brain wave data of the resulting time section.
Further, the preset number of repetitions is 5.
Further, the characteristic values of the brain wave data include concentration, relaxation, stress, and fatigue.
Meanwhile, the invention provides an emotion voice-based brain wave encryption and decryption system, which comprises a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the steps of the method in any one of the embodiments of the invention are realized.
Unlike the prior art, the above technical scheme distinguishes users by the different brain wave signals they produce when reading aloud with emotional voice; users are thus told apart by their individual biological characteristics, and user identity verification is realized.
Drawings
FIG. 1 is a block diagram of a system according to an embodiment;
FIG. 2 is a flowchart of a method for generating a classification verification model according to an embodiment;
FIG. 3 is a graph of word segmentation analysis of an experimental article in accordance with an illustrative embodiment;
FIG. 4 is a flowchart of read-aloud data processing according to an embodiment;
FIG. 5 is a flowchart of an electroencephalogram data processing routine according to an embodiment;
FIG. 6 is a schematic diagram of brain wave data processing for each word segment according to an embodiment;
FIG. 7 is a brain wave data graph of a classification algorithm according to an embodiment;
FIG. 8 is a diagram of five-fold cross validation for legal-user verification according to an embodiment;
FIG. 9 is a diagram of five-fold cross validation for illegal-user verification according to an embodiment;
FIG. 10 is a flow chart of classifier construction stages according to an embodiment.
Detailed Description
To explain technical contents, structural features, and objects and effects of the technical solutions in detail, the following detailed description is given with reference to the accompanying drawings in conjunction with the embodiments.
Referring to fig. 1 to 10, the present embodiment provides a brain wave encryption and decryption method and system based on emotional voice. First, user pre-enrollment is carried out: the brain wave sensor of the system is connected to the user's brain, i.e. a brain wave acquisition instrument is worn on the user's head; the system then displays a passage of content (Tang poetry, five-character quatrains, seven-character quatrains, Buddhist sutras and the like) on the display interface, the user reads the passage aloud, and the system collects the user's input from the brain wave sensor. Speech recognition through the microphone confirms that the user is indeed reading the content. When the system processes the brain wave data, continuous brain wave data are collected according to the time section in which each word break occurs, so that the brain wave sections to be analyzed are selected more precisely; this differs from past research, which used all the brain wave data. The system finds all word breaks in the displayed content that rank among the ten most frequent and contain two or more characters, records the time section of each, and extracts the brain wave data recorded in the same time sections. These brain wave data serve as the verification data source, and the encryption operation is performed at the same time.
During identity verification, a passage of content is displayed and the system collects the user's input from the brain wave sensor. The system finds all word breaks in the displayed content that rank among the ten most frequent and contain two or more characters, records the time section of each, and extracts the brain wave data recorded in the same time sections. The data are fed into the classification verification model, compared against the verification data source, and the comparison result is output. If the comparison identifies the same user, decryption succeeds; otherwise decryption fails.
The classification verification model generation process is shown in the flowchart of fig. 2. It mainly comprises a characteristic value extraction stage and a classifier construction stage. The characteristic value extraction stage mainly processes the data: the collected brain wave data are processed with the 3-Gram method and with removal of outliers lying beyond one standard deviation on either side of the mean, and the training and test data required for five-fold cross validation are generated. The classifier construction stage uses the ensemble learning method Bagging with the OC-SVM as the base classifier and obtains the final classification result with the majority-decision merging rule. The two stages are detailed below.
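The two preprocessing operations named here can be sketched as follows. The one-standard-deviation threshold follows the text; everything else (function names, toy data) is a hypothetical illustration:

```python
import statistics

def remove_outliers(values):
    """Drop samples lying more than one standard deviation from the
    mean, i.e. outside the positive/negative standard deviation band."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [v for v in values if abs(v - mu) <= sd]

def three_grams(values):
    """Slide a window of three consecutive samples over the cleaned
    sequence: a plain 3-gram over the per-second feature stream."""
    return [values[i:i + 3] for i in range(len(values) - 2)]

data = [10, 11, 9, 10, 50, 11]   # 50 is an obvious outlier
clean = remove_outliers(data)
print(three_grams(clean))
```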
In the characteristic value extraction stage, data processing comes first. As shown in fig. 3, the first step performs word segmentation analysis on a specified Chinese article, then selects the available word breaks of the article with a preset screening condition, and decodes and stores the selected word breaks as Chinese text in the system. The word breaks can be labeled manually and stored, or an existing word segmentation interface (such as the Baidu word segmentation API) can be called. Since the relation between the brain waves produced by reading aloud and the word breaks has not previously been considered, and since, from a machine learning point of view, word breaks that occur more frequently supply more usable data, the screening condition is set on the frequency with which a word break occurs in the article; that is, the invention uses the word breaks, rather than whole sentences, as the basis of judgment.
Table 1 shows the most frequent word breaks after analysis. Applying the availability conditions, namely at least two characters and a frequency rank within the top ten, yields six available word breaks after screening: "where", "not aware", "smile", "Wan Li", "Qianli" and "today". The six word breaks are decoded as Chinese and the corresponding Unicode code combinations are formed, as shown in Table 2, which also lists the number of times each word break appears in the experimental article.
TABLE 1 Frequency statistics of the most frequent two-character strings in the Complete Tang Poems (Quan Tang Shi)

| String | Freq. | String | Freq. | String | Freq. | String | Freq. | String | Freq. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Where | 166 | Nobody | 881 | Castle Peak | 662 | Running water | 550 | Sunset | 498 |
| Not aware | 146 | Air blower | 834 | Children's cycle | 634 | Return head | 544 | Not as good as | 497 |
| Wan Li | 145 | 24774 | 780 | Meet each other | 629 | Flow | 539 | Gui-go | 496 |
| Qianli | 130 | Deceased person | 778 | Pingsheng | 597 | Thus | 526 | At night | 496 |
| Today | 116 | Autumn wind | 749 | Year by year | 593 | White hair | 520 | Can not | 481 |
| Not seen | 115 | Youyou | 740 | Lonely | 592 | Master | 517 | Detached | 481 |
| Must not | 114 | Thought of a missing | 733 | Gold | 589 | Today's dynasty | 516 | When | 478 |
| Spring breeze | 112 | Chang'an | 722 | Day | 588 | Moon | 515 | At this time | 477 |
| White cloud | 110 | Daylight | 697 | The human is not | 587 | From here on | 509 | Luoyang | 476 |
| Must not obtain | 947 | How | 687 | All over the ground | 586 | Sun and moon | 508 | All over the world | 472 |
| Tomorrow month | 896 | Ten years | 678 | What things | 579 | Pedestrian | 507 | Fangcao | 472 |
| Interpersonal | 890 | Who | 663 | Jiang Shang | 553 | General | 499 | Coming back | 471 |
TABLE 2 Unicode code combinations of available word segmentations
When the subject performs the task of "reading the designated article aloud", the second step begins, as shown in fig. 4. The subject's read-aloud data and brain wave data of each round are collected simultaneously. The Unicode code combinations of the available word breaks stored in the first step are compared against the corresponding Unicode code combinations in the read-aloud data, and the start and end times of each word break are found, so that the subsequent comparison with the brain wave data and the feature acquisition can be carried out. That is, the article material is displayed for reading aloud; the voice information read by the user and its corresponding times are captured; the voice information is converted into the Unicode codes of the characters to obtain the times corresponding to those codes; and the brain wave data of the corresponding time period are then acquired.
When the subject reads an available word break aloud, the invention records the time and the Unicode code of each character. Table 3 shows the format of the read-aloud data when the subject reads "where", whose Unicode code combination is {4f55, 8655}. After comparison with the "where" code combination {4f55, 8655} stored in the first step, the subject is found to have read the word break between 16:27:4:609 and 16:27:5:308, so the corresponding start time (16:27:4) and end time (16:27:5) are recorded. Because the minimum unit of each brain wave data sample is the second, the minimum unit of the read-aloud time section is also the second.
TABLE 3 Format of the read-aloud data for "where"

| Time | Unicode code |
| --- | --- |
| 16:27:4:60 | 4f |
| 16:27:4:72 | 55 |
| 16:27:5:10 | 86 |
| 16:27:5:30 | 55 |
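The Unicode matching just described can be sketched as follows: a word break is converted to its per-character hex code points (as in Table 3), and the read-aloud stream is scanned for that combination to recover the start and end times. Function names and the toy stream are hypothetical:

```python
def unicode_codes(word):
    """Hex Unicode code point of each character, e.g. '何處' (where)
    becomes ['4f55', '8655'] as in Table 3."""
    return [format(ord(c), 'x') for c in word]

def find_segment_times(stream, word):
    """Scan a (timestamp, code) read-aloud stream for the word break's
    code combination; return the first and last matching timestamps,
    or None if the word break was not read."""
    codes = unicode_codes(word)
    for i in range(len(stream) - len(codes) + 1):
        if [c for _, c in stream[i:i + len(codes)]] == codes:
            return stream[i][0], stream[i + len(codes) - 1][0]
    return None

stream = [("16:27:4", "4f55"), ("16:27:5", "8655"), ("16:27:6", "4eca")]
print(find_segment_times(stream, "何處"))
```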
The third step, as shown in fig. 5, compares the brain wave data against the start and end time of each word break recorded in the second step to obtain the required brain wave data sections. Each brain wave sample carries twelve characteristic values: the eight band values Delta, Theta, Low Alpha, High Alpha, Low Beta, High Beta, Low Gamma and High Gamma, plus Attention, Meditation (relaxation), Pressure and Fatigue. The data are collected with the EEG202 brain wave detector developed by BeneGear: the detector transmits the EEG signal acquired by a single electrode to the EEG202 chip, the chip performs noise reduction, and a self-developed algorithm converts the result into digital measures of the person's current mental state.
In the implementation, the electroencephalograph communicates with the computer software over a Bluetooth connection, and a delay exists between the two. The invention therefore takes 3 extra seconds of brain wave data beyond the recorded end time point. As shown in fig. 6, if the recorded time section is 16:27:4 to 16:27:5, the brain wave time section actually taken is 16:27:4 to 16:27:8, ensuring that the corresponding brain wave section is captured.
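The window extension is simple to express in code; here a minimal sketch with timestamps reduced to seconds, using the 3-second delay stated above (the function name is hypothetical):

```python
def extend_window(start_s, end_s, delay_s=3):
    """Extend a recorded read-aloud window by the Bluetooth delay:
    the brain wave section taken runs from start_s to end_s + delay_s."""
    return start_s, end_s + delay_s

# Recorded 16:27:04 to 16:27:05 (as seconds within the minute):
# EEG is taken up to 16:27:08.
start, end = extend_window(4, 5)
print(start, end)
```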
The purpose of taking 3 extra seconds is to capture the corresponding brain wave section; however, owing to the limitation of the experimental instrument, the exact time of the principal brain wave data cannot be determined, so the classification algorithm is given all the possibly relevant brain wave data together. If the principal brain wave data fall within the covered span, as shown in fig. 7, verification accuracy improves. Since these feature values are likewise grouped by the classification algorithm, the brain wave data carry 36 feature values when entering the classifier construction stage.
In the classifier construction stage, each subject repeats the experiment five times. Each experiment is processed by the data processing steps of the characteristic value extraction stage, so each word break yields brain wave data numbered A to E, from which the required training and test data are generated. Since verification uses five-fold cross validation, as shown in fig. 8, the training and test data must be generated according to its requirements. Taking subject No. 1 as an example: one of the five data sets serves as the test data while the remaining four are merged into the training data; this constitutes legal-user identity verification.
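The legal-user split described here (hold out one of the A-to-E rounds, merge the other four for training) can be sketched as below; the function name is hypothetical:

```python
def five_fold_splits(rounds):
    """For one subject's A-to-E round data, yield (train, test) pairs:
    each round in turn is held out as test data while the remaining
    four rounds are merged into the training data."""
    for i in range(len(rounds)):
        test = rounds[i]
        train = [r for j, r in enumerate(rounds) if j != i]
        yield train, test

rounds = ["A", "B", "C", "D", "E"]
for train, test in five_fold_splits(rounds):
    print(test, train)
```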
For the verification of an illegal user, as shown in fig. 9, the A-to-E brain wave data generated by all subjects except subject No. 1 are merged by matching letter, and data matching the letters of subject No. 1's test data are then drawn at random from the merged sets to serve as subject No. 1's illegal-verification test data.
After the required training and test data have been generated, the ensemble learning method begins, as shown in fig. 10. First, Bagging is applied to the training data of each word break to generate a plurality of training subsets; each subset is trained independently into its own model with an OC-SVM; the models are then tested with the same test data of each word break; and finally a majority-decision merging rule is applied to the classification results to obtain the final classification verification model result.
In addition to comparing the experimental settings and the word breaks, the invention compares three merging rules: the majority decision method, the simple average method and the weighted average method. The two additional merging rules are included to identify the rule with the better effect. Under the simple average method, a final average value of 50% or more indicates a correct classification, and a value below 50% indicates a misclassification.
For the weighted average method, the usual weight setting is based on the positive example ratio within each classification verification model; but the classification model used here is a one-class model, so the weights cannot be determined in the usual way. Considering that the occurrence frequency of each word break may influence the classification result to a different degree, the invention uses frequency as the basis of the weight values: the ratio of each word break's occurrence frequency to the total occurrence frequency serves as that word break's weight, which achieves a better effect.
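The frequency-ratio weighting just described can be sketched as follows; the function names, the 0.5 acceptance threshold (taken from the simple average rule above) and the sample scores are illustrative assumptions:

```python
def frequency_weights(freqs):
    """Weight of each word break = its occurrence frequency divided by
    the total occurrence frequency, as in the weighted average rule."""
    total = sum(freqs.values())
    return {w: f / total for w, f in freqs.items()}

def weighted_vote(scores, weights):
    """Weighted average of per-word-break classification outcomes
    (1 = accepted as the legal user, 0 = rejected); >= 0.5 accepts."""
    avg = sum(weights[w] * s for w, s in scores.items())
    return avg >= 0.5

# Hypothetical frequencies for four word breaks (Table 1 values).
freqs = {"何處": 166, "不知": 146, "萬里": 145, "今日": 116}
w = frequency_weights(freqs)
print(weighted_vote({"何處": 1, "不知": 1, "萬里": 0, "今日": 1}, w))
```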
The invention also provides a brain wave encryption and decryption system based on emotional voice, comprising a storage medium in which a computer program is stored; when the computer program is executed by a processor, the steps of the above method are realized. The storage medium of this embodiment may be built into an electronic device, which reads its content and achieves the effects of the invention; it may also be a separate storage medium connected to the electronic device, whose content the device reads to implement the method steps of the invention.
It should be noted that although the above embodiments have been described herein, the invention is not limited thereto. Changes and modifications made to the described embodiments based on the innovative concepts of the invention, and equivalent structures or equivalent processes derived from the content of this specification and the attached drawings, whether applied directly or indirectly in other related technical fields, all fall within the scope of the invention.
Claims (7)
1. A brain wave encryption and decryption method based on emotional voice, characterized in that it is applied to a brain wave encryption and decryption system comprising a brain wave sensor, a central processing unit, a display interface and a microphone connected in sequence, the method comprising the following steps:
connecting the brain wave sensor to the user's head, the central processing unit displaying a section of content on the display interface;
the user reading the content aloud while the central processing unit collects the user's brain wave input from the brain wave sensor;
the central processing unit performing voice recognition on the user's reading captured through the microphone; when the recognized characters match the displayed content, confirming that the user has read the content and performing a brain wave processing operation, otherwise not performing the brain wave processing operation;
the brain wave processing operation comprising: taking the brain wave data over the continuous stages; finding, in the section of content, all word-breaks that rank in the top ten by occurrence frequency and contain more than two characters; recording the time section in which each word-break was read aloud; locating the brain wave data recorded in the same time sections and storing it as a verification data source; and performing an encryption locking operation associated with the verification data source;
entering an identity verification stage upon decryption: the central processing unit displays another section of content on the display interface and collects the user's brain wave input from the brain wave sensor; the central processing unit captures the user's reading through the microphone, performs voice recognition, and compares the recognized characters with the displayed other section of content; when they match, a brain wave decryption processing operation is performed, otherwise it is not performed;
the brain wave decryption processing operation comprising: finding, in the other section of content, all word-breaks that rank in the top ten by occurrence frequency and contain more than two characters; recording the time section in which each word-break was read aloud in the other section of content and locating the brain wave data recorded in the same time sections; inputting the brain wave data into a classification verification model, comparing it with the verification data source, and outputting a comparison result;
if the comparison result indicates the same user, the decryption succeeds; otherwise the decryption fails and the encrypted locked state is maintained.
2. The brain wave encryption and decryption method based on emotional voice according to claim 1, wherein the construction of the classification verification model comprises the following steps:
a characteristic value extraction stage for data processing: applying the 3-Gram method to the acquired brain wave data and removing outliers lying outside the positive and negative standard deviation bounds, then generating the training and test data required for a preset number of rounds of cross-validation;
a classifier construction stage: generating a plurality of training subsets from the training and test data using the ensemble learning method Bagging, independently training each training subset into its own model using a one-class SVM (OC-SVM), testing each model with the test data of the same word-break, and applying the majority decision merging rule to the classification results to obtain the final result of the classification verification model.
3. The brain wave encryption and decryption method based on emotional voice according to claim 2, wherein the characteristic value extraction stage comprises the following steps:
performing word-segmentation analysis on a designated Chinese article, screening the usable word-breaks of the article with preset screening conditions, and decoding and storing the selected word-breaks as Chinese text;
collecting the brain wave data while the user reads the Chinese article aloud, matching the voice-recognition output against the word-breaks in the article, and obtaining the start time and end time corresponding to each word-break;
acquiring the brain wave data of the time section bounded by the start time and end time corresponding to each word-break;
and repeating the brain wave data acquisition process a preset number of times to obtain the training and test data.
4. The brain wave encryption and decryption method based on emotional voice according to claim 3, wherein the brain wave sensor is connected to the central processing unit via Bluetooth, and the step of acquiring the brain wave data of the time section corresponding to each word-break comprises:
acquiring the brain wave data of the time section obtained after adding a delay time to the start time and end time corresponding to each word-break.
5. The brain wave encryption and decryption method based on emotional voice according to claim 3, wherein the preset number of repetitions is 5.
6. The brain wave encryption and decryption method based on emotional voice according to claim 1, wherein the characteristic values of the brain wave data include concentration, relaxation, stress and fatigue.
7. A brain wave encryption and decryption system based on emotional voice, characterized by comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the steps of the method according to any one of claims 1 to 6.
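As an illustration of the word-break selection and time-section matching in claim 1, the following sketch keeps word-breaks that rank in the top ten by occurrence frequency and contain more than two characters, then slices the brain wave stream by each word-break's reading interval. The helper names and the (timestamp, feature-vector) sample layout are assumptions; the patent specifies no code:

```python
from collections import Counter

def select_word_breaks(word_breaks, top_n=10, min_chars=3):
    # Keep word-breaks with more than two characters, then rank by
    # occurrence frequency and take the top ten (top_n).
    counts = Counter(w for w in word_breaks if len(w) >= min_chars)
    return [w for w, _ in counts.most_common(top_n)]

def slice_eeg(samples, start, end):
    # samples: list of (timestamp, feature_vector) pairs from the sensor;
    # return the feature vectors recorded inside [start, end], i.e. the
    # time section in which the word-break was read aloud.
    return [f for t, f in samples if start <= t <= end]
```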
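The feature-value processing of claims 2 and 6 could be sketched as follows, assuming each sample is the four-value vector (concentration, relaxation, stress, fatigue). The 3-Gram step concatenates three consecutive samples into one feature vector; the outlier step drops values outside the mean plus or minus one standard deviation (the exact bound, `k=1.0`, is an assumption since the patent only says "outside positive and negative standard deviations"):

```python
from statistics import mean, pstdev

def three_gram(samples):
    # Slide a window over three consecutive feature vectors and
    # concatenate each window into a single feature vector.
    return [samples[i] + samples[i + 1] + samples[i + 2]
            for i in range(len(samples) - 2)]

def remove_outliers(values, k=1.0):
    # Drop scalar feature values lying outside mean +/- k standard deviations.
    mu, sigma = mean(values), pstdev(values)
    return [v for v in values if abs(v - mu) <= k * sigma]
```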
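Claim 2's Bagging-plus-OC-SVM construction can be sketched roughly as below. To keep the example self-contained, the OC-SVM is replaced by a simple one-class detector that accepts samples within two standard deviations of its bootstrap sample's mean; that stand-in, the scalar features, and the sigma floor are illustrative simplifications, not the patent's method:

```python
import random
from statistics import mean, pstdev

def train_one_class(train, floor=0.01):
    # Stand-in for an OC-SVM: accept a sample when it lies within two
    # standard deviations of the training mean. The floor keeps a
    # degenerate bootstrap sample from collapsing the acceptance band.
    mu, sigma = mean(train), max(pstdev(train), floor)
    return lambda x: abs(x - mu) <= 2 * sigma

def bagged_verifier(train, n_models=5, seed=0):
    # Bagging: each model is trained on a bootstrap resample of the
    # enrollment data; the merging rule is a strict majority decision.
    rng = random.Random(seed)
    models = [train_one_class([rng.choice(train) for _ in train])
              for _ in range(n_models)]
    return lambda x: sum(m(x) for m in models) * 2 > n_models
```

Enrolling on one user's feature values, `bagged_verifier([0.50, 0.52, 0.48, 0.51, 0.49])` yields a verifier that accepts samples near the enrolled distribution and rejects distant ones.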
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2021105520360 | 2021-05-20 | ||
CN202110552036 | 2021-05-20 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114707132A true CN114707132A (en) | 2022-07-05 |
CN114707132B CN114707132B (en) | 2023-04-18 |
Family
ID=82176243
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210501444.8A Active CN114707132B (en) | 2021-05-20 | 2022-05-09 | Brain wave encryption and decryption method and system based on emotional voice |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114707132B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101127160A (en) * | 2006-08-18 | 2008-02-20 | 苏荣华 | Word, expression induction and brain wave identification method and the language study instrument |
CN103810780A (en) * | 2014-03-18 | 2014-05-21 | 苏州大学 | Coded lock based on brain-computer switching technique and encryption and decryption method of coded lock |
CN105125210A (en) * | 2015-09-09 | 2015-12-09 | 陈包容 | Brain wave evoking method and device |
US20170228526A1 (en) * | 2016-02-04 | 2017-08-10 | Lenovo Enterprise Solutions (Singapore) PTE. LTE. | Stimuli-based authentication |
US20170325720A1 (en) * | 2014-11-21 | 2017-11-16 | National Institute Of Advanced Industrial Science And Technology | Authentication device using brainwaves, authentication method, authentication system, and program |
CN108234130A (en) * | 2017-12-04 | 2018-06-29 | 阿里巴巴集团控股有限公司 | Auth method and device and electronic equipment |
CN108304073A (en) * | 2018-02-11 | 2018-07-20 | 广东欧珀移动通信有限公司 | Electronic device, solution lock control method and Related product |
CN108418962A (en) * | 2018-02-13 | 2018-08-17 | 广东欧珀移动通信有限公司 | Information response's method based on brain wave and Related product |
CN108564011A (en) * | 2017-08-01 | 2018-09-21 | 南京邮电大学 | A kind of personal identification method that normal form being presented based on brain electricity Rapid Speech |
CN110413125A (en) * | 2019-08-02 | 2019-11-05 | 广州市纳能环保技术开发有限公司 | Conversion method, electronic equipment and storage medium based on brain wave |
Non-Patent Citations (1)
Title |
---|
Lin Ying: "Research on a Stress Measurement Model Based on ECG and EEG Signals" *
Also Published As
Publication number | Publication date |
---|---|
CN114707132B (en) | 2023-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Chan et al. | Challenges and future perspectives on electroencephalogram-based biometrics in person recognition | |
US9942225B2 (en) | User authentication via evoked potential in electroencephalographic signals | |
Maiorana et al. | Longitudinal evaluation of EEG-based biometric recognition | |
CN108776788B (en) | Brain wave-based identification method | |
US7249263B2 (en) | Method and system for user authentication and identification using behavioral and emotional association consistency | |
CN105249963B (en) | N400 evoked potential lie detection method based on sample entropy |
CN106503517B (en) | A kind of security certification system based on the acquisition of virtual implementing helmet brain line | |
Keshishzadeh et al. | Improved EEG based human authentication system on large dataset | |
Jianfeng et al. | Multi-feature authentication system based on event evoked electroencephalogram | |
Pham et al. | EEG-based user authentication in multilevel security systems | |
Gui et al. | Multichannel EEG-based biometric using improved RBF neural networks | |
Pan et al. | Depression detection based on reaction time and eye movement | |
Gui et al. | Towards EEG biometrics: Pattern matching approaches for user identification | |
Xie et al. | WT feature based emotion recognition from multi-channel physiological signals with decision fusion | |
Moreno-Rodriguez et al. | BIOMEX-DB: A cognitive audiovisual dataset for unimodal and multimodal biometric systems | |
Arias-Cabarcos et al. | Performance and usability evaluation of brainwave authentication techniques with consumer devices | |
Fallahi et al. | BrainNet: Improving Brainwave-based Biometric Recognition with Siamese Networks | |
CN114707132B (en) | Brain wave encryption and decryption method and system based on emotional voice | |
Boubakeur et al. | EEG-based person recognition analysis and criticism | |
KR101281852B1 (en) | Biometric authentication device and method using brain signal | |
Hu et al. | EEG authentication system based on auto-regression coefficients | |
Jian-feng | Multifeature biometric system based on EEG signals | |
LU505929B1 (en) | Electroencephalogram encryption and decryption method and system based on global learning | |
Pham | EEG-based person authentication for security systems | |
Chen et al. | An Identity Authentication Method Based on Multi-modal Feature Fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||