CN114707132A - Brain wave encryption and decryption method and system based on emotional voice - Google Patents


Info

Publication number
CN114707132A
CN114707132A (application CN202210501444.8A); granted as CN114707132B
Authority
CN
China
Prior art keywords
brain wave
data
user
decryption
encryption
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210501444.8A
Other languages
Chinese (zh)
Other versions
CN114707132B (en)
Inventor
吴敏豪
张廷政
Current Assignee
Guangzhou Panyu Polytechnic
Original Assignee
Guangzhou Panyu Polytechnic
Priority date
Filing date
Publication date
Application filed by Guangzhou Panyu Polytechnic
Publication of CN114707132A
Application granted
Publication of CN114707132B
Active legal status
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31: User authentication
    • G06F21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G06F3/165: Management of the audio stream, e.g. setting of volume, audio stream path
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/20: Natural language analysis
    • G06F40/279: Recognition of textual entities
    • G06F40/289: Phrasal analysis, e.g. finite state techniques or chunking
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01: Indexing scheme relating to G06F3/01
    • G06F2203/011: Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Acoustics & Sound (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses a brainwave encryption and decryption method based on emotional speech, comprising the following steps: a brainwave sensor is connected to the user's brain, and a central processing unit displays a piece of content on a display interface; the user reads the content aloud while the central processing unit collects the user's brainwave input from the brainwave sensor; the central processing unit performs speech recognition on the user's reading captured through a microphone, and carries out the brainwave processing operation only when the recognized text matches the displayed content, confirming that the user is actually reading it; otherwise no brainwave processing is performed. The invention distinguishes different users by the differences in the brainwave signals produced when they read aloud with emotional speech, thereby realizing user identity verification based on the distinct biological characteristics of different users.

Description

Brainwave encryption and decryption method and system based on emotional speech

Technical Field

The invention relates to the technical field of brainwave signal encryption, and in particular to a brainwave encryption and decryption method and system based on emotional speech.

Background

In recent years, authentication schemes based on biometric features have been increasing. Early schemes mostly relied on external biometrics, but these carry a higher risk of being counterfeited than internal biometrics, so most schemes have shifted from external to internal biometrics. As an internal biometric, brainwaves are difficult to acquire and unique, effectively avoiding the counterfeiting problem, and their continuity allows continuous verification of the user. In particular, different people exhibit different brainwaves when reading aloud, yet there is currently no encryption scheme that uses brainwaves based on emotional speech.

An existing patent proposes a "combination lock based on brain-computer interaction technology and its encryption and decryption method" (application number 201410101482.X). Its main idea is to use brainwaves to extract a password carried in the brainwaves and then perform encryption and decryption with that password. The final encryption and decryption decision still rests on the password, so anyone who knows the password can decrypt. The brainwaves merely serve as the medium for obtaining the password; the brainwaves of different people play no distinguishing role, and the biological characteristics of different people cannot be told apart. The prior art therefore has the following disadvantage: it cannot distinguish between different people, and decryption is possible for anyone who knows the password.

Summary of the Invention

To this end, it is necessary to provide a brainwave encryption and decryption method and system based on emotional speech, solving the problem that the prior art cannot distinguish between different people and can be decrypted by anyone who knows the password.

To achieve the above object, the present invention provides a brainwave encryption and decryption method based on emotional speech, for use in a brainwave encryption and decryption system comprising a brainwave sensor, a central processing unit, and a display interface connected in sequence, and a microphone connected to the central processing unit. The method comprises the following steps: the brainwave sensor is connected to the user's brain, and the central processing unit displays a piece of content on the display interface; the user reads the content aloud, and the central processing unit collects the user's brainwave input from the brainwave sensor; the central processing unit performs speech recognition on the user's reading captured through the microphone, and proceeds with the brainwave processing operation only if the recognized text is the displayed content, confirming that the user is reading it; otherwise no brainwave processing is performed. The brainwave processing operation comprises extracting from the continuous brainwave data of this stage according to the time segment in which each segmented word occurs: all segmented words in the displayed content that rank in the top ten by occurrence frequency and contain two or more characters are found, the time segment in which each word was read aloud is recorded, the brainwave data recorded in the same time segments are located, and these brainwave data are stored as the verification data source while an encryption lock operation, associated with that verification data source, is performed. For decryption, the identity verification stage is entered: the central processing unit displays another piece of content on the display interface and collects the user's brainwave input from the brainwave sensor; it captures the user's reading through the microphone, performs speech recognition, and verifies the recognized text against the displayed content; only when they match does it proceed with the brainwave decryption processing operation. The brainwave decryption processing operation comprises finding all segmented words in the second piece of content that rank in the top ten by occurrence frequency and contain two or more characters, recording the time segment in which each word was read aloud, locating the brainwave data recorded in the same time segments, inputting those brainwave data into the classification verification model, comparing them with the verification data source, and outputting the comparison result. If the comparison result indicates the same user, decryption succeeds; otherwise decryption fails and the system remains locked.
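The enrollment flow just described (display content, confirm by speech recognition, then keep only the brainwave segments recorded while the high-frequency words were read) can be sketched in Python. All function names and the token/EEG formats below are illustrative assumptions, with the speech recognizer and EEG capture stubbed out as plain inputs rather than the patent's actual hardware interfaces.

```python
from collections import Counter

def available_words(tokens):
    """Words among the top ten by occurrence frequency that have 2+ characters."""
    top10 = [w for w, _ in Counter(tokens).most_common(10)]
    return [w for w in top10 if len(w) >= 2]

def extract_segments(eeg, read_times, targets):
    """EEG samples (time, value) recorded while a target word was being read."""
    return {w: [v for t, v in eeg if s <= t <= e]
            for w, (s, e) in read_times.items() if w in targets}

def enroll(displayed, recognized, eeg, read_times):
    """Store the verification data source only when speech recognition confirms
    the user actually read the displayed content; otherwise do nothing."""
    if recognized != displayed:
        return None  # recognized text must equal the displayed content
    targets = set(available_words(displayed))
    return extract_segments(eeg, read_times, targets)
```

Verification would run the same extraction on a second piece of content and hand the result to the classification verification model instead of storing it.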

Further, the construction of the classification verification model includes the following: a feature value extraction stage, which processes the data by applying the 3-gram method to the collected brainwave data and removing outliers beyond plus or minus three standard deviations, then producing the training and test data required for cross-validation with a preset number of repetitions; and a classifier construction stage, in which, based on the training and test data, the ensemble learning method Bagging generates multiple training subsets, OC-SVM trains an independent model on each subset, the same per-word test data are used for testing, and the classification results are combined by the majority-vote rule to obtain the final classification verification model result.

Further, the feature value extraction stage includes the following steps: perform word segmentation analysis on a specified Chinese article, then apply the pre-set filtering conditions to select the article's available segmented words, decode them as Chinese, and store them; collect the brainwave data of a user reading the Chinese article aloud, match the speech-recognition output against the segmented words of the article, and obtain the start and end times corresponding to each segmented word; obtain the brainwave data of the time segment corresponding to each word's start and end times; and repeat the brainwave data acquisition process the preset number of times to obtain the training and test data.
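The "decode as Chinese and store" step amounts to mapping each available word to its Unicode code combination, as in Table 2 below. A minimal sketch (the function name is illustrative; the example word and its codes 4f55/8655 are taken from the embodiment, which uses the traditional form 何處):

```python
def unicode_combo(word):
    """Map a segmented word to its Unicode code combination (4-digit hex per char)."""
    return [format(ord(ch), "04x") for ch in word]

# 何處 is stored as {4f55, 8655} in the embodiment
combo = unicode_combo("\u4f55\u8655")  # 何處
print(combo)  # ['4f55', '8655']
```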

Further, when the brainwave sensor and the central processing unit are connected via Bluetooth, obtaining the brainwave data of the time segment corresponding to each segmented word's start and end times includes: adding a delay time to the end time of each segmented word before taking the brainwave data of the corresponding time segment.

Further, the preset number of repetitions is 5.

Further, the feature values of the brainwave data include concentration, relaxation, pressure, and fatigue.

Meanwhile, the present invention provides a brainwave encryption and decryption system based on emotional speech, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the steps of the method of any embodiment of the present invention.

Implementing the technical solution of the present invention differs from the prior art in that it distinguishes different users by the differences in the brainwave signals produced when they read aloud with emotional speech, realizing user identity verification based on the distinct biological characteristics of different users.

Brief Description of the Drawings

Fig. 1 is a diagram of the system structure according to the specific embodiment;

Fig. 2 is a flow chart of the classification verification model generation process according to the specific embodiment;

Fig. 3 is a word segmentation analysis diagram of the experimental article according to the specific embodiment;

Fig. 4 is a flow chart of the guided-reading data processing program according to the specific embodiment;

Fig. 5 is a flow chart of the brainwave data processing program according to the specific embodiment;

Fig. 6 is a schematic diagram of the brainwave data processing for each segmented word according to the specific embodiment;

Fig. 7 is a diagram of the brainwave data used by the classification algorithm according to the specific embodiment;

Fig. 8 is a five-fold cross-validation diagram for legitimate verification according to the specific embodiment;

Fig. 9 is a five-fold cross-validation diagram for illegitimate verification according to the specific embodiment;

Fig. 10 is a flow chart of the classifier construction stage according to the specific embodiment.

Detailed Description

To describe in detail the technical content, structural features, objects, and effects of the technical solution, a detailed description is given below in conjunction with specific embodiments and the accompanying drawings.

Referring to Fig. 1 to Fig. 10, this embodiment provides a brainwave encryption and decryption method and system based on emotional speech. First, the user's pre-input is performed: the system's brainwave sensor is connected to the user's brain, i.e. the user wears a brainwave acquisition device on the head, and the system then displays a piece of content on the display interface (Tang poetry, five-character quatrains, seven-character quatrains, Buddhist scriptures, and the like). The user reads the content aloud and the system collects the user's input from the brainwave sensor. Speech recognition through the microphone confirms that the user is indeed reading the content. When processing the brainwave data, the system extracts from the continuous brainwave data according to the occurrence time segment of each segmented word, in order to select the brainwave segments to be analyzed more precisely, unlike past research that used all of the brainwave data. It finds all segmented words in the displayed content that rank in the top ten by occurrence frequency and contain two or more characters, records the time segment in which each word was read aloud, and locates the brainwave data recorded in the same time segments. These brainwave data serve as the verification data source, and the encryption operation is performed at the same time.

During identity verification, a piece of content is likewise displayed and the system collects the user's input from the brainwave sensor. It finds all segmented words in the displayed content that rank in the top ten by occurrence frequency and contain two or more characters, records the time segment in which each word was read aloud, and locates the brainwave data recorded in the same time segments. These data are input into the classification verification model, compared against the verification data source, and the comparison result is output. If the comparison indicates the same user, decryption succeeds; otherwise it fails.

The classification verification model generation process is shown in Fig. 2, a flow chart of the generation method. It is divided into two stages: the feature value extraction stage and the classifier construction stage. The feature value extraction stage mainly performs data processing: the collected brainwave data are first processed with the 3-gram method and removal of outliers beyond plus or minus three standard deviations, and then the training and test data required for five-fold cross-validation are produced. The classifier construction stage uses the ensemble learning method Bagging with OC-SVM as the base classifier, then applies the majority-vote combination rule to obtain the final classification result. The details of these two stages are introduced below.
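The plus-or-minus three standard deviations outlier removal can be sketched in plain Python as follows; the function name and sample data are illustrative, and the patent's 3-gram step is omitted here.

```python
from statistics import mean, stdev

def remove_outliers_3sigma(values):
    """Drop samples farther than three standard deviations from the mean."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) <= 3 * s]

# A burst of noise far from the bulk of the signal is discarded
cleaned = remove_outliers_3sigma([10.0] * 20 + [1000.0])
print(len(cleaned))  # 20
```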

In the feature value extraction stage, data processing comes first, as shown in Fig. 3. The first step is to perform word segmentation analysis on the specified Chinese article, then apply the pre-set filtering conditions to select the article's available segmented words, decode them as Chinese, and store them in the system. Segmentation can be annotated manually and stored, or an existing word segmentation interface (such as the Baidu word segmentation API) can be called. Since the present invention does not consider the relationship between the brainwaves generated by guided reading and the segmented words themselves, and since, from a machine learning perspective, more frequent words provide more usable data, the segmentation analysis condition is set to the number of times a word appears in the article; that is, the invention focuses on using the segmented words as the basis for judgment rather than entire sentences.

Table 1 shows the top ten segmented words after analysis. Applying the conditions for available words (two or more characters and a top-ten occurrence frequency) yields six available words after filtering: 何处, 不知, 笑容, 万里, 千里, and 今日. These six words are decoded as Chinese and stored as the corresponding Unicode code combinations, as shown in Table 2, which also includes the number of times each word appears in the experimental article.

Table 1. Frequency statistics of the most frequent two-character strings in the Complete Tang Poems (《全唐诗》)

String  Freq   String  Freq   String  Freq   String  Freq   String  Freq
何处    166    无人    881    青山    662    流水    550    落日    498
不知    146    风吹    834    少年    634    回首    544    不如    497
万里    145    惆怅    780    相逢    629    可怜    539    归去    496
千里    130    故人    778    平生    597    如此    526    日暮    496
今日    116    秋风    749    年年    593    白发    520    不能    481
不见    115    悠悠    740    寂寞    592    主人    517    别离    481
不可    114    相思    733    黄金    589    今朝    516    何时    478
春风    112    长安    722    天子    588    月明    515    此时    477
白云    110    白日    697    人不    587    从此    509    洛阳    476
不得    947    如何    687    天地    586    日月    508    天下    472
明月    896    十年    678    何事    579    行人    507    芳草    472
人间    890    何人    663    江上    553    将军    499    归来    471

Table 2. Unicode code combinations of the available segmented words


When the subject performs the task of reading the specified article aloud, the second step begins, as shown in Fig. 4. The subject's reading and brainwave data are collected in each round. Using the Unicode code combinations of the available words stored in the first step, the corresponding Unicode code combinations in the reading data are matched, and the start and end times corresponding to each segmented word are found for the subsequent comparison with the brainwave data and the acquisition of feature data. That is, the article material is displayed for reading, the user's speech and its timestamps are acquired, the speech is converted into the Unicode codes of the text, and the time corresponding to each Unicode code is obtained, so that the brainwave data of the corresponding time period can be retrieved.

When the subject reads an available segmented word aloud, the invention records the time and the Unicode codes. Table 3 shows the reading data format when the subject reads 何处, whose Unicode code combination is {4f55, 8655}. After comparison with the 何处 code combination {4f55, 8655} stored in the first step, it is found that between 16:27:4:609 and 16:27:5:308 the subject read the word 何处. Its corresponding reading start time (16:27:4) and end time (16:27:5) are then recorded; since the minimum unit of each brainwave data record is one second, the minimum unit of the recorded reading time segment is also one second.

Table 3. Reading data format, taking 何处 as an example

Time          Unicode code
16:27:4:60    4f
16:27:4:72    55
16:27:5:10    86
16:27:5:30    55
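The matching of a stored code combination against the timestamped reading data, as in Table 3, can be sketched like this. Timestamps are simplified to plain seconds and all names are illustrative; as in the table, each character's four-digit code appears as two two-digit entries in the stream.

```python
def find_word_span(stream, combo):
    """Find the first place the code combination occurs consecutively in the
    timestamped stream [(time_sec, code), ...]; return its (start, end) times."""
    n = len(combo)
    for i in range(len(stream) - n + 1):
        if [c for _, c in stream[i:i + n]] == combo:
            return stream[i][0], stream[i + n - 1][0]
    return None

# Reading data as in Table 3 (times reduced to whole seconds); 何处 = 4f 55 86 55
stream = [(4, "4f"), (4, "55"), (5, "86"), (5, "55")]
span = find_word_span(stream, ["4f", "55", "86", "55"])
print(span)  # (4, 5)
```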

The third step, shown in Fig. 5, uses the start and end times of each segmented word recorded in the second step to match against the brainwave data and obtain the required brainwave data segments. Each brainwave data record contains eight feature values: Delta, Theta, Low Alpha, High Alpha, Low Beta, High Beta, Low Gamma, and High Gamma. In addition, the EEG202 brainwave detector developed by BeneGear provides concentration (Attention), relaxation (Meditation), pressure (Pressure), and fatigue (Fatigue), for twelve feature values in total. The EEG202 detector transmits the EEG signal collected by a single electrode to the EEG202 chip, which performs noise reduction and then applies its own proprietary algorithm to produce a digital measure of the person's current mental state.

In the specific implementation, the brainwave device communicates with the computer software over a Bluetooth connection, which introduces a delay. The invention therefore extends the recorded end time by 3 seconds when taking the brainwave data. As shown in Fig. 6, the recorded time of 何处 is 16:27:4 to 16:27:5, so the brainwave time segment to be taken is 16:27:4 to 16:27:8, ensuring that the corresponding brainwave segment is captured.
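The delay compensation, extending the recorded end time by 3 seconds before slicing the EEG stream, can be sketched as follows (names are illustrative; times are seconds within the recording):

```python
DELAY_SEC = 3  # compensation for the Bluetooth link delay described above

def take_segment(eeg, start, end, delay=DELAY_SEC):
    """Return EEG samples recorded between start and end + delay (inclusive)."""
    return [v for t, v in eeg if start <= t <= end + delay]

# 何处 recorded from second 4 to second 5 -> samples from seconds 4 through 8
eeg = [(t, f"sample@{t}") for t in range(0, 12)]
print(take_segment(eeg, 4, 5))
```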

The purpose of taking the extra 3 seconds is to be sure of capturing the corresponding brainwave segment. However, due to the limitations of the experimental equipment, the exact time of the main brainwave data cannot be determined, so a classification algorithm is used to combine the possible brainwave data; if the combined record covers the main brainwave data time, as shown in Fig. 7, verification accuracy is improved. Because the feature values also pass through this classification algorithm, the brainwave data contain a total of 36 feature values when entering the classifier construction stage.

In the classifier construction stage, each subject repeats the experiment five times, and each experiment goes through the data processing steps of the feature value extraction stage. Each segmented word produces brainwave data labeled A to E, from which the required training and test data are generated. Since verification uses the five-fold cross-validation method, as shown in Fig. 8, the training and test data must be generated to suit it. Taking subject No. 1 as an example, if the data of one experiment are used as test data, the data of the remaining four experiments are merged into training data; this case is legitimate-user identity verification.
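The five-fold arrangement, holding out one of a subject's five sessions (A to E) as test data while merging the other four into training data, can be sketched as follows (the session dictionary format is an illustrative assumption):

```python
def five_fold_splits(sessions):
    """sessions: dict mapping session label ('A'..'E') -> list of samples.
    Yield (held_out_label, train, test), holding out one session per fold."""
    labels = sorted(sessions)
    for held_out in labels:
        test = list(sessions[held_out])
        train = [x for lab in labels if lab != held_out for x in sessions[lab]]
        yield held_out, train, test

# Toy data: two samples per session for one subject
sessions = {lab: [f"{lab}{i}" for i in range(2)] for lab in "ABCDE"}
folds = list(five_fold_splits(sessions))
print(len(folds))  # 5
```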

For the abnormal identity verification of illegitimate users, as shown in Fig. 9, the A to E brainwave data produced by all subjects other than subject No. 1 are merged by matching label, and then, according to the number of test records subject No. 1 has for each label, the same number of records is randomly drawn from the merged data to serve as subject No. 1's illegitimate-verification test data.

After the required training and test data are generated, the ensemble learning method begins, as shown in Fig. 10. First, Bagging is applied to the training data of each segmented word to produce multiple training subsets, and OC-SVM trains an independent model on each subset. After testing with the same per-word test data, the classification results are combined by the majority-vote rule to obtain the final classification verification model result.
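A minimal sketch of this ensemble, using scikit-learn's OneClassSVM as the base classifier: resampled training subsets (Bagging), one OC-SVM per subset, and a majority vote over the per-model +1/-1 predictions. The parameter values (number of estimators, subset size, nu, gamma) and the toy data are illustrative assumptions, not taken from the patent.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def bagging_ocsvm(train, n_estimators=5, subset_frac=0.8):
    """Train one OC-SVM per bootstrap-style training subset (Bagging)."""
    models, n = [], len(train)
    for _ in range(n_estimators):
        idx = rng.choice(n, size=int(n * subset_frac), replace=True)
        models.append(OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(train[idx]))
    return models

def majority_vote(models, samples):
    """Combine the per-model +1 (same user) / -1 (other) predictions by majority vote."""
    votes = np.stack([m.predict(samples) for m in models])  # (n_models, n_samples)
    return np.where(votes.sum(axis=0) > 0, 1, -1)

# Toy "legitimate user" feature vectors clustered near the origin
train = rng.normal(0.0, 1.0, size=(100, 4))
models = bagging_ocsvm(train)
imposter = rng.normal(8.0, 1.0, size=(5, 4))  # far from the enrolled distribution
```

In the patent's scheme, one such ensemble is built per available segmented word, and the per-word results are then merged as described below.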

Beyond comparing experimental conditions with and without word segmentation, several combination rules exist; the main three are majority voting, simple averaging, and weighted averaging. The present invention adopts the simplest of these, majority voting: whether the user is legitimate is decided from the output of the classification verification model, and the relevant criterion was explained in detail in the classifier construction stage above. The other two combination rules are included here to compare which performs better. For simple averaging, a final average value of at least 50% indicates a correct classification, and a value below 50% an incorrect one.
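The simple-averaging rule reduces to one comparison. A sketch, assuming per-model outputs are encoded as 1 (accept) and 0 (reject):

```python
def simple_average_rule(outputs):
    """outputs: per-model decisions encoded as 1 (accept) or 0 (reject).
    The sample passes when the mean acceptance rate is >= 50%."""
    return sum(outputs) / len(outputs) >= 0.5

passed = simple_average_rule([1, 1, 0])   # 2/3 of the models accept
failed = simple_average_rule([1, 0, 0])   # only 1/3 accept
```

With an even number of models a 50/50 tie lands exactly on the threshold and is accepted here; the patent does not state how ties are resolved, so that choice is an assumption.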

For weighted averaging, it has been noted that the weights are usually set according to the positive-example ratio of each classification verification model; however, the classification verification model used in this method is a one-class model, so the weights cannot be determined by the usual method. The present invention therefore considers the number of occurrences of each segmented word: the count may influence the classification result to different degrees and can serve as the basis for setting the weight values. Using each segmented word's share of the total occurrences as its weight therefore achieves better results.
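The occurrence-ratio weighting can be sketched directly from the description. The example words and counts are invented for illustration; the patent only specifies that each segmented word's weight is its occurrence count divided by the total.

```python
def segment_weights(occurrence_counts):
    """Weight each segmented word by its share of total occurrences."""
    total = sum(occurrence_counts.values())
    return {word: count / total for word, count in occurrence_counts.items()}

def weighted_average_rule(decisions, weights):
    """decisions: per-segment accept (1) / reject (0) outputs.
    Accept when the occurrence-weighted average reaches 50%."""
    score = sum(weights[word] * d for word, d in decisions.items())
    return score >= 0.5

weights = segment_weights({"脑波": 3, "加密": 2, "语音": 1})
accepted = weighted_average_rule({"脑波": 1, "加密": 0, "语音": 1}, weights)
```

A frequent segmented word thus carries more weight in the final decision than a rare one, which is the effect the description attributes to this rule.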

The present invention also provides a brain wave encryption and decryption system based on emotional voice, comprising a storage medium that stores a computer program which, when executed by a processor, implements the steps of the above method. The storage medium of this embodiment may be built into an electronic device, which reads its contents and achieves the effects of the present invention; it may also be a separate storage medium which, once connected to an electronic device, allows the device to read its contents and carry out the method steps of the present invention.

It should be noted that, although the above embodiments have been described herein, they do not limit the scope of patent protection of the present invention. Accordingly, based on the innovative concept of the present invention, changes and modifications to the embodiments described herein, as well as equivalent structures or equivalent process transformations made using the contents of the description and drawings, whether applied directly or indirectly in other related technical fields, all fall within the scope of patent protection of the present invention.

Claims (7)

1. A brain wave encryption and decryption method based on emotional voice, characterized in that it is used for a brain wave encryption and decryption system, the brain wave encryption and decryption system comprising a brain wave sensor, a central processing unit, a display interface and a microphone connected in sequence, the method comprising the following steps:
connecting the brain wave sensor with the brain of a user, and displaying a section of content on a display interface by a central processing unit;
reading the content aloud by the user, and collecting the brain wave input of the user from the brain wave sensor by the central processing unit;
performing, by the central processing unit, voice recognition on the user's reading through the microphone; if the recognized characters match the displayed content, confirming that the user has read the content and performing the brain wave processing operation; otherwise, not performing the brain wave processing operation;
the brain wave processing operation comprises taking brain wave data in continuous stages; finding, in the section of content, all segmented words that rank in the top ten by occurrence frequency and contain two or more characters; recording the time section in which each segmented word is read aloud; locating the brain wave data recorded in the same time section and storing it as a verification data source; and simultaneously performing an encryption locking operation associated with the verification data source;
entering an identity verification stage upon decryption: the central processing unit displays another section of content on the display interface and collects the brain wave input of the user from the brain wave sensor; the central processing unit collects the speech read aloud by the user through the microphone, performs voice recognition, and checks the recognized characters against the displayed other section of content; when they match, the brain wave decryption processing operation is performed, otherwise it is not performed;
the brain wave decryption processing operation comprises finding, in the other section of content, all segmented words that rank in the top ten by occurrence frequency and contain two or more characters; recording the time section in which each segmented word is read aloud in the other section of content and locating the brain wave data recorded in the same time section; inputting the brain wave data into a classification verification model, comparing it with the verification data source, and outputting a comparison result;
if the comparison result indicates the same user, the decryption succeeds; otherwise, the decryption fails and the encrypted locked state is maintained.
2. The electroencephalogram encryption and decryption method based on emotional speech according to claim 1, wherein: the construction of the classification verification model comprises the following steps:
performing a feature value extraction stage step, in which the data are processed: after applying the 3-Gram method to the acquired brain wave data and removing outliers lying outside the positive and negative standard deviations, the training and test data required for a preset number of cross-validation folds are respectively generated;
and performing a classifier construction stage step, in which the ensemble learning method Bagging generates multiple training subsets from the training data, an OC-SVM independently trains each training subset into its own training model, the same per-segment test data are used for testing, and the majority-vote combination rule is applied to the classification results to obtain the final classification verification model result.
3. The electroencephalogram encryption and decryption method based on emotional speech according to claim 1, wherein: the feature value extraction stage comprises the following steps:
performing word segmentation analysis on a designated Chinese article, selecting the usable segmented words of the article according to preset screening conditions, and decoding and storing the selected segmented words as Chinese text;
collecting brain wave data while the user reads the Chinese article aloud, matching it after voice recognition with the segmented words in the Chinese article, and acquiring the start time and end time corresponding to each segmented word;
acquiring the brain wave data of the time section corresponding to the start time and end time of each segmented word;
and performing the brain wave data acquisition process a preset number of times to obtain the training and test data.
4. The electroencephalogram encryption and decryption method based on emotional speech according to claim 3, wherein: the brain wave sensor is connected to the central processing unit via Bluetooth, and the step of acquiring the brain wave data of the time section corresponding to the start time and end time of each segmented word comprises:
acquiring the brain wave data of the time section obtained after adding a delay time to the start time and end time corresponding to each segmented word.
5. The electroencephalogram encryption and decryption method based on emotional speech according to claim 3, wherein: the preset repetition number is 5.
6. The electroencephalogram encryption and decryption method based on emotional speech according to claim 1, wherein: the characteristic values of the brain wave data include concentration, relaxation, stress and fatigue.
7. An electroencephalogram encryption and decryption system based on emotional speech is characterized in that: comprising a memory, a processor, said memory having stored thereon a computer program which, when being executed by the processor, carries out the steps of the method according to any one of claims 1 to 6.
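The feature value extraction step of claim 2 combines a 3-Gram pass with outlier removal. A sketch under stated assumptions: the claim says outliers "outside positive and negative standard deviations" are removed but does not fix the multiplier, so a one-sigma band is assumed, and the 3-Gram method is taken to mean a sliding window of three consecutive readings.

```python
from statistics import mean, stdev

def remove_outliers(samples):
    """Drop readings outside mean +/- one standard deviation.
    The one-sigma band is an assumption; the claim does not
    state the multiplier."""
    m, s = mean(samples), stdev(samples)
    return [x for x in samples if m - s <= x <= m + s]

def three_grams(samples):
    """3-Gram method, read here as a window of three consecutive readings."""
    return [tuple(samples[i:i + 3]) for i in range(len(samples) - 2)]

cleaned = remove_outliers([1, 1, 1, 100])  # the spike at 100 is dropped
grams = three_grams([5, 6, 7, 8])
```

The cleaned sequences would then be labelled by repetition (A to E) and fed into the cross-validation splits described in the specification.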
CN202210501444.8A 2021-05-20 2022-05-09 Brain wave encryption and decryption method and system based on emotional voice Active CN114707132B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110552036 2021-05-20
CN2021105520360 2021-05-20

Publications (2)

Publication Number Publication Date
CN114707132A true CN114707132A (en) 2022-07-05
CN114707132B CN114707132B (en) 2023-04-18

Family

ID=82176243

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210501444.8A Active CN114707132B (en) 2021-05-20 2022-05-09 Brain wave encryption and decryption method and system based on emotional voice

Country Status (1)

Country Link
CN (1) CN114707132B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101127160A (en) * 2006-08-18 2008-02-20 苏荣华 Word, expression induction and brain wave identification method and the language study instrument
CN103810780A (en) * 2014-03-18 2014-05-21 苏州大学 Coded lock based on brain-computer switching technique and encryption and decryption method of coded lock
CN105125210A (en) * 2015-09-09 2015-12-09 陈包容 Brain wave evoking method and device
US20170228526A1 (en) * 2016-02-04 2017-08-10 Lenovo Enterprise Solutions (Singapore) PTE. LTE. Stimuli-based authentication
US20170325720A1 (en) * 2014-11-21 2017-11-16 National Institute Of Advanced Industrial Science And Technology Authentication device using brainwaves, authentication method, authentication system, and program
CN108234130A (en) * 2017-12-04 2018-06-29 阿里巴巴集团控股有限公司 Auth method and device and electronic equipment
CN108304073A (en) * 2018-02-11 2018-07-20 广东欧珀移动通信有限公司 Electronic device, solution lock control method and Related product
CN108418962A (en) * 2018-02-13 2018-08-17 广东欧珀移动通信有限公司 Information response's method based on brain wave and Related product
CN108564011A (en) * 2017-08-01 2018-09-21 南京邮电大学 A kind of personal identification method that normal form being presented based on brain electricity Rapid Speech
CN110413125A (en) * 2019-08-02 2019-11-05 广州市纳能环保技术开发有限公司 Conversion method, electronic equipment and storage medium based on brain wave

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Lin Ying: "Research on a stress measurement model based on ECG and EEG signals" *

Also Published As

Publication number Publication date
CN114707132B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
Chan et al. Challenges and future perspectives on electroencephalogram-based biometrics in person recognition
US20230205610A1 (en) Systems and methods for removing identifiable information
CN105740682B (en) The personal identification method and system of a kind of computer system and its user
WO2019085331A1 (en) Fraud possibility analysis method, device, and storage medium
JPWO2008107997A1 (en) Form type identification program, form type identification method, and form type identification device
JP2014191823A (en) Method and system for validating personalized account identifiers using biometric authentication and self-learning algorithms
Gamaarachchige et al. Multi-task, multi-channel, multi-input learning for mental illness detection using social media text
Keshishzadeh et al. Improved EEG based human authentication system on large dataset
CN111145903A (en) Method and device for acquiring vertigo inquiry text, electronic equipment and inquiry system
CN109308578A (en) A kind of enterprise's big data analysis system and method
CN114398681A (en) Method and device for training privacy information classification model and method and device for identifying privacy information
CN112732910B (en) Cross-task text emotion state evaluation method, system, device and medium
CN107808663A (en) Parkinson's speech data categorizing system based on DBN and RF algorithms
Moreno-Rodriguez et al. BIOMEX-DB: A cognitive audiovisual dataset for unimodal and multimodal biometric systems
Dargan et al. Gender classification and writer identification system based on handwriting in Gurumukhi script
CN109190556B (en) Method for identifying notarization will authenticity
CN114707132B (en) Brain wave encryption and decryption method and system based on emotional voice
Dhingra et al. Speech de-identification data augmentation leveraging large language model
CN107894837A (en) Dynamic sentiment analysis model sample processing method and processing device
LU505929B1 (en) Electroencephalogram encryption and decryption method and system based on global learning
Fuglsby et al. Use of an automated system to evaluate feature dissimilarities in handwriting under a two‐stage evaluative process
Chen et al. An Identity Authentication Method Based on Multi-modal Feature Fusion
CN111079420B (en) Text recognition method and device, computer readable medium and electronic equipment
Tschinkel et al. Keylogger keystroke biometric system
CN110349585A (en) Voice authentication method and information processing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant