MXPA00005981A - Apparatus and methods for detecting emotions - Google Patents

Apparatus and methods for detecting emotions

Info

Publication number
MXPA00005981A
MXPA00005981A MXPA/A/2000/005981A
Authority
MX
Mexico
Prior art keywords
individual
information
plateaus
cor
speech
Prior art date
Application number
MXPA/A/2000/005981A
Other languages
Spanish (es)
Inventor
Liberman Amir
Original Assignee
Carmel Avi
Liberman Amir
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Carmel Avi, Liberman Amir filed Critical Carmel Avi
Publication of MXPA00005981A

Links

Abstract

This invention discloses apparatus for detecting the emotional status of an individual, the apparatus including a voice analyzer (760) operative to input (710, 720, 730) a speech specimen generated by the individual and to derive therefrom intonation information, and an emotion reporter operative to generate an output indication of the individual's emotional status based on the intonation information (735). Methods for multi-dimensional lie detection and for detecting emotional status are also disclosed.

Description

APPARATUS AND METHODS FOR DETECTING EMOTIONS

FIELD OF THE INVENTION

The present invention relates to apparatus and methods for monitoring emotional states.

BACKGROUND OF THE INVENTION

Published PCT Application WO 97/01984 (PCT/IL96/00027) describes a method for effecting biofeedback regulation of at least one physiological variable characteristic of a subject's emotional state, including the steps of monitoring at least one speech parameter characteristic of the subject's emotional state so as to produce an indication signal, and using the indication signal to provide the subject with an indication of the at least one physiological variable. A system permits the method to be carried out in stand-alone mode or over the telephone line, in which case the indication signal may be derived at a location remote from the subject. Information relating to the subject's emotional state can be conveyed vocally to a remote party, or textually through the Internet, and then processed as required.

Published European Patent Application No. 94850185.3 (Publication No. 306 664 537 A2) describes a method and arrangement for determining stresses in a spoken sequence. From a sequence recognized in the spoken speech, a model of the speech is created. By comparing the spoken sequence with the speech model, a difference between them is obtained.
U.S. Patent No. 1,384,721 describes a method and apparatus for physiological response analysis. U.S. Patent No. 3,855,416 to Fuller describes a method and apparatus for phonation analysis leading to valid truth/lie decisions by fundamental speech-energy weighted vibrato component assessment. U.S. Patent No. 3,855,417 to Fuller describes a method and apparatus for phonation analysis leading to valid truth/lie decisions by spectral energy region comparison. U.S. Patent No. 3,855,418 to Fuller describes a method and apparatus for phonation analysis leading to valid truth/lie decisions by vibrato component assessment.

The disclosures of all publications mentioned in the specification, and of the publications cited therein, are incorporated herein by reference.

BRIEF DESCRIPTION OF THE INVENTION

The present invention seeks to provide improved apparatus and methods for monitoring emotional states.

There is thus provided, in accordance with a preferred embodiment of the present invention, apparatus for detecting the emotional state of an individual, the apparatus including a voice analyzer operative to input a speech specimen generated by the individual and to derive therefrom intonation information, and an emotion reporter operative to generate an output indication of the individual's emotional state based on the intonation information.

Further in accordance with a preferred embodiment of the present invention, the speech specimen is provided over the telephone to the voice analyzer.

Still further in accordance with a preferred embodiment of the present invention, the report on the individual's emotional state includes a lie detection report based on the individual's emotional state.

Additionally in accordance with a preferred embodiment of the present invention, the intonation information includes multidimensional intonation information.

Further in accordance with a preferred embodiment of the present invention, the multidimensional intonation information includes at least 3-dimensional information.

Still further in accordance with a preferred embodiment of the present invention, the multidimensional intonation information includes at least 4-dimensional information.

Additionally in accordance with a preferred embodiment of the present invention, the intonation information includes information regarding peaks.

Further in accordance with a preferred embodiment of the present invention, the peak information includes the number of peaks in a predetermined time period.

Still further in accordance with a preferred embodiment of the present invention, the peak information includes the distribution of peaks over time.

Additionally in accordance with a preferred embodiment of the present invention, the intonation information includes information regarding plateaus.

Further in accordance with a preferred embodiment of the present invention, the plateau information includes the number of plateaus in a predetermined time period.

Still further in accordance with a preferred embodiment of the present invention, the plateau information includes information regarding the length of plateaus.

Additionally in accordance with a preferred embodiment of the present invention, the information regarding the length of plateaus includes an average plateau length for a predetermined time period.
Further in accordance with a preferred embodiment of the present invention, the information regarding the length of plateaus includes the standard error of the plateau length for a predetermined time period.

There is also provided, in accordance with another preferred embodiment of the present invention, a lie detection system including a multidimensional voice analyzer operative to input a speech specimen generated by an individual and to quantify a plurality of characteristics of the speech specimen, and a credibility evaluator reporter operative to generate an output indication of the individual's credibility, including lie detection, based on the plurality of quantified characteristics.

Additionally provided, in accordance with another preferred embodiment of the present invention, is a multi-dimensional lie detection method including receiving a speech specimen generated by an individual and quantifying a plurality of characteristics of the speech specimen, and generating an output indication of the individual's credibility, including lie detection, based on the plurality of quantified characteristics.

Further in accordance with a preferred embodiment of the present invention, the speech specimen includes a main speech wave having a period, and the voice analyzer is operative to analyze the speech specimen in order to determine the rate of occurrence of plateaus, each plateau indicating that a relatively low-frequency wave is superimposed locally on the main speech wave, and the emotion reporter is operative to provide a suitable output indication based on the rate of occurrence of plateaus. For example, the emotion reporter may provide a suitable output indication when the rate of occurrence of plateaus is found to have changed.
Similarly, each peak indicates that a relatively high-frequency wave is superimposed locally on the main speech wave. A particular advantage of analyzing plateaus and peaks as shown and described herein is that substantially all frequencies of the speech wave can be analyzed.

There is also provided, in accordance with another preferred embodiment of the present invention, a method for detecting emotional states, including establishing a multidimensional characteristic range characterizing an individual's range of emotion when at rest, by monitoring the individual for a plurality of emotion-related parameters over a first period during which the individual is in an emotionally neutral state, and defining the multidimensional characteristic range as a function of the range of the plurality of emotion-related parameters during the first period, and monitoring the individual for the plurality of emotion-related parameters over a second period during which it is desired to detect the individual's emotional state, thereby to obtain a measurement of the plurality of emotion-related parameters, and adjusting the measurement to take into account the range.

There is also provided, in accordance with another preferred embodiment of the present invention, a method for detecting the emotional state of an individual, the method including receiving a speech specimen generated by the individual and deriving therefrom intonation information, and generating an output indication of the individual's emotional state based on the intonation information.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be understood and appreciated from the following detailed description, taken in conjunction with the drawings in which:

Figure 1A is a pictorial illustration of a system for on-line monitoring of a subject's emotional state;
Figure 1B is a simplified flowchart illustration of a preferred method for on-line monitoring of a subject's emotional state;
Figure 2 is a graphic illustration of a voice segment including a number of peaks;
Figure 3 is a graphic illustration of a voice segment including a number of plateaus;
Figure 4 is a simplified flowchart illustration of a preferred method for performing step 40 of Figure 1B;
Figure 5 is a simplified flowchart illustration of a preferred method for implementing the truth/neutral emotion profile construction step of Figure 1B;
Figure 6 is a simplified flowchart illustration of a preferred method for performing step 90 of Figure 1B on a particular segment;
Figure 7 is a simplified flowchart illustration of a preferred method for performing step 100 of Figure 1B;
Figure 8 is a simplified flowchart illustration of a preferred method for performing step 105 of Figure 1B;
Figure 9 is a pictorial illustration of a screen display showing the form, in design mode, just before the application of Annex A is run;
Figure 10 is a pictorial illustration of a screen display showing the form, in run mode of the system of Annex A, during calibration to a particular subject;
Figure 11 is a pictorial illustration of a screen display showing the form, in run mode of the system of Annex A, during testing of a subject; and
Figure 12 is a simplified functional block diagram illustration of a preferred system for performing the method of Figure 1B.

Attached hereto is the following annex, which aids in the understanding and appreciation of a preferred embodiment of the invention
shown and described herein: Annex A is a computer listing of a preferred software implementation of a preferred embodiment of the invention shown and described herein.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.

Figure 1A is a pictorial illustration of a system for on-line monitoring of a subject's emotional state. As shown, a speech input, in the illustrated embodiment arriving over a telephone line, is received by the system. The system analyzes the speech input in order to obtain an indication of the subject's emotional state, which indication is preferably provided to the user in real time, for example on the display screen as shown.

Figure 1B is a simplified flowchart illustration of a preferred method for on-line monitoring of a subject's emotional state. The method of Figure 1B preferably includes the following steps:

Initialization step 10: Constants are defined, such as the threshold values of various parameters defining ranges that are deemed indicative of various emotions, as described in detail below.

Step 20: Record a voice, periodically or on demand. For example, 0.5-second segments of voice may be recorded continuously, i.e. every 0.5 seconds. Alternatively, segments of any other suitable length may be considered, which may or may not overlap; for example, adjacent segments may overlap almost entirely, except for one or a few samples. Digitize the voice recording. Additionally or alternatively, overlapping segments of the recording may be sampled.

Step 30: Analyze the voice segment in order to mark the crucial portion of the voice segment, i.e. the portion of the voice segment deemed to actually contain voice information, as opposed to background noise. A suitable criterion for detecting voice information is amplitude; for example, the first instance of amplitude exceeding a threshold is deemed the beginning of the voice information, and the end of the voice information is deemed the point after which no sound exceeding the threshold is found for a predetermined duration. Preferably, the samples in the crucial portion are normalized, e.g. by amplifying the samples so as to take advantage of the entire amplitude range that can be accommodated in memory, e.g. +/-127 amplitude units if 8-bit memory is used.

Step 40: Count the peaks and plateaus in the crucial portion. Compute the length of each identified plateau, and compute the average plateau length for the crucial portion and the standard error of the plateau length.

A "peak" is a notch-shaped feature. For example, the term "peak" may be defined as:

a. a sequence of three adjacent samples in which the first and third samples are higher than the middle sample, or

b. a sequence of three adjacent samples in which the first and third samples are lower than the middle sample.

Preferably, a peak is declared even if the first and third samples differ only very slightly from the middle sample, i.e. preferably there is no minimum threshold for the difference between samples. However, there preferably is a minimum threshold for the baseline of the peak, i.e. peaks occurring at very low amplitude are disregarded, since they are deemed to relate to background noise rather than to voice. Figure 2 is a graphic illustration of a voice segment 32, which includes a number of peaks 34.
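Purely by way of illustration, the peak definition above may be coded as follows, in the Visual Basic style of Annex A; the routine name, its parameters and the exact form of the baseline test are assumptions of this sketch and are not taken from the patent listing:

Public Function CountPeaks(smp() As Integer, nCrucial As Long, bgLevel As Integer) As Long
    ' Counts three-sample notches in the crucial portion smp(0..nCrucial-1):
    ' the middle sample is strictly below, or strictly above, both neighbours.
    Dim i As Long, peaks As Long
    For i = 0 To nCrucial - 3
        If (smp(i) > smp(i + 1) And smp(i + 2) > smp(i + 1)) Or _
           (smp(i) < smp(i + 1) And smp(i + 2) < smp(i + 1)) Then
            ' No minimum difference is required, but the samples must clear
            ' the baseline so that background noise is not counted.
            If Abs(smp(i)) > bgLevel And Abs(smp(i + 1)) > bgLevel And _
               Abs(smp(i + 2)) > bgLevel Then
                peaks = peaks + 1
            End If
        End If
    Next i
    CountPeaks = peaks
End Function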
A "plateau" is a local flatness in the voice wave. For example, a plateau may be defined as a flat sequence whose length is greater than a predetermined minimum threshold and less than a predetermined maximum threshold; the maximum threshold is required in order to differentiate local flatness from a period of silence. A sequence may be regarded as flat if the difference in amplitude between consecutive samples is less than a predetermined threshold, such as 5 amplitude units if 8-bit memory is used. Figure 3 is a graphic illustration of a voice segment 36, which includes a number of plateaus 38. In Annex A, plateaus are termed "jumps".
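Again by way of illustration only, plateau detection may be sketched as follows under the same assumptions; the names and constants are hypothetical, and jjmap() is assumed to be dimensioned at least up to the maximum plateau length:

Public Sub CountPlateaus(smp() As Integer, nCrucial As Long, _
                         jjmap() As Long, ByRef plateaus As Long)
    Const FLAT_TOL As Integer = 5   ' max amplitude step inside a flat run
    Const MIN_LEN As Long = 3       ' shorter runs are not plateaus
    Const MAX_LEN As Long = 20      ' longer runs are treated as silence
    Dim i As Long, jj As Long
    plateaus = 0
    For i = 0 To nCrucial - 2
        If Abs(smp(i + 1) - smp(i)) < FLAT_TOL Then
            jj = jj + 1                       ' still inside a flat run
        Else
            If jj >= MIN_LEN And jj <= MAX_LEN Then
                plateaus = plateaus + 1       ' a qualifying plateau ended here
                jjmap(jj) = jjmap(jj) + 1     ' histogram of plateau lengths
            End If
            jj = 0                            ' reset whether or not it qualified
        End If
    Next i
End Sub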
The system of the present invention typically operates in one of two modes:

a. Calibration - forming a profile of the subject's truth/neutral emotional state by monitoring the subject while the subject is not lying and/or is in a neutral emotional state.

b. Testing - comparing the subject's speech to the subject's truth/neutral emotional state profile established during calibration, in order to establish the subject's emotional state and/or whether or not the subject is being truthful.

If the system is to be used in calibration mode, the method proceeds from step 50 to step 60. If the system is to be used in test mode, the method proceeds from step 50 to step 80.

Step 60: If step 60 is reached, the current segment was processed for calibration purposes; therefore, the peak and plateau information derived in step 40 is stored in a calibration table. The processes of steps 20-50 are termed herein "voice recording entry processes". If there are additional voice recordings to be entered for calibration purposes, the method returns to step 20. If entry of all voice recordings for calibration purposes has been completed (step 70), the method proceeds to step 80.

Step 80: Build the truth/neutral emotional state profile for the subject being tested. This completes operation in calibration mode. The system subsequently enters test mode, in which the subject's voice recordings are compared to the subject's truth/neutral emotion profile in order to identify instances of falsehood or heightened emotion. The subject's profile typically reflects central tendencies of the peak/plateau information and is typically adjusted to take into account artifacts of the calibration situation. For example, due to natural stress at the beginning of the calibration process, the initial voice recordings may be less reliable than subsequent voice recordings. Preferably, in order to obtain a reliable indication of central tendencies, extreme entries in the calibration table may be discarded.

Steps 90 onward belong to the test mode.

Step 90: Compare the peak/plateau information of the current segment to the truth/neutral emotion profile computed in step 80.

Step 100: Grade the results of the comparison process of step 90, in order to categorize the current segment as indicative of various emotions and/or of falsehood.
Step 105: Optionally, compensate for carry-over. The term "carry-over" refers to a residual emotional state remaining from an "actual" emotional state occasioned by a first perceived situation, where the residual emotional state lingers after the first perceived situation has ended. An example of a suitable implementation of step 105 is described herein in the flowchart of Figure 8.

Step 110: Display a message indicating the category determined in step 100.

Step 120: If there are additional voice segments to be analyzed, return to step 20. Otherwise, quit.

Any suitable number m of segments may be used for calibration, such as 5 segments.

Figure 4 is a simplified flowchart illustration of a preferred method for performing step 40 of Figure 1B. As described above, in step 40 peak/plateau information is generated for the crucial portion of a current voice recording segment. The current plateau length is termed "jj". "jjmap(jj)" is the number of plateaus whose length is exactly jj. "Plateau" is a counter counting the total number of plateaus regardless of their length. "Peak" is a counter counting the number of peaks. n is the number of samples in the crucial portion under test.
In step 150, the peak and plateau counters are reset. In step 160, a loop over all samples of the crucial portion is started; the loop begins at the first crucial sample and ends at the last crucial sample minus 2. In step 164, the amplitudes of the samples in the loop are recorded. In steps 170 and 180, peaks are detected, and in steps 190, 195, 200 and 210, plateaus are detected. In step 200, if the length of the candidate plateau is between reasonable bounds, such as between 3 and 20, the number of plateaus of length jj is incremented, and Plateau, the total number of plateaus, is incremented. Otherwise, i.e. if the length of the candidate plateau is less than 3 or more than 20, the candidate plateau is not deemed a plateau. Whether or not the candidate plateau is deemed a "real" plateau, the plateau length, jj, is reset to zero (step 210). Step 220 is the end of the loop, i.e. the point at which all samples in the sequence have been examined. In step 230, the average (AVJ) and standard error (JQ) of the plateau length variable, jjmap, are computed. In step 240, SPT and SPJ are computed. SPT is the average number of peaks per sample, preferably suitably normalized. SPJ is the average number of plateaus per sample, preferably suitably normalized. In accordance with the illustrated embodiment, detection of the emotional state is multidimensional, i.e. the emotional state is derived from the speech information via a plurality of preferably independent intermediate variables.
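By way of illustration, steps 230 and 240 may be sketched as follows, under the same naming assumptions as the earlier sketches; AVJ and JQ are derived from the jjmap histogram, and SPT and SPJ are normalized per 1000 samples, in the spirit of the scan_TJ routine of Annex A:

Public Sub PlateauStats(jjmap() As Long, peaks As Long, plateaus As Long, _
                        nCrucial As Long, ByRef AVJ As Double, ByRef JQ As Double, _
                        ByRef SPT As Double, ByRef SPJ As Double)
    Dim jj As Long, tot As Long, sumLen As Double
    ' AVJ: mean plateau length over the histogram.
    For jj = LBound(jjmap) To UBound(jjmap)
        tot = tot + jjmap(jj)
        sumLen = sumLen + CDbl(jjmap(jj)) * jj
    Next jj
    If tot > 0 Then AVJ = sumLen / tot Else AVJ = 0
    ' JQ: the spread measure the text calls the standard error; Annex A sums
    ' absolute deviations about AVJ and takes a square root.
    JQ = 0
    For jj = LBound(jjmap) To UBound(jjmap)
        If jjmap(jj) > 1 Then JQ = JQ + jjmap(jj) * Abs(AVJ - jj)
    Next jj
    JQ = Sqr(JQ)
    ' SPT and SPJ: peak and plateau counts per 1000 samples.
    SPT = Int((peaks / nCrucial) * 1000)
    SPJ = Int((plateaus / nCrucial) * 1000)
End Sub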
Figure 5 is a simplified flowchart illustration of a preferred method for implementing the truth/neutral emotion profile construction step of Figure 1B.

In Figure 5, SPT(i) is the SPT value for segment i. MinSPT is the minimum SPT value measured in any of the m segments. MaxSPT is the maximum SPT value measured in any of the m segments. MinSPJ is the minimum SPJ value measured in any of the m segments. MaxSPJ is the maximum SPJ value measured in any of the m segments. MinJQ is the minimum JQ value measured in any of the m segments. MaxJQ is the maximum JQ value measured in any of the m segments.

ResSPT is the size of the range of SPT values encountered during calibration. More generally, ResSPT may comprise any suitable indication of the degree of variation in the number of peaks to be expected when the subject is in a truth/neutral emotional state. Therefore, if the number of peaks in a voice segment is non-normative relative to ResSPT, the subject can be said to be in a non-neutral emotional state, such as an emotional state characterized by excitement or even arousal. ResSPT is therefore typically an input to the process of evaluating SPT values generated in unknown emotional circumstances. ResJQ is the size of the range of JQ values encountered during calibration, which serves as a baseline value for the evaluation of JQ values generated in unknown emotional circumstances.

It is appreciated that the baseline need not necessarily be 4-dimensional as shown in Figure 5, but may alternatively be even one-dimensional or may have many more than 4 dimensions.

Figure 6 is a simplified flowchart illustration of a preferred method for performing step 90 of Figure 1B on a particular segment. In step 90, the peak/plateau information of the current segment is compared to the truth/neutral emotion baseline computed in step 80.

Step 400 is an initialization step.

Step 410 computes the deviation of the current crucial portion from the subject's previously computed truth/neutral emotional state profile. In the illustrated embodiment, the deviation comprises a 4-dimensional value including a first component related to the number of peaks, a second component related to the number of plateaus, a third component related to the standard error of the plateau length, and a fourth component related to the average plateau length. However, it is appreciated that different components may be employed in different applications; for example, in some applications the distribution of peaks (uniform, erratic, and so on) over a time interval may be useful in deriving information regarding the subject's emotional state.

"BreakpointT" is a threshold value characterizing the acceptable range of ratios between the average number of peaks in truth/neutral emotional circumstances and the particular number of peaks in the current crucial portion. "BreakpointJ" is a threshold value characterizing the acceptable range of ratios between the average number of plateaus in truth/neutral emotional circumstances and the particular number of plateaus in the current crucial portion. "BreakpointQ" is a threshold value characterizing the acceptable range of ratios between the average standard error of the plateau length in truth/neutral emotional circumstances and the particular standard error of the plateau length in the current crucial portion. "BreakpointA" is a threshold value characterizing the acceptable range of ratios between the average plateau length in truth/neutral emotional circumstances and the particular average plateau length in the current crucial portion.
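By way of illustration, the deviation of step 410 may be computed as follows, mirroring the expression zz = ((CAL / current) - Breakpoint) / Res used for zzSPT and zzSPJ in Annex A; the function name and the guard against division by zero are assumptions of this sketch:

Public Function ZZDeviation(calMean As Double, curVal As Double, _
                            breakpoint As Double, res As Double) As Double
    ' calMean: the truth/neutral average of the factor (e.g. CAL_spT).
    ' curVal:  the factor measured on the current crucial portion.
    ' res:     the size of the calibration range (e.g. ResSPT).
    If curVal = 0 Then curVal = 1   ' Annex A substitutes 1 for a zero count
    ZZDeviation = ((calMean / curVal) - breakpoint) / res
End Function

A deviation near zero then means that the current segment matches the truth/neutral profile, and steps 460-470 respond by tightening the corresponding Res value.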
Steps 420-470 update the subject's profile to take into account the new information gathered from the current segment. In the illustrated embodiment, only the ResSPT and ResSPJ values are updated, and only if the deviation of the current crucial portion from the subject's previously computed truth/neutral emotional state profile is either very large (e.g. exceeds predetermined ceiling values) or very small (e.g. falls below certain predetermined, typically negative, floor values). If the deviation of the current crucial portion from the truth/neutral profile is neither very large nor very small (e.g. falls between the ceiling and floor values), the subject's profile is typically left unaltered at this stage. In steps 460 and 470, if zzSPT and zzSPJ, respectively, are very close to zero, the sensitivity of the system is increased by decrementing ResSPT and ResSPJ, respectively.

Step 480 generates suitable, typically application-specific, combinations of the deviation components computed in step 410. These combinations are used as a basis for suitable emotion classification criteria, such as the emotion classification criteria of Figure 7. The emotion classification criteria of Figure 7 determine whether to classify the subject as exaggerating, truthful, evasive, confused or unsure, excited, or sarcastic. However, it is appreciated that different emotion classifications may be employed in different situations.

In the illustrated embodiment, the SPT information is used mainly to determine the excitement level. More specifically, zzSPT is used to determine the value of CR_EXCITAR (excitement), which may also depend on additional parameters such as CR_TENSION (stress). For example, a CR_EXCITAR value of between 70 and 120 may be deemed normal, values of between 120 and 160 may be deemed indicative of medium excitement, and values exceeding 160 may be deemed indicative of a high level of excitement.

In the illustrated embodiment, the SPJ information is used mainly to determine feelings of psychological dissonance. For example, a zzSPJ value of between 0.6 and 1.2 may be deemed normal, whereas a value of between 1.2 and 1.7 may be deemed indicative of the subject's awareness of his own voice and/or of an attempt by the subject to control his voice.

In the illustrated embodiment, the zzJQ and CR_TENSION values are used mainly to determine the stress level. For example, a CR_TENSION value of between 70 and 120 may be deemed normal, whereas values of more than 120 may be deemed indicative of high stress.

In the illustrated embodiment, the AVJ information is used to determine the amount of thought invested in spoken words or sentences. For example, if CR_PENSAR (thinking) exceeds a value of 100, the amount of thought invested in the last spoken sentence is greater than the amount of thought invested during the calibration phase; this means that the person is thinking more about what he is saying than he did during the calibration phase. If the value is less than 100, the person is thinking less about what he is saying than he did during the calibration phase.

In the illustrated embodiment, the CR_MENTIRA (lie) parameter is used to determine truthfulness.
For example, a CR_MENTIRA value of 50 may be deemed indicative of untruthfulness, values of between 50 and 60 may be deemed indicative of sarcasm or humor, values of between 60 and 130 may be deemed indicative of truthfulness, values of between 130 and 170 may be deemed indicative of inaccuracy or exaggeration, and values exceeding 170 may be deemed indicative of untruthfulness.

With reference again to Figure 6, the parameters mentioned above may be assigned the following values: BreakpointT = BreakpointJ = BreakpointQ = BreakpointA = 1.1; CeilingT = CeilingJ = 1.1; FloorT = FloorJ = 0.6; IncrementT = IncrementJ = DecrementT = DecrementJ = 0.1; MinimumT = MinimumJ = 0.1. It is appreciated that all of the numerical values are merely examples and are typically application-dependent.

Figure 7 illustrates the method for converting the various parameters into the messages that may be displayed, as shown for example in Figure 1A. Figure 8 illustrates a method for fine-tuning the truth/neutral emotional state profile.

Annex A is a computer listing of a software implementation of a preferred embodiment of the invention shown and described herein, which differs slightly from the embodiment shown and described herein with reference to the drawings. A suitable method for generating the software implementation is as follows:

a. On a personal computer equipped with a microphone, a sound card and Visual Basic™ Version 5 software, generate a new project. The recording setting of the sound card may operate in accordance with the following parameters: 11 KHz, 8 bit, mono, PCM.

b. Place a timer object on the default form that appears in the new project. The timer object is called "timer1".

c. Place an MCI multimedia control object on the form. This object is called "mmcontrol1".

d. Place 5 label objects on the form. These labels are called "label1", "label2", "label3", "label4" and "label6".

e. Create 4 label arrays on the form. Rename the arrays as follows: SPT(0..4), SPJ(0..4), JQ(0..4), AVJ(0..4).

f. Place a command button on the form and change its caption property to END. The command button is called "command1".

g. Generate code for the form by keying in the pages of Annex A headed "form1".

h. Add a module to the project. Generate code for the module by keying in the pages of Annex A headed "detector_de_sentimientos" (feelings detector).

i. Connect a microphone to the personal computer.

j. Press (F5) or "run" to run the application.

Figure 9 is a pictorial illustration of a screen display showing the form, in design mode, just before the application of Annex A is run. Figure 10 is a pictorial illustration of a screen display showing the form, in run mode, during calibration to a particular subject. Figure 11 is a pictorial illustration of a screen display showing the form, in run mode, during testing of a subject.

The values of the variable CoR_msgX in Annex A are as follows: 1 - truthfulness; 2 - sarcasm; 3 - excitement; 4 - confusion/uncertainty; 5 - high excitement; 6 - voice manipulation; 7 - lie/false statement; 8 - exaggeration/inaccuracy. The variables carrying data of the current crucial portion have names beginning with the characters "cor_". The baseline factors have names beginning with the characters "cal_". The breakpoint factors have names beginning with the characters "bp_". ResSPT and ResSPJ are termed ResT and ResJ, respectively.
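Purely as an illustration of the grading just described, the following sketch maps a CR_MENTIRA value onto the example bands; the band edges are the illustrative figures quoted above, not fixed limits of the invention:

Public Function GradeLie(crMentira As Double) As String
    If crMentira <= 50 Then
        GradeLie = "untruthfulness"
    ElseIf crMentira <= 60 Then
        GradeLie = "sarcasm or humor"
    ElseIf crMentira <= 130 Then
        GradeLie = "truthfulness"
    ElseIf crMentira <= 170 Then
        GradeLie = "inaccuracy or exaggeration"
    Else
        GradeLie = "untruthfulness"   ' values exceeding 170
    End If
End Function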
Figure 12 is a simplified functional block diagram illustration of a preferred system for detecting emotional states, which is constructed and operative in accordance with a preferred embodiment of the present invention and which is operative to perform the method of Figure 1B. As shown, the system of Figure 12 includes a voice input device, such as a tape recorder 700, microphone 710 or telephone 720, which generates speech that is input by an emotion detection workstation 735 via an analog-to-digital converter 740. A voice window recorder 750 typically partitions the incoming speech-representing signals into voice windows or segments, which are analyzed by a voice window analyzer 760. The voice window analyzer compares the voice windows or segments to calibration data stored in unit 770. The calibration data is typically derived individually for each subject, as described in detail above. A display unit or printer 780 is provided for displaying or printing an emotional status report, preferably on-line, for the user of the system.

It is appreciated that the software components of the present invention may, if desired, be implemented in ROM (read-only memory) form. The software components may, generally, be implemented in hardware, if desired, using conventional techniques. It is appreciated that the particular embodiment described in the Annex is intended only to provide an extremely detailed disclosure of the present invention and is not intended to be limiting.

It is appreciated that various features of the invention which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable subcombination. It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention is defined only by the claims that follow.
APPENDIX A

The following code should be written into the form object, form1:

Private Sub Command1_Click()
    End
End Sub

Private Sub Form_Load()
    ' Set the properties MCI needs in order to open.
    a = mciSendStr("setfileformformaudioudiopachm algorithm pcm bitsbysample to 8_bytesporsec to 11025 input volume to 100 source to average", 0, 0, 0)
    MMControl1.Notify = False
    MMControl1.Wait = True
    MMControl1.Shareable = False
    MMControl1.DeviceType = "WaveAudio"
    MMControl1.FileName = "C:\buf.WAV"
    ' Open the MCI WaveAudio device
    MMControl1.Command = "Open"
    ' Define constants
    CR_BGlevel = 15          ' background-level barrier
    CR_BGfilter = 3          ' local wave rectifier
    CR_DATOSstr = ""         ' reset data string
    CR_mode = 1
    CONS_SARCASMO = 50
    CONS_MENTIRA11 = 130
    CONS_MENTIRA12 = 175
    CONS_BajozzT = -0.4: CONS_AltozzT = 0.3
    CONS_BajozzJ = -0.2: CONS_AltozzJ = 0.7
    CONS_RES_SPT = 2: CONS_RES_SPJ = 2
    CONS_BGarchivo = 3
    ' Set the timer object to fire every 0.5 seconds
    Timer1.Interval = 500
    Timer1.Enabled = True
    ' Set up the display
    Label1.Caption = "System decision"
    Label2.Caption = "Global stress:"
    Label3.Caption = "Excitation:"
    Label4.Caption = "Lie stress:"
    MMControl1.Visible = False
End Sub

Private Sub Timer1_Timer()
    Static been
    On Error Resume Next
    MMControl1.Command = "stop"
    MMControl1.Command = "save"
    MMControl1.Command = "close"
    ' Read the file data
    ff = MMControl1.FileName
    Dim kk As String * 6500
    kk = Space(6500)
    Open ff For Binary Access Read As #1
    Get #1, 50, kk
    Close #1
    Kill ff
    MMControl1.Command = "open"
    a = MMControl1.ErrorMessage
    MMControl1.Command = "record"
    CR_DATOSstr = kk
    If OP_stat = 0 Then
        OP_stat = 1          ' first round, or after a recalibration demand
        been = 0
    End If
    If been < 5 Then
        Label1.Caption = "Calibrating..."
        Call Calibrate       ' perform calibration
        ' Get the calibration status from CoR_msgX
        If CoR_msgX > -1 Then been = been + 1   ' good sample
        Exit Sub
    Else
        OP_stat = 2          ' checking state
        Call CHECK           ' get the segment status from CoR_msgX
    End If
    If CoR_msgX < 0 Then Exit Sub   ' not enough good samples
    Label4.Caption = "Lie stress: " + Format(Int(CR_MENTIRA))
    Label2.Caption = "Global stress: " + Format(Int(CR_TENSION))
    Label3.Caption = "Excitation value: " + Format(Int(CR_EXCITAR))
    Label6.Caption = "Thinking value: " + Format(Int(CR_PENSAR))
    been = been + 1
    Select Case CoR_msgX
        Case 0
            answer = "background noise"
        Case 1
            answer = "TRUTH"
        Case 2
            answer = "Sarcasm"
        Case 3
            answer = "Excitement"
        Case 4
            answer = "Uncertainty"
        Case 5
            answer = "High excitement"
        Case 6
            answer = "Voice manipulation / Avoidance / Emphasizing"
        Case 7
            answer = "LIE"
        Case 8
            answer = "Inaccuracy"
    End Select
    Label1.Caption = answer
End Sub

Sub Calibrate()
    Call CUT_sec
    If CR_noSMP < 800 Then   ' do not process
        CoR_msgX = -1
        Exit Sub
    End If
    ' Explore peaks
    CONS_RES_SPT = 2
    CONS_RES_SPJ = 2
    Call scan_TJ
    If Int(CoR_spT) = 0 Or Int(CoR_saltoAV) = 0 Or Int(CoR_SALTOQ) = 0 Or Int(CoR_SPJ) = 0 Then
        CoR_msgX = -1
        Exit Sub
    End If
    tot_T = 0: tot_J = 0: tot_JQ = 0: tot_avj = 0
    minspT = 1000: minspJ = 1000: minJQ = 1000
    For a = 0 To 4
        If SPT(a).Caption = 0 And SPJ(a).Caption = 0 Then
            SPT(a).Caption = Int(CoR_spT)
            SPJ(a).Caption = Int(CoR_SPJ)
            JQ(a).Caption = Int(CoR_SALTOQ)
            AVJ(a).Caption = Int(CoR_saltoAV)
            Exit For
        End If
        tot_T = tot_T + SPT(a).Caption
        tot_J = tot_J + SPJ(a).Caption
        tot_JQ = tot_JQ + JQ(a).Caption
        tot_avj = tot_avj + AVJ(a).Caption
        If Val(SPT(a).Caption) < minspT Then minspT = Val(SPT(a).Caption)
        If Val(SPT(a).Caption) > maxspT Then maxspT = Val(SPT(a).Caption)
        If Val(SPJ(a).Caption) < minspJ Then minspJ = Val(SPJ(a).Caption)
        If Val(SPJ(a).Caption) > maxspJ Then maxspJ = Val(SPJ(a).Caption)
        If Val(JQ(a).Caption) < minJQ Then minJQ = Val(JQ(a).Caption)
        If Val(JQ(a).Caption) > maxJQ Then maxJQ = Val(JQ(a).Caption)
    Next a
    ' Calculate the current CAL factors
    CAL_spT = (tot_T + Int(CoR_spT)) / (a + 1)
    CAL_spJ = (tot_J + Int(CoR_SPJ)) / (a + 1)
    CAL_JQ = (tot_JQ + Int(CoR_SALTOQ)) / (a + 1)
    CAL_AVJ = (tot_avj + Int(CoR_saltoAV)) / (a + 1)
    ' Calculate the resolution of each factor
    On Error Resume Next
    If a > 1 Then
        res_T = maxspT / minspT
        res_J = maxspJ / minspJ
    End If
    CoR_msgX = 0
End Sub

Sub CHECK()
    Call CUT_sec
    If CR_noSMP < 800 Then   ' do not process
        CoR_msgX = -1
        Exit Sub
    End If
    CONS_RES_SPT = 2
    CONS_RES_SPJ = 2
    Call scan_TJ
    If Int(CoR_spT) = 0 Or Int(CoR_saltoAV) = 0 Or Int(CoR_SALTOQ) = 0 Or Int(CoR_SPJ) = 0 Then
        CoR_msgX = -1
        Exit Sub
    End If
    Call Analyze
    Call decision
    ' Fine tuning of the calibration factors
    CAL_spT = ((CAL_spT * 6) + CoR_spT) \ 7
    CAL_spJ = ((CAL_spJ * 6) + CoR_SPJ) \ 7
    CAL_JQ = ((CAL_JQ * 9) + CoR_SALTOQ) \ 10
    CAL_AVJ = ((CAL_AVJ * 9) + CoR_saltoAV) \ 10
End Sub

The following code should be written into a new module object, detector_de_sentimientos (feelings detector):

' Global declaration section
Global Fname                 ' file name
Global CR_BGfilter           ' background filter
Global CR_BGlevel            ' background level
Global CR_DATOSstr
Global CR_noSMP              ' number of samples
Global res_J, res_T
Global CoR_spT, CoR_SPJ, CoR_saltoAV, CoR_SALTOQ
Global CoR_msgX, CR_retDATOSstr
Global SMP(10000) As Integer
Global OP_stat
Global CR_msg, CR_msgCode
' ** Calibration factors
Global CAL_spJ, CAL_spT
Global CAL_JQ, CAL_AVJ
Global BP_J, BP_T            ' calibration breakpoints
Global WI_J, WI_T, WI_JQ     ' weighting of the factors in the calculation
Global CR_zzT, CR_zzJ
Global CR_TENSION, CR_MENTIRA, CR_EXCITAR, CR_PENSAR
Global CR_RESfilter          ' resolution filter
' Constants for the decision
Global CONS_SARCASMO
Global CONS_MENTIRA11, CONS_MENTIRA12
Global CONS_BajozzT, CONS_AltozzT
Global CONS_BajozzJ, CONS_AltozzJ
Global CONS_RES_SPT, CONS_RES_SPJ
Declare Function mciSendStr Lib "winmm.dll" Alias "mciSendStringA" (ByVal lpstrCommand As String, ByVal lpstrReturnString As String, ByVal uReturnLength As Long, ByVal hwndCallback As Long) As Long

Sub Analyze()
    On Error Resume Next
    CR_MENTIRA = 0
    CR_TENSION = 0
    CR_EXCITAR = 0
    If (CoR_spT = 0 And CoR_SPJ = 0) Or CR_noSMP = 0 Then
        CR_msg = "ERROR"
        Exit Sub
    End If
    If CoR_SPJ = 0 Then CoR_SPJ = 1
    If CoR_spT = 0 Then CoR_spT = 1
    On Error Resume Next
    rrJ = res_J: rrT = res_T
    BP_J = 1.1: BP_T = 1.1
    zz_spj = (((CAL_spJ / Int(CoR_SPJ)) - BP_J) / rrJ)
    If zz_spj > -0.05 And zz_spj < 0.05 Then
        res_J = res_J - 0.1
        If res_J < 1.3 Then res_J = 1.3
    End If
    If zz_spj < -0.6 Then
        zz_spj = -0.6
        res_J = res_J + 0.1
    End If
    If zz_spj > 1.2 Then
        zz_spj = 1.2
        res_J = res_J + 0.1
    End If
    If res_J > 3.3 Then res_J = 3.3
    CR_zzJ = zz_spj
    zz_spT = (((CAL_spT / Int(CoR_spT)) - BP_T) / rrT)
    CR_zzT = zz_spT
    If zz_spT > -0.05 And zz_spT < 0.05 Then
        res_T = res_T - 0.1
        If res_T < 1.3 Then res_T = 1.3
    End If
    If zz_spT < -0.6 Then
        zz_spT = -0.6
        res_T = res_T + 0.1
    End If
    If zz_spT > 1.2 Then
        zz_spT = 1.2
        res_T = res_T + 0.1
    End If
    If res_T > 3.3 Then res_T = 3.3
    WI_J = 6: WI_T = 4
    CR_TENSION = Int((CoR_SALTOQ / CAL_JQ) * 100)
    ggwi = WI_J * WI_T
    CR_MENTIRA = ((zz_spT + 1) * WI_T) * ((zz_spj + 1) * WI_J)
    CR_MENTIRA = (CR_MENTIRA / ggwi) * 100
    CR_MENTIRA = CR_MENTIRA + Int((CoR_SALTOQ / CAL_JQ) * 1.5)
    CR_PENSAR = Int((CoR_saltoAV / CAL_JQ) * 100)
    CR_EXCITAR = ((((CR_zzT / 2) + 1) * 100) * 9 + CR_TENSION) / 10
    ' ********* END OF STAGE 2 *********
    If CR_MENTIRA > 210 Then CR_MENTIRA = 210
    If CR_EXCITAR > 250 Then CR_EXCITAR = 250
    If CR_TENSION > 300 Then CR_TENSION = 300
    If CR_MENTIRA < 30 Then CR_MENTIRA = 30
    If CR_EXCITAR < 30 Then CR_EXCITAR = 30
    'If CR_TENSION < 30 Then CR_TENSION = 30
End Sub

Sub CUT_sec()
    CR_noSMP = 0
    If CR_DATOSstr = "" Then
        CR_msg = "ERROR! - No data provided"
        Exit Sub
    End If
    CR_AUTOvol = 1        ' auto amplifier
    CoR_volume = 3        ' default
    CR_minSMP = 800       ' default
    free = FreeFile
    ' Break CR_DATOSstr into bytes
    LocA = 1: LocB = 1
    BGAmin = 0
    BGAmax = 0
    VolumenMAX = 0
    TestP = 0
    BR_BAJO = -128
    BR_alto = -128
    ddd = -128
    ddd = Int(ddd * (CoR_volume / 3))
    ddd = (ddd \ CR_BGfilter) * CR_BGfilter
    If CR_AUTOvol = 1 Then
        ' Apply automatic volume detection
        MAXMX = 0
        For a = 1 To Len(CR_DATOSstr)
            ccc = Asc(Mid$(CR_DATOSstr, a, 1))
            ccc = ccc - 128
            ccc = (ccc \ CR_BGfilter) * CR_BGfilter
            If (ccc > CR_BGlevel Or ccc < 0 - CR_BGlevel) And ccc <> ddd Then
                If Abs(ccc) > VolumenMAX Then VolumenMAX = Abs(ccc)
                If StartPos = 0 Then StartPos = a
                OKsmp = OKsmp + 1
            End If
            If VolumenMAX > 110 Then Exit For
        Next a
        If OKsmp < 10 Then
            CR_msg = "Not enough samples"
            CR_noSMP = 0
            Exit Sub
        End If
        CoR_volume = Int(360 / VolumenMAX)
        If CoR_volume > 16 Then CoR_volume = 3
    End If
    On Error Resume Next
    drect = "": DR_indicador = 0
    VolumenMAX = 0
    LocA = 0
    Fact = 0
    For a = StartPos To Len(CR_DATOSstr) - 1
        ccc = Asc(Mid$(CR_DATOSstr, a, 1)): ccd = Asc(Mid$(CR_DATOSstr, a + 1, 1))
        ccc = ccc - 128: ccd = ccd - 128
        ccc = Int(ccc * (CoR_volume / 3))
        ccd = Int(ccd * (CoR_volume / 3))
        ccc = (ccc \ CR_BGfilter) * CR_BGfilter
        ccd = (ccd \ CR_BGfilter) * CR_BGfilter
        If (ccc > CR_BGlevel Or ccc < 0 - CR_BGlevel) And ccc <> ddd Then
            If Abs(ccc) > VolumenMAX Then VolumenMAX = Abs(ccc)
            f1 = f1 + 1
        End If
        If f1 > 5 Then
            SMP(LocA) = ccc
            If BR_alto < ccc Then BR_alto = ccc
            If BR_BAJO > ccc Or BR_BAJO = -128 Then BR_BAJO = ccc
            If (SMP(LocA) > 0 - CR_BGlevel And SMP(LocA) < CR_BGlevel) Or SMP(LocA) = ddd Then
                blnk = blnk + 1
            Else
                blnk = 0
            End If
            If blnk > 1000 Then
                LocA = LocA - 700
                Fact = 1
                If LocA > CR_minSMP Then Exit For
                Fact = 0
                f1 = 2: blnk = 0
                BR_BAJO = -128: BR_alto = -128
            End If
            LocA = LocA + 1
        End If
    Next a
    Err = 0
    CR_noSMP = LocA
    If CR_noSMP < CR_minSMP Then
        CR_msg = "Not enough samples"
        Exit Sub
    End If
    CR_msg = "Finished O.K."
End Sub

Sub decision()
    If CR_zzT = 0 And CR_zzJ = 0 And (CAL_spJ < Int(CoR_SPJ)) Then
        CR_msg = "ERROR - Required parameters are missing"
        Exit Sub
    End If
    If CR_TENSION = 0 Or CR_MENTIRA = 0 Or CR_EXCITAR = 0 Then
        CR_msg = "ERROR - Required calculations are missing"
        Exit Sub
    End If
    CR_msgCode = 0
    CoR_msgX = 0
    sarcasm = 0
    If CR_MENTIRA < 60 Then
        CoR_msgX = 2
        Exit Sub
    End If
    If ((CR_zzJ + 1) * 100) < 65 Then
        If ((CR_zzJ + 1) * 100) < CONS_SARCASMO Then sarcasm = sarcasm + 1
        CR_zzJ = 0.1
    End If
    If ((CR_zzT + 1) * 100) < 65 Then
        If ((CR_zzT + 1) * 100) < CONS_SARCASMO Then sarcasm = sarcasm + 1
        CR_zzT = 0.1
    End If
    LIE_BORD1 = CONS_MENTIRA11: LIE_BORD2 = CONS_MENTIRA12
    If CR_MENTIRA < LIE_BORD1 And CR_TENSION < LIE_BORD1 Then
        CR_msgCode = CR_msgCode + 1
    End If
    If CR_MENTIRA > LIE_BORD1 And CR_MENTIRA < LIE_BORD2 Then
        CoR_msgX = 8
        Exit Sub
    End If
    If CR_MENTIRA > LIE_BORD2 Then
        If CR_msgCode < 128 Then CR_msgCode = CR_msgCode + 128
    End If
    If CR_zzJ > CONS_BajozzJ Then
        If CR_zzJ > CONS_AltozzJ Then
            CR_msgCode = CR_msgCode + 64
        Else
            CR_msgCode = CR_msgCode + 8
        End If
    End If
    If CR_EXCITAR > LIE_BORD1 Then
        If CR_EXCITAR > LIE_BORD2 Then
            If (CR_msgCode And 32) = False Then CR_msgCode = CR_msgCode + 32
        Else
            If (CR_msgCode And 4) = False Then CR_msgCode = CR_msgCode + 4
        End If
    End If
    If CR_msgCode < 3 Then
        If sarcasm = 2 Then
            CR_msgCode = -2
            CoR_msgX = 2
            Exit Sub
        End If
        If sarcasm = 1 Then
            If (CR_zzT > CONS_BajozzT And CR_zzT < CONS_AltozzT) Then
                CR_msgCode = -1
                CoR_msgX = 2
            ElseIf CR_zzT > CONS_AltozzT Then
                CoR_msgX = 7
            End If
            If (CR_zzJ > CONS_BajozzT And CR_zzJ < CONS_AltozzT) Then
                CR_msgCode = -1
                CoR_msgX = 2
            ElseIf CR_zzJ > CONS_AltozzT Then
                CoR_msgX = 7
            End If
            Exit Sub
        End If
        If CR_msgCode = 1 Then
            CoR_msgX = 1
            Exit Sub
        End If
    End If
    If CR_msgCode > 127 Then CoR_msgX = 7: Exit Sub
    If CR_msgCode > 67 Then CoR_msgX = 8: Exit Sub
    If CR_msgCode > 63 Then CoR_msgX = 6: Exit Sub
    If CR_msgCode > 31 Then CoR_msgX = 5: Exit Sub
    If CR_msgCode > 7 Then CoR_msgX = 4: Exit Sub
    If CR_msgCode > 3 Then CoR_msgX = 3: Exit Sub
    CoR_msgX = 1
End Sub

Sub scan_TJ()
    ReDim jsalto(100)
    CR_msg = ""
    TestP = CR_noSMP
    CoR_spT = 0
    CoR_SPJ = 0
    If TestP <= 0 Then
        CR_msg = "Number of samples was not transferred"
        Exit Sub
    End If
    CR_minSALTO = 3       ' default
    CR_maxSALTO = 20      ' default
    salto = 0
    peaks = 0
    SALTOQ = 0
    For a = 1 To CR_noSMP - 2
        jjt1 = SMP(a): jjt2 = SMP(a + 1): jjt3 = SMP(a + 2)
        ' Explore peaks
        If (jjt1 < jjt2 And jjt3 < jjt2) Then
            If jjt1 > 15 And jjt2 > 15 And jjt3 > 15 Then peaks = peaks + 1
        End If
        If (jjt1 > jjt2 And jjt3 > jjt2) Then
            If jjt1 < -15 And jjt2 < -15 And jjt3 < -15 Then peaks = peaks + 1
        End If
        ' Explore jumps (plateaus)
        If (jjt1 > jjt2 - 5) And (jjt1 < jjt2 + 5) And (jjt3 > jjt2 - 5) And (jjt3 < jjt2 + 5) Then
            sss = sss + 1
        Else
            If sss >= CR_minSALTO And sss <= CR_maxSALTO Then
                salto = salto + 1
                jsalto(sss) = jsalto(sss) + 1
            End If
            sss = 0
        End If
    Next a
    saltoAV = 0
    SALTOtot = 0
    CoR_SALTOQ = 0
    For a = 1 To 100
        SALTOtot = SALTOtot + jsalto(a)
        saltoAV = saltoAV + (jsalto(a) * a)
    Next a
    If SALTOtot > 0 Then cr_saltoAV = saltoAV / SALTOtot
    For a = 1 To 100
        If jsalto(a) > 1 Then SALTOQ = SALTOQ + (jsalto(a) * Abs(cr_saltoAV - a))
    Next a
    CoR_spT = Int((peaks / CR_noSMP) * 1000) - CONS_RES_SPT
    CoR_SPJ = Int((salto / CR_noSMP) * 1000) - CONS_RES_SPJ
    CoR_SALTOQ = Sqr(SALTOQ)
    CoR_saltoAV = cr_saltoAV
    CR_msg = "Exploration of peaks and jumps completed O.K."
End Sub

Claims (10)

CLAIMS

1. Apparatus for detecting the emotional state of an individual, the apparatus comprising: a voice analyzer operative to input a speech specimen generated by the individual and to derive therefrom intonation information; and an emotion reporter operative to generate an output indication of the individual's emotional state based on said intonation information.

2. Apparatus according to claim 1, wherein said speech specimen is provided over the telephone to said voice analyzer.

3. Apparatus according to claim 1, wherein said report on the individual's emotional state includes a lie detection report based on the individual's emotional state.

4. Apparatus according to any of claims 1-3, wherein said intonation information comprises multidimensional intonation information.

5. Apparatus according to claim 4, wherein said multidimensional information comprises at least 3-dimensional information.

6. Apparatus according to claim 5, wherein said multidimensional information comprises at least 4-dimensional information.

7. Apparatus according to any of claims 1-3 and 5-6, wherein said intonation information includes information regarding peaks.

8. Apparatus according to claim 7, wherein said peak information comprises the number of peaks in a predetermined time period.

9. Apparatus according to claim 8, wherein said peak information comprises the distribution of peaks over time.

10. Apparatus according to any of claims 1-3, 5-6 and 8-9, wherein said intonation information includes information regarding plateaus.

11. Apparatus according to claim 10, wherein said plateau information comprises the number of plateaus in a predetermined time period.

12. Apparatus according to claim 11, wherein said plateau information comprises information regarding the length of plateaus.

13. Apparatus according to claim 12, wherein said information regarding the length of plateaus comprises an average plateau length for a predetermined time period.

14. Apparatus according to claim 12, wherein said information regarding the length of plateaus comprises the standard error of the plateau length for a predetermined time period.

15. A lie detection system comprising: a multidimensional voice analyzer operative to input a speech specimen generated by an individual and to quantify a plurality of characteristics of said speech specimen; and a credibility evaluator reporter operative to generate an output indication of the individual's credibility, including lie detection, based on said plurality of quantified characteristics.

16. A multi-dimensional lie detection method comprising: receiving a speech specimen generated by an individual and quantifying a plurality of characteristics of said speech specimen; and generating an output indication of the individual's credibility, including lie detection, based on said plurality of quantified characteristics.

17. Apparatus according to any of claims 1-3, 5-6, 8-9 and 11-15, wherein said speech specimen comprises a main speech wave having a period, and wherein said voice analyzer is operative to analyze the speech specimen in order to determine the rate of occurrence of plateaus, each plateau indicating that a relatively low-frequency wave is superimposed locally on the main speech wave; and wherein said emotion reporter is operative to provide a suitable output indication based on the rate of occurrence of plateaus.
18. A method for detecting emotional states comprising: establishing a multidimensional characteristic range characterizing an individual's range of emotion when at rest, by: monitoring the individual for a plurality of emotion-related parameters over a first period during which the individual is in an emotionally neutral state; and defining the multidimensional characteristic range as a function of the range of said plurality of emotion-related parameters during said first period; and monitoring the individual for said plurality of emotion-related parameters over a second period during which it is desired to detect the individual's emotional state, thereby to obtain a measurement of said plurality of emotion-related parameters, and adjusting said measurement to take into account said range.

19. A method for detecting the emotional state of an individual, the method comprising: receiving a speech specimen generated by the individual and deriving therefrom intonation information; and generating an output indication of the individual's emotional state based on said intonation information.
MXPA/A/2000/005981A 1997-12-16 2000-06-16 Apparatus and methods for detecting emotions MXPA00005981A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
IL122632 1997-12-16

Publications (1)

Publication Number Publication Date
MXPA00005981A true MXPA00005981A (en) 2002-02-26

Family


Similar Documents

Publication Publication Date Title
CA2313526C (en) Apparatus and methods for detecting emotions
AU774088B2 (en) Apparatus and methods for detecting emotions in the human voice
Kent et al. Reliability of the Multi-Dimensional Voice Program for the analysis of voice samples of subjects with dysarthria
Dubnov Generalization of spectral flatness measure for non-gaussian linear processes
Huffman Measures of phonation type in Hmong
EP1393300B1 (en) Segmenting audio signals into auditory events
EP0789296B1 (en) Voice controlled image manipulation
US7490038B2 (en) Speech recognition optimization tool
EP1944753A2 (en) Method and device for detecting voice sections, and speech velocity conversion method and device utilizing said method and device
WO1986003047A1 (en) Endpoint detector
US6240381B1 (en) Apparatus and methods for detecting onset of a signal
JPH0431898A (en) Voice/noise separating device
JPH08286693A (en) Information processing device
US6704671B1 (en) System and method of identifying the onset of a sonic event
MXPA00005981A (en) Apparatus and methods for detecting emotions
US6219636B1 (en) Audio pitch coding method, apparatus, and program storage device calculating voicing and pitch of subframes of a frame
AU2004200002B2 (en) Apparatus and methods for detecting emotions
AU612737B2 (en) A phoneme recognition system
US6594601B1 (en) System and method of aligning signals
US7392178B2 (en) Chaos theoretical diagnosis sensitizer
CN118410201A (en) Voice data classified storage method and system based on Internet of things platform
WO2004049303A1 (en) Analysis of the vocal signal quality according to quality criteria
JPH041920B2 (en)