WO2020136872A1 - Atmosphere purifying device, atmosphere purifying method, and elevator - Google Patents

Atmosphere purifying device, atmosphere purifying method, and elevator

Info

Publication number
WO2020136872A1
WO2020136872A1 (PCT application PCT/JP2018/048478)
Authority
WO
WIPO (PCT)
Prior art keywords
atmosphere
sound
stimulus
emotion
cleaning device
Prior art date
Application number
PCT/JP2018/048478
Other languages
French (fr)
Japanese (ja)
Inventor
Masataka Baba
Keigo Kawashima
Nobuaki Tanaka
Original Assignee
Mitsubishi Electric Corporation
Priority date
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to JP2019541820A
Priority to PCT/JP2018/048478
Publication of WO2020136872A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M: DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M21/00: Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M21/02: Other devices or methods to cause a change in the state of consciousness, for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor

Definitions

  • The present invention relates to an atmosphere cleaning device, an atmosphere cleaning method, and an elevator that detect a hostile atmosphere in a space where people are present and clean (defuse) that atmosphere.
  • the "harmful atmosphere" is, for example, a situation in which one of the parties is harmful to the other, for example, a situation in which an act of extortion, an injury, or a molester is performed, or a notice of harm is given as a preceding step. , The situation where some interaction occurs.
  • The present invention has been made to solve the above-mentioned problems, and its object is to obtain an atmosphere cleaning device capable of cleaning a hostile atmosphere.
  • The atmosphere cleaning device includes an input unit that acquires a sound; an emotion identification unit that identifies a person's emotion based on a feature amount derived from the person's emotion contained in the sound and on boundary information for identifying the emotion from the feature amount; an atmosphere estimation unit that estimates the degree of the atmosphere around the sound source based on the identification result of the emotion identification unit; and a stimulus output unit that outputs a stimulus for cleaning the atmosphere to the vicinity of the sound source according to the degree of the atmosphere.
  • Since a stimulus for cleaning the atmosphere is output according to the degree of the atmosphere, a hostile atmosphere can be cleaned.
  • FIGS. 5(a) and 5(b) are diagrams explaining the identification boundary information for identifying an emotion from the feature amount vector of each emotion according to the first embodiment of the present invention. FIG. 6 is a diagram explaining the identification boundary information for atmosphere detection recorded in the second recording unit according to the first embodiment.
  • FIGS. 10(a) and 10(b) are diagrams explaining a method of separating the uttered voice and the action sound from the input sound according to the first embodiment. FIG. 11 is a diagram showing the operation of estimating an emotion for each predetermined estimation period in the atmosphere estimation unit according to the first embodiment.
  • FIGS. 12(a) to 12(c) are diagrams showing the operation of obtaining the transition of the atmosphere level from the appearance frequency of estimated emotions in the atmosphere estimation unit according to the first embodiment. FIG. 13 is a diagram showing an example of the specific configuration of the stimulus control unit, the stimulus generation unit, and the stimulus output unit.
  • Embodiment 1. FIG. 1 is a block diagram showing the configuration of an atmosphere cleaning device according to the first embodiment for carrying out the present invention.
  • the atmosphere cleaning device 100 is a device that executes an atmosphere cleaning method.
  • the atmosphere cleaning device 100 can acquire sound.
  • The sound is a person's uttered voice (hereinafter "uttered voice"), a sound accompanying a person's action (hereinafter "action sound"), or a mixture of the uttered voice and the action sound.
  • the atmosphere cleaning device 100 detects a harsh atmosphere based on the sound and outputs a stimulus for cleaning the harsh atmosphere to the vicinity of the sound source.
  • The input side comprises the sound input unit 1, an input unit that acquires the uttered voices and action sounds of the target people, the uttered voice analysis unit 2, and the action sound analysis unit 3.
  • The identification boundary reference side comprises the first recording unit 4, in which identification boundary information on the emotion feature amounts of the emotional uttered voices to be detected is recorded in advance, and the second recording unit 5, in which identification boundary information on the emotion feature amounts of emotional action sounds is likewise recorded in advance.
  • The atmosphere detection side comprises the first identification unit 6, which identifies the emotion of the uttered voice based on the identification boundary information recorded in advance in the first recording unit 4, the second identification unit 7, which identifies the emotion of the action sound based on the identification boundary information recorded in advance in the second recording unit 5, and the atmosphere estimation unit 8.
  • the stimulus control unit 9 and the stimulus generation unit 10 generate a stimulus for cleaning the atmosphere, and the stimulus output unit 11 outputs the stimulus to people.
  • the sound input unit 1 is an input unit that acquires a sound.
  • the sound acquired by the sound input unit 1 is at least one of an utterance voice and an action sound.
  • the action sound is a sound that accompanies a person's action.
  • Here, the action sound refers to a sound produced as a byproduct when a person handles an object without care for the object itself or for others nearby, and also includes sounds made when a person strikes a desk, a wall, or a floor.
  • The emotion identification unit estimates emotions based on the feature amount derived from the human emotion (hereinafter "emotion") included in the sound acquired by the sound input unit 1 and on the boundary information for identifying the emotion from the feature amount. The emotion identification unit also generates information indicating the estimated emotion type (for example, "anger" or "irritation").
  • The processing executed by the emotion identification unit is realized by the uttered voice analysis unit 2, the action sound analysis unit 3, the first identification unit 6, and the second identification unit 7.
  • FIG. 2 is a diagram showing Plutchik's wheel of emotions.
  • Viewed on Plutchik's wheel of emotions, the uttered voice and action sounds produced by party X are considered to contain emotional features such as "rage", "anger", "irritation", "hate", and "disgust".
  • According to the interpretation that "anger is the flip side of fear", the underlying reason party X has become agitated may be "fear", "worry", or "anxiety" toward party Y or a third party. For example, party X may feel that his or her ideas and values have been denied, feel that things are not going as wished, or fear a decline in his or her social standing. Therefore, the uttered voice and action sounds produced by party X may also contain emotional features such as "fear", "worry", and "anxiety".
  • The atmosphere cleaning device 100 analyzes the emotion feature amounts of the uttered voices and action sounds acquired from the people, including the agitated party X, and detects emotions such as "rage", "anger", "irritation", "hate", "disgust", "fear", "worry", and "anxiety". The atmosphere cleaning device 100 then outputs to the people a stimulus for cleaning the atmosphere based on the detection result, and thus works to calm (relax) the aroused emotions so that the "dangerous atmosphere" does not reach a critical state. The atmosphere cleaning device 100 may also be operated so that a slight "unpleasant atmosphere", as a preceding stage, does not develop into a serious "hostile atmosphere".
  • As an example, the target of monitoring is an elevator car, and parties X, Y, and Z are inside it. Among them, party X has taken offense for some reason.
  • For example, party X expects an apology from party Y for reasons such as "party Y's shoulder or baggage bumped into party X" or "party Y stared at party X". Receiving none, party X assumes he or she has been "ignored", "laughed at", or "ridiculed", issues some notice of harm, and the situation heads toward a critical state.
  • Notices of harm include threats such as "shibaku" ("I'll beat you up"), "I'll kill you", "I'll bury you in the ground", and "I'll sink you in the sea", as well as threatening actions; an act can be said to reach that level if it would frighten an ordinary person. The uttered voice and action sounds accompanying such a notice of harm are considered to contain emotional features such as "rage", "anger", "irritation", "hate", and "disgust", and, from the interpretation that "anger is the flip side of fear" mentioned above, may also contain features such as "fear", "worry", and "anxiety".
  • A hostile atmosphere can also arise in conference rooms and in other closed spaces besides elevators.
  • Closed spaces also include vehicles such as trains, automobiles, ships, and space stations, from which it is not easy to escape to the outside.
  • The atmosphere cleaning device 100 can clean a hostile atmosphere even when it occurs in a closed space other than a conference room or an elevator.
  • FIG. 3 is a diagram for explaining the utterance voice analysis process, and shows the flow of voice emotion recognition.
  • First, feature quantities such as the pitch (fundamental frequency) of the uttered voice and its loudness (power) are extracted.
  • These frame-level feature quantities are called LLDs (Low-Level Descriptors).
  • The LLDs obtained in this way are time-series data, so their length varies with the utterance content. Moreover, the emotion contained in a voice is presumably expressed not as a momentary value but as a tendency over the whole utterance. Therefore, statistics such as the mean, variance, slope, maximum, and minimum are calculated from the LLD time series. If m statistics are calculated for each of n LLD time series, the input uttered voice is converted into an n × m dimensional feature vector. Since this feature amount vector combines a plurality of feature amounts into one, it may be referred to simply as a feature amount. A sketch of this conversion follows.
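  • The following is a minimal Python sketch of this feature extraction (the patent specifies no implementation; the function names and the choice of five statistics are assumptions): n LLD time series of arbitrary length are turned into one fixed-length n × m dimensional feature amount vector.

```python
import numpy as np

def lld_statistics(lld: np.ndarray) -> np.ndarray:
    """Compute m = 5 statistics (mean, variance, slope, max, min) of one LLD."""
    t = np.arange(len(lld))
    slope = np.polyfit(t, lld, 1)[0]  # linear trend over the utterance
    return np.array([lld.mean(), lld.var(), slope, lld.max(), lld.min()])

def utterance_feature_vector(llds: list[np.ndarray]) -> np.ndarray:
    """Concatenate the statistics of n LLDs into an n x m dimensional vector."""
    return np.concatenate([lld_statistics(x) for x in llds])

# Example: two LLD series (F0 in Hz, power in dB) of different lengths are
# mapped to a single 2 x 5 = 10 dimensional feature amount vector.
f0 = np.array([180.0, 195.0, 210.0, 205.0])
power = np.array([60.0, 63.0, 70.0, 68.0, 65.0])
vec = utterance_feature_vector([f0, power])
print(vec.shape)  # (10,)
```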
  • the uttered voice analysis unit 2 recognizes the emotion contained in the uttered voice by using the feature amount vector thus obtained.
  • Two ways of expressing emotions are commonly used: categorical expressions such as "anger" and "joy", and coordinates on axes of an emotion space such as "pleasant-unpleasant" and "arousal-sleep".
  • In the categorical case, pattern recognition estimates the category to which the feature amount vector belongs, using statistical classifiers such as linear discriminant analysis or a support vector machine (SVM).
  • In the coordinate case, the coordinate values are estimated from the feature vector using models such as linear regression, support vector regression (SVR), or a neural network.
  • The processing executed by the uttered voice analysis unit 2 shown in FIG. 3 applies similarly to the processing executed by the action sound analysis unit 3, the second recording unit 5, and the second identification unit 7.
  • Since the uttered voice and the action sound have greatly different acoustic characteristics, their feature amounts naturally differ as well.
  • FIG. 4 is a diagram illustrating identification boundary information for atmosphere detection recorded in the first recording unit 4, and illustrates an example of identifying four types of emotions from emotion A to emotion D.
  • To create the identification boundary information, labeled uttered voices for emotions A to D are prepared. From them, the uttered voice analysis unit 2 extracts feature quantities such as the fundamental frequency, power, and spectrum, calculates their statistics, and derives the feature amount vector of each emotion.
  • FIG. 5 is a diagram illustrating identification boundary information for identifying an emotion from the feature amount vector for each emotion.
  • FIG. 5(a) shows the emotion distribution and FIG. 5(b) the emotion classification. In both, the vertical axis represents feature amount M and the horizontal axis feature amount L.
  • For simplicity, only the two features L and M are shown, i.e. a two-dimensional feature space. Assuming the feature vectors of emotions A to D are distributed as in FIG. 5(a), multi-class classification by the one-versus-rest method of the SVM learns the identification boundaries shown in FIG. 5(b).
  • Learning yields a boundary AB that separates emotion A from emotion B, a boundary BC that separates emotion B from emotion C, a boundary CD that separates emotion C from emotion D, and a boundary DA that separates emotion D from emotion A.
  • Accordingly, emotion A is identified by boundaries AB and DA, emotion B by boundaries BC and AB, emotion C by boundaries CD and BC, and emotion D by boundaries DA and CD.
  • The boundary AB, boundary BC, boundary CD, and boundary DA obtained here are recorded in the first recording unit 4 of FIG. 1 as identification boundary information and used for emotion identification. Emotions can then be classified by the regions of the feature space divided by the dotted lines based on the identification boundary information. A sketch of this flow follows.
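  • The following is a minimal sketch of this recording-and-identification flow using scikit-learn (an assumed library choice; the patent names no implementation): a one-versus-rest SVM learns the boundaries of emotions A to D from labeled feature vectors, the trained model is saved as the identification boundary information, and a new feature vector (the star of FIG. 8) is identified.

```python
import joblib
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Labeled feature amount vectors (here two-dimensional: feature amounts L, M).
X = np.array([[0.2, 0.8], [0.3, 0.9],   # emotion A
              [0.8, 0.8], [0.9, 0.7],   # emotion B
              [0.8, 0.2], [0.7, 0.1],   # emotion C
              [0.2, 0.2], [0.1, 0.3]])  # emotion D
y = np.array(["A", "A", "B", "B", "C", "C", "D", "D"])

# One-versus-rest SVM: learns one boundary per emotion against the rest.
clf = OneVsRestClassifier(SVC(kernel="linear")).fit(X, y)

# "First recording unit": persist the learned boundary information.
joblib.dump(clf, "recording_unit_1.joblib")

# "First identification unit": load the boundaries, identify a new vector.
clf = joblib.load("recording_unit_1.joblib")
print(clf.predict(np.array([[0.75, 0.75]])))  # expected: ['B']
```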
  • FIG. 6 is a diagram explaining the identification boundary information for atmosphere detection recorded in the second recording unit 5, similarly illustrating the case of identifying four types of emotions, emotion A to emotion D.
  • The description is the same as for FIGS. 4 and 5 except that the input is labeled action sounds, so the details are omitted.
  • the boundary AB, the boundary BC, the boundary CD, and the boundary DA obtained in the same manner are recorded in the second recording unit 5 as identification boundary information and are used for identification.
  • FIG. 7 is a diagram in which the operation explanatory diagram of FIG. 3 is applied to the block configuration of FIG. 1.
  • In the uttered voice analysis unit 2, the uttered voice is extracted from the sound input signal and its feature amount vector is calculated. Then the first identification unit 6, a statistical classifier loaded with the identification boundary information on uttered-voice emotions recorded in advance in the first recording unit 4, estimates the emotion contained in the sound input (uttered voice) and outputs the identification result to the atmosphere estimation unit 8.
  • FIG. 8 is a diagram showing an example of emotion estimation by the first identification unit 6.
  • The feature amount vector calculated by the uttered voice analysis unit 2 for a new sound input is indicated by a star. Since the star lies in the region bounded by boundary AB and boundary BC, it is identified as emotion B, and this identification result is output to the atmosphere estimation unit 8.
  • FIG. 9 is a diagram in which the operation explanatory diagram of FIG. 3 is applied to the block configuration of FIG. 1.
  • The action sound analysis unit 3 extracts the action sound from the sound input signal and calculates its feature amount vector. Then the second identification unit 7, a statistical classifier loaded with the identification boundary information on action-sound emotions recorded in advance in the second recording unit 5, estimates the emotion contained in the sound input (action sound) and outputs the identification result to the atmosphere estimation unit 8.
  • FIG. 10 is an explanatory diagram of a method of separating the uttered voice and the action sound from the input sound, showing an example of an emotional uttered voice and an emotional action sound.
  • FIG. 10A is an example of an emotional utterance voice, and shows an amplitude waveform of the emotional utterance voice of "anger” and its spectrogram.
  • FIG. 10B is an example of an emotional action sound: the sound (often a strong percussive sound) produced by operating office equipment while the emotion of "anger" is latent, shown as an amplitude waveform and its spectrogram.
  • In each figure, the horizontal axis represents time; the vertical axis is amplitude in the upper panel and frequency in the lower panel. In the lower spectrograms, darker shades indicate stronger power, and the action sound shows a wide spread of signal components.
  • The uttered voice has strong power roughly from several hundred Hz up to 5 kHz but is comparatively weak elsewhere. It is therefore conceivable to separate the uttered voice and the action sound with a low-pass filter and a high-pass filter whose boundary is around 5 kHz.
  • The uttered voice analysis unit 2 treats signal components below about 5 kHz as the uttered voice and extracts them, while the action sound analysis unit 3 treats components of about 5 kHz and above as the action sound and extracts them. Furthermore, with the band below about 5 kHz assigned to the uttered voice and the band above assigned to the action sound, the atmosphere estimation unit 8 can, during periods when the action sound holds power above a predetermined level, adjust the weighting of emotion recognition between the uttered-voice side and the action-sound side (for example, lowering the weight of the side degraded by the overlap), thereby suppressing the loss of recognition accuracy caused by overlapping sounds. A sketch of this separation follows.
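  • A minimal sketch of this band separation, assuming scipy and a 44.1 kHz sampling rate (neither is specified in the patent):

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44_100          # sampling rate in Hz (assumed)
CUTOFF_HZ = 5_000    # boundary between uttered voice and action sound

sos_low = butter(8, CUTOFF_HZ, btype="lowpass", fs=FS, output="sos")
sos_high = butter(8, CUTOFF_HZ, btype="highpass", fs=FS, output="sos")

def separate(signal: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split the input into an uttered-voice band (< 5 kHz) and an
    action-sound band (>= 5 kHz)."""
    return sosfilt(sos_low, signal), sosfilt(sos_high, signal)

# The atmosphere estimation unit can then adjust the recognition weighting
# while the high band holds power above a predetermined level.
x = np.random.randn(FS)                # one second of test input
voice_band, action_band = separate(x)
```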
  • FIG. 11 is a diagram showing an operation of estimating an emotion in the atmosphere estimation unit 8 for each predetermined estimation period.
  • The uppermost diagram of FIG. 11 shows the amplitude waveform of the uttered voice, with amplitude on the vertical axis and time on the horizontal axis.
  • The first identification unit 6 estimates an emotion for each estimation period into which the waveform is divided. The estimation periods could be placed without overlap, but by overlapping them as shown in FIG. 11, an emotional feature that appears prominently at a period boundary is not split between periods, so the emotion can be estimated properly.
  • the estimated emotions are indicated by circles at the tip of the arrow drawn from the amplitude waveform of the speech voice.
  • When emotion A or B is estimated, it is indicated by a black circle; white circles indicate other results (including periods in which no emotion was estimated).
  • The description based on the amplitude waveform of the uttered voice applies equally when replaced with the amplitude waveform of the action sound, so it is not repeated. A sketch of the overlapping division follows.
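  • A minimal sketch of the overlapping estimation periods (the period length, the overlap of one half, and the placeholder identifier are assumptions):

```python
import numpy as np

def estimation_periods(x: np.ndarray, period: int, hop: int):
    """Yield windows of `period` samples every `hop` samples; hop < period
    gives the overlapping estimation periods described above."""
    for start in range(0, len(x) - period + 1, hop):
        yield x[start:start + period]

def identify_emotion(window: np.ndarray) -> str:
    """Placeholder standing in for the first identification unit."""
    return "A" if np.abs(window).mean() > 0.5 else "other"

x = np.random.randn(16_000)
# 4000-sample periods overlapping by half give one estimate every 2000 samples.
emotions = [identify_emotion(w) for w in estimation_periods(x, 4000, 2000)]
```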
  • FIG. 12 is a diagram showing the operation of viewing the estimated emotions of FIG. 11 over a wider span and obtaining the transition of the atmosphere level from the appearance frequency of the estimated emotions in the atmosphere estimation unit 8.
  • In FIG. 12(a), a histogram of the appearance frequency of estimated emotions is shown for each appearance frequency determination period, which groups several estimation periods. The appearance frequency determination periods may also overlap, for the same reason as the estimation periods.
  • As an example, the atmosphere estimation unit 8 counts the appearance frequencies of emotions A to D and of all other results.
  • The contribution to the atmosphere differs from emotion to emotion, so the counts must be weighted. For example, among "rage", "anger", "irritation", "hate", and "disgust", "rage" contributes most strongly to a hostile atmosphere, "anger" and "hate" follow, and "irritation" and "disgust" contribute comparatively little. On the other hand, if "irritation" and "disgust" appear extremely often, their contribution to the hostile atmosphere becomes large. It is therefore necessary to weight the appearance frequencies of emotions A to D and of the other results according to their contributions.
  • By this weighting, the histogram of emotion appearance frequencies in FIG. 12(a) becomes the weighted histogram of FIG. 12(b).
  • In this example the weights per emotion are emotion A > emotion B > emotion C > emotion D, and the weight of the other results is zero.
  • FIG. 12(c) shows the transition of the atmosphere level, together with the thresholds, obtained by summing the weighted histogram of emotion appearance frequencies.
  • This sum is the atmosphere value. The atmosphere value may also be expressed as a numerical value based on the information indicating the emotion type.
  • the atmosphere estimation unit 8 calculates the atmosphere value for each appearance frequency determination period.
  • Each appearance frequency determination period yields one circle of the atmosphere value (atmosphere level), and shifting the period successively traces the transition of the atmosphere value.
  • the severity of the atmosphere can be defined in stages.
  • The thresholds Th1 to Th3 are set in increasing order of atmosphere deterioration. For example, when the atmosphere value exceeds Th3, the hostile atmosphere is judged to have reached an extreme level; when the value then falls below Th2 and Th1 in turn, the hostile atmosphere is weakening. A sketch of this computation follows.
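  • The following sketch summarizes the computation of FIGS. 12(a) to 12(c); the weight values and thresholds Th1 to Th3 are illustrative assumptions, not values from the patent.

```python
from collections import Counter

WEIGHTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0}   # others weigh zero
TH1, TH2, TH3 = 5.0, 10.0, 15.0                      # assumed thresholds

def atmosphere_value(estimated_emotions: list[str]) -> float:
    """Weighted sum of emotion appearance frequencies in one period."""
    counts = Counter(estimated_emotions)
    return sum(WEIGHTS.get(e, 0.0) * n for e, n in counts.items())

def atmosphere_degree(value: float) -> int:
    """0 = calm ... 3 = the hostile atmosphere has reached the extreme."""
    return sum(value >= th for th in (TH1, TH2, TH3))

period = ["A", "A", "B", "other", "C", "A"]  # one determination period
v = atmosphere_value(period)                 # 4+4+3+0+2+4 = 17.0
print(v, atmosphere_degree(v))               # 17.0 3 -> above Th3
```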
  • the stimulus output unit 11 outputs a stimulus for cleaning the atmosphere to the vicinity of the sound generation source according to the degree of the atmosphere.
  • FIG. 13 is a diagram illustrating an example of a specific configuration of the stimulation control unit 9, the stimulation generation unit 10, and the stimulation output unit 11 based on the estimation result of the atmosphere estimation unit 8.
  • As stimuli for cleaning the hostile atmosphere, for example an auditory stimulus, an olfactory stimulus, a visual stimulus, an electromagnetic wave stimulus, and a charged fine-particle stimulus are output individually or in combination.
  • The stimulus control unit 9 comprises a main control unit 9a, which receives the estimation result of the atmosphere estimation unit 8, combines a plurality of stimuli, and instructs the content of each stimulus control, together with control units that control each stimulus based on those instructions: the auditory stimulus control unit 9b, olfactory stimulus control unit 9c, visual stimulus control unit 9d, electromagnetic wave stimulus control unit 9e, and charged particle stimulus control unit 9f.
  • The stimulus generation unit 10 comprises a generation unit for each of the stimulus control units 9b to 9f: the auditory stimulus generation unit 10b, olfactory stimulus generation unit 10c, visual stimulus generation unit 10d, electromagnetic wave stimulus generation unit 10e, and charged particle stimulus generation unit 10f.
  • The stimulus output unit 11 comprises output units that output the stimulus of each generation unit 10b to 10f in a suitable form: the auditory stimulus output unit 11b, olfactory stimulus output unit 11c, visual stimulus output unit 11d, electromagnetic wave stimulus output unit 11e, and charged particle stimulus output unit 11f.
  • Each stimulus output from the stimulus output unit 11 is output to people, that is, around the sound source, according to the estimation result of the atmosphere estimation unit 8.
  • the content of each stimulus will be described individually.
  • Auditory stimulus is the sound that cleans a harsh atmosphere.
  • The auditory stimulus consists of sounds said to enhance the parasympathetic nervous system and to have a relaxing effect on humans, for example sounds of the natural environment, especially sounds that accompany the unpredictable movement of water and the atmosphere.
  • Such natural sound is used as the stimulus: the sound of waves washing a beach, babbling brooks, clear waterfalls, rain falling on leaves, and branches and leaves rustling in the wind. Combinations of these, overlaid with birdsong, insect calls, or animal cries, or sounds reconstructed to convey the depth and breadth of a natural space, are likewise used as auditory stimuli.
  • Acoustically, such "natural sound" is characterized by the unpredictable movement described above and by power that weakens as frequency rises, like pink noise; that is, the power is inversely proportional to the frequency f, a property known as "1/f fluctuation". Setting aside cases where attention is concentrated on a specific feature within the natural sound, listening to "natural sound" in daily life is thought to reproduce the state of being in a natural space, withdrawn from annoyances, worries, and stress, and thereby to enhance the parasympathetic nervous system and produce a relaxing effect.
  • An auditory stimulus may also enhance the parasympathetic nervous system or produce a relaxing effect through its relation to human memory.
  • When a melody that recalls a heartwarming story or one's own experience is heard, such effects are presumed to arise, so such melodies may also be used as auditory stimuli.
  • Candidates include nursery rhymes, popular songs, and anime and drama music, but they should enjoy universal support with little bias from personal taste.
  • Some classical music is also said to be highly relaxing; for example, a performance of Ravel's "Pavane for a Dead Princess" may be used as an auditory stimulus, again preferably one with universal support and little bias in taste.
  • Alternatively, a piece or melody based on just intonation, in which the scale tones are related by simple integer ratios and chords do not beat, may be used.
  • A sound stimulus that evokes "laughter" corresponds to sound effects or theme music that punctuate the punchlines ("ochi") and feints ("boke", deliberately off-target behavior meant to induce laughter) of comedy programs and skits. As a specific piece, "Somebody Stole My Gal" as played by the American jazz musician Pee Wee Hunt can be used. Even if not that recording itself, a melody reminiscent of the Pee Wee Hunt performance can be expected to evoke "laughter" similarly.
  • This piece is the theme song of "Yoshimoto Shin Comedy", and its causal association with "laughter" is strongly remembered by people connected with the Kansai cultural sphere, mainly those from or tied to the Kansai region. Whether the acoustic features of the piece induce laughter in everyone has not been verified.
  • FIG. 14 is a diagram showing an example of a configuration for outputting the auditory stimulus described above, showing the auditory stimulus system of FIG. 13 in more detail.
  • the auditory stimulus output unit 11b outputs a natural sound composed of a sound generated by the movement of water or the atmosphere as an auditory stimulus for cleaning the atmosphere. Alternatively, the auditory stimulus output unit 11b outputs a music or melody sound as an auditory stimulus for cleaning the atmosphere.
  • the auditory stimulation control unit 9b transmits to the auditory stimulation generation unit 10b an instruction to generate a sound for cleaning a bad atmosphere.
  • the auditory stimulation generation unit 10b includes an audio signal reproduction unit 10b1 and an audio signal data storage unit 10b2.
  • Audio signal data such as the aforementioned "natural sound, music, melody sound" is stored in the audio signal data storage unit 10b2.
  • The predetermined audio signal data stored in the audio signal data storage unit 10b2 is read and reproduced by the audio signal reproduction unit 10b1, and is output into the space as an auditory stimulus from the speaker 11b that constitutes the auditory stimulus output unit 11b.
  • the olfactory stimulus is a scent that cleans a harsh atmosphere.
  • A diluted plant-derived aromatic component, a so-called essential oil or aroma oil (which may contain synthetic components), is used as the stimulus.
  • Essential oils are natural aromatic substances extracted from plant flowers, leaves, peels, fruits, heartwood, roots, seeds, bark, resins, and so on; each has its own scent and function.
  • Examples of essential oils and their principal active components include:
  • ylang-ylang (linalool)
  • sweet orange (limonene)
  • Roman chamomile
  • clary sage (linalyl acetate, linalool)
  • grapefruit (limonene)
  • lavender (linalyl acetate, linalool)
  • bergamot (limonene, linalool)
  • peppermint (menthol)
  • pine (pinene)
  • cedar (pinene)
  • cypress (pinene)
  • kuromoji (linalool)
  • These scents span the herbal, citrus, and woody families.
  • FIG. 15 is a diagram showing an example of a configuration for outputting the olfactory stimulus described above, showing the olfactory stimulus system of FIG. 13 in more detail.
  • the olfactory stimulus output unit 11c outputs a diluted essential oil or a diluted aroma oil as an olfactory stimulus for cleaning the atmosphere.
  • the olfactory stimulus control unit 9c transmits a scent generation instruction to the olfactory stimulus generation unit 10c.
  • the olfactory stimulus generation unit 10c includes a cartridge operation unit 10c1 and scent component cartridges 10c2 to 10c4. Although three cartridges are shown in FIG. 15 for simplification of description, the number is not limited to this.
  • the scent component cartridges 10c2 to 10c4 hold different scent components.
  • The scent components are released singly or as a blend of several, according to operation signals from the cartridge operation unit 10c1 based on instructions from the olfactory stimulus control unit 9c.
  • the olfactory stimulus output unit 11c releases the generated scent component.
  • Each scent component cartridge 10c2 to 10c4 may, for example, be a spray structure that holds a liquid of diluted essential oil and atomizes it mechanically using high-pressure gas or a piezo element. Alternatively, an ultrasonic atomization structure that applies ultrasonic waves to the diluted liquid may be used.
  • the olfactory stimulation output unit 11c includes a nozzle 11c1, a guide tube 11c2, a fan 11c3, and an output operation unit 11c4. The scent component atomized in the olfactory stimulus generation unit 10c is guided to the nozzle 11c1 through the guide tube 11c2 and is discharged.
  • To guide the olfactory stimulus released from the scent component cartridge in the intended direction, the output operation unit 11c4, based on instructions from the olfactory stimulus control unit 9c, changes the direction of the nozzle 11c1 and adjusts the air volume of the fan 11c3.
  • the scent output by the olfactory stimulus output unit 11c may remain at the place where the scent was output. Therefore, the olfactory stimulus output unit 11c may output the deodorant component a predetermined time after outputting the scent. Thereby, the atmosphere cleaning device 100 can deodorize the scent.
  • a visual stimulus is at least one of information and light that cleans a hostile atmosphere.
  • Image output or lighting control is used as a visual stimulus.
  • The image output shows natural scenery, whether as a moving image or a still image.
  • By associating the content with the auditory and olfactory stimuli, the stimuli can reinforce one another in enhancing the parasympathetic nervous system and producing a relaxing effect.
  • For example, "natural scenery" such as forests and mountain streams is assumed to have a high affinity with "natural sounds" such as babbling brooks, clear waterfalls, and branches and leaves rustling in the wind, and with woody scents.
  • FIG. 16 is a diagram showing an example of a configuration for outputting the visual stimulus described above, showing the visual stimulus system of FIG. 13 in more detail.
  • In FIG. 16, the visual stimulus generation unit 10d and the visual stimulus output unit 11d of FIG. 13 are each provided in two systems (image display type and lighting operation type), but the invention is not limited to this; only one may be provided.
  • The visual stimulus output unit 11d outputs, as a visual stimulus for cleaning the atmosphere, images of natural scenery that exists with the movement of water and the atmosphere, or illumination light.
  • the visual stimulus output unit 11d uses, as the illuminating light, illuminating light accompanied by a color tone change that individually intensifies the blue, green, and orange lights.
  • the visual stimulus control unit 9d transmits a visual stimulus output instruction to the visual stimulus generation unit 10d.
  • the visual stimulus generation unit 10d includes an image signal data storage unit 10d2, an image signal reproduction unit 10d1, an illumination control data storage unit 10d4, and an illumination drive unit 10d3.
  • the image signal data storage unit 10d2 stores image information.
  • The image signal reproduction unit 10d1 reads the image information from the image signal data storage unit 10d2 based on the output instruction from the visual stimulus control unit 9d.
  • the image signal reproduction unit 10d1 outputs an image signal based on the read image information to the monitor 11d1. Thereby, the monitor 11d1 can display the image information.
  • the monitor 11d1 is an example of the visual stimulus output unit 11d.
  • The illumination control data storage unit 10d4 stores control data for controlling the illumination 11d2. Based on the instruction from the visual stimulus control unit 9d, the illumination drive unit 10d3 reads the stored illumination control data and outputs a drive signal to the illumination 11d2, which can thereby output light.
  • the illumination 11d2 is, for example, an illumination having a function of controlling the color tone, and may emit light having a color shade corresponding to the input drive signal directly or indirectly by reflection. Further, the visual stimulus generation unit 10d may adjust the intensity of the light output by the illumination 11d2.
  • Electromagnetic stimulation is an electromagnetic wave that cleans a harsh atmosphere.
  • The electromagnetic wave stimulus simulates the standing wave of approximately 7.83 Hz produced when ultra-long waves generated by lightning discharges propagate in the spherical-shell cavity between the earth and the ionosphere, the so-called Schumann resonance, and an electromagnetic wave of that frequency is used as the stimulus.
  • It is sometimes argued that ultra-long-wave electromagnetic waves, having existed throughout the history of biological evolution, enhance the parasympathetic nervous system and have a relaxing effect; attention is also drawn to the closeness of this frequency to that of the electroencephalographic alpha wave.
  • FIG. 17 is a diagram showing an example of a configuration for outputting the electromagnetic wave stimulus described above, showing the electromagnetic wave stimulus system of FIG. 13 in more detail.
  • the electromagnetic wave stimulus control unit 9e transmits an electromagnetic wave stimulus output instruction to the electromagnetic wave stimulus generation unit 10e.
  • the electromagnetic wave stimulus generation unit 10e is composed of an electromagnetic wave drive unit 10e1 and an ultra long wave oscillation unit 10e2.
  • the ultra-long wave oscillating unit 10e2 includes an electronic circuit that oscillates at a frequency of about 7.83 Hz, and oscillates based on a command from the electromagnetic wave driving unit 10e1 to output an ultra-long wave signal.
  • the electromagnetic wave driving unit 10e1 drives the input ultra-long wave signal with a predetermined electric power, supplies a current based on the ultra-long wave signal to the antenna 11e, and outputs an electromagnetic wave stimulus.
  • the antenna 11e is an example of the electromagnetic wave stimulation output unit 11e.
  • Charged particle stimuli are charged particles that clean a harsh atmosphere.
  • Fine water particles negatively charged by the Lenard effect (the waterfall effect) are used as the stimulus.
  • Water droplets are polarized, with negative charge tending to distribute on the surface and positive charge inside. When water droplets collide mechanically and break up, the relatively small particles produced are negatively charged (commonly called negative ions or minus ions), while the large particles tend to be positively charged. The large particles fall quickly under gravity, while the small particles float, forming an air mass in which negative ions predominate.
  • Although this charged particle stimulus may be used alone, associating its content with the auditory, olfactory, and visual stimuli lets them reinforce one another in enhancing the parasympathetic nervous system and producing a relaxing effect.
  • For example, "natural sounds" such as babbling brooks and clear waterfalls, woody scents, scenery images of forests and mountain streams, and charged particles can be output in association with one another.
  • The natural environment is thereby reproduced more realistically, and as a result the parasympathetic enhancement and relaxing effects can be obtained.
  • FIG. 18 is a diagram showing an example of a configuration for outputting the charged particle stimulus described above, showing the charged particle stimulus system of FIG. 13 in more detail.
  • the charged particle stimulation control unit 9f sends an output instruction to the charged particle stimulation generation unit 10f.
  • the charged particle stimulus generation unit 10f includes an element drive unit 10f1 and an atomized particle water generation unit 10f2.
  • the charged particle stimulation output unit 11f includes a nozzle 11f1, a guide tube 11f2, a fan 11f3, and an output operation unit 11f4.
  • the particle water atomized in the charged particle stimulus generation unit 10f is guided to the nozzle 11f1 through the guide pipe 11f2 and is discharged.
  • To guide the charged particles discharged from the atomized particle water generation unit 10f2 in the intended direction, the output operation unit 11f4, based on instructions from the charged particle stimulation control unit 9f, changes the direction of the nozzle 11f1 and adjusts the air volume of the fan 11f3.
  • the guide tube 11f2 and the nozzle 11f1 through which the charged particles pass may be partially negatively charged to supplement the charge amount of the charged particles.
  • FIG. 19 is a matrix diagram showing an example of stimulation control.
  • While FIG. 13 shows a configuration that outputs five categories of stimuli, FIG. 19 shows three categories for simplicity of explanation: four stimuli K1 to K4, three stimuli L1 to L3, and three stimuli M1 to M3. Within each category, a larger number indicates a stronger action of cleaning the deteriorated atmosphere.
  • For example, if stimulus K is the auditory stimulus (that is, sound), stimulus K1 may be a natural sound and stimulus K2 classical music.
  • The stimulus control matrix expresses the degree of the hostile atmosphere in five levels.
  • The first embodiment illustrated the case of three thresholds, but the matrix diagram of FIG. 19 may be regarded as illustrating the case of 20 thresholds.
  • When the degree of the hostile atmosphere is 1, the atmosphere value is at least threshold Th1 and less than threshold Th2; when the degree is 2, the atmosphere value is at least Th2 and less than Th3, and so on.
  • the stimulus control matrix diagram shows the relationship between the degree of a bad atmosphere and the stimulus to be output.
  • The horizontal direction shows the degree of atmosphere deterioration, divided into five stages from mild to severe, with each stage displayed in four sections; moving to the right means a worse atmosphere.
  • While the deterioration is mild, one of the stimuli K1, K2, L1, and L2 is used; as it worsens, two stimuli are gradually used at the same time, and the stimuli M1 to M3 are added in turn, so that the stimulus control progressively strengthens the action of cleaning the deteriorating atmosphere. Such control can be realized, for example, by providing the main control unit 9a of FIG. 13.
  • Although FIG. 19 does not show control of the intensity or amount of each stimulus, such control can obviously also strengthen the cleaning action stepwise, and it applies equally to the olfactory and charged particle stimuli described above. A sketch of this selection logic follows.
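  • A minimal sketch of such stepwise stimulus control; the particular combinations per stage are invented for illustration and are not the contents of FIG. 19.

```python
# Lookup table in the spirit of FIG. 19: mapping the degree of atmosphere
# deterioration (5 stages) to a combination of stimuli K1-K4, L1-L3, M1-M3,
# adding stimuli and stepping up strength as the atmosphere worsens.
STIMULUS_MATRIX = {
    1: ["K1"],               # mild: one weak stimulus
    2: ["K1", "L1"],         # two stimuli used at the same time
    3: ["K2", "L2"],
    4: ["K3", "L2", "M1"],   # stimuli M are added as deterioration advances
    5: ["K4", "L3", "M3"],   # severe: strongest cleaning action
}

def select_stimuli(degree: int) -> list[str]:
    """Return the stimulus combination for a deterioration degree (clamped)."""
    return STIMULUS_MATRIX[max(1, min(degree, 5))]

print(select_stimuli(4))  # -> ['K3', 'L2', 'M1']
```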
  • With the atmosphere cleaning device configured as described above, inappropriate situations surrounding people, from a slight "unpleasant atmosphere" to a serious "hostile atmosphere", are detected by analyzing the acquired audio signal, and a stimulus that acts to change people's emotions is output according to the degree of inappropriateness of the atmosphere; further deterioration of the atmosphere can therefore be prevented, and a critical situation can be averted.
  • Likewise, since a stimulus that acts to guide people's emotions in a calmer direction is output according to the degree of inappropriateness of the atmosphere, further deterioration of the atmosphere can be prevented and a critical situation averted.
  • Since the atmosphere cleaning device outputs, as such an auditory stimulus, a natural sound composed of sounds accompanying the movement of water or the atmosphere, further deterioration of the atmosphere can be prevented without hindering concentration, and a critical situation can be averted.
  • Since the atmosphere cleaning device can output, as such an auditory stimulus, music or a melody, the aroused emotions of the parties can be soothed by the recollection of a heartwarming story or of their own experience, further deterioration of the atmosphere can be prevented, and a critical situation can be averted.
  • Since the atmosphere cleaning device can use, as such an auditory stimulus, sound effects or theme music that punctuate comedic punchlines and feints, laughter is evoked, further deterioration of the atmosphere is prevented, and a critical situation can be averted.
  • Since the atmosphere cleaning device can also use, as such an auditory stimulus, music or a melody based on a temperament in which chords do not beat, further deterioration of the atmosphere can be prevented and a critical situation averted.
  • Since the atmosphere cleaning device uses, as an olfactory stimulus with such effects, a dilution of essential oil or aroma oil (which may contain synthetic components), the stimulus is transmitted rapidly to the limbic system, further deterioration of the atmosphere is prevented in a short time, and a critical situation can be averted.
  • As a visual stimulus with such effects, the atmosphere cleaning device uses image output of "natural scenery" recalling the movement of water and the atmosphere and the freshness of plants and trees, or lighting control with color tone changes such as intensifying blue light, which has been reported to have a strong sedative effect, green light recalling the freshness of sunlight through leaves, or orange light recalling warmth, security, and nostalgia; a large amount of information can thereby be conveyed to the brain at once, further deterioration of the atmosphere is prevented, and a critical situation can be averted.
  • Since the atmosphere cleaning device uses, as an electromagnetic wave stimulus with such effects, an electromagnetic wave of about 7.83 Hz based on the Schumann resonance, stimuli can be conveyed to many people simultaneously without relying directly on the five senses, which are susceptible to disturbance; further deterioration of the atmosphere is prevented, and a critical situation can be averted.
  • Since the atmosphere cleaning device uses, as a charged particle stimulus with such effects, fine water particles negatively charged by the Lenard effect, the freshness of being in a natural environment can be conveyed to the brain, further deterioration of the atmosphere is effectively prevented, and a critical situation can be averted.
  • In short, the atmosphere cleaning device detects inappropriate situations surrounding people, from a slight "unpleasant atmosphere" to a serious "dangerous atmosphere", and outputs a stimulus that guides people toward a calm state and cleans the atmosphere; stressful, inappropriate situations can thus be cleaned before a crisis, and damage to health and crime can be prevented.
  • FIG. 20 is a diagram showing an example of a behavior sound analysis method.
  • FIG. 21 is a diagram illustrating an example of a result of action sound analysis.
  • Emotional action sounds reflect that feelings such as "rage", "anger", "irritation", "hate", and "disgust" make people less considerate of others and of objects, so that objects are handled violently; the resulting sounds are expected to show strong and frequent peak components. Therefore, an acoustic feature quantity suited to detecting peak components, called the crest factor, is used.
  • FIG. 20 shows a method of analyzing the acquired action sound: an analysis period of predetermined length is set, and the crest factor of each section is calculated while shifting the period by half its length.
  • This is a standard analysis method in the field of speech analysis: each analysis period is called a "short-time frame", and the shift between adjacent sections is called the "frame cycle". In frequency-domain analysis it is common to apply a window function to the short-time frame to avoid artifacts caused by the discontinuity at both ends of the frame.
  • Since the crest factor is a time-domain feature with no relation to the discontinuity at the frame ends, no window function is needed here.
  • The definition of the crest factor is shown in the lower part of FIG. 20. For each analysis period, that is, for each short-time frame, the maximum absolute value of the signal is obtained and divided by the effective value of the same frame; this quotient is the crest factor. The effective value is the root mean square of the frame, i.e. the square root of the mean of the squared samples. Equation (1) is therefore: crest factor = max|x(n)| / sqrt((1/N) Σ x(n)²), where x(n) (n = 1, ..., N) are the samples of the short-time frame. A sketch of this computation follows.
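  • A minimal Python sketch of this analysis (the frame length and threshold values are assumptions): it computes the crest factor of equation (1) per short-time frame with a frame cycle of half a frame, then counts the frames exceeding the crest factor and peak thresholds, i.e. the two feature amounts plotted in FIG. 21.

```python
import numpy as np

def crest_factors(x: np.ndarray, frame: int):
    """Crest factor = max|x| / RMS per short-time frame; frame cycle = frame/2.
    No window function is needed, since the crest factor is a time-domain
    feature unaffected by discontinuity at the frame ends."""
    peaks, cfs = [], []
    for start in range(0, len(x) - frame + 1, frame // 2):
        seg = x[start:start + frame]
        peak = np.abs(seg).max()
        rms = np.sqrt(np.mean(seg ** 2))
        peaks.append(peak)
        cfs.append(peak / rms if rms > 0 else 0.0)
    return np.array(cfs), np.array(peaks)

def action_sound_features(x, frame=1024, cf_th=4.0, peak_th=0.5):
    """Return (frames over the crest factor threshold: first feature amount,
    frames over the peak threshold: second feature amount)."""
    cfs, peaks = crest_factors(x, frame)
    return int((cfs > cf_th).sum()), int((peaks > peak_th).sum())

x = np.random.randn(16_000) * 0.1
x[5000] = 1.0                    # a strong percussive peak, e.g. a key strike
print(action_sound_features(x))
```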
  • FIG. 21 compares, as an example of action sound analysis, the action sounds of personal computer keyboard operation at normal times and when irritated.
  • The vertical axis represents the number of frames in which the crest factor exceeds a predetermined value, and the horizontal axis the number of frames in which the peak exceeds a predetermined value.
  • FIG. 21 shows the result of sampling the action sound during normal times and the action sound during irritation a plurality of times.
  • the black circles indicate normal times
  • The X marks indicate irritation. From each sampled action sound, the number of analysis periods in which the crest factor exceeds a predetermined value (first feature amount) and the number of analysis periods in which the peak component exceeds a predetermined value (second feature amount) are calculated. Calculating these for a plurality of action sounds during irritation yields the X marks, and likewise the black circles for normal times.
  • Since the characteristics of action sounds are captured by the crest factor and the frequency with which the peak value exceeds a predetermined value, emotion identification from action sounds becomes more practical; action sounds, which otherwise tend to interfere with emotion identification from uttered voices, are exploited effectively, and the robustness of atmosphere detection is improved.
  • FIG. 22 is a functional block diagram when the atmosphere cleaning device is applied to an elevator.
  • the present embodiment shows an embodiment in which the atmosphere cleaning device is applied to an elevator.
  • the third embodiment will mainly describe matters different from the first embodiment, and the explanation of matters common to the first embodiment will be omitted.
  • The atmosphere cleaning device 200 is coupled to the elevator car 300: the sound input unit 201 acquires sounds generated in the elevator car 300, and the stimulus output unit 202 outputs stimuli into the car. Although the interior of the atmosphere cleaning device 200 is drawn in simplified form, showing only the atmosphere estimation unit 203 and the stimulus generation unit 204, its actual configuration is the same as the block diagram of FIG. 1. A party X and a party Y are present, with party X unilaterally agitated at party Y; in the elevator as well, the atmosphere cleaning device 200 detects the atmosphere from the series of sounds described above and cleans it by outputting the various stimuli.
  • FIG. 23 is a diagram showing an example of a serious situation when the atmosphere cleaning device is applied to an elevator.
  • FIG. 23 shows an example in which the situation of FIG. 22 has progressed to something more serious.
  • Party X has drawn a weapon and is bearing down on party Y; or, as a less serious situation, party X has grabbed party Y by the chest and is about to strike.
  • In such situations party X generally takes a posture looming over the cowering party Y; at the least, it is difficult to imagine the aggressor, party X, positioned below party Y.
  • In that case, the outputs of the various stimuli described in the first embodiment may be unable to clear the situation before a serious crime occurs.
  • Stimuli that merely enhance the parasympathetic nervous system or produce a relaxing effect are not sufficient; what is needed is an olfactory stimulus that is transmitted quickly to the limbic system and provides a sedative effect strong enough to calm the nervous system and slow the workings of mind and body.
  • an olfactory stimulus as shown below is output.
  • The configuration for outputting the olfactory stimulus, apart from the content of the stimulus, is the same as in FIG. 15.
  • a malodorous substance diluted to such an extent that it is not directly harmful to human health is used as an olfactory stimulus.
  • Rafflesia — malodorous components: indole, amines, etc.
  • Titan arum (Amorphophallus titanum) — likewise: dimethyl trisulfide
  • Durian — likewise: propanethiol, etc.
  • American skunk cabbage, dead horse arum, and similar malodorous plants — likewise
  • Alternatively, odorous components of the kind emitted by stink bugs, camels, skunks, vultures, millipedes, and zorillas are used.
  • Organic sulfur compounds of the kind used as odorants for city gas and propane gas may also be used.
  • By outputting such an olfactory stimulus into the elevator car 300, the party X can be sedated or the crime can be forestalled. Sedating the party X or forestalling the crime can be said to clean the harsh atmosphere, at least temporarily.
  • Control may also be performed so that the olfactory stimulus is output mainly toward the party X, the aggressor.
  • In that case, the output direction of the olfactory stimulus is controlled using the nozzle 11c1 and the fan 11c3 shown in FIG. 15.
  • An image from a camera installed in the elevator car 300 may be used to locate the party X and to determine the output direction of the olfactory stimulus.
  • As the output position of the stimulus, a height of 25 cm or more from the floor surface is used.
  • FIG. 24 is a functional block diagram when an atmosphere cleaning device is applied to an elevator, a malodorous component is used as an olfactory stimulus, and ventilation processing is performed.
  • When a malodorous component is used as the olfactory stimulus, a malodor that remains in the elevator car 300 for a long time hinders the operation of the elevator. Therefore, as shown in FIG. 24, it is effective to interlock the stimulus output with a mechanism that forms an airflow for ventilation from a position close to the floor surface, for example less than 25 cm above the floor, toward the ceiling.
  • In FIG. 24, as in FIG. 22 and the other figures, only part of the configuration of FIG. 1 is shown, and the description of the overlapping parts is omitted.
  • A ventilation fan 407 is installed in the ceiling of the elevator car 300, a ventilation port 408 is installed near the floor surface of the elevator car 300, and the atmosphere cleaning device 400 is equipped with a ventilation fan control unit 406.
  • The ventilation fan control unit 406 receives a control signal from the stimulation control unit 404.
  • The ventilation fan control unit 406 controls the ventilation fan 407 according to the control signal.
  • The ventilation fan control unit 406 operates the ventilation fan 407 installed in the ceiling of the elevator car 300 so as to create an airflow for ventilating the car from the ventilation port 408 near the floor toward the ventilation fan 407; the rising air carries the odor out of the car. Further, the atmosphere cleaning device 400 may hold a deodorant component in addition to the malodorous component used as the olfactory stimulus, and the stimulation control unit 404 may output the malodorous component into the elevator car 300 and then, after a predetermined time, output the deodorant component.
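A minimal control sketch of this sequence follows: release the malodorous component, then the deodorant component after a predetermined time, while the fan forms the floor-to-ceiling airflow. The stimulus_unit and fan objects, their methods, and the timings are hypothetical stand-ins, not interfaces defined in this document.

```python
import time

def purge_and_ventilate(stimulus_unit, fan, malodor_s=10.0, vent_s=120.0):
    """Sketch of the malodor -> deodorant -> ventilation sequence (assumed API)."""
    stimulus_unit.release("malodor")    # olfactory interruption into the car
    time.sleep(malodor_s)               # predetermined exposure time
    stimulus_unit.release("deodorant")  # neutralize the remaining odor
    fan.on()                            # floor-to-ceiling airflow vents the car
    time.sleep(vent_s)
    fan.off()
```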
  • As described above, the elevator to which the atmosphere cleaning device according to the third embodiment is applied is configured to use, as a stimulus that applies an olfactory interruption to the consciousness of the aggressor, a diluted malodorous component emitted by a plant or an animal, or an organic sulfur compound. It is therefore possible to temporarily suspend the actions of the perpetrator and to improve the crime prevention and safety of a closed space. When applied to an elevator, this makes it possible to secure the time required for an emergency response, such as an emergency stop at the nearest floor or dispatching security guards, and the effect of improving the crime prevention and safety of the elevator is obtained.
  • FIG. 25 is a diagram showing a hardware configuration of the atmosphere cleaning device.
  • The atmosphere cleaning device 100 includes a processor 101, a volatile storage device 102, and a non-volatile storage device 103.
  • The processor 101 controls the entire atmosphere cleaning device 100.
  • The processor 101 is, for example, a CPU (Central Processing Unit) or an FPGA (Field Programmable Gate Array).
  • The processor 101 may be a multiprocessor.
  • The atmosphere cleaning device 100 may be realized by a processing circuit, or by software, firmware, or a combination thereof.
  • The processing circuit may be a single circuit or a composite circuit.
  • The volatile storage device 102 is the main storage device of the atmosphere cleaning device 100.
  • The volatile storage device 102 is, for example, a RAM (Random Access Memory).
  • The non-volatile storage device 103 is the auxiliary storage device of the atmosphere cleaning device 100.
  • The non-volatile storage device 103 is, for example, an SSD (Solid State Drive).
  • The utterance voice analysis unit 2, the action sound analysis unit 3, the first identification unit 6, the second identification unit 7, the first recording unit 4, the second recording unit 5, the atmosphere estimation unit 8, the stimulus control unit 9, and the stimulus generation unit 10 can be realized by an information processing device. Further, the first recording unit 4 and the second recording unit 5 may be realized as storage areas secured in the volatile storage device 102 or the non-volatile storage device 103.
  • Some or all of the utterance voice analysis unit 2, the action sound analysis unit 3, the first identification unit 6, the second identification unit 7, the atmosphere estimation unit 8, the stimulation control unit 9, the stimulation generation unit 10, and the stimulation output unit 11 may be realized by the processor 101.
  • Part or all of the uttered voice analysis unit 2, the action sound analysis unit 3, the first identification unit 6, and the second identification unit 7 may likewise be realized by the processor 101.
  • Some or all of the sound input unit 1, the utterance voice analysis unit 2, the action sound analysis unit 3, the first identification unit 6, the second identification unit 7, the atmosphere estimation unit 8, the stimulation control unit 9, the stimulation generation unit 10, and the stimulation output unit 11 may be realized as modules of a program executed by the processor 101.
  • Likewise, part or all of the uttered voice analysis unit 2, the action sound analysis unit 3, the first identification unit 6, and the second identification unit 7 may be realized as modules of a program executed by the processor 101.
  • The program executed by the processor 101 is also called an atmosphere cleaning program.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Anesthesiology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Hematology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Psychology (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Pain & Pain Management (AREA)
  • Indicating And Signalling Devices For Elevators (AREA)
  • Cage And Drive Apparatuses For Elevators (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)

Abstract

An atmosphere purifying device (100) has an input unit (1), a feeling discrimination unit, an atmosphere estimation unit (8), and a stimulus output unit (11). The input unit (1) acquires a sound. The feeling discrimination unit discriminates the feeling of a person on the basis of a feature quantity derived from the feeling of the person included in a sound and boundary information for discriminating the feeling of the person from the feature quantity. The atmosphere estimation unit (8) estimates the degree of atmosphere in the periphery of a sound generation source on the basis of the result of discrimination of the person's feeling. The stimulus output unit (11) outputs a stimulus for purifying the atmosphere in accordance with the degree of atmosphere to the periphery of the sound generation source.

Description

Atmosphere cleaning device, atmosphere cleaning method, and elevator
The present invention relates to an atmosphere cleaning device, an atmosphere cleaning method, and an elevator that detect the harshness of the atmosphere of a space where people are present and clean the harsh atmosphere.
A "bad atmosphere" or an "unpleasant atmosphere" appears, to varying degrees, in many scenes of human life and social activity, and in most cases people get through it, consciously or unconsciously, and carry on. The repetition of such situations is stressful for the parties concerned, may impair physical and mental health, and in extreme cases may lead to crime. Even where no crime occurs, the social loss caused by, for example, the onset of psychosomatic disorders that can lie on the extension of a "bad atmosphere" is large, and it can be called a social problem that should be addressed. When a "bad atmosphere" becomes serious, it becomes a "harsh atmosphere". A "harsh atmosphere" applies when one of the parties bears ill will toward the other: for example, a situation in which extortion, assault, or molestation is carried out, or, as its preceding stage, a situation in which a threat of harm is announced and some exchange takes place.
There are crisis monitoring devices that detect the stage leading from such a "harsh atmosphere" to the crime itself, that is, a critical state, and take actions such as making an emergency call to security guards. For example, such a device acquires the audio signal inside the monitored area, extracts emotion feature amounts, compares them with emotion feature amounts of uttered voice in critical situations prepared in advance, and, when the determined emotion corresponds to a critical state, performs emergency processing such as notifying security guards according to a predetermined response method (see, for example, Patent Document 1).
JP 2005-346254 A
The crisis monitoring device of Patent Document 1 estimates emotion by analyzing the uttered voice of people inside the monitored area. In a critical state it performs emergency processing such as notifying security guards outside the monitored area, but it has no function of alleviating the critical state on the spot; the critical state therefore continues, or even worsens, from the notification until the guards arrive.
The present invention has been made to solve the problems described above, and its object is to obtain an atmosphere cleaning device capable of cleaning a harsh atmosphere.
An atmosphere cleaning device according to one aspect of the present invention is provided. The atmosphere cleaning device includes: an input unit that acquires sound; an emotion identification unit that identifies a person's emotion based on a feature amount derived from the person's emotion contained in the sound and boundary information for identifying the emotion from the feature amount; an atmosphere estimation unit that estimates the degree of the atmosphere around the sound source based on the identification result of the emotion identification unit; and a stimulus output unit that outputs, to the vicinity of the sound source, a stimulus that cleans the atmosphere according to the degree of the atmosphere.
Since the present invention outputs a stimulus that cleans the atmosphere according to the degree of the atmosphere, it can clean a harsh atmosphere.
FIG. 1 is a block diagram showing the configuration of the atmosphere cleaning device according to Embodiment 1 of the present invention.
FIG. 2 is a diagram showing "Plutchik's wheel of emotions".
FIG. 3 is a diagram for explaining the uttered-voice (action-sound) analysis processing according to Embodiment 1 of the present invention.
FIG. 4 is a diagram explaining the identification boundary information recorded in the first recording unit for atmosphere detection according to Embodiment 1 of the present invention.
FIGS. 5(a) and 5(b) are diagrams explaining the identification boundary information for identifying emotions from the per-emotion feature vectors according to Embodiment 1 of the present invention.
FIG. 6 is a diagram explaining the identification boundary information recorded in the second recording unit for atmosphere detection according to Embodiment 1 of the present invention.
FIG. 7 maps the flow of the uttered-voice analysis processing according to Embodiment 1 of the present invention onto the block configuration of the atmosphere cleaning device in FIG. 1.
FIG. 8 is a diagram showing an example of emotion estimation by the first identification unit according to Embodiment 1 of the present invention.
FIG. 9 maps the flow of the action-sound analysis processing according to Embodiment 1 of the present invention onto the block configuration of the atmosphere cleaning device in FIG. 1.
FIGS. 10(a) and 10(b) are diagrams explaining the method of separating uttered voice and action sound from the input sound according to Embodiment 1 of the present invention.
FIG. 11 is a diagram showing the operation of estimating an emotion for each predetermined estimation period in the atmosphere estimation unit according to Embodiment 1 of the present invention.
FIGS. 12(a) to 12(c) are diagrams showing the operation of obtaining the transition of the atmosphere level from the appearance frequencies of the estimated emotions in the atmosphere estimation unit according to Embodiment 1 of the present invention.
FIG. 13 is a diagram showing an example of the specific configuration of the stimulus control unit, the stimulus generation unit, and the stimulus output unit according to Embodiment 1 of the present invention.
FIG. 14 is a diagram showing an example of a configuration for output processing of the auditory stimulus according to Embodiment 1 of the present invention.
FIG. 15 is a diagram showing an example of a configuration for output processing of the olfactory stimulus according to Embodiment 1 of the present invention.
FIG. 16 is a diagram showing an example of a configuration for output processing of the visual stimulus according to Embodiment 1 of the present invention.
FIG. 17 is a diagram showing an example of a configuration for output processing of the electromagnetic wave stimulus according to Embodiment 1 of the present invention.
FIG. 18 is a diagram showing an example of a configuration for output processing of the charged particle stimulus according to Embodiment 1 of the present invention.
FIG. 19 is a matrix diagram showing an example of stimulus control according to Embodiment 1 of the present invention.
FIG. 20 is a diagram showing an example of an action-sound analysis method according to Embodiment 2 of the present invention.
FIG. 21 is a diagram showing an example of action-sound analysis results according to Embodiment 2 of the present invention.
FIG. 22 is a functional block diagram when the atmosphere cleaning device according to Embodiment 3 of the present invention is applied to an elevator.
FIG. 23 is a diagram showing an example of a serious situation when the atmosphere cleaning device according to Embodiment 3 of the present invention is applied to an elevator.
FIG. 24 is a functional block diagram when the atmosphere cleaning device according to Embodiment 3 of the present invention is applied to an elevator, a malodorous component is used as the olfactory stimulus, and ventilation processing is performed.
FIG. 25 is a diagram showing the hardware configuration of the atmosphere cleaning device.
Embodiments will be described below with reference to the drawings. The following embodiments are merely examples, and various modifications can be made within the scope of the present invention.
Embodiment 1.
FIG. 1 is a block diagram showing the configuration of the atmosphere cleaning device according to Embodiment 1 for carrying out the present invention. The atmosphere cleaning device 100 is a device that executes the atmosphere cleaning method. The atmosphere cleaning device 100 can acquire sound. The sound is, for example, a person's uttered voice (hereinafter "uttered voice"), a person's action sound (hereinafter "action sound"), or a combination of uttered voice and action sound. Based on the sound, the atmosphere cleaning device 100 detects a harsh atmosphere and outputs, to the vicinity of the sound source, a stimulus that cleans the harsh atmosphere.
In the atmosphere cleaning device 100 according to this embodiment, the sound input side is constituted by the sound input unit 1, an input unit that acquires the uttered voices and action sounds of the target people, together with the uttered voice analysis unit 2 and the action sound analysis unit 3. The device also includes a first recording unit 4, in which identification boundary information on the emotion feature amounts of the emotional uttered voices to be detected is recorded in advance, and a second recording unit 5, in which identification boundary information on the emotion feature amounts of emotional action sounds is likewise recorded in advance; the first recording unit 4 and the second recording unit 5 constitute the boundary-information reference side used for identification. Further, the device includes a first identification unit 6 that identifies the emotion of uttered voice based on the identification boundary information recorded in the first recording unit 4, a second identification unit 7 that identifies the emotion of action sounds based on the identification boundary information recorded in the second recording unit 5, and an atmosphere estimation unit 8; the first identification unit 6, the second identification unit 7, and the atmosphere estimation unit 8 constitute the atmosphere detection side. Based on the atmosphere detection result, the stimulus control unit 9 and the stimulus generation unit 10 generate a stimulus for cleaning the atmosphere, and the stimulus output unit 11 outputs the stimulus toward the people.
The sound input unit 1 is an input unit that acquires sound. The sound acquired by the sound input unit 1 is at least one of uttered voice and action sound. An action sound is a sound that accompanies a person's action. For example, an action sound is a sound produced as a result of handling an object without regard for the object itself or for others nearby, and includes sounds produced when a person bangs on a desk, a wall, or a floor.
The emotion identification unit estimates a person's emotion (hereinafter "emotion") based on a feature amount derived from the emotion contained in the sound acquired by the sound input unit 1 and on boundary information for identifying the emotion from the feature amount. The emotion identification unit also generates information indicating the type of the estimated emotion (for example, "anger" or "irritation"). The processing executed by the emotion identification unit can be realized by the uttered voice analysis unit 2, the action sound analysis unit 3, the first identification unit 6, and the second identification unit 7.
Next, the operation of the system will be described with a concrete situation. Suppose that the people being monitored, for example people in a conference room, have fallen into a "harsh atmosphere" for some reason. More concretely, suppose parties X, Y, and Z make up the group, and that the party X, in a rage, behaves inappropriately toward the party Y, denying Y's character in an extremely intimidating manner. Needless to say, the situation is almost unbearable for the party Y, but the party Z, who happens to be present, also senses the "harsh atmosphere" and feels extremely uncomfortable.
FIG. 2 is a diagram showing "Plutchik's wheel of emotions". In such a situation, the uttered voice and action sounds of the party X are considered to contain the features of emotions such as "rage", "anger", "annoyance", "loathing", and "disgust" in terms of Plutchik's wheel. On the other hand, as the saying "anger is fear turned inside out" suggests, emotions such as "fear", "worry", and "anxiety" toward the party Y or a third party may be a remote cause of the party X's rage. For example, the party X may have felt that his ideas or values were denied, or that things were not going his way, feared that his social standing would fall, and grown increasingly frustrated. The uttered voice and action sounds of the party X may therefore also contain the features of emotions such as "fear", "worry", and "anxiety".
In such a situation, if, for example, the party X has self-control and can suppress his own rage, the "harsh atmosphere" does not develop into a critical situation in the first place. Likewise, if the party Z can soothe the party X, calm him down, and lead him to composure, a critical situation can be avoided. In practice, however, Z often avoids getting involved for fear that X's rage will be turned on Z himself, which is hardly surprising.
Therefore, the atmosphere cleaning device 100 according to the present invention analyzes the emotion feature amounts of the uttered voices and action sounds acquired from the people, including the enraged party X, and detects a "harsh atmosphere" caused by emotions such as "rage", "anger", "annoyance", "loathing", "disgust", "fear", "worry", and "anxiety". Based on the detection result, the atmosphere cleaning device 100 outputs to the people a stimulus for cleaning the atmosphere, and operates so as to clean (soften) the intense emotions within the people so that the "harsh atmosphere" does not reach a critical state. As an earlier stage, the atmosphere cleaning device 100 may also operate so that a slight "unpleasant atmosphere" does not develop into a serious "harsh atmosphere".
Another concrete situation will now be described. Suppose the monitored space is, for example, the inside of an elevator car, with parties X, Y, and Z in it, and consider the case where the party X conceives a criminal intent for some reason. For example, because "the shoulder or baggage of the party Y bumped into the party X" or "the party Y glared at the party X", the party X expected an apology from the party Y and, receiving none, convinces himself that he was "ignored", "laughed at", or "made a fool of"; some threat of harm is then announced, leading to a critical state. Threats of harm include words and actions sufficient to frighten ordinary people, such as "I'll beat you", "I'll smack you", "I'll kill you", "I'll bury you", or "I'll sink you in the sea", and banging on walls or floors also qualifies if it exceeds the level sufficient to frighten ordinary people. The uttered voice and action sounds accompanying such threats are considered to contain the features of emotions such as "rage", "anger", "annoyance", "loathing", and "disgust", and, by the same "anger is fear turned inside out" interpretation as above, may also contain the features of "fear", "worry", and "anxiety".
Even in such a situation, as in the conference room example above, it is not necessarily easy to expect self-restraint from the party X or soothing by the party Z. Therefore, having the atmosphere cleaning device 100 according to the present invention act so that a "harsh atmosphere" does not reach a critical state, or so that a slight "unpleasant atmosphere" does not develop into a serious "harsh atmosphere", broadly answers the needs of society.
As situation settings, the cases in which a "harsh atmosphere" arises in a conference room and in an elevator have been described above. A harsh atmosphere can also arise in other closed spaces: for example, various vehicles such as trains, automobiles, and ships, or spaces from which one cannot easily escape, such as a space station. The atmosphere cleaning device 100 can clean a harsh atmosphere even when it arises in a closed space other than a conference room or an elevator.
Next, as an individual operation performed by the atmosphere cleaning device 100, the operation from extracting emotion feature amounts from people's uttered voices to estimating emotions will be described. FIG. 3 is a diagram for explaining the uttered-voice analysis processing and shows the flow of speech emotion recognition. When uttered voice is input to the sound input unit 1, feature amounts such as the pitch (fundamental frequency) and loudness (power) of the uttered voice are extracted. These are generally called LLDs (Low-Level Descriptors); as in speech recognition, the signal is divided into frames using a window function, and the LLDs are computed for each frame.
Since the LLDs obtained in this way are time-series data, their length differs depending on the utterance. Moreover, the emotion contained in speech is presumed to be expressed not by momentary values but as a tendency over the whole utterance. Therefore, statistics such as the mean, variance, slope, maximum, and minimum are computed from the LLD time series, which are the speech feature amounts. If m statistics are computed for each of n LLD time series, the input utterance is converted into an n×m-dimensional feature vector. Since the feature vector bundles a plurality of feature amounts into one vector, it may also be referred to simply as a feature amount.
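A minimal sketch of this pipeline follows, using only two illustrative LLDs (frame power and a crude zero-crossing pitch proxy) and four statistics each, yielding a 2×4-dimensional feature vector. The frame and hop lengths and the choice of LLDs and statistics are assumptions for illustration, not values from this document.

```python
import numpy as np

def extract_feature_vector(x, fs, frame_s=0.025, hop_s=0.010):
    """LLDs per windowed frame -> statistics over the utterance ->
    n x m dimensional feature vector (here n=2 LLDs, m=4 statistics)."""
    n, h = int(frame_s * fs), int(hop_s * fs)
    win = np.hanning(n)
    powers, pitches = [], []
    for start in range(0, len(x) - n + 1, h):
        frame = x[start:start + n] * win                 # window function
        powers.append(np.mean(frame ** 2))               # power LLD
        z = np.sum(np.abs(np.diff(np.sign(frame)))) / 2  # zero crossings
        pitches.append(z * fs / (2 * n))                 # crude pitch proxy
    stats = lambda s: [np.mean(s), np.var(s), np.max(s), np.min(s)]
    return np.array(stats(powers) + stats(pitches))
```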
The uttered voice analysis unit 2 recognizes the emotion contained in uttered voice using the feature vector thus obtained. Two styles of emotion representation are commonly used: categorical representation, such as "anger" and "joy", and representation as coordinates on emotion-space axes, such as "pleasant-unpleasant" and "arousal-sleep". In the former case, the problem becomes pattern recognition that estimates the category to which the feature vector belongs, and various statistical classifiers such as linear discriminant analysis and support vector machines (SVM) are used. In the latter case, coordinate values are estimated from the feature vector, using models such as linear regression, support vector regression (SVR), and neural networks.
As described above, the latter stage of the uttered voice analysis unit 2 in FIG. 3 reduces to the problem of estimating an emotion from a single feature vector, so long-studied pattern recognition techniques can be applied directly. There is therefore little room for ingenuity there; rather, the choice of feature amounts largely determines performance. As noted above, in addition to long-used feature amounts such as fundamental frequency and power, spectral feature amounts such as mel-frequency cepstral coefficients (MFCC) have been added. Furthermore, to obtain sufficient performance, the mainstream approach has become to use all feature amounts that seem relevant, rather than carefully selecting a necessary and sufficient set. The SVM, one of the most commonly used classifiers, is known to resist problems such as overfitting even with high-dimensional feature vectors and to maintain high performance.
The processing executed by the uttered voice analysis unit 2 shown in FIG. 3 applies equally to the processing executed by the action sound analysis unit 3, the second recording unit 5, and the second identification unit 7. However, since uttered voice and action sounds differ greatly in acoustic characteristics, the feature amounts naturally differ as well.
In the case of uttered voice, differences arise from the delicate deformation of the vocal tract that accompanies emotion. Although there are individual differences, it is a common experience that subtle differences between emotions such as "rage", "anger", "annoyance", "loathing", "disgust", "fear", "worry", and "anxiety" can largely be distinguished by ear, and the fact that people can distinguish them aurally means that some acoustic feature exists for doing so.
In the case of action sounds, on the other hand, emotions such as "rage", "anger", "annoyance", "loathing", and "disgust" make a person vent feelings on objects: consideration for others and for the objects themselves fades, objects are handled roughly, and characteristic sounds result. However, action sounds are unlikely to express emotion as finely as uttered voice, and it is reasonable to limit the distinction to at most about three levels, such as "rage", "irritation", and "calm", based simply on the strength and frequency of the impact sounds.
FIG. 4 is a diagram explaining the identification boundary information for atmosphere detection recorded in the first recording unit 4, illustrating the case of identifying four emotions, emotion A to emotion D. Although not shown, the uttered voice analysis unit 2 calculates the feature vectors of emotions A to D from uttered voices of those emotions. As sound input, labeled utterances for emotions A to D are prepared; for each, feature amounts such as fundamental frequency, power, and spectrum are extracted as described above, statistics are computed, and a feature vector is derived per emotion.
FIG. 5 is a diagram explaining the identification boundary information for identifying emotions from the per-emotion feature vectors. FIG. 5(a) shows the distribution of the emotions and FIG. 5(b) shows their classification. In FIGS. 5(a) and 5(b), the vertical axis is feature amount M and the horizontal axis is feature amount L; for simplicity the figure uses the two-dimensional space of features L and M. If the feature vectors of emotions A to D are distributed as in FIG. 5(a), multi-class classification by the SVM one-vs-rest method yields, through learning, the identification boundaries shown in FIG. 5(b).
That is, a boundary AB separating emotions A and B, a boundary BC separating emotions B and C, a boundary CD separating emotions C and D, and a boundary DA separating emotions D and A are obtained. As a result, emotion A is identified by boundaries AB and DA, emotion B by boundaries BC and AB, emotion C by boundaries CD and BC, and emotion D by boundaries DA and CD. The boundaries AB, BC, CD, and DA thus obtained are recorded in the first recording unit 4 of FIG. 1 as identification boundary information and used for emotion identification. Emotions can be classified by the feature space partitioned by the dotted lines based on the identification boundary information.
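A sketch of this learning step, under stated assumptions: scikit-learn's OneVsRestClassifier wraps a linear SVM, and the fitted model plays the role of the identification boundary information held in the first recording unit 4. The training data here is random placeholder data, not material from this document.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Placeholder labeled data: one feature vector per utterance, labels A-D.
rng = np.random.default_rng(0)
X_train = rng.random((200, 8))
y_train = rng.choice(list("ABCD"), 200)

# One-vs-rest multi-class SVM: the learned decision boundaries correspond
# to the identification boundary information in the first recording unit.
clf = OneVsRestClassifier(SVC(kernel="linear")).fit(X_train, y_train)

# A new feature vector is identified as the emotion whose region it falls in.
print(clf.predict(rng.random((1, 8))))  # e.g. ['B']
```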
FIG. 6, on the other hand, explains the identification boundary information for atmosphere detection recorded in the second recording unit 5, likewise illustrating the identification of four emotions A to D. Except that the input is labeled action sounds, the explanation is the same as for FIGS. 4 and 5, so the details are omitted. The boundaries AB, BC, CD, and DA obtained in the same way are recorded in the second recording unit 5 as identification boundary information and used for identification.
FIG. 7 maps the operation of FIG. 3 onto the block configuration of FIG. 1. The uttered voice analysis unit 2 extracts the uttered voice from the sound input signal and then computes its feature vector. The first identification unit 6, a statistical classifier loaded with the identification boundary information on uttered-voice emotions recorded in advance in the first recording unit 4, estimates the emotion contained in that sound input (uttered voice) and outputs the identification result to the atmosphere estimation unit 8.
FIG. 8 is a diagram showing an example of emotion estimation by the first identification unit 6. In FIG. 8, the feature vector computed by the uttered voice analysis unit 2 for a new sound input is indicated by a star. In the feature space partitioned by the dotted lines based on the identification boundary information from the first recording unit 4, the star lies within the region bounded by the boundaries AB and BC, so the input is identified as emotion B. The identification result of the first identification unit 6 is output to the atmosphere estimation unit 8.
FIG. 9 likewise maps the operation of FIG. 3 onto the block configuration of FIG. 1. The action sound analysis unit 3 extracts the action sound from the sound input signal and then computes its feature vector. The second identification unit 7, a statistical classifier loaded with the identification boundary information on action-sound emotions recorded in advance in the second recording unit 5, estimates the emotion contained in that sound input (action sound) and outputs the identification result to the atmosphere estimation unit 8.
As a method for extracting the uttered voice described with FIG. 7 and the action sound described with FIG. 9, a configuration that simply separates the signal into frequency bands with filters is convenient. Here, how the uttered voice analysis unit 2 extracts uttered voice and how the action sound analysis unit 3 extracts action sounds will be described.
FIG. 10 is an explanatory diagram of the method of separating uttered voice and action sound from the input sound, showing an example of an emotional utterance and an emotional action sound. FIG. 10(a) shows the amplitude waveform and spectrogram of an "angry" emotional utterance. FIG. 10(b) shows the amplitude waveform and spectrogram of an action sound produced by operating office equipment while harboring "anger" (often a strongly percussive sound). In FIGS. 10(a) and 10(b), the horizontal axis is time, the upper vertical axis is amplitude, and the lower vertical axis is frequency.
In the spectrograms in the lower rows of FIG. 10, darker black indicates stronger power: the action-sound side has strong signal components spread over a wide band, whereas the uttered-voice side has strong power from several hundred Hz to around 5 kHz and is relatively weak elsewhere. Separation of uttered voice and action sound by a low-pass filter and a high-pass filter with a boundary around 5 kHz is therefore conceivable. For example, the uttered voice analysis unit 2 treats signal components below 5 kHz as uttered voice and extracts them, while the action sound analysis unit 3 treats components of 5 kHz and above as action sound and extracts them. Alternatively, with the band below about 5 kHz treated as uttered voice and the band of 5 kHz and above as action sound, the atmosphere estimation unit 8 may, during periods in which the action sound has power above a predetermined level, lower the weight of emotion recognition on the uttered-voice side and emphasize the action-sound side, thereby mitigating the loss of recognition accuracy caused by overlapping sounds.
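A simple realization of this filter-based separation is sketched below, assuming a sampling rate above 10 kHz; the Butterworth order and the exact cutoff are illustrative assumptions.

```python
from scipy.signal import butter, sosfilt

def split_speech_and_action(x, fs, cutoff_hz=5000.0, order=6):
    """Split the input into an uttered-voice band (below ~5 kHz) and an
    action-sound band (~5 kHz and above) with complementary filters."""
    low = butter(order, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    high = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(low, x), sosfilt(high, x)
```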
Next, the atmosphere estimation unit 8 will be described. The atmosphere estimation unit 8 estimates the degree of the atmosphere around the sound source based on the emotion identification results from the emotion identification unit. FIG. 11 shows the operation of estimating an emotion for each predetermined estimation period in the atmosphere estimation unit 8. In FIG. 11, the top diagram is the amplitude waveform of the uttered voice, with amplitude on the vertical axis and time on the horizontal axis. The first identification unit 6 estimates an emotion for each estimation period. The estimation periods could be cut without overlap, but by providing overlap as in FIG. 11, a proper emotion estimate is obtained even when an emotion feature appears prominently at a period boundary, because the period is not split in two.
In the lower part of FIG. 11, the estimated emotions are shown as circles at the tips of the arrows drawn from the amplitude waveform of the uttered voice. Here, among emotions A to D, the estimation of A and B is indicated by labeled black circles, while white circles indicate other results (including the case where no emotion is estimated). The parts of the figure explained with the uttered-voice amplitude waveform apply equally if it is replaced by the amplitude waveform of an action sound, so that explanation is omitted.
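The overlapped segmentation can be sketched as below; the period length and the 50% overlap are illustrative assumptions.

```python
def estimation_periods(num_samples, fs, period_s=2.0, overlap=0.5):
    """Yield (start, end) sample indices of estimation periods that overlap,
    so a feature straddling a period boundary is not cut in two."""
    size = int(period_s * fs)
    step = max(1, int(size * (1.0 - overlap)))
    for start in range(0, num_samples - size + 1, step):
        yield start, start + size
```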
Next, the atmosphere determination process will be described. FIG. 12 represents the estimated emotions of FIG. 11 over a wider span and shows the operation of obtaining the transition of the atmosphere level from the appearance frequencies of the estimated emotions in the atmosphere estimation unit 8. For each appearance-frequency judgment period, which groups several emotion estimation periods, a histogram of the appearance frequencies of the estimated emotions is formed. As with the estimation periods, the appearance-frequency judgment periods may also be advanced with overlap, for the same reason.
As an example, the atmosphere estimation unit 8 counts the appearance frequencies of emotions A to D and of other results; when quantifying the atmosphere from these emotions, it must be taken into account that the degree of contribution differs by emotion. Considering emotions such as "rage", "anger", "annoyance", "loathing", and "disgust", the contribution to a "harsh atmosphere" is highest for "rage", followed by "anger" and "loathing", with "annoyance" and "disgust" relatively low. Still, if "annoyance" and "disgust" occur extremely frequently, their contribution to the "harsh atmosphere" becomes high. The appearance frequencies of emotions A to D and of the other results must therefore be weighted according to their contributions.
For example, weighting turns the histogram of emotion appearance frequencies in FIG. 12(a) into the weighted histogram of FIG. 12(b). In FIG. 12(b), the per-emotion weights satisfy emotion A > emotion B > emotion C > emotion D, with the others weighted zero.
FIG. 12(c) shows thresholds together with the transition of the atmosphere level obtained, for example, by taking the sum over the weighted histogram of emotion appearance frequencies. The summed value is the atmosphere value; it may also be described as a numerical value based on the information indicating the types of emotion. The atmosphere estimation unit 8 thus computes an atmosphere value for each appearance-frequency judgment period. One judgment period corresponds to one circle as an atmosphere value (atmosphere level), and shifting the period step by step expresses the transition of the atmosphere value. By setting thresholds on this atmosphere value, the harshness of the atmosphere can be defined in stages.
In FIG. 12(c), thresholds Th1 to Th3 are assigned in order of increasing deterioration of the atmosphere: for example, when Th3 is exceeded, the "harsh atmosphere" is judged to have reached its extreme, and the degree of "harshness" weakens as the level falls to Th2 and Th1.
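The weighting-and-thresholding step can be sketched as follows; the weight values and threshold values are illustrative assumptions (emotion A > B > C > D, "other" weighted zero), not figures from this document.

```python
WEIGHTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "other": 0.0}
THRESHOLDS = (5.0, 10.0, 15.0)  # Th1 < Th2 < Th3, assumed values

def atmosphere_value(estimated_emotions):
    """Weighted sum over the emotions estimated in one judgment period,
    equivalent to weighting the appearance-frequency histogram and summing."""
    return sum(WEIGHTS.get(e, 0.0) for e in estimated_emotions)

def harshness_stage(value):
    """0 = calm; 3 = 'harsh atmosphere' at its extreme (Th3 exceeded)."""
    return sum(value > th for th in THRESHOLDS)
```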
Next, the stimulus control unit 9, the stimulus generation unit 10, and the stimulus output unit 11 will be described. The stimulus output unit 11 outputs, to the vicinity of the sound source, a stimulus that cleans the atmosphere according to the degree of the atmosphere. FIG. 13 shows an example of the specific configuration of the stimulus control unit 9, the stimulus generation unit 10, and the stimulus output unit 11 based on the estimation result of the atmosphere estimation unit 8. As stimuli for cleaning the "harsh atmosphere", for example auditory, olfactory, visual, electromagnetic wave, and charged fine particle stimuli are output individually or in combination. The stimulus control unit 9 includes a main control unit 9a, which receives the estimation result of the atmosphere estimation unit 8 and directs the combination of stimuli and the content of each stimulus control, and per-stimulus control units operating under the main control unit 9a, namely an auditory stimulus control unit 9b, an olfactory stimulus control unit 9c, a visual stimulus control unit 9d, an electromagnetic wave stimulus control unit 9e, and a charged particle stimulus control unit 9f.
The stimulus generation unit 10 includes generation units that generate each stimulus under the control of the control units 9b to 9f, namely an auditory stimulus generation unit 10b, an olfactory stimulus generation unit 10c, a visual stimulus generation unit 10d, an electromagnetic wave stimulus generation unit 10e, and a charged particle stimulus generation unit 10f. The stimulus output unit 11 includes output units that output each stimulus in a form suited to what the generation units 10b to 10f produce, namely an auditory stimulus output unit 11b, an olfactory stimulus output unit 11c, a visual stimulus output unit 11d, an electromagnetic wave stimulus output unit 11e, and a charged particle stimulus output unit 11f. Each stimulus output from the stimulus output unit 11 is delivered to the people, that is, to the vicinity of the sound source, according to the estimation result of the atmosphere estimation unit 8. The content of each stimulus is described individually below, after the control sketch that follows.
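As a sketch of how the main control unit 9a might combine stimuli per harshness stage, the dispatch below uses controller keys that mirror the units 9b to 9f of FIG. 13; the stage-to-combination mapping itself is an illustrative assumption, not the control matrix of FIG. 19.

```python
# Hypothetical stage -> stimulus-combination table (assumed mapping).
STIMULUS_PLAN = {
    1: ("auditory",),                                    # Th1 exceeded
    2: ("auditory", "visual", "olfactory"),              # Th2 exceeded
    3: ("auditory", "olfactory", "electromagnetic", "charged_particle"),
}

def dispatch_stimuli(stage, controllers):
    """Activate the per-stimulus control units chosen for this stage."""
    for name in STIMULUS_PLAN.get(stage, ()):
        controllers[name].activate()  # analogues of units 9b-9f
```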
First, the auditory stimulus will be described. The auditory stimulus is a sound that cleans the harsh atmosphere. As auditory stimuli, sounds said to enhance the parasympathetic nervous system and to have a relaxing effect on people are used: for example, "natural sounds", meaning sounds that occur in the natural environment, in particular those produced by the unpredictable movements of water and air. Concretely, these include the sound of waves washing onto a beach, the murmur of a brook, the sound of a clear waterfall, rain striking leaves, and branches and leaves rustling in the wind. Furthermore, combinations of these overlaid with birdsong, insect calls, or animal cries, and even sounds reconstructed so as to convey the depth and breadth of a natural space, are included among the "natural sounds" used as auditory stimuli.
The acoustic characteristics of these "natural sounds" are, as noted above, that they involve unpredictable movement and that, as with pink noise, their power weakens as the frequency rises, power being inversely proportional to the frequency f: the so-called "1/f fluctuation". Except when one listens with attention focused on a particular feature within a "natural sound", letting it wash over oneself recreates the situation of being in a natural space, detached from daily annoyance, worry, and stress, and is presumed to yield parasympathetic enhancement and a relaxing effect.
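As an illustration of the "1/f fluctuation" property, the following sketch synthesizes noise whose power spectrum falls off as 1/f; it demonstrates only the spectral shape, and is not the sound source described in this document.

```python
import numpy as np

def one_over_f_noise(n, fs):
    """Shape white spectral components so power ~ 1/f (amplitude ~ 1/sqrt(f))."""
    spec = np.random.randn(n // 2 + 1) + 1j * np.random.randn(n // 2 + 1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    freqs[0] = freqs[1]                  # avoid dividing by zero at DC
    spec /= np.sqrt(freqs)               # power falls off as 1/f
    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))          # normalize to [-1, 1]
```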
 An auditory stimulus may also enhance parasympathetic activity and produce a relaxing effect through its association with a person's memories. For example, hearing a melody that recalls a heartwarming story or one's own experience is presumed to enhance parasympathetic activity and produce relaxation, and such a melody may be used as the auditory stimulus. Concretely, nursery rhymes, school songs, and music from anime or dramas are conceivable, though material that enjoys universal appeal with little bias toward individual taste is desirable. Some classical music is also said to have a strong relaxing effect; for example, a performance of Ravel's "Pavane pour une infante défunte" ("Pavane for a Dead Princess") may be used as the auditory stimulus, again preferably material with universal appeal and little bias toward individual taste. Furthermore, as a matter of temperament, the music or melody may be based not on the equal temperament prevailing in modern music (a temperament in which the scale degrees do not stand in mathematically exact ratios, so that chords beat) but on just intonation (a temperament in which the scale degrees stand in simple integer ratios, so that chords do not beat).
 A sound that creates an impression of "laughter" or "amusement" is also used as a sound stimulus for enhancing parasympathetic activity and producing relaxation. The process by which a sound stimulus elicits "laughter" proceeds in three stages: (1) the sound stimulus calls up a specific scene from the past, (2) the listener comes to feel amused, and (3) the listener laughs (an expression through a physiological bodily reaction). In some cases, however, the sound stimulus itself is funny and leads directly to the amusement of stage (2). Studies of the effect of "laughter" on the autonomic nervous system, based on observations of changes in cerebral blood flow and heart rate variability, report that evoking "laughter" temporarily heightens sympathetic activity, after which parasympathetic activity rises and a tendency toward relaxation appears.
 Sound stimuli that evoke "laughter" include sound effects and theme music that mark the punchline (the conclusion of a joke) or the boke (the deliberately off-target remarks or behavior made to induce laughter) in comedy programs and comedy sketches. As a specific piece, "Somebody Stole My Gal" as performed by the American jazz musician Pee Wee Hunt is used. Even if it is not that recording itself, a similar laughter-evoking effect can be expected from any melody reminiscent of the Pee Wee Hunt performance. As the theme song of "Yoshimoto Shinkigeki" (Yoshimoto New Comedy), this piece is presumed to be strongly associated in memory with "laughter" among people with some connection to the Kansai cultural sphere, chiefly those born in or with ties to the Kansai region. Incidentally, it is conceivable that the acoustic features contained in this piece could evoke "laughter" in anyone, but this has not been verified.
 FIG. 14 shows an example of a configuration for outputting the auditory stimuli described above, depicting the auditory stimulus system of FIG. 13 more concretely. The auditory stimulus output unit 11b outputs, as an auditory stimulus for cleaning the atmosphere, a natural sound composed of sounds produced by the movement of water or air, or alternatively a piece of music or a melody. The auditory stimulus control unit 9b sends the auditory stimulus generation unit 10b an instruction to generate the sound that cleans the harsh atmosphere. The auditory stimulus generation unit 10b consists of an audio signal reproduction unit 10b1 and an audio signal data storage unit 10b2. Audio signal data such as the aforementioned natural sounds, music, and melodies are stored in the audio signal data storage unit 10b2. Based on the instruction from the auditory stimulus control unit 9b, the designated audio signal data stored in the audio signal data storage unit 10b2 is read out and reproduced as an audio signal by the audio signal reproduction unit 10b1, and is output into the space as an auditory stimulus from the speaker 11b constituting the auditory stimulus output unit 11b.
 Next, the olfactory stimulus will be described. The olfactory stimulus is a scent that cleans a harsh atmosphere. As the olfactory stimulus, for example, dilutions of plant-derived aromatic components, so-called essential oils, or aroma oils (which contain synthetic components) are used. According to the Aroma Environment Association of Japan (AEAJ), essential oils are aromatic substances of natural origin extracted from the flowers, leaves, peel, fruit, heartwood, roots, seeds, bark, resin, and so on of plants, each with its own characteristic scent and function.
 Among these, the ones used as olfactory stimuli in the present invention are those with a sedative action that calms the nervous system and relaxes mind and body. Specifically, herbal and citrus oils such as ylang-ylang (active component: linalool), sweet orange (limonene), Roman chamomile (angelic acid), clary sage (linalyl acetate, linalool), grapefruit (limonene), lavender (linalyl acetate, linalool), bergamot (limonene, linalool), and peppermint (menthol), as well as woody oils such as pine (pinene), Japanese cedar (pinene), Japanese cypress (pinene), and kuromoji (linalool), are used alone or in combination.
 FIG. 15 shows an example of a configuration for outputting the olfactory stimuli described above, depicting the olfactory stimulus system of FIG. 13 more concretely. The olfactory stimulus output unit 11c outputs a dilution of an essential oil or of an aroma oil as an olfactory stimulus for cleaning the atmosphere. The olfactory stimulus control unit 9c sends a scent generation instruction to the olfactory stimulus generation unit 10c. The olfactory stimulus generation unit 10c consists of a cartridge operation unit 10c1 and scent component cartridges 10c2 to 10c4. Although three cartridges are shown in FIG. 15 for simplicity of explanation, the number is not limited to three. In the olfactory stimulus generation unit 10c, the scent component cartridges 10c2 to 10c4 hold different scent components. A scent component is generated singly or as a blend of several in response to an operation signal from the cartridge operation unit 10c1 based on an instruction from the olfactory stimulus control unit 9c. The olfactory stimulus output unit 11c releases the generated scent component.
 As the scent component cartridges 10c2 to 10c4, for example, a spray structure is conceivable that holds a liquid dilution of the essential oil serving as the scent component and atomizes it using high-pressure gas or the mechanical motion of a piezoelectric element. Alternatively, an ultrasonic atomization structure that atomizes the diluted liquid by applying ultrasonic waves may be used. The olfactory stimulus output unit 11c consists of a nozzle 11c1, a guide tube 11c2, a fan 11c3, and an output operation unit 11c4. The scent component atomized in the olfactory stimulus generation unit 10c is led through the guide tube 11c2 to the nozzle 11c1 and released. To guide the olfactory stimulus released from the scent component cartridge in the intended direction, the output operation unit 11c4, operating on instructions from the olfactory stimulus control unit 9c, changes the direction of the nozzle 11c1 and adjusts the airflow of the fan 11c3.
 Note that the scent output by the olfactory stimulus output unit 11c may linger where it was released. The olfactory stimulus output unit 11c may therefore output a deodorizing component a predetermined time after outputting the scent. This allows the atmosphere cleaning device 100 to deodorize the scent.
 Next, the visual stimulus will be described. The visual stimulus is at least one of information and light that cleans a harsh atmosphere. Image output or lighting control is used as the visual stimulus. The images, whether moving or still, show natural scenery. By making their content relate to the auditory and olfactory stimuli, the stimuli are expected to reinforce one another in enhancing parasympathetic activity and producing relaxation. For example, "natural scenery" such as a forest or mountain stream has a strong affinity with "natural sounds" such as the babbling of a brook, the sound of a clear waterfall, and the rustling of branches and leaves in the wind, and presumably also a strong affinity with woody scents.
 As for lighting control, the color tone is controlled, for example by strengthening bluish light, which has a confirmed strong sedative effect; greenish light, which evokes the freshness of sunlight filtering through trees; or orangish light, which evokes warmth, reassurance, and nostalgia. Here too, by coordinating the control with the auditory and olfactory stimuli, the stimuli can reinforce one another in enhancing parasympathetic activity and producing relaxation.
 FIG. 16 shows an example of a configuration for outputting the visual stimuli described above, depicting the visual stimulus system of FIG. 13 more concretely. In FIG. 16, the visual stimulus generation unit 10d and the visual stimulus output unit 11d of FIG. 13 are each provided in two systems (an image display type and a lighting operation type), but the configuration is not limited to this, and only one may be provided. The visual stimulus output unit 11d outputs, as a visual stimulus for cleaning the atmosphere, an image, natural scenery that exists with the movement of water and air, or illumination light. As the illumination light, the visual stimulus output unit 11d uses light whose color tone can be varied to individually strengthen bluish, greenish, and orangish light. The visual stimulus control unit 9d sends a visual stimulus output instruction to the visual stimulus generation unit 10d. The visual stimulus generation unit 10d consists of an image signal data storage unit 10d2, an image signal reproduction unit 10d1, a lighting control data storage unit 10d4, and a lighting drive unit 10d3. The image signal data storage unit 10d2 stores image information. The image signal reproduction unit 10d1 reads the image information corresponding to the output instruction from the image signal data storage unit 10d2 and outputs an image signal based on the read image information to the monitor 11d1. The monitor 11d1 can thereby display the image information. The monitor 11d1 is an example of the visual stimulus output unit 11d.
 The lighting control data storage unit 10d4 stores control data for controlling the lighting 11d2. Based on an instruction from the visual stimulus control unit 9d, the lighting drive unit 10d3 reads the stored lighting control data from the lighting control data storage unit 10d4 and outputs a drive signal to the lighting 11d2, which can then emit light. The lighting 11d2 is, for example, a light with a controllable color tone, and may cast light of a hue corresponding to the input drive signal either directly or indirectly by reflection. The visual stimulus generation unit 10d may also adjust the intensity of the light output by the lighting 11d2.
 Next, the electromagnetic wave stimulus will be described. The electromagnetic wave stimulus is an electromagnetic wave that cleans a harsh atmosphere. As the electromagnetic wave stimulus, an electromagnetic wave with a frequency of about 7.83 Hz is generated artificially and used as the stimulus; this frequency is based on the so-called Schumann resonance, the standing wave produced when ultra-long waves generated by lightning discharges propagate through the spherical-shell cavity between the Earth's surface and the ionosphere. As for its effect on people, in contrast to the high-frequency electromagnetic waves radiated by the countless electrical devices of modern society, ultra-long-wave electromagnetic radiation has been present continuously throughout the history of biological evolution, and it sometimes attracts attention in connection with parasympathetic enhancement and relaxation on the grounds that its frequency is close to that of the alpha brain wave.
 FIG. 17 shows an example of a configuration for outputting the electromagnetic wave stimulus described above, depicting the electromagnetic wave stimulus system of FIG. 13 more concretely. The electromagnetic wave stimulus control unit 9e sends an electromagnetic wave stimulus output instruction to the electromagnetic wave stimulus generation unit 10e. The electromagnetic wave stimulus generation unit 10e consists of an electromagnetic wave drive unit 10e1 and an ultra-long-wave oscillation unit 10e2. The ultra-long-wave oscillation unit 10e2 contains an electronic circuit that oscillates at a frequency of about 7.83 Hz; it oscillates on command from the electromagnetic wave drive unit 10e1 and outputs an ultra-long-wave signal. The electromagnetic wave drive unit 10e1 drives the input ultra-long-wave signal at a predetermined power, passes a current based on the signal through the antenna 11e, and thereby outputs the electromagnetic wave stimulus. The antenna 11e is an example of the electromagnetic wave stimulus output unit 11e.
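 As a minimal sketch of the waveform the ultra-long-wave oscillation unit 10e2 is described as producing, the following generates a 7.83 Hz sinusoid in software. The actual embodiment uses an oscillating electronic circuit driving an antenna at a predetermined power, so this digital version is an illustrative assumption only, not the circuit of FIG. 17.

    # Illustrative synthesis of the ~7.83 Hz Schumann-resonance drive waveform.
    import numpy as np

    SCHUMANN_HZ = 7.83

    def schumann_signal(duration_s, sample_rate=1000):
        t = np.arange(0, duration_s, 1.0 / sample_rate)
        return np.sin(2 * np.pi * SCHUMANN_HZ * t)  # unit-amplitude drive waveform

    waveform = schumann_signal(duration_s=10)  # 10 s of the 7.83 Hz signal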
 Next, the charged particle stimulus will be described. The charged particle stimulus consists of charged particles that clean a harsh atmosphere. As the charged particle stimulus, fine water particles negatively charged by the Lenard effect are used. Water droplets are inherently polarized, with negative charge tending to be distributed on the surface and positive charge inside; when droplets collide mechanically and violently and break apart, the relatively small water particles therefore tend to become negatively charged (commonly called negative ions or minus ions), while the large water particles tend to become positively charged. The large particles fall quickly under gravity, while the minute particles remain suspended, forming an air mass in which negative ions predominate. In nature these negative ions arise around waterfalls and mountain streams, and parasympathetic enhancement and relaxation have been reported as their effects on people. Artificially, ultrasonic atomization technology is used in humidifiers and the like, for example; the amount of mist can be adjusted by controlling the magnitude of the input to an ultrasonic transducer placed in the water, and the size of the water particles can be adjusted by controlling the ultrasonic frequency.
 The charged particle stimulus may be used alone, but by relating its content to the auditory, olfactory, and visual stimuli, the stimuli can reinforce one another in enhancing parasympathetic activity and producing relaxation. Concretely, if "natural sounds" such as the babbling of a brook or the sound of a clear waterfall, woody scents, scenic images of forests and mountain streams, and charged particles are output in association with one another, a soothing natural environment can be reproduced all the more realistically, with parasympathetic enhancement and relaxation obtained as a result.
 FIG. 18 shows an example of a configuration for outputting the charged particle stimulus described above, depicting the charged particle stimulus system of FIG. 13 more concretely. The charged particle stimulus control unit 9f sends an output instruction to the charged particle stimulus generation unit 10f. The charged particle stimulus generation unit 10f consists of an element drive unit 10f1 and an atomized particle water generation unit 10f2. The charged particle stimulus output unit 11f consists of a nozzle 11f1, a guide tube 11f2, a fan 11f3, and an output operation unit 11f4. The particle water atomized in the charged particle stimulus generation unit 10f is led through the guide tube 11f2 to the nozzle 11f1 and released. To guide the charged particles released from the atomized particle water generation unit 10f2 in the intended direction, the output operation unit 11f4, operating on instructions from the charged particle stimulus control unit 9f, changes the direction of the nozzle 11f1 and adjusts the airflow of the fan 11f3. The guide tube 11f2 and nozzle 11f1 through which the charged particles pass may also be partially negatively charged, for example, to supplement the charge of the particles.
 Here, since the main control unit 9a refers to a stimulus-control matrix to determine which stimuli to output, an example of such a matrix will be described. FIG. 19 is a matrix diagram showing an example of stimulus control. While FIG. 13 shows a configuration that outputs five classes of stimuli, FIG. 19 uses three classes for simplicity of explanation: four variants K1 to K4 of stimulus K, three variants L1 to L3 of stimulus L, and three variants M1 to M3 of stimulus M. For each stimulus, a larger number indicates a stronger cleaning action against atmosphere deterioration. For example, if stimulus K is the auditory stimulus (that is, sound), stimulus K1 is a natural sound and stimulus K2 is classical music.
 The stimulus-control matrix also expresses the degree of the harsh atmosphere in five stages. The description of FIG. 12 illustrated the case of three thresholds, but the stimulus-control matrix of FIG. 19 may be thought of as illustrating the case of twenty thresholds. For example, a harsh-atmosphere degree of 1 indicates that the atmosphere value is at least threshold Th1 and less than threshold Th2; a degree of 2 indicates that the atmosphere value is at least threshold Th2 and less than threshold Th3. In this way, the stimulus-control matrix relates the degree of the harsh atmosphere to the stimuli to be output.
 In FIG. 19, the horizontal direction indicates the degree of atmosphere deterioration, divided into five stages from mild to severe, with each stage further subdivided into four; deterioration becomes more serious toward the right. In the region where deterioration is mild, one of the stimuli K1, K2, L1, and L2 is used; as deterioration progresses, two stimuli gradually come to be used simultaneously, and then, with stimuli M1 to M3 added, three stimuli are used simultaneously, so that the stimulus control strengthens the cleaning action step by step. By providing the main control unit 9a of FIG. 13 with such a function for combining multiple stimuli, for example, it becomes possible to respond flexibly to a situation in which the atmosphere deteriorates in stages. Although FIG. 19 does not show control of the intensity or amount of each stimulus, such control can of course also strengthen the cleaning action in stages; the olfactory and charged particle stimuli described above are cases in point. A hypothetical encoding of such a matrix is sketched below.
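 The following sketch encodes a stimulus-control matrix in the spirit of FIG. 19. The concrete table entries and threshold values are hypothetical; the patent specifies only the principle that higher atmosphere degrees drive more stimuli, and stronger variants, simultaneously.

    # Hypothetical stimulus-control matrix: severity stage -> stimuli to drive together.
    ATMOSPHERE_MATRIX = {
        1: ["K1"],              # mild: one gentle stimulus
        2: ["K2", "L1"],        # two stimuli combined
        3: ["K3", "L2"],
        4: ["K3", "L3", "M1"],  # three stimuli combined
        5: ["K4", "L3", "M3"],  # severe: strongest variants together
    }

    def select_stimuli(atmosphere_value, thresholds):
        """Map an atmosphere value to a severity stage via thresholds Th1..Thn,
        then look up which stimuli to output (empty below Th1)."""
        stage = sum(1 for th in thresholds if atmosphere_value >= th)
        return ATMOSPHERE_MATRIX.get(min(stage, 5), [])

    # Placeholder thresholds; the patent does not state concrete values.
    print(select_stimuli(0.7, thresholds=[0.2, 0.4, 0.6, 0.8, 1.0]))  # -> ['K3', 'L2']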
 In the atmosphere cleaning device according to Embodiment 1 configured as described above, inappropriate situations in the environment surrounding people, ranging from a slight "unpleasant atmosphere" to a serious "harsh atmosphere", are detected by analyzing the acquired audio signal, and a stimulus that acts to change people's feelings or emotions is output according to how inappropriate the atmosphere is; this has the effect of preventing further deterioration of the atmosphere and preventing the situation from becoming critical.
 Further, the atmosphere cleaning device according to Embodiment 1 outputs a stimulus that acts to guide people's feelings or emotions in a calmer direction according to how inappropriate the atmosphere is, with the effect of preventing further deterioration of the atmosphere and preventing the situation from becoming critical.
 Further, in the atmosphere cleaning device according to Embodiment 1, a natural sound composed of sounds produced by the movement of water or air is output as an auditory stimulus that acts, according to how inappropriate the atmosphere is, to change people's feelings or emotions or to guide them in a calmer direction; this prevents further deterioration of the atmosphere and keeps the situation from becoming critical, without disrupting concentration.
 Further, in the atmosphere cleaning device according to Embodiment 1, a piece of music or a melody is output as an auditory stimulus that acts, according to how inappropriate the atmosphere is, to change people's feelings or emotions or to guide them in a calmer direction; heated emotions are restrained as the parties recall heartwarming stories or their own experiences, preventing further deterioration of the atmosphere and keeping the situation from becoming critical.
 Further, in the atmosphere cleaning device according to Embodiment 1, sound effects or theme music marking a punchline or a boke are used as an auditory stimulus that acts, according to how inappropriate the atmosphere is, to change people's feelings or emotions or to guide them in a calmer direction; laughter is evoked, preventing further deterioration of the atmosphere and keeping the situation from becoming critical.
 Further, in the atmosphere cleaning device according to Embodiment 1, "Somebody Stole My Gal" as performed by the American jazz musician Pee Wee Hunt, an imitation of that performance, or a melody reminiscent of the Pee Wee Hunt performance is used as an auditory stimulus that acts, according to how inappropriate the atmosphere is, to change people's feelings or emotions or to guide them in a calmer direction; laughter is evoked, preventing further deterioration of the atmosphere and keeping the situation from becoming critical.
 Further, in the atmosphere cleaning device according to Embodiment 1, a dilution of an essential oil or of an aroma oil (containing synthetic components) is used as an olfactory stimulus that acts, according to how inappropriate the atmosphere is, to change people's feelings or emotions or to guide them in a calmer direction; the stimulus is transmitted rapidly to the limbic system, preventing further deterioration of the atmosphere within a short time and keeping the situation from becoming critical.
 Further, in the atmosphere cleaning device according to Embodiment 1, the visual stimulus that acts, according to how inappropriate the atmosphere is, to change people's feelings or emotions or to guide them in a calmer direction is image output such as "natural scenery" evoking the freshness of vegetation existing with the movement of water or air, or lighting control with color-tone changes such as strengthening bluish light, which has a confirmed strong sedative effect, greenish light, which evokes the freshness of sunlight filtering through trees, or orangish light, which evokes warmth, reassurance, and nostalgia; since a large amount of information can be conveyed to the brain at once, further deterioration of the atmosphere can be prevented within a short time and the situation can be kept from becoming critical.
 Further, in the atmosphere cleaning device according to Embodiment 1, an electromagnetic wave with a frequency of about 7.83 Hz based on the Schumann resonance is used as an electromagnetic wave stimulus that acts, according to how inappropriate the atmosphere is, to change people's feelings or emotions or to guide them in a calmer direction; since the stimulus can be delivered to many people simultaneously without relying directly on the five senses, which are susceptible to disturbance, further deterioration of the atmosphere can be prevented and the situation can be kept from becoming critical.
 Further, in the atmosphere cleaning device according to Embodiment 1, fine water particles negatively charged by the Lenard effect are used as a charged particle stimulus that acts, according to how inappropriate the atmosphere is, to change people's feelings or emotions or to guide them in a calmer direction; the freshness of being in a natural environment can be conveyed to the brain, effectively preventing further deterioration of the atmosphere and keeping the situation from becoming critical.
 Further, the atmosphere cleaning device according to Embodiment 1 detects inappropriate situations surrounding people, ranging from a slight "unpleasant atmosphere" to a serious "harsh atmosphere", and cleans the atmosphere by outputting stimuli that act chiefly to guide people toward a calm state; stressful, inappropriate situations are thus cleaned up before a critical state is reached, with the effect of suppressing harm to health and the occurrence of crime.
Embodiment 2.
 Embodiment 1 described the emotion feature amounts of action sounds used in the action sound analysis unit; Embodiment 2 describes these feature amounts more concretely. FIG. 20 shows an example of a method for analyzing action sounds, and FIG. 21 shows an example of an analysis result. As noted above, emotional action sounds arise when emotions such as rage, anger, irritation, hatred, or disgust weaken a person's consideration for others and for objects, leading to rough handling of objects; the resulting sounds are expected to show characteristic features, in particular strong and frequent peak components. An acoustic feature amount well suited to detecting peak components, called the crest factor, is therefore used.
 FIG. 20 shows the method of analyzing the acquired action sound: an analysis period of predetermined length is defined, and the crest factor is computed for each interval while shifting the period by half its length. This is a standard technique in the field of speech analysis; each analysis period is called a "short-time frame", and the amount by which adjacent intervals are shifted is called the "frame period". When analyzing in the frequency domain, some window function is generally applied to the short-time frame to avoid problems caused by discontinuities at the frame boundaries. The crest factor, however, is a time-domain feature amount unrelated to such discontinuities, so no window function need be used. The definition of the crest factor is shown in the lower part of FIG. 20: the maximum absolute value is found for each analysis period, that is, for each short-time frame, and divided by the effective value of the same frame; the effective value is the root mean square computed over the frame. The crest factor is calculated from equation (1), where the effective value is the root mean square over the analysis period:
$$\mathrm{CF} = \frac{\max_{1 \le n \le N}\lvert x(n)\rvert}{\sqrt{\dfrac{1}{N}\sum_{n=1}^{N} x(n)^{2}}} \qquad (1)$$

where $x(n)$ is the $n$-th sample in the short-time frame and $N$ is the frame length.
 FIG. 21 compares, as an example of action sound analysis results, the action sounds of operating a personal computer keyboard in a normal state and in an irritated state. In FIG. 21, the vertical axis is the count of frames in which the crest factor exceeded a predetermined value, and the horizontal axis is the count of frames in which the peak exceeded a predetermined value. FIG. 21 shows the results of sampling normal-state and irritated-state action sounds multiple times; the black dots are the normal cases and the × marks the irritated cases. For example, from an irritated-state action sound such as that of FIG. 20, the number of analysis periods in which the crest factor exceeded a predetermined value (the first feature amount) and the number of analysis periods in which the peak component exceeded a predetermined value (the second feature amount) are calculated. Likewise, these two counts are calculated for each of multiple irritated-state action sounds, and the resulting points are the × marks.
 FIG. 21 shows that the normal-state and irritated-state samples form distributions with clearly different biases. It follows that by observing how often the crest factor and the peak value exceed their predetermined values, action sounds produced during some emotional behavior can be distinguished from those produced during normal behavior. A sketch of this feature extraction follows.
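 The frame-based analysis of FIG. 20 and the two feature amounts plotted in FIG. 21 can be expressed compactly in code. The following sketch computes, over half-overlapping short-time frames, the counts of frames whose crest factor and peak exceed thresholds; the frame length and threshold values are placeholders, since the patent does not state concrete numbers.

    # Sketch of the short-time-frame crest-factor analysis of FIGS. 20-21.
    # frame_len, cf_threshold, and peak_threshold are illustrative placeholders.
    import numpy as np

    def action_sound_features(x, frame_len=1024, cf_threshold=4.0, peak_threshold=0.5):
        hop = frame_len // 2                      # half-overlap: the "frame period"
        cf_count = peak_count = 0
        for start in range(0, len(x) - frame_len + 1, hop):
            frame = x[start:start + frame_len]    # no window: time-domain feature
            peak = np.max(np.abs(frame))
            rms = np.sqrt(np.mean(frame ** 2))    # effective value (root mean square)
            if rms > 0 and peak / rms > cf_threshold:
                cf_count += 1                     # first feature: crest-factor exceedances
            if peak > peak_threshold:
                peak_count += 1                   # second feature: peak exceedances
        return cf_count, peak_count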
 In the atmosphere cleaning device according to Embodiment 2 configured as described above, when the atmosphere of a place is detected from people's action sounds, the frequencies with which the crest factor and the peak value exceed predetermined values are used as features of the action sounds. This improves the usability of emotion identification based on action sounds, puts to effective use the action sounds that otherwise tend to interfere with emotion identification based on speech, and improves the robustness of atmosphere detection.
Embodiment 3.
 FIG. 22 is a functional block diagram of the atmosphere cleaning device applied to an elevator; this embodiment shows the device applied to an elevator. Embodiment 3 mainly describes matters that differ from Embodiment 1, and descriptions of matters in common with Embodiment 1 are omitted.
 In FIG. 22, the atmosphere cleaning device 200 is coupled to an elevator car 300; the sound input unit 201 acquires sounds generated inside the elevator car 300, and the stimulus output unit 202 outputs stimuli into the elevator car 300. The interior of the atmosphere cleaning device 200 is drawn in simplified form, showing only the main parts, the atmosphere estimation unit 203 and the stimulus generation unit 204, but its actual configuration is the same as the block diagram of FIG. 1. Two parties, X and Y, are present, and party X is unilaterally enraged at party Y; in this elevator application as well, the atmosphere cleaning device 200 performs, as described above, the sequence of atmosphere detection from sound and atmosphere cleaning by outputting the various stimuli.
 FIG. 23 shows an example of a grave situation when the atmosphere cleaning device is applied to an elevator, a case in which the situation of FIG. 22 has escalated further. Party X has drawn a weapon and taken a posture looming over party Y. A somewhat less grave version is the situation in which party X has seized party Y by the collar and is about to strike. Party X is typically in a state of blind rage, taking a posture that pins party Y down; at the least, a situation in which the aggressor, party X, ends up underneath party Y is hard to imagine. In a state this grave, the outputs of the various stimuli described in Embodiment 1 may not be able to clean up the situation before a serious crime occurs. In the situation shown in FIG. 23, there is little time to wait for the stimulus outputs to enhance parasympathetic activity and produce relaxation; even an olfactory stimulus, which reaches the limbic system quickly and provides a sedative action that calms the nervous system and relaxes mind and body, cannot be said to be sufficient.
 Therefore, for grave situations comparable to that shown in FIG. 23, the following olfactory stimulus is output. Except for the content of the stimulus, the configuration for outputting it is the same as in FIG. 15. As the olfactory stimulus, a malodorous substance diluted to a degree not directly harmful to human health is used. Specifically, from plants, the malodorous components emitted by Rafflesia (malodor components: indole, amines, and others), the titan arum (dimethyl trisulfide), durian (propanethiol and others), the American skunk cabbage, the dead horse arum, Stapelia gigantea, Hydnora africana, ginkgo nuts (butyric acid, heptanoic acid), the eastern skunk cabbage, and dokudami (decanoyl acetaldehyde) are used. From animals, the malodorous components emitted by stink bugs, camels, skunks, condors, millipedes, and zorillas are used. In addition, organic sulfur compounds used as odorants for city gas and propane gas are used. Substances other than the plants, animals, and organic sulfur compounds exemplified here can also be used, provided the human sense of smell recognizes them as malodorous and they are diluted to a degree not directly harmful to human health. In this way, party X can be subdued, or a crime forestalled, in the elevator car 300; being able to subdue party X or forestall a crime can itself be said to clean the harsh atmosphere temporarily.
 In this case, the output of the olfactory stimulus may also be controlled so that it is directed primarily at party X, the aggressor. Specifically, the output direction of the olfactory stimulus is controlled using the nozzle 11c1 and fan 11c3 shown in FIG. 15. The position of party X may be identified using images from a camera installed in the elevator car 300, and the output direction of the olfactory stimulus determined accordingly. Further, exploiting the facts that the average Japanese head length is about 20 cm and that in a grave situation party X can be expected to take a posture pinning party Y down, the olfactory stimulus can be concentrated on party X by, for example, setting its output direction to center on the space at least 25 cm above the floor.
 FIG. 24 is a functional block diagram for the case where the atmosphere cleaning device is applied to an elevator, a malodorous component is used as the olfactory stimulus, and ventilation is performed. When a malodorous substance is used as the olfactory stimulus in this way, the malodor lingering in the elevator car 300 for a long time would interfere with elevator operation. It is therefore effective, as shown in FIG. 24, to link the device with a mechanism that ventilates by forming an airflow from a position close to the floor, for example less than 25 cm above it, toward the ceiling.
 In FIG. 24, as in FIG. 22 and elsewhere, only part of the configuration of FIG. 1 is shown, and descriptions of the overlapping parts are omitted. In FIG. 24, a ventilation fan 407 is installed in the ceiling of the elevator car 300, a vent 408 is installed near the floor of the elevator car 300, and the atmosphere cleaning device 400 includes a ventilation fan control unit 406. The ventilation fan control unit 406 receives a control signal from the stimulus control unit 404 and controls the ventilation fan 407 according to that signal.
 The ventilation fan control unit 406 operates the ventilation fan 407 installed in the ceiling of the elevator car 300, producing an airflow that ventilates the interior of the elevator car 300 from the vent 408 near the floor toward the ventilation fan 407; the rising air and odor are then discharged to the outside. The atmosphere cleaning device 400 may also hold a deodorizing component in addition to the malodorous component used as the olfactory stimulus, and the stimulus control unit 404 may output the deodorizing component a predetermined time after outputting the malodorous component into the elevator car 300, as sketched below.
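 The sequence of malodor output, ventilation, and delayed deodorization can be summarized as follows. The class names, method names, and the delay value are assumptions for illustration; FIG. 24 defines only the signal path from the stimulus control unit 404 to the ventilation fan control unit 406.

    # Hypothetical sketch of the malodor -> ventilate -> deodorize sequence of FIG. 24.
    import time

    class OlfactoryStimulus:
        def output_malodor(self):
            print("releasing diluted malodor component toward the aggressor")

        def output_deodorant(self):
            print("releasing deodorizing component")

    class Ventilation:
        def start(self):
            print("vent 408 (floor) -> ventilation fan 407 (ceiling): airflow on")

        def stop(self):
            print("airflow off")

    def olfactory_interrupt(stimulus, ventilation, deodorize_delay_s=30.0):
        stimulus.output_malodor()            # olfactory interruption of the aggressor
        ventilation.start()                  # carry the rising air and odor outside
        time.sleep(deodorize_delay_s)        # "predetermined time" in the embodiment
        stimulus.output_deodorant()          # neutralize any lingering malodor
        ventilation.stop()

    olfactory_interrupt(OlfactoryStimulus(), Ventilation(), deodorize_delay_s=1.0)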
 In an elevator to which the atmosphere cleaning device according to Embodiment 3 configured as described above is applied, an olfactory interruption is imposed on the aggressor's consciousness in the grave situation immediately preceding a serious crime, temporarily stalling the aggressor's actions and improving the security and safety of the enclosed space. When the application target is an elevator, this makes it possible to secure the time needed for an emergency response such as an emergency stop at the nearest floor or the dispatch of security guards, with the effect of improving the elevator's security and safety.
 Further, in an elevator to which the atmosphere cleaning device according to Embodiment 3 is applied, dilutions of malodorous components emitted by plants or animals, or of organic sulfur compounds, are used as the stimulus for imposing an olfactory interruption on the aggressor's consciousness; the aggressor's actions can thus be temporarily stalled, improving the security and safety of the enclosed space. When the application target is an elevator, this makes it possible to secure the time needed for an emergency response such as an emergency stop at the nearest floor or the dispatch of security guards, with the effect of improving the elevator's security and safety.
 The features of the embodiments described above can be combined with one another as appropriate. Although Embodiment 3 described the case where the atmosphere cleaning device is applied to an elevator, the application is not limited to this. For example, the atmosphere cleaning devices described in Embodiments 1 to 3 can be applied, with the same effects, to various vehicles such as trains, automobiles, and ships, and to closed spaces from which one cannot easily escape, such as space stations.
 Here, the hardware configuration of the atmosphere cleaning device will be described. FIG. 25 shows the hardware configuration of the atmosphere cleaning device. The atmosphere cleaning device 100 has a processor 101, a volatile storage device 102, and a non-volatile storage device 103.
 The processor 101 controls the atmosphere cleaning device 100 as a whole. For example, the processor 101 is a CPU (Central Processing Unit) or an FPGA (Field Programmable Gate Array), and may be a multiprocessor. The atmosphere cleaning device 100 may be realized by a processing circuit, or by software, firmware, or a combination thereof. The processing circuit may be a single circuit or a composite circuit.
 The volatile storage device 102 is the main storage of the atmosphere cleaning device 100, for example a RAM (Random Access Memory). The non-volatile storage device 103 is the auxiliary storage of the atmosphere cleaning device 100, for example an SSD (Solid State Drive).
 The uttered speech analysis unit 2, the action sound analysis unit 3, the first identification unit 6, the second identification unit 7, the first recording unit 4, the second recording unit 5, the atmosphere estimation unit 8, the stimulus control unit 9, and the stimulus generation unit 10 can be realized by an information processing device. The first recording unit 4 and the second recording unit 5 may also be realized as storage areas reserved in the volatile storage device 102 or the non-volatile storage device 103.
 Some or all of the sound input unit 1, the uttered speech analysis unit 2, the action sound analysis unit 3, the first identification unit 6, the second identification unit 7, the atmosphere estimation unit 8, the stimulus control unit 9, the stimulus generation unit 10, and the stimulus output unit 11 may be realized by the processor 101; in particular, some or all of the uttered speech analysis unit 2, the action sound analysis unit 3, the first identification unit 6, and the second identification unit 7 may be realized by the processor 101.
 Some or all of the sound input unit 1, the uttered speech analysis unit 2, the action sound analysis unit 3, the first identification unit 6, the second identification unit 7, the atmosphere estimation unit 8, the stimulus control unit 9, the stimulus generation unit 10, and the stimulus output unit 11 may be realized as modules of a program executed by the processor 101; in particular, some or all of the uttered speech analysis unit 2, the action sound analysis unit 3, the first identification unit 6, and the second identification unit 7 may be realized as modules of such a program. The program executed by the processor 101 is also called an atmosphere cleaning program.
 1, 201, 401 sound input unit; 2 utterance voice analysis unit; 3 action sound analysis unit; 4 first recording unit; 5 second recording unit; 6 first identification unit; 7 second identification unit; 8, 203 atmosphere estimation unit; 9, 404 stimulus control unit; 9a main control unit; 10, 204, 405 stimulus generation unit; 11, 202, 402 stimulus output unit; 100, 200, 400 atmosphere cleaning device; 101 processor; 102 volatile storage device; 103 non-volatile storage device; 300 elevator car; 406 ventilation fan control unit; 407 ventilation fan; 408 ventilation port.


Claims (20)

  1.  An atmosphere cleaning device comprising:
     an input unit that acquires a sound;
     an emotion identification unit that identifies a person's emotion based on a feature amount derived from the person's emotion contained in the sound and on boundary information for identifying the person's emotion from the feature amount;
     an atmosphere estimation unit that estimates the degree of the atmosphere around the source of the sound based on the identification result of the person's emotion identified by the emotion identification unit; and
     a stimulus output unit that outputs, around the source of the sound, a stimulus that purifies the atmosphere according to the degree of the atmosphere.
  2.  The atmosphere cleaning device according to claim 1, wherein the sound is an uttered voice, and the emotion identification unit uses the feature amount calculated from the uttered voice.
  3.  The atmosphere cleaning device according to claim 1, wherein the sound is a human action sound, and the emotion identification unit uses the feature amount calculated from the action sound.
  4.  The atmosphere cleaning device according to claim 3, wherein the emotion identification unit takes, as a first feature amount, the number of analysis periods in which the crest factor calculated for each analysis period of the action sound exceeds a predetermined value, takes, as a second feature amount, the number of analysis periods in which a peak component contained in the action sound and detected for each analysis period exceeds a predetermined value, determines the boundary information for identifying an emotion from the relationship between the first feature amount and the second feature amount, and identifies the person's emotion.
  5.  The atmosphere cleaning device according to claim 1, wherein the sound is an uttered voice and a human action sound, and the emotion identification unit uses the feature amounts calculated from the uttered voice and the action sound.
  6.  The atmosphere cleaning device according to any one of claims 1 to 5, wherein the stimulus output unit outputs a stimulus that acts to guide the person's emotion in a calmer direction.
  7.  The atmosphere cleaning device according to any one of claims 1 to 5, wherein the stimulus output unit outputs, as an auditory stimulus that purifies the atmosphere, a natural sound composed of sounds generated by the movement of water or air.
  8.  The atmosphere cleaning device according to any one of claims 1 to 5, wherein the stimulus output unit outputs a musical piece or a melody sound as an auditory stimulus that purifies the atmosphere.
  9.  The atmosphere cleaning device according to claim 8, wherein the stimulus output unit uses, as the musical piece or the melody sound, a sound effect or theme music that evokes laughter or gives an impression of comicality.
  10.  The atmosphere cleaning device according to claim 8, wherein the stimulus output unit uses, as the musical piece, "Somebody Stole My Gal" as performed by the American jazz musician Pee Wee Hunt, or an imitation of that performance, or uses, as the melody sound, a melody reminiscent of "Somebody Stole My Gal" as performed by Pee Wee Hunt.
  11.  The atmosphere cleaning device according to any one of claims 1 to 5, wherein the stimulus output unit outputs a dilution of an essential oil or a dilution of an aroma oil as an olfactory stimulus that purifies the atmosphere.
  12.  The atmosphere cleaning device according to any one of claims 1 to 5, wherein the stimulus output unit outputs an image as a visual stimulus that purifies the atmosphere.
  13.  The atmosphere cleaning device according to claim 12, wherein the stimulus output unit uses, as the image, a natural landscape accompanied by the movement of water and air.
  14.  The atmosphere cleaning device according to any one of claims 1 to 5, wherein the stimulus output unit outputs illumination light as a visual stimulus that purifies the atmosphere.
  15.  The atmosphere cleaning device according to claim 14, wherein the stimulus output unit uses, as the illumination light, illumination light with color tone changes that individually intensify bluish, greenish, and orangish light.
  16.  The atmosphere cleaning device according to any one of claims 1 to 5, wherein the stimulus output unit outputs, as an electromagnetic wave stimulus that purifies the atmosphere, an electromagnetic wave having a frequency of approximately 7.83 Hz based on the Schumann resonance.
  17.  The atmosphere cleaning device according to any one of claims 1 to 5, wherein the stimulus output unit outputs, as a charged particle stimulus that purifies the atmosphere, water particles negatively charged by the Lenard effect.
  18.  The atmosphere cleaning device according to any one of claims 1 to 5, wherein the stimulus output unit outputs, as an olfactory stimulus that purifies the atmosphere, a dilution of a malodorous component released by a plant or an animal, or a dilution of an organic sulfur compound.
  19.  An elevator comprising the atmosphere cleaning device according to any one of claims 1 to 18.
  20.  An atmosphere cleaning method comprising:
     acquiring a sound;
     identifying a person's emotion based on a feature amount derived from the person's emotion contained in the sound and on boundary information for identifying the person's emotion from the feature amount;
     estimating the degree of the atmosphere around the source of the sound based on the identification result of the identified emotion; and
     outputting, around the source of the sound, a stimulus that purifies the atmosphere according to the degree of the atmosphere.
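 To make the two feature amounts recited in claim 4 concrete, a minimal Python sketch follows. The frame length, both thresholds, and the linear decision boundary are illustrative assumptions (the claim leaves the predetermined values and the boundary information unspecified), and the peak component is simplified here to each frame's peak amplitude.

```python
import numpy as np


def claim4_features(signal, frame_len=1024, crest_thresh=4.0, peak_thresh=0.5):
    """Count analysis periods exceeding the two (assumed) thresholds.

    Returns (first_feature, second_feature) in the sense of claim 4:
    - first: frames whose crest factor (peak / RMS) exceeds crest_thresh
    - second: frames whose peak component exceeds peak_thresh
      (the peak component is simplified to the frame's peak amplitude).
    """
    first = second = 0
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = np.asarray(signal[start:start + frame_len], dtype=float)
        peak = np.max(np.abs(frame))
        rms = np.sqrt(np.mean(frame ** 2))
        if rms > 0.0 and peak / rms > crest_thresh:
            first += 1   # impulsive period, e.g. a banging action sound
        if peak > peak_thresh:
            second += 1  # period containing a strong peak component
    return first, second


def identify_emotion_from_counts(first, second, slope=1.0, offset=3.0):
    """Illustrative linear boundary in the (first, second) feature plane."""
    return "agitated" if second > slope * first + offset else "calm"
```

 A caller would split the microphone signal into analysis periods, compute the pair of counts, and compare it against boundary information tuned or learned in advance.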
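 Likewise, read as a processing pipeline, the method of claim 20 could be sketched as below. The three-level degree scale and the degree-to-stimulus mapping are purely illustrative assumptions; the claim itself does not fix either of them.

```python
def estimate_degree(emotion):
    """Map an identified emotion to an (assumed) three-level degree."""
    return {"anger": 2, "fear": 1}.get(emotion, 0)


def select_stimulus(degree):
    """Illustrative degree-to-stimulus mapping (cf. claims 7 and 8)."""
    return {0: None,
            1: "natural sound such as running water",
            2: "melody sound evoking laughter"}[degree]


def clean_atmosphere(acquire_sound, identify_emotion, output_stimulus):
    """End-to-end sketch of the four steps of claim 20."""
    sound = acquire_sound()             # step 1: acquire a sound
    emotion = identify_emotion(sound)   # step 2: identify the person's emotion
    degree = estimate_degree(emotion)   # step 3: estimate the atmosphere degree
    stimulus = select_stimulus(degree)  # step 4: choose a stimulus per degree
    if stimulus is not None:
        output_stimulus(stimulus)       # output around the sound source
```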
PCT/JP2018/048478 2018-12-28 2018-12-28 Atmosphere purifying device, atmosphere purifying method, and elevator WO2020136872A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2019541820A JPWO2020136872A1 (en) 2018-12-28 2018-12-28 Atmosphere Purifier, Atmosphere Purification Method and Elevator
PCT/JP2018/048478 WO2020136872A1 (en) 2018-12-28 2018-12-28 Atmosphere purifying device, atmosphere purifying method, and elevator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/048478 WO2020136872A1 (en) 2018-12-28 2018-12-28 Atmosphere purifying device, atmosphere purifying method, and elevator

Publications (1)

Publication Number Publication Date
WO2020136872A1 (en) 2020-07-02

Family

ID=71127859

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/048478 WO2020136872A1 (en) 2018-12-28 2018-12-28 Atmosphere purifying device, atmosphere purifying method, and elevator

Country Status (2)

Country Link
JP (1) JPWO2020136872A1 (en)
WO (1) WO2020136872A1 (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009090159A (en) * 2007-10-03 2009-04-30 Pioneer Electronic Corp Apparatus and method for transmission of aroma

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002258880A (en) * 2001-03-06 2002-09-11 Sekiguchi Kikai Hanbai Kk Device for emitting radio waves
JP2004024879A (en) * 2002-06-26 2004-01-29 Samsung Electronics Co Ltd Apparatus and method for emotion guidance
JP2004281186A (en) * 2003-03-14 2004-10-07 Matsushita Electric Works Ltd Lighting equipment for bathtub
JP2004333200A (en) * 2003-05-01 2004-11-25 Nidec Copal Corp Apparatus and method for determining abnormal sound and program
JP2005346254A (en) * 2004-06-01 2005-12-15 Hitachi Ltd Danger monitoring system
JP2006171838A (en) * 2004-12-13 2006-06-29 Masui Yoshiharu Crime prevention device
JP3189667U (en) * 2013-11-07 2014-03-27 Chaozhou Tai Da Arts And Crafts Company Limited Craft aroma bottle
JP2016003806A (en) * 2014-06-17 2016-01-12 株式会社コロナ Mist generating device
WO2018074224A1 (en) * 2016-10-21 2018-04-26 株式会社デイジー Atmosphere generating system, atmosphere generating method, atmosphere generating program, and atmosphere estimating system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022137276A1 (en) * 2020-12-21 2022-06-30 三菱電機株式会社 Acoustic system for closed space
EP4265553A4 (en) * 2020-12-21 2024-03-06 Mitsubishi Electric Corporation Acoustic system for closed space
JP7475495B2 (en) 2020-12-21 2024-04-26 三菱電機株式会社 Confined Space Sound Systems
WO2023042323A1 (en) * 2021-09-16 2023-03-23 三菱電機株式会社 Acoustic system for closed spaces

Also Published As

Publication number Publication date
JPWO2020136872A1 (en) 2021-02-18

Similar Documents

Publication Publication Date Title
Soltis et al. African elephant alarm calls distinguish between threats from humans and bees
Ratcliffe Sound and soundscape in restorative natural environments: A narrative literature review
Martin (Why) do you like scary movies? A review of the empirical research on psychological responses to horror films
Tajadura-Jiménez et al. Embodied auditory perception: the emotional impact of approaching and receding sound sources.
Porges et al. The polyvagal hypothesis: common mechanisms mediating autonomic regulation, vocalizations and listening
KR101248353B1 (en) Speech analyzer detecting pitch frequency, speech analyzing method, and speech analyzing program
Stoeger et al. Vocal cues indicate level of arousal in infant African elephant roars
Reybrouck et al. Music and noise: Same or different? What our body tells us
Larsson Self-generated sounds of locomotion and ventilation and the evolution of human rhythmic abilities
Charlton et al. Vocal cues to identity and relatedness in giant pandas (Ailuropoda melanoleuca)
Soltis et al. Measuring positive and negative affect in the voiced sounds of African elephants (Loxodonta africana)
WO2020136872A1 (en) Atmosphere purifying device, atmosphere purifying method, and elevator
JP2004510191A (en) Equipment for acoustically improving the environment
Huron The ramp archetype and the maintenance of passive auditory attention
Cooke et al. Is trilled smell possible? How the structure of olfaction determines the phenomenology of smell
Hantke et al. What is my dog trying to tell me? The automatic recognition of the context and perceived emotion of dog barks
CN109512441A (en) Emotion identification method and device based on multiple information
Lin et al. Social and vocal behavior in adult greater tube-nosed bats (Murina leucogaster)
Hennig et al. Divergence in male cricket song and female preference functions in three allopatric sister species
Sun et al. Great Himalayan leaf-nosed bats modify vocalizations to communicate threat escalation during agonistic interactions
Kanwal et al. Communication sounds and their cortical representation
Fan et al. Individuality in coo calls of adult male golden snub-nosed monkeys (Rhinopithecus roxellana) living in a multilevel society
Sharma et al. Asian elephants modulate their vocalizations when disturbed
Stoeger et al. African and Asian elephant vocal communication: a cross-species comparison
Stoeger Elephant sonic and infrasonic sound production, perception, and processing

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019541820

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18945343

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18945343

Country of ref document: EP

Kind code of ref document: A1