WO2024003988A1 - Control device, control method, and program - Google Patents

Control device, control method, and program

Info

Publication number
WO2024003988A1
Authority
WO
WIPO (PCT)
Prior art keywords
acoustic signal
user
concentration
information
control
Prior art date
Application number
PCT/JP2022/025578
Other languages
French (fr)
Japanese (ja)
Inventor
大将 千葉
弘章 伊藤
賢一 野口
達也 加古
Original Assignee
日本電信電話株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社
Priority to PCT/JP2022/025578
Publication of WO2024003988A1

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones

Definitions

  • the present invention relates to technology for controlling the reproduction of audio signals.
  • There are acoustic signal output devices that do not completely block the ear canal, such as open-ear earphones and headphones.
  • a user wearing such an audio signal output device can listen to his or her favorite reproduced sound, such as music, while being able to hear surrounding sounds.
  • Non-Patent Document 1 discloses a technology that automatically pauses or mutes the reproduced sound when a user who is listening to the reproduced sound with headphones makes a sound. Further, Patent Document 1 discloses a technique for estimating a user's behavior based on the detection result of a sensor or the like, and controlling the maximum permissible volume of reproduced sound based on the estimation result.
  • In the technique of Non-Patent Document 1, the reproduced sound is not controlled unless the user utters a voice, so if the user does not notice a call, calling sound, notification sound, or the like, communication with others may be hindered. Furthermore, in the technique of Patent Document 1, the maximum permissible volume of the reproduced sound is controlled even when the user intentionally listens to the reproduced sound at a high volume so as not to break the user's state of concentration.
  • Such problems occur not only when the user listens to the reproduced sound using an acoustic signal output device that does not block the ear canal, but are common to any situation in which a user listening to a first acoustic signal is in an environment where a second acoustic signal different from the first acoustic signal can also be heard.
  • When the user listening to the first acoustic signal is not in a concentrated state, or when the degree of concentration of the user is lower than a first standard, a first control process is performed that changes the first acoustic signal, in accordance with a second acoustic signal different from the first acoustic signal or a notification regarding the second acoustic signal, so that the user can easily hear the second acoustic signal. When the user is in a concentrated state, or when the degree of concentration is equal to or higher than the first standard, a second control process is performed without performing the first control process.
  • FIG. 1 is a diagram illustrating the configuration of an audio signal reproduction system according to an embodiment.
  • FIG. 2 is a flow diagram illustrating the control method of the embodiment.
  • FIG. 3 is a diagram illustrating the configuration of the acoustic signal reproduction system according to the embodiment.
  • FIG. 4 is a block diagram illustrating the hardware configuration of the control device according to the embodiment.
  • the acoustic signal reproduction system 1 of the first embodiment includes a control device 11, a user sensor 12, an acoustic signal sensor 13, and an acoustic signal output device 14.
  • the control device 11 includes an input section 111 , a playback section 112 , a storage section 113 , a concentration state estimation section 114 , a control section 115 , and an environment estimation section 116 .
  • the user sensor 12 is a sensor that detects the state of the user 101.
  • The user sensor 12 includes, for example, at least one of a biosignal sensor that detects biosignals of the user 101, an acceleration information sensor that detects the posture, motion, orientation, etc. of the user 101, and a position sensor that detects the position of the user 101.
  • Examples of the biosignal sensor include a sensor that detects the pulse or heart rate of the user 101, a sensor that detects brain waves, and a sensor that detects eye movement.
  • Examples of the acceleration information sensor include an acceleration sensor, an angular velocity sensor, a geomagnetic sensor, and a 9-axis sensor.
  • Examples of the position sensor include geomagnetic sensors, cameras, capacitive sensors, ultrasonic sensors, and potentiometers.
  • the acoustic signal sensor 13 is a microphone, a volume sensor, or the like that detects the surrounding acoustic signal AC2 (second acoustic signal).
  • the acoustic signal output device 14 is, for example, an earphone, headphone, neck speaker, bone conduction speaker, or other speaker that outputs the acoustic signal AC1 (first acoustic signal).
  • the acoustic signal output device 14 may be of a type that does not completely block the ear canal of the user 101, or may be of a type that completely blocks the ear canal of the user 101.
  • the storage unit 113 stores "concentration state estimation information” for estimating the concentration state of the user 101 from the "input information".
  • The "input information" may be, for example, the "detection information" detected by the user sensor 12 or a function value thereof; the "detection information" together with "other information" regarding the concentration state of the user 101, or function values thereof; or the "other information" alone or a function value thereof.
  • "Other information" includes, for example, information representing the content of the task of the user 101, information representing the task duration of the user 101, information representing the time at which the user 101 performed the task, and information representing the intention of the user 101 (information expressing intentions such as "I want to concentrate," "I want to communicate," "I want to turn off notifications," and "I want to turn on notifications").
  • The "concentration state estimation information" may be, for example, information for obtaining, from the "input information", "information representing whether or not the user 101 is in a concentrated state", or information for obtaining, from the "input information", "information representing the degree of concentration" of the user 101.
  • The "concentration state estimation information" may be, for example, a table in which "input information" is associated with "information representing whether or not the user is in a concentrated state", a table in which "input information" is associated with "information representing the degree of concentration", or a threshold value of the "input information" for determining whether or not the user is in a concentrated state. These tables and thresholds are determined in advance based on, for example, past "detection information", past task logs, past task durations, the times at which past tasks were performed, and the like.
  • The "concentration state estimation information" may also be, for example, an estimation model that outputs "information representing whether or not the user is in a concentrated state" in response to "input information", or an estimation model that outputs "information representing the degree of concentration".
  • Examples of estimation models include deep-learning-based models, hidden Markov models, and SVMs (Support Vector Machines). These models are obtained, for example, by machine learning using learning data. An example of the learning data is supervised learning data that associates "input information for learning" (past detection information, past task logs, past task durations, the times at which past tasks were performed, etc.) with labels such as "whether or not the user is in a concentrated state" and "degree of concentration".
  • There is no limitation on the concentration state estimation method, and a known estimation method such as the technique disclosed in Japanese Patent Application Laid-Open No. 2014-158600 may be used.
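As an illustrative sketch only (no code appears in this publication), a threshold-based version of the concentration state estimation described above might look as follows; the feature names, weights, and threshold value are hypothetical assumptions, not taken from the specification:

```python
def estimate_concentration(input_info: dict, threshold: float = 0.6) -> dict:
    """Map "input information" to a degree of concentration in [0, 1] and
    derive "information representing whether or not the user is in a
    concentrated state" by a threshold determination."""
    # Toy function value: average two features assumed pre-scaled to [0, 1]
    # (e.g., a heart-rate stability measure and a normalized task duration).
    degree = 0.5 * input_info.get("hr_stability", 0.0) \
           + 0.5 * input_info.get("task_duration_norm", 0.0)
    return {"degree": degree, "concentrated": degree >= threshold}
```

A table- or model-based variant would replace the weighted sum with a lookup or a trained estimator, as described above.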
  • the reproduction unit 112 of the control device 11 (FIG. 1) outputs a reproduction signal representing the acoustic signal AC1 (first acoustic signal) to be listened to by the user 101 under the control of the control unit 115.
  • the audio signal AC1 is, for example, music, voice, environmental sound, or other audio content.
  • the reproduced signal is transmitted by wire or wirelessly to the acoustic signal output device 14, and the acoustic signal output device 14 outputs the acoustic signal AC1 based on the transmitted reproduced signal.
  • the user 101 listens to the audio signal AC1 output from the audio signal output device 14.
  • the user sensor 12 detects the state of the user 101 and sends the detected "detection information" to the concentration state estimation unit 114.
  • the “detection information” includes information representing the biosignal of the user 101.
  • the “detection information” includes information representing the posture, motion, orientation, etc. of the user 101, such as acceleration and angular acceleration of the user 101.
  • the “detection information” includes information representing the position of the user 101.
  • The concentration state estimation unit 114 uses the "input information" of the user 101, which includes at least one of the "detection information" and the "other information", and the "concentration state estimation information" extracted from the storage unit 113 to obtain and output "information representing whether or not the user 101 is in a concentrated state" or "information representing the degree of concentration" of the user 101.
  • For example, when the "concentration state estimation information" is a table, the concentration state estimation unit 114 obtains and outputs the "information representing whether or not the user is in a concentrated state" or the "information representing the degree of concentration" corresponding to the "input information" of the user 101.
  • When the "concentration state estimation information" is a threshold value, the concentration state estimation unit 114 performs a threshold determination on the "input information" of the user 101 and obtains and outputs "information representing whether or not the user 101 is in a concentrated state".
  • When the "concentration state estimation information" is an estimation model, the concentration state estimation unit 114 uses this estimation model to obtain and output the "information representing whether or not the user is in a concentrated state" or the "information representing the degree of concentration" corresponding to the "input information". If the "other information" includes information representing the intention of the user 101, the concentration state estimation unit 114 may prioritize the intention of the user 101 in obtaining the "information representing whether or not the user 101 is in a concentrated state".
  • For example, when the "other information" represents an intention such as "I want to concentrate" or "I want to turn off notifications," the concentration state estimation unit 114 may output, as the "information representing whether or not the user is in a concentrated state," information indicating that the user is in a concentrated state. Conversely, when the "other information" represents an intention such as "I am able to communicate" or "I want to turn on notifications," the concentration state estimation unit 114 may output, as the "information representing whether or not the user is in a concentrated state," information indicating that the user is not in a concentrated state.
  • The information representing the intention of the user 101 included in the "other information" may be stored in the storage unit 113, and until the intention of the user 101 is updated, the concentration state estimation unit 114 may obtain and output the "information representing whether or not the user is in a concentrated state" based on the stored information representing the user's intention, as described above. The "information representing whether or not the user is in a concentrated state" or the "information representing the degree of concentration" is sent to the control unit 115.
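The intention-priority behavior just described can be sketched as follows; this is a hypothetical helper (the function name and override rules are assumptions), using the intention strings given as examples in this description:

```python
def apply_intention(estimated_concentrated: bool, intention) -> bool:
    """Prioritize a stored user intention over the sensor-based estimate
    when deciding whether the user is in a concentrated state."""
    if intention in ("I want to concentrate", "I want to turn off notifications"):
        return True   # treated as being in a concentrated state
    if intention in ("I am able to communicate", "I want to turn on notifications"):
        return False  # treated as not being in a concentrated state
    return estimated_concentrated  # no intention stored: use the estimate
```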
  • the acoustic signal sensor 13 detects the surrounding acoustic signal AC2 (second acoustic signal) and sends information representing the acoustic signal AC2 to the environment estimation unit 116.
  • The environment estimation unit 116 uses the input information representing the acoustic signal AC2 to obtain "acoustic detection information" representing the detection result of surrounding acoustic signals, and sends it to the control unit 115.
  • The acoustic detection information includes, for example, information representing whether or not a nearby user 102 has spoken, information representing the loudness of surrounding sounds, and the like.
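A minimal sketch of how the environment estimation unit 116 might derive such "acoustic detection information" from AC2 (the RMS loudness measure and the comparison against the reference TH2 are illustrative assumptions, not details from the specification):

```python
import math

def acoustic_detection_info(ac2_samples: list, th2: float) -> dict:
    """Compute the loudness (RMS) of the surrounding signal AC2 and report
    whether it reaches the reference TH2, i.e., whether a call/sound is
    considered detected."""
    rms = math.sqrt(sum(s * s for s in ac2_samples) / len(ac2_samples))
    return {"loudness": rms, "detected": rms >= th2}
```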
  • To the control unit 115, the "information representing whether or not the user is in a concentrated state" or the "information representing the degree of concentration" sent from the concentration state estimation unit 114, and the "acoustic detection information" sent from the environment estimation unit 116, are input in real time.
  • The control unit 115 determines whether the user 101 listening to the acoustic signal AC1 is in a concentrated state using the "information representing whether or not the user is in a concentrated state", or determines whether the degree of concentration of the user 101 is equal to or higher than the reference TH1 (first reference) using the "information representing the degree of concentration".
  • When the user 101 is not in a concentrated state, or when the degree of concentration is lower than the reference TH1, the control unit 115 performs control processing CON1 (first control processing), which changes the acoustic signal AC1 so that the user 101 can easily hear the surrounding acoustic signal AC2 (a second acoustic signal different from the first acoustic signal).
  • This process is performed automatically. As a result, when the user 101 is not concentrating, the user 101 can more easily notice calls from the user 102, making it easier to communicate with others.
  • On the other hand, when the user 101 is in a concentrated state, or when the degree of concentration is equal to or higher than the reference TH1, control processing CON2 (second control processing) is performed. The control process CON2 is a process that does not perform the control process CON1 (first control process). This makes it easier for the user 101 to maintain concentration while concentrating. Specific examples of the control process CON1 (first control process) and the control process CON2 (second control process) are shown below.
  • control processing CON1 (first control processing):
  • The control process CON1 includes, for example, a process of changing the acoustic signal AC1 (first acoustic signal) so that the user 101 can easily hear the acoustic signal AC2 (second acoustic signal) when the amplitude of the acoustic signal AC2 is equal to or greater than the reference TH2 (second reference) or when the acoustic signal AC2 is detected.
  • An example of this process is a process of attenuating the amplitude of the acoustic signal AC1 (first acoustic signal).
  • Alternatively, the phase or waveform of the acoustic signal AC1 (first acoustic signal) may be changed so that the user 101 can easily hear the acoustic signal AC2 (second acoustic signal). This makes it easier for the user 101 to notice calls from the user 102.
  • On the other hand, when the amplitude of the acoustic signal AC2 (second acoustic signal) is less than the reference TH2 (second reference), or when the acoustic signal AC2 is not detected, the process of changing the acoustic signal AC1 (first acoustic signal) so that the user can easily hear the second acoustic signal is not executed. This prevents the acoustic signal AC1 from being changed, depending on the concentration state of the user 101, even though there is no call from the user 102.
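One hedged illustration of the control process CON1: the specification only says that the amplitude of AC1 is attenuated when AC2 is detected, so the gain value and function shape below are assumptions:

```python
def control_process_con1(ac1: list, ac2_detected: bool,
                         duck_gain: float = 0.2) -> list:
    """First control process (sketch): attenuate the playback signal AC1
    so the user can easily hear AC2, but only when AC2 was actually
    detected (i.e., its amplitude reached the reference TH2)."""
    if not ac2_detected:
        return ac1                       # no call: AC1 is left unchanged
    return [s * duck_gain for s in ac1]  # call detected: duck the amplitude
```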
  • control processing CON2 (second control processing): In the control process CON2 (second control process), a process for changing the acoustic signal AC1 so that the user 101 can easily hear the acoustic signal AC2 is not executed.
  • the control process CON2 does not automatically attenuate the amplitude of the acoustic signal AC1 or change the phase or waveform of the acoustic signal AC1.
  • the control process CON2 may be a control that does nothing.
  • control process CON2 may include a process of changing the acoustic signal AC1 (first acoustic signal) so that it becomes difficult for the user 101 to hear the acoustic signal AC2 (second acoustic signal).
  • For example, the magnitude of each frequency component of the acoustic signal AC1 may be changed, the phase of the acoustic signal AC1 may be changed, or the acoustic signal AC1 may otherwise be changed so as to mask the acoustic signal AC2.
  • an acoustic signal obtained by adding an anti-phase acoustic signal of the acoustic signal AC2 or an acoustic signal similar to the anti-phase acoustic signal to the original acoustic signal AC1 may be used as the new acoustic signal AC1. This allows the user 101 to maintain a more concentrated state even if there are calls from the surroundings or the surroundings are noisy.
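The anti-phase variant just described can be sketched as follows; this is illustrative only, and it assumes the two signals are already time-aligned and level-matched at the user's ear (a real implementation would need delay and level estimation):

```python
def mask_with_antiphase(ac1: list, ac2: list) -> list:
    """Optional second control process (sketch): add the anti-phase of the
    surrounding signal AC2 to the original AC1, producing a new AC1 in
    which AC2 is (ideally) cancelled at the user's ear."""
    return [a + (-b) for a, b in zip(ac1, ac2)]
```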
  • In the control processing by the control unit 115, the "information representing whether or not the user is in a concentrated state" or the "information representing the degree of concentration" sent from the concentration state estimation unit 114, and the "acoustic detection information" sent from the environment estimation unit 116, are input (step S1).
  • The control unit 115 uses the acoustic detection information to determine whether there is a call (step S2). For example, the control unit 115 determines that there is a call when the amplitude of the acoustic signal AC2 is equal to or greater than the reference TH2 or when the acoustic signal AC2 is detected, and determines that there is no call otherwise.
  • If it is determined in step S2 that there is no call, the process returns to step S1.
  • If it is determined that there is a call, the control unit 115 uses the "information representing whether or not the user 101 is in a concentrated state" or the "information representing the degree of concentration" to determine whether the user 101 is in a concentrated state, or whether the degree of concentration of the user 101 is equal to or higher than the reference TH1 (step S3).
  • When the user 101 is not in a concentrated state, or when the degree of concentration is lower than the reference TH1, the control unit 115 performs the control process CON1 (first control process) (step S4), and then the process returns to step S1.
  • When the user 101 is in a concentrated state, or when the degree of concentration is equal to or higher than the reference TH1, the control unit 115 performs the control process CON2 (second control process) (step S5), and then the process returns to step S1.
  • As described above, in the present embodiment, the acoustic signal AC1 is controlled based on the concentration state or degree of concentration of the user 101 listening to the acoustic signal AC1, and on the acoustic signal AC2 different from the acoustic signal AC1. This makes it easier for the user 101 to communicate with others when not concentrating, and makes it easier to maintain concentration when concentrating.
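The decision flow of steps S1 to S5 can be summarized in a small sketch; the function name and return labels are hypothetical, not taken from FIG. 2 itself:

```python
def control_step(call_detected: bool, concentrated: bool) -> str:
    """One pass of the control loop: step S2 checks for a call; step S3
    checks the concentration state; steps S4/S5 pick CON1 or CON2."""
    if not call_detected:
        return "return to S1"                  # S2: no call detected
    return "CON2" if concentrated else "CON1"  # S3 -> S5 or S4
```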
  • The second embodiment is a modification of the first embodiment; based on the concentration state or degree of concentration of the user 101 listening to the acoustic signal AC1, and on the acoustic signal AC2 different from the acoustic signal AC1, notification information is presented to the user 101 and to other users such as the user 102.
  • In the following, differences from the first embodiment will be mainly explained, and the same reference numbers will be used for items that have already been explained, simplifying their description.
  • the acoustic signal reproduction system 2 of the second embodiment includes a control device 21, a user sensor 12, an acoustic signal sensor 13, and an acoustic signal output device 14.
  • the control device 21 includes an input section 111 , a playback section 112 , a storage section 113 , a concentration state estimation section 114 , a control section 115 , an environment estimation section 116 , a user notification section 217 , and a surrounding notification section 218 .
  • the difference from the first embodiment is the control process CON1 (first control process) and the control process CON2 (second control process).
  • The control process CON1 of the second embodiment is performed when the amplitude of the acoustic signal AC2 (second acoustic signal) is equal to or greater than the reference TH2 (second reference) or when the acoustic signal AC2 is detected. In this case, the control unit 115 further instructs the user notification unit 217 to output notification information N1, and the user notification unit 217 performs processing for presenting this notification information N1 to the user 101.
  • This process is a process for outputting notification information N1 to the user 101 from, for example, the acoustic signal output device 14, the control device 11, or other devices (for example, a smartphone).
  • The notification information N1 may be auditory (for example, a notification sound or notification voice), visual (for example, LED light emission, image display, a change in lighting, or a notification message), tactile (for example, vibration), or a combination of at least some of these.
  • control process CON2 of the second embodiment includes a process for presenting notification information N2 (second notification information) to a person other than the user 101 (for example, the user 102).
  • That is, the control unit 115 further instructs the surrounding notification unit 218 to output notification information N2, and the surrounding notification unit 218 performs processing for presenting this notification information N2 to a person other than the user 101. This process is, for example, a process for outputting the notification information N2 from the acoustic signal output device 14, the control device 21, or another device (for example, a smartphone) to a person other than the user 101.
  • The notification information N2 may be auditory (for example, a notification sound or notification voice), visual (for example, LED light emission, image display, a change in lighting, or a notification message), tactile (for example, vibration), or a combination of at least some of these. This allows others, such as the user 102, to know that the user 101 is in a concentrated state, so that the user 101 can maintain the concentrated state without being disturbed by others.
  • control process CON1 may include the process for presenting the above-mentioned notification information N1, and the control process CON2 may not include the process for presenting the above-mentioned notification information N2.
  • control process CON1 may not include the process for presenting the above-mentioned notification information N1, and the control process CON2 may include the process for presenting the above-mentioned notification information N2.
  • the acoustic signal reproduction system 3 of the third embodiment includes a control device 31, a user sensor 12, a communication device 33, and an acoustic signal output device 14.
  • the control device 31 includes an input section 111 , a playback section 112 , a storage section 113 , a concentration state estimation section 114 , a control section 315 , and a notification determination section 316 .
  • the control device 31 may further include a user notification section 217 and a communication notification section 318.
  • A communication device 33 such as a smartphone sends a notification such as an incoming call (a notification regarding the second acoustic signal) to the notification determination unit 316, and the notification determination unit 316, based on the input notification, sends "notification detection information" representing whether or not the communication device 33 has received a notification to the control unit 315.
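A minimal sketch of the notification determination unit 316; the event names are illustrative assumptions, since the specification only says it reports whether the communication device has received a notification:

```python
def notification_detection_info(events: list) -> dict:
    """Report whether the communication device 33 has received a
    notification such as an incoming call or message."""
    received = any(e in ("incoming_call", "message") for e in events)
    return {"notification_received": received}
```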
  • When the user 101 is in a concentrated state, or when the degree of concentration is equal to or higher than the reference TH1, control processing CON2 (second control processing) is performed without performing control processing CON1 (first control processing). This makes it easier for the user 101 to maintain concentration while concentrating.
  • control processing CON1 (first control processing):
  • When the communication device 33 receives a notification such as an incoming call (a notification regarding the second acoustic signal), the control process CON1 includes a process for changing the acoustic signal AC1 (first acoustic signal) so that the user 101 can easily hear the acoustic signal AC2 (second acoustic signal).
  • An example of this process is a process of attenuating the amplitude of the acoustic signal AC1 (first acoustic signal). This makes it easier for the user 101 to notice notifications on the communication device 33.
  • On the other hand, when the communication device 33 has not received a notification such as an incoming call, the process of changing the acoustic signal AC1 (first acoustic signal) is not executed. This prevents the acoustic signal AC1 from being changed, depending on the concentration state of the user 101, even though there is no incoming call or the like on the communication device 33.
  • The control unit 315 may further instruct the user notification unit 217 to output notification information N1, and the user notification unit 217 may perform processing for presenting the notification information N1 to the user 101. A specific example of this is as described in the second embodiment.
  • control processing CON2 (second control processing):
  • In the control process CON2, a process for changing the acoustic signal AC1 so that the user 101 can easily hear the acoustic signal AC2 is not executed.
  • the control process CON2 does not automatically attenuate the amplitude of the acoustic signal AC1 or change the phase or waveform of the acoustic signal AC1. Thereby, even if the communication device 33 receives a call while the user 101 is listening to the acoustic signal AC1 at a high volume in order to maintain a concentrated state, the user 101 can maintain a concentrated state.
  • the communication device 33 may present notification information N2 (second notification information) to the communication partner.
  • In this case, the control unit 315 further instructs the communication notification unit 318 to send notification information N2 to the communication partner, and the communication notification unit 318 sends this notification information N2 to the communication device 33, which transmits it to the communication partner.
  • The notification information N2 may be auditory (for example, a notification sound or notification voice), visual (for example, a notification message), tactile (for example, vibration), or a combination of at least some of these. This allows the communication partner to know that the user 101 is in a concentrated state, so that the user 101 can maintain the concentrated state without being disturbed by others.
  • Other processing of the control unit 315 is the same as that of the control unit 115.
  • As described above, in the present embodiment, the acoustic signal AC1 is controlled based on the concentration state or degree of concentration of the user 101 listening to the acoustic signal AC1, and on the notification regarding the acoustic signal AC2 different from the acoustic signal AC1. This makes it easier for the user 101 to communicate with others when not concentrating, and makes it easier to maintain concentration when concentrating.
  • The control devices 11, 21, and 31 in each embodiment are each a device configured by, for example, a general-purpose or dedicated computer that includes a processor (hardware processor) such as a CPU (central processing unit) and memory such as RAM (random-access memory) and ROM (read-only memory), and that executes a predetermined program. That is, the control devices 11, 21, and 31 in each embodiment have, for example, processing circuitry configured to implement each section of each control device.
  • This computer may include one processor and memory, or may include multiple processors and memories.
  • This program may be installed on the computer or may be pre-recorded in a ROM or the like.
  • Some or all of the processing units may be configured using an electronic circuit that independently realizes a processing function, rather than an electronic circuit, such as a CPU, that realizes a functional configuration by reading a program.
  • an electronic circuit constituting one device may include a plurality of CPUs.
  • FIG. 4 is a block diagram illustrating the hardware configuration of the control devices 11, 21, and 31 in each embodiment.
  • The control devices 11, 21, and 31 in this example include a CPU (Central Processing Unit) 10a, an input section 10b, an output section 10c, a RAM (Random Access Memory) 10d, a ROM (Read Only Memory) 10e, an auxiliary storage device 10f, a communication section 10h, and a bus 10g.
  • the CPU 10a in this example has a control section 10aa, a calculation section 10ab, and a register 10ac, and executes various calculation processes according to various programs read into the register 10ac.
  • the auxiliary storage device 10f is, for example, a hard disk, an MO (Magneto-Optical disc), a semiconductor memory, etc., and has a program area 10fa where a predetermined program is stored and a data area 10fb where various data are stored.
  • the bus 10g connects the CPU 10a, the input section 10b, the output section 10c, the RAM 10d, the ROM 10e, the communication section 10h, and the auxiliary storage device 10f so that information can be exchanged.
  • the CPU 10a writes the program stored in the program area 10fa of the auxiliary storage device 10f to the program area 10da of the RAM 10d according to the read OS (Operating System) program.
  • the CPU 10a writes various data stored in the data area 10fb of the auxiliary storage device 10f to the data area 10db of the RAM 10d. Then, the address on the RAM 10d where this program and data are written is stored in the register 10ac of the CPU 10a.
  • the control unit 10aa of the CPU 10a sequentially reads these addresses stored in the register 10ac, reads programs and data from the area on the RAM 10d indicated by the read addresses, and causes the calculation unit 10ab to sequentially execute the calculations indicated by the programs.
  • the calculation results are stored in the register 10ac. With such a configuration, the functional configuration of the control devices 11, 21, and 31 is realized.
  • the above program can be recorded on a computer readable recording medium.
  • a computer readable storage medium is a non-transitory storage medium. Examples of such recording media are magnetic recording devices, optical disks, magneto-optical recording media, semiconductor memories, and the like.
  • The computer may read the program directly from a portable recording medium and execute processing according to the program; alternatively, the program may be transferred to this computer from a server computer, and the processing may be executed in accordance with the received program each time a transfer occurs.
  • The above-mentioned processing may also be executed by a so-called ASP (Application Service Provider) service, in which the program is not transferred from the server computer to this computer, and the processing functions are realized only by issuing execution instructions and obtaining results. Note that the program in this embodiment includes information that is used for processing by an electronic computer and that is equivalent to a program (such as data that is not a direct command to the computer but has a property that defines the processing of the computer).

Abstract

According to the present invention, when a user listening to a first acoustic signal is not in a state of concentration or when the concentration level of the user is lower than a first criterion, first control processing is performed in response to a second acoustic signal that is different from the first acoustic signal or a notification regarding the second acoustic signal, the first control processing changing the first acoustic signal to make it easier for the user to listen to the second acoustic signal, whereas when the user is in the state of concentration or when the concentration level is at or above the first criterion, second control processing is performed without performing the first control processing.

Description

Control device, control method, and program
 The present invention relates to a technique for controlling the reproduction of acoustic signals.
 Acoustic signal output devices that do not completely block the ear canal, such as open-ear earphones and headphones, are known. A user wearing such an acoustic signal output device can listen to a desired reproduced sound, such as music, while still being able to hear surrounding sounds.
 However, even when a user is using such an acoustic signal output device, if the volume of the reproduced sound is high, the user may fail to notice calls from people nearby or notifications from a terminal device such as a smartphone, which may hinder communication with others. On the other hand, a user may deliberately listen to the reproduced sound at a high volume so that his or her concentration is not interrupted.
 In response to this, Non-Patent Document 1 discloses a technique that automatically pauses or mutes the reproduced sound when a user listening to the reproduced sound through headphones speaks. Patent Document 1 discloses a technique that estimates a user's behavior based on detection results from sensors and the like, and controls the maximum permissible volume of the reproduced sound based on the estimation result.
JP 2021-052262 A
 However, with the technique of Non-Patent Document 1, the reproduced sound is not controlled unless the user speaks, and if the user does not notice a call, a ringing sound, a notification sound, or the like, communication with others may be hindered. Furthermore, with the technique of Patent Document 1, the maximum permissible volume of the reproduced sound is controlled even when the user is deliberately listening to the reproduced sound at a high volume so as not to break his or her concentration.
 Such problems are not limited to cases where the user listens to reproduced sound through an acoustic signal output device that does not block the ear canal; they are common to any environment in which a user listening to a first acoustic signal can also hear a second acoustic signal different from the first acoustic signal.
 In view of the above, the present invention provides a technique that, in an environment where a user listening to a first acoustic signal can also hear a second acoustic signal different from the first acoustic signal, makes it easier for the user to communicate with others when the user is not concentrating, and makes it easier to maintain the user's concentration state when the user is concentrating.
 When the user listening to the first acoustic signal is not in a concentration state, or when the user's degree of concentration is lower than a first criterion, first control processing is performed in response to a second acoustic signal different from the first acoustic signal, or to a notification regarding the second acoustic signal, the first control processing changing the first acoustic signal so that the user can hear the second acoustic signal more easily. When the user is in the concentration state, or when the degree of concentration is at or above the first criterion, second control processing that does not include the first control processing is performed.
 As a result, in an environment where a user listening to the first acoustic signal can also hear a second acoustic signal different from the first acoustic signal, it becomes easier for the user to communicate with others when not concentrating, and easier to maintain the concentration state when concentrating.
FIG. 1 is a diagram illustrating the configuration of an acoustic signal reproduction system according to an embodiment. FIG. 2 is a flowchart illustrating a control method according to an embodiment. FIG. 3 is a diagram illustrating the configuration of an acoustic signal reproduction system according to an embodiment. FIG. 4 is a block diagram illustrating a hardware configuration of a control device according to an embodiment.
 Embodiments of the present invention will be described below with reference to the drawings.
 [First embodiment]
 <Configuration>
 As illustrated in FIG. 1, the acoustic signal reproduction system 1 of the first embodiment includes a control device 11, a user sensor 12, an acoustic signal sensor 13, and an acoustic signal output device 14.
 The control device 11 includes an input unit 111, a playback unit 112, a storage unit 113, a concentration state estimation unit 114, a control unit 115, and an environment estimation unit 116.
 The user sensor 12 is a sensor that detects the state of the user 101. The user sensor 12 includes, for example, at least one of a biosignal sensor that detects biosignals of the user 101, an acceleration information sensor that detects the posture, motion, orientation, and the like of the user 101, and a position sensor that detects the position of the user 101. Examples of the biosignal sensor include a sensor that detects the pulse or heart rate of the user 101, a sensor that detects brain waves, and a sensor that detects eye movement. Examples of the acceleration information sensor include an acceleration sensor, an angular velocity sensor, a geomagnetic sensor, and a 9-axis sensor. Examples of the position sensor include a geomagnetic sensor, a camera, a capacitance sensor, an ultrasonic sensor, and a potentiometer.
 The acoustic signal sensor 13 is a microphone, a volume sensor, or the like that detects the surrounding acoustic signal AC2 (second acoustic signal).
 The acoustic signal output device 14 is, for example, an earphone, a headphone, a neck speaker, a bone conduction speaker, or another speaker that outputs the acoustic signal AC1 (first acoustic signal). The acoustic signal output device 14 may be of a type that does not completely block the ear canal of the user 101, or of a type that completely blocks the ear canal of the user 101.
 <Pre-processing>
 In the pre-processing, "concentration state estimation information" for estimating the concentration state of the user 101 from "input information" is stored in the storage unit 113. The "input information" may be, for example, "detection information" detected by the user sensor 12 or a function value thereof, the "detection information" together with "other information" regarding the concentration state of the user 101 or function values thereof, or the "other information" or a function value thereof. The "other information" is, for example, information representing the content of a task of the user 101, information representing the task duration of the user 101, information representing the time at which the user 101 performed a task, or information representing an intention of the user 101 (an intention such as "I want to concentrate," "I am available for communication," "I want to turn notifications off," or "I want to turn notifications on"). The "concentration state estimation information" may be, for example, information for obtaining, from the "input information," "information representing whether the user 101 is in a concentration state," or information for obtaining, from the "input information," "information representing the degree of concentration" of the user 101. Note that the higher the degree of concentration, the more concentrated the state, and the lower the degree of concentration, the less concentrated the state. The "concentration state estimation information" may be, for example, a table in which "input information" is associated with "information representing whether the user is in a concentration state," a table in which "input information" is associated with "information representing the degree of concentration," or a threshold for the "input information" for determining "whether the user is in a concentration state." These tables and thresholds are determined in advance based on, for example, past "detection information," past task logs, durations of past tasks, times at which past tasks were performed, and the like. Alternatively, the "concentration state estimation information" may be, for example, an estimation model that outputs "information representing whether the user is in a concentration state" for the "input information," or an estimation model that outputs "information representing the degree of concentration" for the "input information." Examples of the estimation model include models based on deep learning, hidden Markov models, and SVMs (Support Vector Machines). These models are obtained, for example, by machine learning. Examples of the training data include supervised learning data in which "learning input information" (past detection information, past task logs, durations of past tasks, times at which past tasks were performed, and the like) is associated with labels representing "whether the user is in a concentration state," the "degree of concentration," and the like. The method of estimating the concentration state is not limited, and a known estimation method such as the technique disclosed in JP 2014-158600 A may be used.
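As an illustration, the threshold and table forms of the "concentration state estimation information" described above can be sketched as follows. This is a minimal sketch, not the patented implementation: the feature names, threshold value, and table entries are hypothetical, and a deployed system would derive them from past detection information and task logs, or replace this logic with a trained estimation model.

```python
# Sketch of the threshold form: a fixed threshold on a scalar "input
# information" value (here, a hypothetical concentration feature in [0, 1])
# decides whether the user is judged to be in a concentration state.

def estimate_concentration(input_value: float, threshold: float = 0.6) -> bool:
    """Return True if the user is estimated to be in a concentration state."""
    return input_value >= threshold

def concentration_degree(features: dict) -> float:
    """Sketch of the table form: map discretized input information to a
    degree of concentration in [0, 1]. The table entries are hypothetical."""
    table = {
        ("typing", "still"): 0.9,
        ("typing", "moving"): 0.5,
        ("idle", "still"): 0.3,
        ("idle", "moving"): 0.1,
    }
    key = (features.get("task", "idle"), features.get("motion", "still"))
    return table.get(key, 0.0)

print(estimate_concentration(0.72))  # above the threshold -> True
print(concentration_degree({"task": "typing", "motion": "still"}))  # 0.9
```

The same interface could be backed by an SVM or a deep model trained on the supervised learning data described above, without changing the caller.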
 <Processing>
 Under the control of the control unit 115, the playback unit 112 of the control device 11 (FIG. 1) outputs a playback signal representing the acoustic signal AC1 (first acoustic signal) to be heard by the user 101. The acoustic signal AC1 is, for example, music, speech, environmental sound, or other acoustic content. The playback signal is transmitted by wire or wirelessly to the acoustic signal output device 14, and the acoustic signal output device 14 outputs the acoustic signal AC1 based on the transmitted playback signal. The user 101 listens to the acoustic signal AC1 output from the acoustic signal output device 14.
 The user sensor 12 detects the state of the user 101 and sends the detected "detection information" to the concentration state estimation unit 114. When the user sensor 12 includes a biosignal sensor, the "detection information" includes information representing biosignals of the user 101. When the user sensor 12 includes an acceleration information sensor, the "detection information" includes information representing the posture, motion, orientation, and the like of the user 101, such as the acceleration and angular acceleration of the user 101. When the user sensor 12 includes a position sensor, the "detection information" includes information representing the position of the user 101. When "other information" regarding the concentration state of the user 101 is input from the input unit 111, the "other information" is sent to the concentration state estimation unit 114.
 The concentration state estimation unit 114 uses the "input information" of the user 101, which includes at least one of the "detection information" and the "other information," and the "concentration state estimation information" extracted from the storage unit 113 to obtain and output "information representing whether the user 101 is in a concentration state" or "information representing the degree of concentration" of the user 101. For example, when the "concentration state estimation information" is a table in which "input information" is associated with "information representing whether the user is in a concentration state" or "information representing the degree of concentration," the concentration state estimation unit 114 obtains and outputs the "information representing whether the user is in a concentration state" or the "information representing the degree of concentration" corresponding to the "input information" of the user 101. For example, when the "concentration state estimation information" is a threshold for the "input information" for determining "whether the user is in a concentration state," the concentration state estimation unit 114 performs threshold determination on the "input information" of the user 101 and obtains and outputs "information representing whether the user 101 is in a concentration state." For example, when the "concentration state estimation information" is an estimation model that outputs "information representing whether the user is in a concentration state" or "information representing the degree of concentration" for the "input information," the concentration state estimation unit 114 uses this estimation model to obtain and output the "information representing whether the user is in a concentration state" or the "information representing the degree of concentration" corresponding to the "input information." When the "other information" includes information representing an intention of the user 101, the concentration state estimation unit 114 may give priority to the intention of the user 101 in obtaining and outputting the "information representing whether the user is in a concentration state." For example, when the "other information" represents an intention such as "I want to concentrate" or "I want to turn notifications off," the concentration state estimation unit 114 may output "information representing that the user is in a concentration state" as the "information representing whether the user is in a concentration state." For example, when the "other information" represents an intention such as "I am available for communication" or "I want to turn notifications on," the concentration state estimation unit 114 may output "information representing that the user is not in a concentration state" as the "information representing whether the user is in a concentration state." The information representing the intention of the user 101 included in the "other information" may be stored in the storage unit 113, and the concentration state estimation unit 114 may obtain and output the "information representing whether the user is in a concentration state" as described above, based on the stored information representing the intention, until the intention of the user 101 is updated. The "information representing whether the user is in a concentration state" or the "information representing the degree of concentration" is sent to the control unit 115.
 The acoustic signal sensor 13 detects the surrounding acoustic signal AC2 (second acoustic signal) and sends information representing the acoustic signal AC2 to the environment estimation unit 116. The environment estimation unit 116 uses the input information representing the acoustic signal AC2 to send "acoustic detection information" representing the detection result of the surrounding acoustic signal to the control unit 115. The acoustic detection information is, for example, information representing whether a user 102 in the vicinity has spoken, information representing the loudness of surrounding sounds, and the like.
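The loudness-based portion of this "acoustic detection information" can be sketched as follows. The frame representation (a list of normalized samples) and the loudness threshold are hypothetical choices; the speech-detection portion would require a separate voice activity detector and is omitted.

```python
# Sketch of how the environment estimation unit 116 might turn one frame of
# raw samples of the surrounding acoustic signal AC2 into acoustic detection
# information: whether the frame exceeds a loudness threshold, plus its RMS.
import math

def acoustic_detection_info(samples, loud_threshold=0.1):
    """Return (detected, rms) for one frame of AC2 samples in [-1, 1]."""
    if not samples:
        return (False, 0.0)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return (rms >= loud_threshold, rms)

quiet = [0.01, -0.02, 0.015, -0.01]   # background noise
call = [0.3, -0.25, 0.28, -0.31]      # someone speaking nearby
print(acoustic_detection_info(quiet)[0])  # False
print(acoustic_detection_info(call)[0])   # True
```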
 The "information representing whether the user is in a concentration state" or the "information representing the degree of concentration" sent from the concentration state estimation unit 114 and the "acoustic detection information" sent from the environment estimation unit 116 are input to the control unit 115 in real time. The control unit 115 uses the "information representing whether the user is in a concentration state" to determine whether the user 101 listening to the acoustic signal AC1 is in a concentration state, or uses the "information representing the degree of concentration" to determine whether the degree of concentration of the user 101 is at or above a criterion TH1 (first criterion). When the user 101 is not in a concentration state, or when the degree of concentration of the user 101 is lower than the criterion TH1 (first criterion), the control unit 115 performs control processing CON1 (first control processing), which changes the acoustic signal AC1 so that the user 101 can hear the acoustic signal AC2 more easily, in response to the surrounding acoustic signal AC2 (a second acoustic signal different from the first acoustic signal) represented by the "acoustic detection information." This processing is performed automatically. As a result, when the user 101 is not concentrating, the user 101 can more easily notice a call from the user 102 or the like, and can more easily communicate with others. On the other hand, when the user 101 is in a concentration state, or when the degree of concentration is at or above the criterion TH1, the control unit 115 performs control processing CON2 (second control processing). The control processing CON2 (second control processing) is processing that does not include the control processing CON1 (first control processing). This makes it easier for the user 101 to maintain the concentration state while concentrating. Specific examples of the control processing CON1 (first control processing) and the control processing CON2 (second control processing) are given below.
 Specific example of control processing CON1 (first control processing):
 The control processing CON1 (first control processing) includes, for example, processing that changes the acoustic signal AC1 (first acoustic signal) so that the user 101 can hear the acoustic signal AC2 (second acoustic signal) more easily when the amplitude of the acoustic signal AC2 (second acoustic signal) is at or above a criterion TH2 (second criterion) or when the acoustic signal AC2 (second acoustic signal) is detected. An example of this processing is processing that attenuates the amplitude of the acoustic signal AC1 (first acoustic signal). Alternatively, the phase or waveform of the acoustic signal AC1 (first acoustic signal) may be changed so that the user 101 can hear the acoustic signal AC2 (second acoustic signal) more easily. This makes it easier for the user 101 to notice a call from the user 102 or the like. On the other hand, when the amplitude of the acoustic signal AC2 (second acoustic signal) is below the criterion TH2 (second criterion), or when the acoustic signal AC2 (second acoustic signal) is not detected, the processing that changes the acoustic signal AC1 (first acoustic signal) in this way (the processing that changes the first acoustic signal so that the user can hear the second acoustic signal more easily) is not executed. This prevents the acoustic signal AC1 from changing in response to the concentration state of the user 101 even though there is no call or the like from the user 102.
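The amplitude-attenuation example of CON1 can be sketched as follows. This is a minimal sketch under stated assumptions: the numeric value of criterion TH2 and the attenuation gain of 0.2 are hypothetical, and real processing would operate on streaming audio buffers rather than plain lists.

```python
# Sketch of control processing CON1: when the amplitude of the surrounding
# signal AC2 is at or above criterion TH2, attenuate AC1 so the user can
# hear AC2 more easily; otherwise leave AC1 unchanged.

TH2 = 0.1  # hypothetical amplitude criterion for AC2

def control_con1(ac1_samples, ac2_amplitude, gain=0.2):
    """Return the (possibly attenuated) AC1 samples for one frame."""
    if ac2_amplitude >= TH2:              # a call is judged to be present
        return [s * gain for s in ac1_samples]
    return list(ac1_samples)              # no call: AC1 is not changed

print(control_con1([1.0, -0.5], ac2_amplitude=0.3))   # attenuated
print(control_con1([1.0, -0.5], ac2_amplitude=0.05))  # unchanged
```

Changing the phase or waveform of AC1 instead of its amplitude would fit behind the same function boundary.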
 Specific example of control processing CON2 (second control processing):
 In the control processing CON2 (second control processing), processing that changes the acoustic signal AC1 so that the user 101 can hear the acoustic signal AC2 more easily is not executed. For example, the control processing CON2 does not automatically attenuate the amplitude of the acoustic signal AC1 or change the phase or waveform of the acoustic signal AC1. For example, the control processing CON2 may be control that does nothing. As a result, when the user 101 is listening to the acoustic signal AC1 at a high volume in order to maintain a concentration state, the concentration state of the user 101 can be maintained even if there is a call from the surroundings or the surroundings are noisy.
 Furthermore, the control processing CON2 may include processing that changes the acoustic signal AC1 (first acoustic signal) so that the user 101 hears the acoustic signal AC2 (second acoustic signal) less easily. For example, the magnitude of each frequency component of the acoustic signal AC1 may be changed, the phase of the acoustic signal AC1 may be changed, or the acoustic signal AC1 may otherwise be changed so as to mask the acoustic signal AC2. For example, an acoustic signal obtained by adding, to the original acoustic signal AC1, an acoustic signal in antiphase to the acoustic signal AC2, or an acoustic signal approximating that antiphase signal, may be used as the new acoustic signal AC1. This allows the user 101 to maintain the concentration state even if there is a call from the surroundings or the surroundings are noisy.
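The antiphase-mixing variant of CON2 can be sketched as follows. This sketch assumes the AC2 samples are already time-aligned with AC1 at the user's ear; a real implementation would need the delay estimation and adaptive filtering of an active-noise-control pipeline, which is omitted here.

```python
# Sketch of the masking variant of control processing CON2: the antiphase of
# the surrounding signal AC2 is added to AC1 so that AC2 becomes harder to
# hear, producing a new AC1.

def control_con2_mask(ac1_samples, ac2_samples):
    """Return a new AC1 frame with the antiphase of AC2 mixed in."""
    return [a1 + (-a2) for a1, a2 in zip(ac1_samples, ac2_samples)]

ac1 = [0.5, 0.25, -0.25]   # reproduced content
ac2 = [0.25, -0.5, 0.25]   # surrounding signal to be masked
print(control_con2_mask(ac1, ac2))  # [0.25, 0.75, -0.5]
```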
 Specific example of control processing by the control unit 115:
 A specific example of the control processing performed by the control unit 115 is described with reference to FIG. 2.
 The control unit 115 receives the "information representing whether the user is in a concentration state" or the "information representing the degree of concentration" sent from the concentration state estimation unit 114, and the "acoustic detection information" sent from the environment estimation unit 116 (step S1). The control unit 115 uses the acoustic detection information to determine whether there has been a call. For example, the control unit 115 determines that there has been a call when the amplitude of the acoustic signal AC2 is at or above the criterion TH2 or when the acoustic signal AC2 is detected, and determines that there has been no call otherwise (step S2). If it is determined that there has been no call, the processing returns to step S1. On the other hand, if it is determined that there has been a call, the control unit 115 uses the "information representing whether the user is in a concentration state" or the "information representing the degree of concentration" to determine whether the user 101 is in a concentration state, or to judge the degree of concentration of the user 101 (step S3). When the user 101 is not in a concentration state, or when the degree of concentration of the user 101 is lower than the criterion TH1 (first criterion), the control unit 115 performs the control processing CON1 (first control processing) (step S4) and then returns the processing to step S1. On the other hand, when the user 101 is in a concentration state, or when the degree of concentration is at or above the criterion TH1, the control unit 115 performs the control processing CON2 (second control processing) (step S5) and then returns the processing to step S1.
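One iteration of this S1-S5 loop can be sketched as a single decision function. The numeric values of TH1 and TH2 are hypothetical; only the branch structure mirrors the flow of FIG. 2.

```python
# Sketch of one iteration of the control loop in FIG. 2: S1 delivers the
# degree of concentration and the AC2 amplitude; S2 checks for a call;
# S3 branches on concentration; S4 applies CON1, S5 applies CON2.

TH1 = 0.7   # first criterion: degree of concentration (hypothetical)
TH2 = 0.1   # second criterion: amplitude of AC2 (hypothetical)

def control_step(degree_of_concentration, ac2_amplitude):
    """Return which control processing is selected for this iteration."""
    if ac2_amplitude < TH2:               # S2: no call detected
        return "none"                     # return to S1
    if degree_of_concentration < TH1:     # S3: user is not concentrating
        return "CON1"                     # S4: make AC2 easier to hear
    return "CON2"                         # S5: protect the concentration state

print(control_step(0.3, 0.5))  # CON1
print(control_step(0.9, 0.5))  # CON2
print(control_step(0.9, 0.0))  # none
```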
<Features of this embodiment>
As described above, in this embodiment, the acoustic signal AC1 is controlled based on the concentration state or degree of concentration of the user 101 listening to the acoustic signal AC1, and on the acoustic signal AC2, which differs from the acoustic signal AC1. As a result, the user 101 can more easily communicate with others when not concentrating, and can more easily maintain the concentrated state when concentrating.
[Second embodiment]
The second embodiment is a modification of the first embodiment. Based on the concentration state or degree of concentration of the user 101 listening to the acoustic signal AC1, and on the acoustic signal AC2 differing from the acoustic signal AC1, it additionally issues notifications to the user 101 and to other users 102. The following description focuses on the differences from the first embodiment; items already described are referred to by the same reference numbers, and their explanation is simplified.
<Configuration>
As illustrated in FIG. 1, the acoustic signal reproduction system 2 of the second embodiment includes a control device 21, a user sensor 12, an acoustic signal sensor 13, and an acoustic signal output device 14. The control device 21 includes an input unit 111, a playback unit 112, a storage unit 113, a concentration state estimation unit 114, a control unit 115, an environment estimation unit 116, a user notification unit 217, and a surrounding notification unit 218.
<Pre-processing>
This is the same as the first embodiment.
<Processing>
The differences from the first embodiment are the control process CON1 (first control process) and the control process CON2 (second control process). The control process CON1 of the second embodiment includes a process for presenting notification information N1 (first notification information) to the user 101 when the amplitude of the acoustic signal AC2 (second acoustic signal) is equal to or greater than the reference TH2 (second reference), or when the acoustic signal AC2 (second acoustic signal) is detected. In this control process CON1, the control unit 115 additionally instructs the user notification unit 217 to output the notification information N1, and the user notification unit 217 performs a process for presenting this notification information N1 to the user 101. This is, for example, a process for outputting the notification information N1 to the user 101 from the acoustic signal output device 14, the control device 11, or another device (for example, a smartphone). The notification information N1 may be auditory (for example, a notification sound or notification voice), visual (for example, LED light emission, an image display, a change in lighting, or a notification message), tactile (for example, a vibration), or a combination of at least some of these. By presenting the notification information N1 to the user 101, the user 101 can notice that another person has called out and can communicate with that person more smoothly.
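As a rough sketch of how the notification step in CON1 might fan out over the listed modalities (the device dictionaries, device names, and modality names below are illustrative assumptions, not part of the disclosure):

```python
def present_notification_n1(devices, modalities=("audio", "visual", "tactile")):
    """Deliver notification information N1 over every modality each
    output device supports (illustrative sketch)."""
    delivered = []
    for device in devices:
        for modality in modalities:
            if modality in device["supports"]:
                # A real device would play a sound, flash an LED,
                # vibrate, etc.; here we only record the decision.
                delivered.append((device["name"], modality))
    return delivered


headphone = {"name": "acoustic_output_14", "supports": {"audio"}}
smartphone = {"name": "smartphone", "supports": {"visual", "tactile"}}
print(present_notification_n1([headphone, smartphone]))
```

This mirrors the passage above: N1 may be presented through one modality or a combination, depending on what each device can do.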
Furthermore, the control process CON2 of the second embodiment includes a process for presenting notification information N2 (second notification information) to a person other than the user 101 (for example, the user 102). In this control process CON2, the control unit 115 additionally instructs the surrounding notification unit 218 to output the notification information N2, and the surrounding notification unit 218 performs a process for presenting this notification information N2 to a person other than the user 101. This is, for example, a process for outputting the notification information N2 from the acoustic signal output device 14, the control device 11, or another device (for example, a smartphone) to a person other than the user 101. The notification information N2 may be auditory (for example, a notification sound or notification voice), visual (for example, LED light emission, an image display, a change in lighting, or a notification message), tactile (for example, a vibration), or a combination of at least some of these. This allows others, such as the user 102, to be informed that the user 101 is in a concentrated state, so the user 101 can maintain that state without being disturbed.
Alternatively, the control process CON1 may include the process for presenting the notification information N1 described above while the control process CON2 does not include the process for presenting the notification information N2 described above. Conversely, the control process CON1 may omit the process for presenting the notification information N1 while the control process CON2 includes the process for presenting the notification information N2.
<Features of this embodiment>
As described above, presenting the notification information N1 to the user 101 when the user 101 is not concentrating makes communication with others even easier. Likewise, presenting the notification information N2 to persons other than the user 101 while the user 101 is concentrating makes it even easier to maintain the user 101's concentrated state.
[Third embodiment]
In the first and second embodiments, a situation is assumed in which the user 101 communicates with users 102 nearby. However, a situation in which the user 101 communicates with others via a communication device such as a smartphone can also be assumed. This embodiment addresses such a situation.
<Configuration>
As illustrated in FIG. 3, the acoustic signal reproduction system 3 of the third embodiment includes a control device 31, a user sensor 12, a communication device 33, and an acoustic signal output device 14. The control device 31 includes an input unit 111, a playback unit 112, a storage unit 113, a concentration state estimation unit 114, a control unit 315, and a notification determination unit 316. The control device 31 may further include a user notification unit 217 and a communication notification unit 318.
<Pre-processing>
This is the same as the first embodiment.
<Processing>
The difference from the first and second embodiments is that, instead of the acoustic signal sensor 13, a communication device 33 such as a smartphone sends a notification of an incoming call or the like (a notification regarding the second acoustic signal) to the notification determination unit 316, and the notification determination unit 316, based on the received notification, sends "notification detection information" indicating whether the communication device 33 has received a notification to the control unit 315.
As described above, the "information indicating whether the user is in a concentrated state" or the "information indicating the degree of concentration" sent from the concentration state estimation unit 114, and the "notification detection information" sent from the notification determination unit 316, are input to the control unit 315 in real time. The control unit 315 uses the "information indicating whether the user is in a concentrated state" to determine whether the user 101 listening to the acoustic signal AC1 is in a concentrated state, or uses the "information indicating the degree of concentration" to determine whether the degree of concentration of the user 101 is equal to or higher than the reference TH1 (first reference). If the user 101 is not in a concentrated state, or if the degree of concentration of the user 101 is lower than the reference TH1 (first reference), the control unit 315 performs, in response to the notification represented by the "notification detection information" (the notification regarding the second acoustic signal), the control process CON1 (first control process), which changes the acoustic signal AC1 so that the user 101 can more easily hear the acoustic signal AC2 (second acoustic signal) output from the communication device 33. This process is performed automatically. As a result, when the user 101 is not concentrating, it becomes easier to notice notifications from the communication device 33 and to communicate with others via the communication device 33. Conversely, when the user 101 is in a concentrated state, or when the degree of concentration is equal to or higher than the reference TH1, the control unit 315 performs the control process CON2 (second control process), in which the control process CON1 (first control process) is not performed. This makes it easier for the user 101 to maintain the concentrated state while concentrating. Specific examples of the control process CON1 (first control process) and the control process CON2 (second control process) are given below.
Specific example of control processing CON1 (first control processing):
The control process CON1 (first control process) includes, for example, a process of changing the acoustic signal AC1 (first acoustic signal) so that the user 101 can more easily hear the acoustic signal AC2 (second acoustic signal) output from the communication device 33 when the communication device 33 receives a notification of an incoming call or the like (a notification regarding the second acoustic signal). An example of this process is attenuating the amplitude of the acoustic signal AC1 (first acoustic signal). This makes it easier for the user 101 to notice the notification on the communication device 33. Conversely, when there is no notification of an incoming call or the like on the communication device 33 (no notification regarding the second acoustic signal), the process of changing the acoustic signal AC1 (first acoustic signal) so that the user 101 can more easily hear the acoustic signal AC2 (second acoustic signal) is not executed. This prevents the acoustic signal AC1 from changing in response to the concentration state of the user 101 when the communication device 33 has not received any incoming call or similar notification.
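The attenuation example can be sketched as a gain ramp applied to the AC1 samples; the gain value and ramp length below are illustrative assumptions, not values from the disclosure:

```python
def attenuate_ac1(samples, gain=0.2, ramp_len=100):
    """Attenuate the playback signal AC1 so that a second signal (AC2)
    becomes easier to hear. A short linear ramp from full level down to
    `gain` avoids an audible click (illustrative sketch)."""
    out = []
    for i, s in enumerate(samples):
        if i < ramp_len:
            g = 1.0 + (gain - 1.0) * (i / ramp_len)  # ramp 1.0 -> gain
        else:
            g = gain
        out.append(s * g)
    return out


quiet = attenuate_ac1([1.0] * 200, gain=0.2, ramp_len=100)
print(quiet[0], quiet[150])  # full level at the start, 0.2 after the ramp
```

In a real system the same idea would run per audio buffer, and CON1 would simply stop applying the gain when no notification is present.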
In this control process CON1, the control unit 315 may additionally instruct the user notification unit 217 to output the notification information N1, and the user notification unit 217 may perform a process for presenting this notification information N1 to the user 101. A specific example of this is as described in the second embodiment.
Specific example of control processing CON2 (second control processing):
In the control process CON2 (second control process), a process of changing the acoustic signal AC1 so that the user 101 can more easily hear the acoustic signal AC2 is not executed. For example, the control process CON2 does not automatically attenuate the amplitude of the acoustic signal AC1 or change its phase or waveform. As a result, even if the communication device 33 receives an incoming call while the user 101 is listening to the acoustic signal AC1 at a high volume in order to stay focused, the user 101 can maintain the concentrated state.
In this control process CON2, the communication device 33 may present notification information N2 (second notification information) to the communication partner. In this case, the control unit 315 additionally instructs the communication notification unit 318 to transmit the notification information N2 to the communication partner, and the communication notification unit 318 sends this notification information N2 to the communication device 33 with an instruction to transmit it to the communication partner. The notification information N2 may be auditory (for example, a notification sound or notification voice), visual (for example, a notification message), tactile (for example, a vibration), or a combination of at least some of these. This allows the communication partner to be informed that the user 101 is in a concentrated state, so the user 101 can maintain that state without being disturbed.
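The CON2-side notification to the communication partner can be sketched as an automatic reply handed to the communication device; `send_message`, the payload keys, and the message text are illustrative assumptions, not part of the disclosure:

```python
def notify_communication_partner(send_message, partner_id,
                                 message="The user is currently concentrating."):
    """Ask the communication device to deliver notification information N2
    (here a text message) to the calling party (illustrative sketch)."""
    payload = {"to": partner_id, "type": "N2", "text": message}
    send_message(payload)  # the device's transport is an assumed callback
    return payload


outbox = []  # stand-in for the communication device's send queue
notify_communication_partner(outbox.append, partner_id="caller-01")
print(outbox[0]["text"])
```

Passing the transport as a callback keeps the sketch independent of any particular messaging API.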
The other processing of the control unit 315 is the same as that of the control unit 115.
<Features of this embodiment>
As described above, in this embodiment, the acoustic signal AC1 is controlled based on the concentration state or degree of concentration of the user 101 listening to the acoustic signal AC1, and on a notification regarding the acoustic signal AC2, which differs from the acoustic signal AC1. As a result, the user 101 can more easily communicate with others when not concentrating, and can more easily maintain the concentrated state when concentrating.
[Hardware configuration]
The control devices 11, 21, and 31 in the embodiments are each configured by a general-purpose or dedicated computer, equipped with, for example, a processor (hardware processor) such as a CPU (central processing unit) and memories such as a RAM (random-access memory) and a ROM (read-only memory), executing a predetermined program. That is, the control devices 11, 21, and 31 each have processing circuitry configured to implement their respective units. The computer may include a single processor and memory, or multiple processors and memories. The program may be installed on the computer or may be pre-recorded in a ROM or the like. In addition, some or all of the processing units may be configured using electronic circuitry that realizes the processing functions on its own, rather than circuitry, such as a CPU, that realizes a functional configuration by reading a program. Electronic circuitry constituting a single device may also include multiple CPUs.
FIG. 4 is a block diagram illustrating the hardware configuration of the control devices 11, 21, and 31 in the embodiments. As illustrated in FIG. 4, the control devices 11, 21, and 31 in this example include a CPU (Central Processing Unit) 10a, an input unit 10b, an output unit 10c, a RAM (Random Access Memory) 10d, a ROM (Read Only Memory) 10e, an auxiliary storage device 10f, a communication unit 10h, and a bus 10g. The CPU 10a in this example has a control unit 10aa, an arithmetic unit 10ab, and a register 10ac, and executes various arithmetic processes according to programs read into the register 10ac. The input unit 10b is an input terminal into which data is input, a keyboard, a mouse, a touch panel, or the like. The output unit 10c is an output terminal to which data is output, a display, or the like. The communication unit 10h is, for example, a LAN card controlled by the CPU 10a into which a predetermined program has been loaded. The RAM 10d is an SRAM (Static Random Access Memory), a DRAM (Dynamic Random Access Memory), or the like, and has a program area 10da in which a predetermined program is stored and a data area 10db in which various data are stored. The auxiliary storage device 10f is, for example, a hard disk, an MO (Magneto-Optical disc), or a semiconductor memory, and has a program area 10fa in which a predetermined program is stored and a data area 10fb in which various data are stored. The bus 10g connects the CPU 10a, the input unit 10b, the output unit 10c, the RAM 10d, the ROM 10e, the communication unit 10h, and the auxiliary storage device 10f so that they can exchange information. In accordance with a loaded OS (Operating System) program, the CPU 10a writes the program stored in the program area 10fa of the auxiliary storage device 10f into the program area 10da of the RAM 10d. Similarly, the CPU 10a writes the various data stored in the data area 10fb of the auxiliary storage device 10f into the data area 10db of the RAM 10d. The addresses on the RAM 10d at which the program and data are written are stored in the register 10ac of the CPU 10a. The control unit 10aa of the CPU 10a sequentially reads these addresses from the register 10ac, reads the program and data from the areas on the RAM 10d indicated by the read addresses, causes the arithmetic unit 10ab to sequentially execute the operations indicated by the program, and stores the results in the register 10ac. With this configuration, the functional configurations of the control devices 11, 21, and 31 are realized.
The above program can be recorded on a computer-readable recording medium. An example of a computer-readable recording medium is a non-transitory recording medium. Examples of such recording media include magnetic recording devices, optical discs, magneto-optical recording media, and semiconductor memories.
The program is distributed, for example, by selling, transferring, or lending a portable recording medium such as a DVD or CD-ROM on which the program is recorded. The program may also be distributed by storing it in the storage device of a server computer and transferring it from the server computer to other computers via a network. As described above, a computer that executes such a program first stores, for example, the program recorded on the portable recording medium or transferred from the server computer in its own storage device. When executing a process, the computer reads the program stored in its own storage device and executes the process according to the read program. As other forms of execution, the computer may read the program directly from the portable recording medium and execute a process according to the program, or it may sequentially execute a process according to the received program each time the program is transferred to it from the server computer. The above processing may also be executed by a so-called ASP (Application Service Provider) service, which realizes the processing functions only through execution instructions and result acquisition, without transferring the program from the server computer to the computer. Note that the program in these embodiments includes information that is used for processing by an electronic computer and that is equivalent to a program (data that is not a direct command to the computer but has properties that define the processing of the computer, etc.).
In each embodiment, the present device is configured by executing a predetermined program on a computer, but at least part of the processing may be implemented in hardware.
[Other variations]
Note that the present invention is not limited to the embodiments described above. The various processes described above may be executed not only in chronological order as described but also in parallel or individually, depending on the processing capacity of the device executing the processes or as necessary. It goes without saying that other modifications can be made as appropriate without departing from the spirit of the present invention.
1, 2, 3 Acoustic signal reproduction system
11, 21, 31 Control device
115, 315 Control unit

Claims (8)

  1.  A control device comprising a control unit that performs, when a user listening to a first acoustic signal is not in a concentrated state or when a degree of concentration of the user is lower than a first reference, a first control process of changing the first acoustic signal, in response to a second acoustic signal different from the first acoustic signal or a notification regarding the second acoustic signal, so that the user can more easily hear the second acoustic signal, and performs, when the user is in a concentrated state or when the degree of concentration is equal to or higher than the first reference, a second control process in which the first control process is not performed.
  2.  The control device according to claim 1, wherein the first control process includes a process of changing the first acoustic signal so that the user can more easily hear the second acoustic signal when the magnitude of the amplitude of the second acoustic signal is equal to or greater than a second reference or when the second acoustic signal is detected.
  3.  The control device according to claim 1, wherein the first control process includes a process of attenuating the amplitude of the first acoustic signal when the magnitude of the amplitude of the second acoustic signal is equal to or greater than a second reference or when the second acoustic signal is detected.
  4.  The control device according to claim 1, wherein the first control process includes a process for presenting first notification information to the user when the magnitude of the amplitude of the second acoustic signal is equal to or greater than a second reference or when the second acoustic signal is detected.
  5.  The control device according to claim 1, wherein the second control process includes a process of changing the first acoustic signal so that it becomes more difficult for the user to hear the second acoustic signal.
  6.  The control device according to claim 1, wherein the second control process includes a process for presenting second notification information to a person other than the user.
  7.  A control method performed by a control device, comprising:
     a first step of performing, when a user listening to a first acoustic signal is not in a concentrated state or when a degree of concentration of the user is lower than a first reference, a first control process of changing the first acoustic signal, in response to a second acoustic signal different from the first acoustic signal or a notification regarding the second acoustic signal, so that the user can more easily hear the second acoustic signal; and
     a second step of performing, when the user is in a concentrated state or when the degree of concentration is equal to or higher than the first reference, a second control process in which the first control process is not performed.
  8.  A program for causing a computer to function as the control device according to claim 1.
PCT/JP2022/025578 2022-06-27 2022-06-27 Control device, control method, and program WO2024003988A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/025578 WO2024003988A1 (en) 2022-06-27 2022-06-27 Control device, control method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/025578 WO2024003988A1 (en) 2022-06-27 2022-06-27 Control device, control method, and program

Publications (1)

Publication Number Publication Date
WO2024003988A1 true WO2024003988A1 (en) 2024-01-04

Family

ID=89382195

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/025578 WO2024003988A1 (en) 2022-06-27 2022-06-27 Control device, control method, and program

Country Status (1)

Country Link
WO (1) WO2024003988A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010283873A (en) * 2007-01-04 2010-12-16 Bose Corp Microphone techniques
WO2017056604A1 * 2015-09-29 2017-04-06 Sony Corporation Information processing device, information processing method, and program
JP2019152861A * 2018-03-05 2019-09-12 Harman International Industries, Incorporated Controlling perceived ambient sounds based on focus level



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22949267

Country of ref document: EP

Kind code of ref document: A1