WO2024003988A1 - Control device, control method, and program - Google Patents

Control device, control method, and program

Info

Publication number
WO2024003988A1
WO2024003988A1 (PCT/JP2022/025578)
Authority
WO
WIPO (PCT)
Prior art keywords
acoustic signal
user
concentration
information
control
Prior art date
Application number
PCT/JP2022/025578
Other languages
English (en)
Japanese (ja)
Inventor
大将 千葉
弘章 伊藤
賢一 野口
達也 加古
Original Assignee
日本電信電話株式会社 (Nippon Telegraph and Telephone Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corporation (日本電信電話株式会社)
Priority to PCT/JP2022/025578 priority Critical patent/WO2024003988A1/fr
Publication of WO2024003988A1 publication Critical patent/WO2024003988A1/fr

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K: SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00: Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16: Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175: Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178: Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00: Details of transducers, loudspeakers or microphones
    • H04R1/10: Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00: Circuits for transducers, loudspeakers or microphones

Definitions

  • The present invention relates to technology for controlling the reproduction of acoustic signals.
  • There are acoustic signal output devices that do not completely block the ear canal, such as open-ear earphones and headphones.
  • A user wearing such an acoustic signal output device can listen to a favorite reproduced sound, such as music, while still being able to hear surrounding sounds.
  • Non-Patent Document 1 discloses a technology that automatically pauses or mutes the reproduced sound when a user who is listening to the reproduced sound with headphones makes a sound. Further, Patent Document 1 discloses a technique for estimating a user's behavior based on the detection result of a sensor or the like, and controlling the maximum permissible volume of reproduced sound based on the estimation result.
  • In the technique of Non-Patent Document 1, however, the reproduced sound is not controlled unless the user utters a voice; if the user does not notice a call, ringtone, notification sound, or the like, communication with others may be hindered. Furthermore, in the technique of Patent Document 1, the maximum permissible volume of the reproduced sound is controlled even when the user intentionally listens to the reproduced sound at a high volume in order not to break a state of concentration.
  • Such problems occur not only when the user listens to the reproduced sound through an acoustic signal output device that does not block the ear canal; they are common whenever a user listening to a first acoustic signal is in an environment where a second acoustic signal, different from the first acoustic signal, can also be heard.
  • In the present invention, when the user listening to the first acoustic signal is not in a concentrated state, or when the user's degree of concentration is lower than a first standard, a first control process is performed that changes the first acoustic signal, in accordance with a second acoustic signal different from the first acoustic signal or a notification regarding the second acoustic signal, so that the user can easily hear the second acoustic signal. When the user is in a concentrated state, or when the degree of concentration is equal to or higher than the first standard, a second control process is performed without performing the first control process.
  • FIG. 1 is a diagram illustrating the configuration of an audio signal reproduction system according to an embodiment.
  • FIG. 2 is a flow diagram illustrating the control method of the embodiment.
  • FIG. 3 is a diagram illustrating the configuration of the acoustic signal reproduction system according to the embodiment.
  • FIG. 4 is a block diagram illustrating the hardware configuration of the control device according to the embodiment.
  • the acoustic signal reproduction system 1 of the first embodiment includes a control device 11, a user sensor 12, an acoustic signal sensor 13, and an acoustic signal output device 14.
  • the control device 11 includes an input section 111 , a playback section 112 , a storage section 113 , a concentration state estimation section 114 , a control section 115 , and an environment estimation section 116 .
  • the user sensor 12 is a sensor that detects the state of the user 101.
  • the user sensor 12 is, for example, a biosignal sensor that detects biosignals of the user 101, an acceleration information sensor that detects the posture, motion, orientation, etc. of the user 101, or a position sensor that detects the position of the user 101. Contains at least one of them.
  • the biosignal sensor include a sensor that detects the pulse or heart rate of the user 101, a sensor that detects brain waves, and a sensor that detects eye movement.
  • Examples of the acceleration information sensor include an acceleration sensor, an angular velocity sensor, a geomagnetic sensor, and a 9-axis sensor.
  • position sensors are geomagnetic sensors, cameras, capacitive sensors, ultrasonic sensors, potentiometers, etc.
  • the acoustic signal sensor 13 is a microphone, a volume sensor, or the like that detects the surrounding acoustic signal AC2 (second acoustic signal).
  • the acoustic signal output device 14 is, for example, an earphone, headphone, neck speaker, bone conduction speaker, or other speaker that outputs the acoustic signal AC1 (first acoustic signal).
  • the acoustic signal output device 14 may be of a type that does not completely block the ear canal of the user 101, or may be of a type that completely blocks the ear canal of the user 101.
  • the storage unit 113 stores "concentration state estimation information” for estimating the concentration state of the user 101 from the "input information".
  • The "input information" may be, for example, the "detection information" detected by the user sensor 12 or a function value thereof, "other information" relating to the "detection information" and the concentration state of the user 101 or a function value thereof, or a combination of these.
  • "Other information" includes, for example, information representing the content of the user 101's task, information representing the duration of the user 101's task, information representing the time when the user 101 performed the task, and information representing the intention of the user 101 (information expressing intentions such as "I want to concentrate," "I want to communicate," "I want to turn notifications off," and "I want to turn notifications on").
  • The "concentration state estimation information" may be, for example, information for obtaining, from the "input information," "information indicating whether or not the user 101 is in a concentrated state," or information for obtaining "information representing the degree of concentration" of the user 101.
  • It may be a table associating "input information" with "information indicating whether or not the user is in a concentrated state," a table associating "input information" with "information representing the degree of concentration," or a threshold value of the "input information" for determining whether or not the user is in a concentrated state. These tables and thresholds are determined in advance based on, for example, past "detection information," past task logs, past task durations, the times when past tasks were performed, and the like.
  • The "concentration state estimation information" may also be an estimation model that outputs, in response to the "input information," "information indicating whether or not the user is in a concentrated state" or "information representing the degree of concentration."
  • Examples of estimation models include deep-learning-based models, hidden Markov models, and SVMs (Support Vector Machines). These models are obtained, for example, by machine learning using learning data.
  • An example of the learning data is supervised data that associates "input information for learning" (past detection information, past task logs, past task durations, times when past tasks were performed, etc.) with labels such as "whether or not the user is in a concentrated state" and "degree of concentration."
  • There is no limitation on the concentration state estimation method; a known estimation method, such as the technique disclosed in Japanese Patent Application Laid-Open No. 2014-158600, may be used.
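As a concrete illustration of the threshold-based determination described above, the following sketch compares a single sensed feature against a predetermined threshold. The feature (heart rate) and the threshold value are illustrative assumptions; the publication does not specify which biosignal or threshold is used.

```python
def is_concentrated(heart_rate_bpm: float, threshold_bpm: float = 75.0) -> bool:
    """Threshold determination on 'input information' (here, heart rate).

    Returns 'information indicating whether or not the user is in a
    concentrated state' as a boolean. A real system would calibrate the
    feature and threshold from past detection information and task logs.
    """
    # A lower heart rate is taken here as a proxy for concentration.
    return heart_rate_bpm < threshold_bpm
```

A table-based variant would look up the same answer from precomputed (input, label) pairs instead of comparing against a threshold.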
  • the reproduction unit 112 of the control device 11 (FIG. 1) outputs a reproduction signal representing the acoustic signal AC1 (first acoustic signal) to be listened to by the user 101 under the control of the control unit 115.
  • the audio signal AC1 is, for example, music, voice, environmental sound, or other audio content.
  • the reproduced signal is transmitted by wire or wirelessly to the acoustic signal output device 14, and the acoustic signal output device 14 outputs the acoustic signal AC1 based on the transmitted reproduced signal.
  • the user 101 listens to the audio signal AC1 output from the audio signal output device 14.
  • the user sensor 12 detects the state of the user 101 and sends the detected "detection information" to the concentration state estimation unit 114.
  • the “detection information” includes information representing the biosignal of the user 101.
  • the “detection information” includes information representing the posture, motion, orientation, etc. of the user 101, such as acceleration and angular acceleration of the user 101.
  • the “detection information” includes information representing the position of the user 101.
  • The concentration state estimating unit 114 uses the "input information" of the user 101, which includes at least one of the "detection information" and the "other information," together with the "concentration state estimation information" extracted from the storage unit 113, to obtain and output "information indicating whether or not the user 101 is in a concentrated state" or "information representing the degree of concentration" of the user 101.
  • For example, when the "concentration state estimation information" is a table, the concentration state estimator 114 obtains and outputs the "information indicating whether or not the user is in a concentrated state" or the "information representing the degree of concentration" corresponding to the "input information" of the user 101.
  • When the "concentration state estimation information" is a threshold value, the concentration state estimation unit 114 performs a threshold determination on the "input information" of the user 101 and obtains and outputs "information indicating whether or not the user 101 is in a concentrated state."
  • When the "concentration state estimation information" is an estimation model, the concentration state estimator 114 uses this model to obtain and output the "information indicating whether or not the user is in a concentrated state" or the "information representing the degree of concentration" corresponding to the "input information." If the "other information" includes information representing the intention of the user 101, the concentration state estimating unit 114 gives priority to that intention when obtaining the "information indicating whether or not the user is in a concentrated state."
  • For example, when the "other information" indicates an intention such as "I want to concentrate" or "I want to turn notifications off," the concentration state estimating unit 114 may output information indicating that the user is in a concentrated state. Conversely, when the "other information" indicates an intention such as "I can communicate" or "I want to turn notifications on," it may output information indicating that the user is not in a concentrated state.
  • The information representing the intention of the user 101 included in the "other information" may be stored in the storage unit 113; until the intention of the user 101 is updated, the concentration state estimation unit 114 may obtain and output the "information indicating whether or not the user is in a concentrated state" based on the stored intention, as described above. The "information indicating whether or not the user is in a concentrated state" or the "information representing the degree of concentration" is sent to the control unit 115.
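The intention-priority rule above can be sketched as follows. The intention strings are illustrative placeholders, not values defined in the publication:

```python
def concentrated_with_intention(sensor_estimate, intention=None):
    """User-declared intention overrides the sensor-based estimate."""
    if intention in ("concentrate", "notifications_off"):
        return True   # treat as concentrated regardless of sensors
    if intention in ("communicate", "notifications_on"):
        return False  # treat as not concentrated regardless of sensors
    return sensor_estimate  # no declared intention: use the estimate
```

The stored intention would simply be passed in on every call until the user updates it.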
  • the acoustic signal sensor 13 detects the surrounding acoustic signal AC2 (second acoustic signal) and sends information representing the acoustic signal AC2 to the environment estimation unit 116.
  • The environment estimation unit 116 uses the information representing the input acoustic signal AC2 to generate "acoustic detection information" representing the detection result of surrounding acoustic signals, and sends it to the control unit 115.
  • The acoustic detection information includes, for example, information indicating whether or not a nearby user 102 has spoken, information indicating the loudness of surrounding sounds, and the like.
  • The control unit 115 receives, in real time, the "information indicating whether or not the user is in a concentrated state" or the "information representing the degree of concentration" sent from the concentration state estimating unit 114, and the "acoustic detection information" sent from the environment estimating unit 116.
  • The control unit 115 uses the "information indicating whether or not the user is in a concentrated state" to determine whether the user 101 listening to the acoustic signal AC1 is in a concentrated state, or uses the "information representing the degree of concentration" to determine whether the degree of concentration of the user 101 is equal to or higher than the reference TH1 (first reference).
  • When the user 101 is not in a concentrated state, or when the degree of concentration is lower than the reference TH1, the control unit 115 performs control processing CON1 (first control processing), which changes the acoustic signal AC1 in accordance with the surrounding acoustic signal AC2 (a second acoustic signal different from the first acoustic signal) so that the user 101 can easily hear the acoustic signal AC2.
  • This processing is performed automatically. As a result, when the user 101 is not concentrating, it becomes easier to notice calls from the user 102 and to communicate with others.
  • When the user 101 is in a concentrated state, or when the degree of concentration is equal to or higher than the reference TH1, control processing CON2 (second control processing) is performed.
  • The control process CON2 is a process that does not perform the control process CON1 (first control process). This makes it easier for the user 101 to maintain concentration while concentrating. Specific examples of the control process CON1 (first control process) and the control process CON2 (second control process) are shown below.
  • control processing CON1 (first control processing):
  • The control process CON1 includes, for example, a process of changing the acoustic signal AC1 (first acoustic signal) so that the user 101 can easily hear the acoustic signal AC2 (second acoustic signal) when the amplitude of the acoustic signal AC2 is equal to or greater than the reference TH2 (second reference), or when the acoustic signal AC2 is detected.
  • An example of this process is a process of attenuating the amplitude of the acoustic signal AC1 (first acoustic signal).
  • Alternatively, the phase or waveform of the acoustic signal AC1 (first acoustic signal) may be changed so that the user 101 can easily hear the acoustic signal AC2 (second acoustic signal). This makes it easier for the user 101 to notice calls from the user 102.
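The amplitude-attenuation example of the control process CON1 can be sketched as below. The attenuation amount in decibels is an illustrative assumption; the publication only states that the amplitude of AC1 is attenuated.

```python
def control_process_con1(ac1, attenuation_db=12.0):
    """Attenuate the playback signal AC1 so the ambient AC2 is easier to hear.

    ac1 is a sequence of amplitude samples; the returned list is the
    same signal scaled down by the given (assumed) attenuation in dB.
    """
    gain = 10.0 ** (-attenuation_db / 20.0)  # dB -> linear amplitude gain
    return [sample * gain for sample in ac1]
```

A phase- or waveform-changing variant would transform the samples instead of scaling them, with the same goal of making AC2 stand out.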
  • When the amplitude of the acoustic signal AC2 (second acoustic signal) is less than the reference TH2 (second reference), or when the acoustic signal AC2 is not detected, the process of changing the acoustic signal AC1 (first acoustic signal) so that the user can easily hear the second acoustic signal is not executed. This prevents the acoustic signal AC1 from changing with the concentration state of the user 101 even though there is no call from the user 102.
  • control processing CON2 (second control processing): In the control process CON2, the process of changing the acoustic signal AC1 so that the user 101 can easily hear the acoustic signal AC2 is not executed.
  • the control process CON2 does not automatically attenuate the amplitude of the acoustic signal AC1 or change the phase or waveform of the acoustic signal AC1.
  • the control process CON2 may be a control that does nothing.
  • The control process CON2 may include a process of changing the acoustic signal AC1 (first acoustic signal) so that it becomes difficult for the user 101 to hear the acoustic signal AC2 (second acoustic signal).
  • For example, the magnitude of each frequency component of the acoustic signal AC1 may be changed, the phase of the acoustic signal AC1 may be changed, or the acoustic signal AC1 may otherwise be changed so as to mask the acoustic signal AC2.
  • For example, an acoustic signal obtained by adding the anti-phase of the acoustic signal AC2, or an acoustic signal close to that anti-phase, to the original acoustic signal AC1 may be used as the new acoustic signal AC1. This allows the user 101 to maintain a concentrated state even if there are calls from the surroundings or the surroundings are noisy.
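The anti-phase masking variant of CON2 can be sketched as follows, under the idealizing assumption that the sensed AC2 samples equal the ambient sound arriving at the user's ear:

```python
def mask_with_antiphase(ac1, ac2):
    """Add the anti-phase of the ambient signal AC2 to the playback signal AC1.

    In the ideal case, the ambient AC2 and the added anti-phase component
    cancel at the user's ear, so AC2 becomes harder to hear while the
    original AC1 content is preserved.
    """
    n = min(len(ac1), len(ac2))
    # New AC1 = original AC1 + (-AC2), sample by sample.
    return [ac1[i] - ac2[i] for i in range(n)]
```

What the user hears is the output plus the ambient AC2, which in this idealized sketch reduces to the original AC1 alone.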
  • In the control processing by the control unit 115, the control unit 115 first receives the "information indicating whether or not the user is in a concentrated state" or the "information representing the degree of concentration" sent from the concentration state estimation unit 114, and the "acoustic detection information" sent from the environment estimation unit 116 (step S1).
  • Next, the control unit 115 uses the acoustic detection information to determine whether there is a call (step S2). For example, the control unit 115 determines that there is a call when the amplitude of the acoustic signal AC2 is equal to or greater than the reference TH2, or when the acoustic signal AC2 is detected, and determines that there is no call otherwise.
  • If it is determined that there is no call, the process returns to step S1.
  • If it is determined that there is a call, the control unit 115 uses the "information indicating whether or not the user 101 is in a concentrated state" or the "information representing the degree of concentration" to determine whether the user 101 is in a concentrated state, or whether the degree of concentration of the user 101 is equal to or higher than the reference TH1 (step S3).
  • When the user 101 is not in a concentrated state, or when the degree of concentration is lower than the reference TH1, the control unit 115 performs the control process CON1 (first control process) (step S4), and then the process returns to step S1.
  • When the user 101 is in a concentrated state, or when the degree of concentration is equal to or higher than the reference TH1, the control unit 115 performs the control process CON2 (second control process) (step S5), and then the process returns to step S1.
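Steps S1 through S5 can be summarized as one pass of the following decision sketch. The function and parameter names are illustrative, and the call test here uses the amplitude-versus-TH2 variant of step S2:

```python
def control_step(ac2_amplitude, th2, concentrated):
    """One pass of the control loop (steps S2-S5).

    Step S2: a 'call' is judged present when the amplitude of AC2 is at
    or above the reference TH2. Step S3: branch on the concentration
    state. Steps S4/S5: select which control process to perform.
    """
    if ac2_amplitude < th2:
        return None      # no call: return to step S1
    if concentrated:
        return "CON2"    # step S5: leave AC1 unchanged
    return "CON1"        # step S4: change AC1 so AC2 is easier to hear
```

In the real device this function would run continuously on the streaming inputs from the estimation units.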
  • the acoustic signal AC1 is controlled based on the concentration state and concentration level of the user 101 who listens to the acoustic signal AC1, and the acoustic signal AC2 different from the acoustic signal AC1. This makes it easier for the user 101 to communicate with others when he is not concentrating, and makes it easier for him to maintain his concentration when he is concentrating.
  • The second embodiment is a modification of the first embodiment; based on the concentration state or degree of concentration of the user 101 who listens to the acoustic signal AC1, and on the acoustic signal AC2 different from the acoustic signal AC1, notifications are additionally presented to the user 101 and to other users 102.
  • differences from the first embodiment will be mainly explained, and the same reference numbers will be used to simplify the explanation of the items that have already been explained.
  • the acoustic signal reproduction system 2 of the second embodiment includes a control device 21, a user sensor 12, an acoustic signal sensor 13, and an acoustic signal output device 14.
  • the control device 21 includes an input section 111 , a playback section 112 , a storage section 113 , a concentration state estimation section 114 , a control section 115 , an environment estimation section 116 , a user notification section 217 , and a surrounding notification section 218 .
  • the difference from the first embodiment is the control process CON1 (first control process) and the control process CON2 (second control process).
  • The control process CON1 of the second embodiment further includes, when the amplitude of the acoustic signal AC2 (second acoustic signal) is equal to or greater than the reference TH2 (second reference) or when the acoustic signal AC2 is detected, a process for presenting notification information N1 to the user 101.
  • That is, the control unit 115 further instructs the user notification unit 217 to output the notification information N1, and the user notification unit 217 performs a process for presenting this notification information N1 to the user 101.
  • This process outputs the notification information N1 to the user 101 from, for example, the acoustic signal output device 14, the control device 21, or another device (for example, a smartphone).
  • The notification information N1 may be auditory (for example, a notification sound or notification voice), visual (for example, LED light emission, an image display, a change in lighting, or a notification message), tactile (for example, vibration), or a combination of at least some of these.
  • control process CON2 of the second embodiment includes a process for presenting notification information N2 (second notification information) to a person other than the user 101 (for example, the user 102).
  • That is, the control unit 115 further instructs the surrounding notification unit 218 to output the notification information N2, and the surrounding notification unit 218 performs a process for presenting this notification information N2 to persons other than the user 101.
  • This process outputs the notification information N2 from, for example, the acoustic signal output device 14, the control device 21, or another device (for example, a smartphone) to persons other than the user 101.
  • The notification information N2 may be auditory (for example, a notification sound or notification voice), visual (for example, LED light emission, an image display, a change in lighting, or a notification message), tactile (for example, vibration), or a combination of at least some of these. This allows others, such as the user 102, to be informed that the user 101 is in a concentrated state, and the user 101 can maintain that state without being disturbed by others.
  • control process CON1 may include the process for presenting the above-mentioned notification information N1, and the control process CON2 may not include the process for presenting the above-mentioned notification information N2.
  • control process CON1 may not include the process for presenting the above-mentioned notification information N1, and the control process CON2 may include the process for presenting the above-mentioned notification information N2.
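The notification behavior of the second embodiment, in its default form (CON1 presents N1 to the user, CON2 presents N2 to the surroundings), can be sketched as a simple mapping. The dictionary keys are illustrative names, not terms from the publication:

```python
def notifications_for(process):
    """Which notification each control process presents (second embodiment).

    CON1 presents N1 (first notification information) to the user 101;
    CON2 presents N2 (second notification information) to persons other
    than the user 101, such as the user 102.
    """
    if process == "CON1":
        return {"to_user": "N1", "to_surroundings": None}
    if process == "CON2":
        return {"to_user": None, "to_surroundings": "N2"}
    raise ValueError("unknown control process: %s" % process)
```

The variants noted above (CON1 without N1, or CON2 without N2) would simply replace the corresponding value with None.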
  • the acoustic signal reproduction system 3 of the third embodiment includes a control device 31, a user sensor 12, a communication device 33, and an acoustic signal output device 14.
  • the control device 31 includes an input section 111 , a playback section 112 , a storage section 113 , a concentration state estimation section 114 , a control section 315 , and a notification determination section 316 .
  • the control device 31 may further include a user notification section 217 and a communication notification section 318.
  • When the communication device 33, such as a smartphone, receives a notification such as an incoming call (a notification regarding the second acoustic signal), it sends the notification to the notification determination unit 316, and the notification determination unit 316, based on the input notification, sends "notification detection information" indicating whether or not the communication device 33 has received a notification to the control unit 315.
  • When the user 101 is in a concentrated state, or when the degree of concentration is equal to or higher than the reference TH1, control processing CON2 (second control processing) is performed without performing control processing CON1 (first control processing). This makes it easier for the user 101 to maintain concentration while concentrating.
  • control processing CON1 (first control processing):
  • The control process CON1 of the third embodiment includes a process of changing the acoustic signal AC1 (first acoustic signal) so that the user 101 can easily hear the acoustic signal AC2 (second acoustic signal) when the communication device 33 receives a notification of an incoming call or the like (a notification regarding the second acoustic signal).
  • An example of this process is a process of attenuating the amplitude of the acoustic signal AC1 (first acoustic signal). This makes it easier for the user 101 to notice notifications on the communication device 33.
  • When the communication device 33 has not received a notification, the process of changing the acoustic signal AC1 (first acoustic signal) is not executed. This prevents the acoustic signal AC1 from changing with the concentration state of the user 101 even though there is no incoming call or the like on the communication device 33.
  • In the control process CON1, the control unit 315 may further instruct the user notification unit 217 to output notification information N1, and the user notification unit 217 may perform a process for presenting the notification information N1 to the user 101. A specific example of this is as described in the second embodiment.
  • control processing CON2 (second control processing):
  • In the control process CON2 (second control process), the process of changing the acoustic signal AC1 so that the user 101 can easily hear the acoustic signal AC2 is not executed.
  • the control process CON2 does not automatically attenuate the amplitude of the acoustic signal AC1 or change the phase or waveform of the acoustic signal AC1. Thereby, even if the communication device 33 receives a call while the user 101 is listening to the acoustic signal AC1 at a high volume in order to maintain a concentrated state, the user 101 can maintain a concentrated state.
  • the communication device 33 may present notification information N2 (second notification information) to the communication partner.
  • the control unit 315 further instructs the communication notification unit 318 to send notification information N2 to the communication partner, and the communication notification unit 318 sends this notification information N2 to the communication device 33 and transmits it to the communication partner.
  • The notification information N2 may be auditory (for example, a notification sound or notification voice), visual (for example, a notification message), tactile (for example, vibration), or a combination of at least some of these. This allows the communication partner to be informed that the user 101 is in a concentrated state, and the user 101 can maintain that state without being disturbed by others.
  • Other processing of the control unit 315 is the same as that of the control unit 115.
  • the acoustic signal AC1 is controlled based on the concentration state and concentration level of the user 101 who listens to the acoustic signal AC1, and the notification regarding the acoustic signal AC2 different from the acoustic signal AC1. This makes it easier for the user 101 to communicate with others when he is not concentrating, and makes it easier for him to maintain his concentration when he is concentrating.
  • The control devices 11, 21, and 31 in each embodiment are, for example, devices configured by a general-purpose or dedicated computer that includes a processor (hardware processor) such as a CPU (central processing unit) and memory such as RAM (random-access memory) and ROM (read-only memory), and that executes a predetermined program. That is, the control devices 11, 21, and 31 in each embodiment have, for example, processing circuitry configured to implement each section of each control device.
  • This computer may include one processor and memory, or may include multiple processors and memories.
  • This program may be installed on the computer or may be pre-recorded in a ROM or the like.
  • Some or all of the processing units may be configured using an electronic circuit that realizes the processing functions by itself, rather than an electronic circuit that realizes the functional configuration by reading a program, as a CPU does.
  • an electronic circuit constituting one device may include a plurality of CPUs.
  • FIG. 4 is a block diagram illustrating the hardware configuration of the control devices 11, 21, and 31 in each embodiment.
  • The control devices 11, 21, and 31 in this example include a CPU (Central Processing Unit) 10a, an input section 10b, an output section 10c, a RAM (Random Access Memory) 10d, a ROM (Read Only Memory) 10e, an auxiliary storage device 10f, a communication section 10h, and a bus 10g.
  • the CPU 10a in this example has a control section 10aa, a calculation section 10ab, and a register 10ac, and executes various calculation processes according to various programs read into the register 10ac.
  • the auxiliary storage device 10f is, for example, a hard disk, an MO (Magneto-Optical disc), a semiconductor memory, etc., and has a program area 10fa where a predetermined program is stored and a data area 10fb where various data are stored.
  • the bus 10g connects the CPU 10a, the input section 10b, the output section 10c, the RAM 10d, the ROM 10e, the communication section 10h, and the auxiliary storage device 10f so that information can be exchanged.
  • the CPU 10a writes the program stored in the program area 10fa of the auxiliary storage device 10f to the program area 10da of the RAM 10d according to the read OS (Operating System) program.
  • the CPU 10a writes various data stored in the data area 10fb of the auxiliary storage device 10f to the data area 10db of the RAM 10d. The addresses on the RAM 10d at which this program and data have been written are then stored in the register 10ac of the CPU 10a.
  • the control section 10aa of the CPU 10a sequentially reads these addresses stored in the register 10ac, reads the program and data from the areas on the RAM 10d indicated by the read addresses, and causes the calculation section 10ab to sequentially execute the operations indicated by the program.
  • the calculation results are stored in the register 10ac. With such a configuration, the functional configuration of the control devices 11, 21, and 31 is realized.
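The load-and-execute flow in the preceding bullets (program and data copied from the auxiliary storage's areas into RAM, their locations recorded in a CPU register, then operations executed in sequence with the result stored back in the register) can be mimicked with a deliberately simplified model. Every name and value below is an illustrative assumption, not part of the patent's hardware.

```python
# Toy model of the flow: auxiliary storage -> RAM -> register -> execute.
aux_storage = {
    "program_area": [("add", 2), ("mul", 3)],  # a two-step toy "program"
    "data_area": {"x": 5},
}
ram = {}
register = {}

# The CPU writes the program and data from auxiliary storage into RAM,
# then records where they were written in its register.
ram["program_area"] = list(aux_storage["program_area"])
ram["data_area"] = dict(aux_storage["data_area"])
register["program_addr"] = "program_area"
register["data_addr"] = "data_area"

# The control section reads the addresses from the register; the
# calculation section executes the indicated operations in sequence.
value = ram[register["data_addr"]]["x"]
for op, operand in ram[register["program_addr"]]:
    if op == "add":
        value += operand
    elif op == "mul":
        value *= operand
register["result"] = value  # calculation result stored in the register
```

The point of the model is only the ordering of steps: load, record addresses, then fetch and execute through the register, exactly as the bullets describe.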
  • the above program can be recorded on a computer-readable recording medium.
  • a computer-readable recording medium is a non-transitory recording medium. Examples of such recording media include magnetic recording devices, optical discs, magneto-optical recording media, and semiconductor memories.
  • the computer may read the program directly from a portable recording medium and execute processing according to the program; furthermore, the program may be transferred to this computer from a server computer, and each time it is transferred, processing according to the received program may be executed.
  • the above-described processing may also be executed by a so-called ASP (Application Service Provider) type service, in which no program is transferred from the server computer to this computer and the processing functions are realized only through execution instructions and result acquisition. Note that the program in this embodiment includes information that is used for processing by an electronic computer and that is equivalent to a program (such as data that is not a direct command to the computer but has the property of defining the computer's processing).
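The ASP-style arrangement mentioned in the last bullet — the program stays on the server, and the client only issues execution instructions and obtains results — could look roughly like this. The "server" is simulated in-process for illustration; no real network API from the patent is implied, and all names are assumptions.

```python
# Simulated ASP service: the program itself never leaves the server side.
def server_execute(instruction, args):
    """Server side: run the requested processing and return only the result."""
    programs = {
        "attenuate": lambda gain, factor: gain * factor,  # stays on the server
    }
    return programs[instruction](*args)

def client_request(instruction, args):
    """Client side: send an execution instruction, receive only the result."""
    # In a real ASP service this would be a network call; here it is direct.
    return server_execute(instruction, args)

result = client_request("attenuate", (1.0, 0.5))
```

The client obtains `result` without ever receiving the `attenuate` program itself, which is the defining property of the ASP model described above.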

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Circuits Of Receivers In General (AREA)

Abstract

According to the present invention, when a user listening to a first acoustic signal is not in a concentration state, or when the user's concentration level is below a first criterion, a first control process is performed in response to a second acoustic signal different from the first acoustic signal or to a notification regarding the second acoustic signal, the first control process changing the first acoustic signal so as to make it easier for the user to hear the second acoustic signal; whereas when the user is in the concentration state, or when the concentration level is at or above the first criterion, a second control process is performed without performing the first control process.
PCT/JP2022/025578 2022-06-27 2022-06-27 Dispositif de commande, procédé de commande et programme WO2024003988A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/025578 WO2024003988A1 (fr) 2022-06-27 2022-06-27 Dispositif de commande, procédé de commande et programme

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/025578 WO2024003988A1 (fr) 2022-06-27 2022-06-27 Dispositif de commande, procédé de commande et programme

Publications (1)

Publication Number Publication Date
WO2024003988A1 true WO2024003988A1 (fr) 2024-01-04

Family

ID=89382195

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/025578 WO2024003988A1 (fr) 2022-06-27 2022-06-27 Dispositif de commande, procédé de commande et programme

Country Status (1)

Country Link
WO (1) WO2024003988A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010283873A (ja) * 2007-01-04 2010-12-16 Bose Corp Microphone techniques
WO2017056604A1 (fr) * 2015-09-29 2017-04-06 Sony Corporation Information processing device, information processing method, and program
JP2019152861A (ja) * 2018-03-05 2019-09-12 Harman International Industries, Incorporated Control of perceived ambient sound based on concentration level



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22949267

Country of ref document: EP

Kind code of ref document: A1