WO2018179722A1 - Audio system, sound control device, and associated playback device - Google Patents

Audio system, sound control device, and associated playback device

Info

Publication number
WO2018179722A1
WO2018179722A1 (PCT/JP2018/001859)
Authority
WO
WIPO (PCT)
Prior art keywords
listener
control
playback device
playback
state
Application number
PCT/JP2018/001859
Other languages
English (en)
Japanese (ja)
Inventor
坂井 剛
Original Assignee
Panasonic IP Management Co., Ltd.
Application filed by Panasonic IP Management Co., Ltd.
Publication of WO2018179722A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K15/00 Acoustics not otherwise provided for
    • G10K15/02 Synthesis of acoustic waves
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436 Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R3/04 Circuits for transducers, loudspeakers or microphones for correcting frequency response
    • H04R3/12 Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers

Definitions

  • The present invention relates to an acoustic system, and to an acoustic control device and a playback device for such a system.
  • In a living room or similar space in a dwelling, speaker positions are limited by the shape of the room and the arrangement of furniture.
  • Patent Document 1 discloses an audio playback device (hereinafter referred to as the conventional example) that provides an optimal sound environment in such an environment by means of a plurality of speakers, a plurality of amplifiers for driving the speakers, a microphone for sound control, and an acoustic correction device.
  • In the conventional example, however, as the number of speakers increases, the speaker wiring work becomes complicated, the number of drive amplifiers grows, and the required installation space increases.
  • An object of the present disclosure is to provide an acoustic system in which acoustic adjustment can be performed easily, together with an acoustic control device and a playback device for that system.
  • According to a first aspect, an acoustic system includes a first playback device having a sensor that detects the state of a first listener, and an acoustic control device having control means that controls the sound of the first playback device, the two being connected via a network. The acoustic control device is configured to transmit audio data to the first playback device, and the first playback device plays back the received audio data.
  • In the first aspect, the control means is configured to: receive, from the first playback device, first listener data indicating the state of the first listener; determine, based on the first listener data, first playback control content for the first playback device; and instruct the first playback device with the first playback control content.
  • According to a second aspect, an acoustic system likewise includes a first playback device having a sensor that detects the state of the first listener, and an acoustic control device having control means that controls the sound of the first playback device, the two being connected via a network. The acoustic control device is configured to transmit audio data to the first playback device, and the first playback device plays back the received audio data.
  • In the second aspect, the control means is configured to: receive, from the first playback device, first listener data indicating the state of the first listener; send the first listener data to a cloud artificial intelligence device; receive, from the cloud artificial intelligence device, first playback control content for the first playback device; and instruct the first playback device with the first playback control content.
  • The sensor may be a human sensor, and the state of the first listener may be the presence or absence of the first listener and the position of the first listener.
  • The sensor may instead be a video camera, in which case the state of the first listener may be the presence or absence of the first listener and a state and position related to the first listener's activity.
  • The first playback control content may be at least one of a volume value of the first playback device, a correction value of frequency and phase characteristics, and a delay amount.
  • In the first aspect, the acoustic system may further include a second playback device, connected via the network, that has a sensor for detecting the state of a second listener.
  • The acoustic control device may be configured to transmit the audio data to the second playback device, and the second playback device plays back the received audio data.
  • In that case, the control means is configured to control the sound of the second playback device: it receives, from the second playback device, second listener data indicating the state of the second listener; determines, based on the second listener data, second playback control content for the second playback device; and instructs the second playback device with the second playback control content.
  • In the second aspect, the acoustic system may likewise include a second playback device, connected via the network, that has a sensor for detecting the state of the second listener.
  • The acoustic control device may be configured to transmit the audio data to the second playback device, and the second playback device plays back the received audio data.
  • In that case, the control means is configured to control the sound of the second playback device: it receives, from the second playback device, second listener data indicating the state of the second listener; sends the second listener data to the cloud artificial intelligence device; receives, from the cloud artificial intelligence device, second playback control content for the second playback device; and instructs the second playback device with the second playback control content.
  • According to a third aspect, a playback device is a playback device for an acoustic system in which the playback device, which includes a sensor for detecting a listener's state, and an acoustic control device, which includes control means for controlling the sound of the playback device, are connected via a network; the acoustic control device transmits audio data to the playback device, and the playback device is configured to play back the received audio data.
  • Based on an instruction signal transmitted from the acoustic control device, the playback device transmits listener data indicating the state of the listener detected by the sensor to the acoustic control device, receives a control signal including playback control content from the acoustic control device, and plays back the audio data based on the received playback control content.
  • The sensor may be a human sensor, and the state of the listener may be the presence or absence of the listener and the position of the listener.
  • The sensor may instead be a video camera, in which case the state of the listener may be the presence or absence of the listener and a state and position related to the listener's activity.
  • The playback control content may be at least one of a volume value of the playback device, a correction value of frequency and phase characteristics, and a delay amount.
  • The acoustic system according to a fourth aspect includes the acoustic control device and the playback device (the first playback device) described above.
  • In the fourth aspect as well, the sensor may be a human sensor, with the state of the listener being the presence or absence and position of the listener, or the sensor may be a video camera, with the state of the listener being the presence or absence of the listener and a state and position related to the listener's activity.
  • The playback control content may be at least one of a volume value of the playback device, a correction value of frequency and phase characteristics, and a delay amount.
  • FIG. 1A is a front view illustrating a configuration example of a room in which an acoustic system is arranged according to the first embodiment.
  • FIG. 1B is a perspective view of the acoustic system disposed in the room of FIG. 1A.
  • FIG. 2A is a front view of the playback device 100 of the sound system of FIG. 1B.
  • FIG. 2B is a side view of the playback device 100 of the sound system of FIG. 1B.
  • FIG. 2C is a block diagram illustrating a configuration example of the playback device 100 of the sound system of FIG. 1B.
  • FIG. 2D is a side view showing a configuration example for fixing the playback device 100 of the sound system of FIG. 1B.
  • FIG. 3 is a block diagram showing a detailed configuration of the playback device 100 of the sound system of FIG. 1B.
  • FIG. 4 is a block diagram illustrating a configuration example of the main device 101 of the sound system of FIGS. 1A and 1B.
  • FIG. 5 is a timing chart showing a configuration example of a transmission signal between the main device 101 and each playback device 100 of the sound system of FIG. 1B.
  • FIG. 6 is a flowchart showing the playback device control process executed by the main device 101.
  • FIG. 7 is a flowchart showing the initialization process (S1), a subroutine of FIG. 6.
  • FIG. 8 is a flowchart showing the end process (S6), a subroutine of FIG. 6.
  • FIG. 9 is a flowchart showing the playback device individual control process (S4), a subroutine of FIG. 6.
  • FIG. 10 is a flowchart showing the viewing environment estimation process (S8), a subroutine of FIG. 6.
  • FIG. 11 is a flowchart showing the playback control content determination process (S9), a subroutine of FIG. 6.
  • FIG. 12 is a table showing an example of a playback control table generated by the main device 101.
  • FIG. 13 is a perspective view illustrating a configuration example of an acoustic system according to the second embodiment.
  • FIG. 14 is a flowchart showing the viewing environment estimation process (S8A) by the main device 101 of the acoustic system of FIG. 13.
  • FIG. 15 is a flowchart showing the playback control content determination process (S9A) by the main device 101 of the acoustic system of FIG. 13.
  • FIG. 16 is a table showing an example of a playback control table generated by the main device 101.
  • FIG. 17 is a perspective view illustrating a configuration example of an acoustic system according to the third embodiment.
  • FIG. 18 is a block diagram illustrating a configuration example of the cloud artificial intelligence device 20 of FIG. 17.
  • FIG. 19 is a flowchart showing the viewing environment estimation process (S8B) by the main device 101 of the acoustic system of FIG. 17.
  • FIG. 20 is a flowchart showing the playback control content reception process (S9B) by the main device 101 of the acoustic system of FIG. 17.
  • FIG. 21 is a table showing an example of a playback control table generated by the main device 101.
  • FIG. 22 is a table showing an example of the rule table used in the first embodiment.
  • FIG. 23 is a table showing an example of the rule table used in the first embodiment.
  • FIG. 24 is a table showing an example of a rule table used in the first embodiment.
  • FIG. 25 is a table showing an example of the rule table used in the second and third embodiments.
  • FIG. 26 is a table showing an example of the rule table used in the second and third embodiments.
  • FIG. 27 is a table showing an example of the rule table used in the second and third embodiments.
  • FIG. 28 is a table showing an example of the rule table used in the second and third embodiments.
  • FIG. 29 is a table showing an example of a rule table used in the second and third embodiments.
  • FIG. 30 is a table showing an example of the rule table used in the second and third embodiments.
  • FIG. 31 is a table showing an example of the rule table used in the second and third embodiments.
  • FIG. 32 is a table showing an example of the rule table used in the second and third embodiments.
  • FIG. 33 is a table showing an example of the rule table used in the second and third embodiments.
  • FIG. 34 is a table showing an example of the rule table used in the second and third embodiments.
  • FIG. 1A is a front view illustrating a configuration example of a room in which an acoustic system is arranged according to the first embodiment.
  • The room in which the acoustic system is arranged includes, for example, a living room 1 and a dining room 2.
  • The living room 1 and the dining room 2 are separated from each other by a shelf 4 and a hallway 3 arranged between them.
  • A main device 101 of the sound system, a sofa 5, and a table 6 are arranged in the living room 1.
  • In the dining room 2, a kitchen 7 and a table 8 are arranged.
  • FIG. 1B is a perspective view of an acoustic system arranged in the room of FIG. 1A.
  • The acoustic system includes a main device 101, which is the acoustic control device, and, for example, five playback devices 100A to 100E (hereinafter collectively referred to by reference numeral 100).
  • The playback device 100A corresponds to the first playback device of the present disclosure.
  • The playback device 100E corresponds to the second playback device of the present disclosure.
  • The playback devices 100A to 100E and the main device 101 are connected via transmission cables 102.
  • The transmission cable 102 is, for example, a known Ethernet cable.
  • The Ethernet cable can carry a transmission signal including audio data and control data over its four core wires and ground conductor.
  • The main device 101 is placed on the floor of the living room 1.
  • The playback devices 100A to 100D are provided on the ceiling 9 of the living room 1.
  • The playback device 100E is provided on the ceiling 9 of the dining room 2.
  • FIG. 2A is a front view of the playback device 100 of the sound system of FIG. 1B.
  • The front here is the surface of the playback device 100 as viewed from below in FIG. 1B.
  • A sensor 201 is provided in the lower left part of the front surface of the playback device 100, and a speaker 200 is provided in the center of the front surface.
  • FIG. 2B is a side view of the playback device 100 of the sound system of FIG. 1B.
  • A sensor 201 is provided at the bottom of the playback device 100.
  • A speaker unit 200a of the speaker 200 is provided in the lower part of the playback device 100.
  • An amplifier 203 is provided at the center of the playback device 100.
  • A signal processing unit 202 and transmission cable connectors 203a, 203b, and 203c are arranged side by side in the playback device 100.
  • FIG. 2C is a block diagram illustrating a configuration example of the playback device 100 of the sound system of FIG. 1B.
  • The playback device 100 includes a speaker 200, a sensor 201, a signal processing unit 202, and an amplifier 203.
  • FIG. 2D is a side view showing a configuration example for fixing the playback device 100 of the sound system of FIG. 1B to a building.
  • The upper part of the playback device 100 is fixed to a building beam 31 with screws 32.
  • The lower part of the playback device 100 is in contact with the ceiling 9, and the sensor 201 protrudes downward from the ceiling 9.
  • FIG. 3 is a block diagram showing a detailed configuration of the playback device 100 of the sound system of FIG. 1B.
  • The signal processing unit 202 includes a control unit 300 having an internal memory 300m, transmission signal transmission/reception units 301, 302, and 303, a multiplexer 304, a delay memory 305, a frequency and phase characteristic correction circuit 306, and a variable amplifier 307.
  • In FIG. 3, the control data 308 in the transmission signal is drawn with a dotted line, the audio data 309 (or the audio signal) with a one-dot chain line, and the control signals within the signal processing unit 202 with solid lines.
  • The transmission cable 102 physically connected to the main device 101 or to the upstream playback device 100 is connected to the transmission signal transmission/reception unit 301 via the transmission cable connector 203a.
  • The transmission signal transmission/reception units 301, 302, and 303 are communication interfaces that exchange control data 308 and audio data 309 with the other playback devices 100 and the main device 101.
  • The transmission signal transmission/reception unit 301 receives control data 308 (that is, a control signal including playback control content: a first control signal including first playback control content or a second control signal including second playback control content) from the main device 101 or the upstream playback device 100 and outputs it to the control unit 300.
  • The transmission signal transmission/reception unit 301 also forwards the control data 308 to the downstream playback devices 100 via the multiplexer 304, the transmission signal transmission/reception units 302 and 303, and the transmission cable connectors 203b and 203c.
  • The control data 308 includes acoustic control data, described later, such as an activation instruction for the sensor 201, a delay value, frequency and phase correction values, and a volume value.
  • The transmission signal transmission/reception units 302 and 303 each receive control data 308 from a downstream playback device 100 and transmit it to the main device 101 or the upstream playback device 100 via the multiplexer 304, the transmission signal transmission/reception unit 301, and the transmission cable connector 203a.
  • The multiplexer 304 distributes the control data 308 to either the transmission signal transmission/reception unit 302 or the transmission signal transmission/reception unit 303, based on the transmission destination data included in the control data 308, as sketched below.
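  • The following is a minimal Python sketch, not part of the patent, of how the multiplexer's destination-based forwarding could work; the frame fields, connector names, and device labels are illustrative assumptions.

```python
# Minimal sketch of the destination-based routing performed by multiplexer 304.
# The frame fields, port names, and device labels are illustrative assumptions.

DOWNSTREAM_PORTS = {
    "S2": "connector_203b",  # devices reachable through transmission cable connector 203b
    "S3": "connector_203c",  # devices reachable through transmission cable connector 203c
}

def route_control_data(frame: dict) -> str:
    """Return the connector a downlink control frame should be forwarded to."""
    dest = frame["destination"]            # e.g. "M_S2": main device -> device S2
    device = dest.split("_", 1)[1]
    if device == "ALL":                    # a broadcast goes out of every port
        return "broadcast"
    return DOWNSTREAM_PORTS.get(device, "connector_203a")  # default: back upstream

print(route_control_data({"destination": "M_S2", "payload": b"\x01"}))  # connector_203b
```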
  • The transmission signal transmission/reception unit 301 receives the audio data 309 from the main device 101 or the upstream playback device 100 and outputs it to the delay memory 305. It also forwards the audio data 309 to the downstream playback devices 100 via the transmission signal transmission/reception units 302 and 303 and the transmission cable connectors 203b and 203c.
  • The delay memory 305 is, for example, a buffer memory. It delays the received audio data 309 according to the delay value instructed by the control unit 300, and outputs the delayed audio data 309 to the frequency and phase characteristic correction circuit 306 (hereinafter, the correction circuit 306).
  • The correction circuit 306 corrects the input audio data 309 based on the frequency characteristic correction value and the phase characteristic correction value instructed by the control unit 300, then outputs it to the variable amplifier 307.
  • The variable amplifier 307 changes the signal level of the input audio data 309 according to the volume value specified by the control unit 300.
  • The audio data 309 is then DA (digital-to-analog) converted and output to the speaker 200 via the amplifier 203, and the speaker 200 emits sound.
  • The control unit 300 controls the operation of the signal processing unit 202 based on the program and data stored in the internal memory 300m (including the acoustic control data saved at the previous shutdown or instructed by the main device 101). In particular, the control unit 300 controls the operation of the delay memory 305, the correction circuit 306, and the variable amplifier 307 based on the control data 308 received by the transmission signal transmission/reception unit 301, thereby controlling the acoustic characteristics of the audio signal output from the speaker 200. A minimal sketch of this chain follows.
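  • Below is a minimal Python sketch, not from the patent, of the per-block processing chain just described (delay memory 305, correction circuit 306, variable amplifier 307); the FIR-filter form of the correction and all numeric values are illustrative assumptions.

```python
# Sketch of the per-device audio chain of FIG. 3: delay memory 305,
# frequency/phase correction circuit 306, and variable amplifier 307.
# Filter taps, sample rate, and function names are assumptions for illustration.
import numpy as np

def process_block(audio: np.ndarray, delay_samples: int,
                  correction_fir: np.ndarray, volume_db: float) -> np.ndarray:
    # Delay memory 305: prepend zeros (a real device keeps state across blocks)
    delayed = np.concatenate([np.zeros(delay_samples), audio])[:len(audio)]
    # Correction circuit 306: an FIR filter standing in for the
    # frequency and phase characteristic correction
    corrected = np.convolve(delayed, correction_fir, mode="same")
    # Variable amplifier 307: apply the volume value as a dB gain
    return corrected * 10 ** (volume_db / 20)

fs = 48_000
audio = np.random.randn(fs)    # 1 s of test audio
flat = np.array([1.0])         # identity correction (flat response)
out = process_block(audio, delay_samples=96, correction_fir=flat, volume_db=-6.0)
```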
  • The sensor 201 is a human sensor connected to the control unit 300.
  • The sensor 201 detects the presence or absence of a human body (listener) by measuring the temperature of a two-dimensional area of, for example, 8 × 8 pixels.
  • When a human body is present, the sensor 201 detects its position.
  • The data indicating the presence or absence of a human body and the position of the human body are referred to as detection data.
  • The detection data corresponds to the listener data (or the first listener data or the second listener data) of the present disclosure.
  • The control unit 300 operates the sensor 201 based on an instruction signal in the control data from the main device 101, and transmits the detection data from the sensor 201 to the main device 101 as control data addressed to the main device 101. A sketch of this detection follows.
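  • A minimal sketch of such presence and position detection from an 8 × 8 thermal image follows; the threshold rule and centroid method are illustrative assumptions, not taken from the patent.

```python
# Sketch of presence/position detection from an 8x8 thermal grid, as sensor 201
# is described. The threshold and the centroid method are illustrative assumptions.
import numpy as np

def detect_listener(temps: np.ndarray, ambient: float = 22.0,
                    delta: float = 3.0):
    """Return (present, (row, col)) from an 8x8 temperature image."""
    hot = temps > ambient + delta             # pixels warmer than the room
    if not hot.any():
        return False, None                    # no human body detected
    rows, cols = np.nonzero(hot)
    return True, (rows.mean(), cols.mean())   # centroid as the body position

grid = np.full((8, 8), 22.0)
grid[2:4, 5:7] = 30.0                         # a warm body, offset to one side
present, pos = detect_listener(grid)
print(present, pos)   # True (2.5, 5.5): listener detected at an offset position
```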
  • FIG. 4 is a block diagram illustrating a configuration example of the main device 101 of the sound system of FIG. 1B.
  • The main device 101 includes a control unit 10, a memory 11, a transmission signal transmission/reception unit 12, an operation unit 13, a display unit 14, and a transmission cable connector 15.
  • The control unit 10 controls the operation of the units 11 to 14, for example by executing the playback device control process of FIG. 6.
  • The memory 11 stores the programs (such as the playback device control process of FIG. 6) and data (including the playback control table data and audio data described later) necessary for the operation of the control unit 10.
  • The transmission signal transmission/reception unit 12 is connected to each playback device 100 via the transmission cable connector 15 and a current detector.
  • The transmission signal transmission/reception unit 12 transmits a transmission signal including control data and audio data to each playback device 100 under the control of the control unit 10.
  • The transmission signal transmission/reception unit 12 also receives a transmission signal including control data from each playback device 100.
  • The operation unit 13 includes at least one of a mouse, a keyboard, a touch panel, buttons, dials, and the like, and accepts input such as operator instructions.
  • The display unit 14 displays data such as instructions from the operator and the control data of received transmission signals.
  • FIG. 5 is a timing chart showing a configuration example of a transmission signal between the main device 101 and each playback device 100 of the sound system of FIG. 1B.
  • The audio data consists of a repeating sequence of an 8-bit preamble, 4-bit frequency information, an L-channel audio signal, and an R-channel audio signal.
  • The uplink control data is control data transmitted from each playback device 100 to the main device 101.
  • Each item of uplink control data (S1_M information, S2_M information, S3_M information, and so on) is composed of transmission source information and detection data, framed by empty data before and after.
  • The downlink control data is control data transmitted from the main device 101 to each playback device 100.
  • Each item of downlink control data (M_ALL information, M_S1 information, M_S2 information, and so on) is composed of transmission destination information and control instruction data, framed by empty data before and after.
  • The control instruction data includes a sensor activation instruction and acoustic control data. A sketch of these frame layouts follows.
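  • The following is a minimal Python sketch of frames with the stated structure (8-bit preamble, 4-bit frequency information, L/R samples; control data framed by empty data); the sample width, sync pattern, and frequency coding are illustrative assumptions.

```python
# Sketch of the transmission frames of FIG. 5. Field widths beyond the stated
# 8-bit preamble and 4-bit frequency information are illustrative assumptions.
PREAMBLE = 0b10101010                                        # assumed sync pattern
FREQ_CODES = {44100: 0b0000, 48000: 0b0001, 96000: 0b0010}   # assumed coding

def audio_frame(left: int, right: int, fs: int = 48000) -> int:
    """Pack preamble(8) | freq(4) | L(16) | R(16) into one 44-bit word."""
    word = PREAMBLE
    word = (word << 4) | FREQ_CODES[fs]
    word = (word << 16) | (left & 0xFFFF)
    word = (word << 16) | (right & 0xFFFF)
    return word

def uplink_frame(source: str, detection: bytes) -> bytes:
    # e.g. "S1_M": source information, then detection data, framed by empty data
    return b"\x00" + source.encode() + detection + b"\x00"

print(hex(audio_frame(0x1234, 0x5678)))
print(uplink_frame("S1_M", b"\x01\x22"))  # person present, encoded position
```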
  • FIG. 6 is a flowchart showing the playback device control process executed by the main device 101.
  • In step S1 of FIG. 6, the main device 101 first executes the initialization process of FIG. 7, and in step S2 it determines whether or not the auto mode is set. If YES in step S2, the process proceeds to step S7; if NO, the manual mode is assumed and the process proceeds to step S3. In step S3, it is determined whether or not an operation has been accepted by the operation unit 13. If YES, the process proceeds to step S4; if NO, to step S5. In step S4, the playback device individual control process of FIG. 9 is executed, after which the process proceeds to step S5. In step S5, it is determined whether or not power-off has been instructed.
  • If power-off has not been instructed (NO in step S5), the process returns to step S2; if power-off has been instructed (YES in step S5), the process proceeds to step S6.
  • In step S6, the end process of FIG. 8 is executed and the playback device control process ends.
  • In step S7, the main device 101 instructs the sensor 201 of each playback device 100 to operate. The main device 101 then executes the viewing environment estimation process of FIG. 10 in step S8, executes the playback control content determination process of FIG. 11 in step S9, and proceeds to step S5. A sketch of this loop follows.
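  • The following is a minimal Python sketch of the FIG. 6 control loop; the MainDevice class and its method names are hypothetical stand-ins for steps S1 to S9, not part of the patent.

```python
# Sketch of the main-device control loop of FIG. 6 (steps S1-S9).
class MainDevice:
    def __init__(self, auto_mode=True):
        self.auto_mode = auto_mode
        self.power_off_requested = False

    def initialize(self): print("S1: confirm connections, wake devices")
    def activate_sensors(self): print("S7: instruct each sensor 201 to operate")
    def estimate_viewing_environment(self): return {"100A": "just below"}  # S8
    def determine_and_apply_control(self, states): print("S9:", states)
    def operation_received(self): return False       # S3
    def individual_control(self): print("S4: manual volume/delay/EQ change")
    def finalize(self): print("S6: save settings, enter standby")

    def run(self):
        self.initialize()                                     # S1
        while not self.power_off_requested:                   # S5
            if self.auto_mode:                                # S2: auto mode
                self.activate_sensors()                       # S7
                states = self.estimate_viewing_environment()  # S8 (FIG. 10)
                self.determine_and_apply_control(states)      # S9 (FIG. 11)
            elif self.operation_received():                   # S3: manual mode
                self.individual_control()                     # S4 (FIG. 9)
            self.power_off_requested = True  # demo: run a single iteration
        self.finalize()                                       # S6 (FIG. 8)

MainDevice().run()
```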
  • FIG. 7 is a flowchart showing the initialization process (S1), a subroutine of FIG. 6.
  • First, the main device 101 confirms the connection of each playback device 100, that is, it checks whether or not each playback device 100 is connected to the main device 101.
  • The main device 101 then instructs each playback device 100 whose connection has been confirmed to start up from the standby state.
  • Here, instructing each playback device 100 means transmitting an instruction signal to it.
  • Finally, the main device 101 activates each playback device 100 with the acoustic control data (delay value, volume value, and frequency and phase characteristic correction values) stored in the internal memory 300m at the previous shutdown, and the process returns to the original main routine.
  • FIG. 8 is a flowchart showing the end process (S6), a subroutine of FIG. 6.
  • In step S21 of FIG. 8, the main device 101 instructs each playback device 100 to end. Then, in step S22, each playback device 100 records the current acoustic control data (delay value, volume value, and frequency and phase characteristic correction values) in its internal memory 300m and enters the standby state, and the process returns to the original main routine.
  • FIG. 9 is a flowchart showing the playback device individual control process (S4), a subroutine of FIG. 6.
  • In step S31, it is determined whether or not the operator has changed the volume value using the operation unit 13. If YES, the main device 101 changes the volume value of the corresponding playback device 100 in step S32 and proceeds to step S33; if NO, the process proceeds directly to step S33. In step S33, it is determined whether or not the operator has changed the delay value using the operation unit 13. If YES, the delay value of the corresponding playback device 100 is changed in step S34 before proceeding to step S35; if NO, the process proceeds directly to step S35.
  • In step S35, it is determined whether or not the operator has changed the frequency and phase characteristic correction amount using the operation unit 13. If YES, the correction amount of the corresponding playback device 100 is changed in step S36 and the process returns to the original main routine; if NO, the process returns directly to the original main routine.
  • FIG. 10 is a flowchart showing the viewing environment estimation process (S8), a subroutine of FIG. 6.
  • Each playback device 100 uses its sensor 201 to determine whether or not there is a person in the corresponding area of that playback device 100.
  • The corresponding area is the area in which the sensor 201 can detect the presence or absence and the position of a person.
  • The corresponding area of each playback device 100 is referred to as its own area. If YES in step S42, the process proceeds to step S43; if NO, to step S44.
  • In step S43, the playback device 100 identifies the position of the person from the image of the sensor 201, notifies the main device 101 of the presence and position of the person, and the process returns to the original main routine.
  • In step S44, the playback device 100 notifies the main device 101 of the absence of a person, and the process returns to the original main routine.
  • FIG. 11 is a flowchart showing the playback control content determination process (S9), a subroutine of FIG. 6.
  • FIG. 12 is a table showing an example of a playback control table generated by the main device 101, and FIGS. 22 to 24 are tables showing examples of the rule tables used in the first embodiment.
  • The rule tables are stored in advance in the memory 11 of the main device 101.
  • The rule tables include rules R1 to R28; for each state estimated from the output of the sensor 201 (presence or absence of a person, position), they specify the playback control rule based on that state and the playback control content for each playback device 100. The playback control table of FIG. 12 is therefore generated by referring to the output of the sensors 201 and the rule tables.
  • As shown in FIG. 12, the playback control table gives the playback control content for each playback device 100 according to the output of the sensor 201 (presence or absence of a person, position).
  • In the table, playback devices A to E denote the playback devices 100A to 100E, respectively.
  • "There is a person at playback device A" means that a person is in the vicinity of playback device 100A and within the detection area of its sensor 201, and "the person at playback device A" refers to such a person.
  • In step S51 of FIG. 11, the main device 101 first acquires the playback control table (FIG. 12) for determining the playback control content.
  • In step S52, the playback control content matching the estimated state is determined with reference to the acquired playback control table.
  • In step S53, the corresponding playback device 100 is controlled based on the determined playback control content, and the process returns to the original main routine.
  • The state of the person estimated from the sensor output includes, for example, "right side", "left side", "just below", and "absent"; for each such state, the rule indicates, for example, how the volume value should be controlled, as sketched below.
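  • A minimal Python sketch of such a rule-table lookup follows; the concrete states and volume offsets are invented for illustration, and the patent's rule tables (FIGS. 22 to 24) define the actual values.

```python
# Sketch of the lookup in steps S51-S53: map each device's estimated state to
# a volume value. The dB offsets below are illustrative, not from the patent.
RULE_TABLE = {
    "just below": 0.0,    # listener directly under the device: nominal volume
    "right": -3.0,        # listener off to one side: reduce this device
    "left": -3.0,
    "absent": None,       # nobody in the device's own area: mute the device
}

def playback_control(states: dict) -> dict:
    """Map {device: estimated state} to {device: volume value in dB or None}."""
    return {dev: RULE_TABLE.get(state) for dev, state in states.items()}

print(playback_control({"100A": "just below", "100B": "right", "100E": "absent"}))
```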
  • In this way, the playback control content of the acoustic system can be determined and controlled according to the state of the person detected by the sensor 201 of each playback device 100, so acoustic adjustment is simpler than in the prior art. When a playback device 100 is added to the main device 101, the sound of the added playback device 100 can be adjusted in the same way, so extending the acoustic system is also easier than in the prior art. Moreover, an optimal acoustic environment can be realized in software regardless of the shape of the room and the arrangement of furniture, which makes it easy to relocate or expand the acoustic system.
  • FIG. 13 is a perspective view illustrating a configuration example of an acoustic system according to the second embodiment.
  • The acoustic system according to the second embodiment differs from the acoustic system according to the first embodiment in the following points.
  • (1) The sensor 201 is a video camera instead of a human sensor.
  • (2) The state estimated from the sensor output includes states relating to a specific activity of the person (such as "reading" or "eating").
  • (3) The playback control process includes the viewing environment estimation process (S8A) of FIG. 14 instead of the viewing environment estimation process (S8) of FIG. 10.
  • (4) The playback control process includes the playback control content determination process (S9A) of FIG. 15 instead of the playback control content determination process (S9) of FIG. 11.
  • (5) The acoustic system uses the rule tables of FIGS. 25 to 34 instead of the rule tables of FIGS. 22 to 24.
  • The rule tables of FIGS. 25 to 34 include rules R101 to R141; the playback control content of each playback device 100 is determined based not only on the presence and position of the person but also on the state relating to the person's specific activity. That is, the playback control table according to the second embodiment is generated from the activity-related states in the sensor output, with reference to the rule tables.
  • The rule tables of FIGS. 25 to 34 are also used in the third embodiment described later.
  • FIG. 14 is a flowchart showing the viewing environment estimation process (S8A) by the main device 101 of the acoustic system of FIG. 13.
  • In step S41A of FIG. 14, each playback device 100 senses the state of a person in its own area.
  • In step S42, it is determined whether or not there is a person. If YES, the process proceeds to step S43A; if NO, to step S44.
  • In step S43A, a state relating to the person's specific activity is estimated from the image of the sensor 201 and notified to the main device 101, and the process returns to the original main routine.
  • In step S44, the absence of a person is notified to the main device 101, and the process returns to the original main routine.
  • FIG. 15 is a flowchart showing the playback control content determination process (S9A) by the main device 101 of the acoustic system of FIG. 13.
  • FIG. 16 is a table showing an example of a playback control table generated by the main device 101.
  • In step S51A of FIG. 15, the main device 101 first acquires the playback control table (FIG. 16) for determining the playback control content.
  • In step S52A, the main device 101 refers to the acquired playback control table and determines the playback control content matching the estimated specific state.
  • In step S53A, the main device 101 controls each corresponding playback device 100 based on the determined playback control content, and the process returns to the original main routine.
  • In this way, the playback control content of the acoustic system can be determined and controlled according to the specific activity-related state of the person detected by the sensor 201 of each playback device 100, so acoustic adjustment is simpler than in the prior art.
  • When a playback device 100 is added to the main device 101, the sound of the added playback device 100 can be adjusted in the same way, so extending the acoustic system is easier than with the conventional technology.
  • Since an optimal acoustic environment can be realized regardless of the shape of the room or the arrangement of furniture, it is easy to relocate or expand the acoustic system.
  • FIG. 17 is a perspective view illustrating a configuration example of an acoustic system according to the third embodiment.
  • The acoustic system according to the third embodiment differs from the acoustic system according to the second embodiment in the following points.
  • The acoustic system includes a cloud artificial intelligence device 20 connected to the main device 101 via a network.
  • The cloud artificial intelligence device 20 determines the playback control content of the playback control table based on the activity-related state of the person detected by the sensor 201, which is a video camera, and transmits it to the main device 101.
  • The acoustic system uses the rule tables of FIGS. 25 to 34.
  • FIG. 21 shows an example of the playback control table.
  • FIG. 18 is a block diagram showing a configuration example of the cloud artificial intelligence device 20 of FIG. 17.
  • The cloud artificial intelligence device 20 includes a control unit 21, an artificial intelligence determination unit 22, a deep learning unit 23, switches SW1 and SW2, and a transmission signal transmission/reception unit 24 (hereinafter, the transmission/reception unit 24).
  • The transmission/reception unit 24 is connected to the main device 101 via the transmission cable 102.
  • The transmission/reception unit 24 is connected to the input layer of the artificial intelligence determination unit 22 via the switch SW1.
  • The output layer of the artificial intelligence determination unit 22 is connected to the transmission/reception unit 24 via the switch SW2.
  • The control unit 21 controls the operation of the cloud artificial intelligence device 20. Specifically, the control unit 21 determines the playback control content based on the detection results of the sensor 201 of each playback device 100 acquired from the main device 101 (that is, information on the state relating to a specific activity of the person who is the listener), and notifies the main device 101.
  • The artificial intelligence determination unit 22 is configured as a neural network including, for example, an input layer, a plurality of intermediate layers, and an output layer. It receives the activity-related state of the person at each playback device 100 as input and outputs the playback control content.
  • During learning, the switches SW1 and SW2 are switched to the contact a side, and the deep learning unit 23 trains the artificial intelligence determination unit 22 using the rule tables of FIGS. 25 to 34, which include the rules R101 to R140.
  • During determination, the switches SW1 and SW2 are switched to the contact b side, and the artificial intelligence determination unit 22 determines the playback control content based on the activity-related state of each playback device 100 received from the main device 101, outputs the result, and transmits it to the main device 101. That is, whereas the playback control content is determined by the main device 101 in the second embodiment, it is determined by the cloud artificial intelligence device 20 in the third embodiment. A toy sketch of such a determination unit follows.
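  • The following toy Python sketch illustrates the idea of the artificial intelligence determination unit 22: a small feed-forward network trained on rule-table-style pairs. The architecture, the one-hot encoding, and all target values are assumptions for illustration; the patent trains the unit from the rule tables of FIGS. 25 to 34 via the deep learning unit 23.

```python
# Toy sketch of artificial intelligence determination unit 22: a small
# feed-forward network mapping an activity state to a volume value,
# fitted by plain gradient descent on invented rule-table-style pairs.
import numpy as np

ACTIVITIES = ["absent", "reading", "eating", "study"]

def one_hot(activity: str) -> np.ndarray:
    v = np.zeros(len(ACTIVITIES))
    v[ACTIVITIES.index(activity)] = 1.0
    return v

# Invented training targets standing in for rule-table playback control content
targets = {"absent": -60.0, "reading": -6.0, "eating": -3.0, "study": -6.0}
X = np.stack([one_hot(a) for a in targets])     # input layer (one-hot states)
y = np.array([[v] for v in targets.values()])   # desired volume values (dB)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 8))         # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))         # hidden -> output weights

for _ in range(5000):                           # "deep learning unit 23"
    H = np.tanh(X @ W1)                         # hidden layer activations
    err = H @ W2 - y                            # output error
    W2 -= 0.01 * H.T @ err                      # backpropagation updates
    W1 -= 0.01 * X.T @ ((err @ W2.T) * (1 - H ** 2))

def determine(activity: str) -> float:
    """Inference path (switches SW1/SW2 on the contact b side)."""
    return float(np.tanh(one_hot(activity) @ W1) @ W2)

print(round(determine("eating"), 1))   # close to -3.0 after training
```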
  • FIG. 19 is a flowchart showing the viewing environment estimation process (S8B) executed by the main device 101 of FIG. 17.
  • In step S41A of FIG. 19, each playback device 100 senses the state of a person in its own area.
  • In step S42, the main device 101 determines whether or not there is a person. If YES, the process proceeds to step S43B; if NO, to step S44.
  • In step S43B, the main device 101 estimates the activity-related state of the person at each playback device 100 from the image of the sensor 201 and notifies the cloud artificial intelligence device 20, and the process returns to the original main routine. In step S44, the absence of a person is notified to the main device 101, and the process returns to the original main routine.
  • FIG. 20 is a flowchart showing the playback control content determination process (S9B) by the main device 101 of the acoustic system of FIG. 17.
  • FIG. 21 is a table showing an example of the playback control table generated by the main device 101.
  • In step S52B of FIG. 20, the main device 101 first receives the playback control table from the cloud artificial intelligence device 20 and acquires the playback control content.
  • In step S53B, the corresponding playback device 100 is controlled based on the acquired playback control content, and the process returns to the original main routine.
  • In the third embodiment, since the playback control content is determined by the cloud artificial intelligence device 20, the hardware and software configuration of the main device 101 can be simplified compared with the second embodiment.
  • The other functions and effects are the same as those of the second embodiment.
  • In Embodiment 3, a video camera is used as the sensor 201, but the present invention is not limited to this; a human sensor may be used instead.
  • In the embodiments above, the volume value is changed as the playback control content according to the state of the person or the person's specific state, but the present disclosure is not limited to this.
  • The correction values of the frequency and phase characteristics or the delay amount may also be changed according to the state of the person; that is, at least one of the volume value, the frequency and phase characteristic correction values, and the delay amount may be controlled. For example, the signal level of high frequencies may be raised for "eating", and the frequency characteristics may be flattened for "reading" or "study", as in the sketch below.
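  • A minimal sketch of such an activity-to-control mapping follows; the preset names and numbers are illustrative assumptions, not values from the patent.

```python
# Sketch of mapping an estimated activity to correction values, following the
# examples in the text (boost highs for "eating", flat for "reading"/"study").
EQ_PRESETS = {
    "eating":  {"high_shelf_db": +4.0, "volume_db": 0.0,  "delay_ms": 0.0},
    "reading": {"high_shelf_db": 0.0,  "volume_db": -6.0, "delay_ms": 0.0},
    "study":   {"high_shelf_db": 0.0,  "volume_db": -6.0, "delay_ms": 0.0},
}

def control_for(activity: str) -> dict:
    # Fall back to a neutral preset for activities without a specific rule
    return EQ_PRESETS.get(activity, {"high_shelf_db": 0.0,
                                     "volume_db": 0.0, "delay_ms": 0.0})

print(control_for("eating"))  # {'high_shelf_db': 4.0, 'volume_db': 0.0, ...}
```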
  • The main device 101 may receive the detection result of the sensor 201 as the listener data, and the control unit 10 of the main device 101 may estimate the state of the listener.
  • Alternatively, the control unit 300 may estimate the state of the listener from the detection result of the sensor 201, and information on the estimated state may be transmitted to the main device 101 as the listener data.
  • The cloud artificial intelligence device 20 may also receive the detection result of the sensor 201 as the listener data via the main device 101, and the control unit 21 of the cloud artificial intelligence device 20 may estimate the state of the listener and transmit information on the estimated state to the main device 101.
  • As described above, according to the present disclosure, acoustic adjustment can be performed easily.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Otolaryngology (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)
  • Stereophonic System (AREA)

Abstract

The present invention concerns an audio system comprising: a first playback device (100A) that includes a sensor detecting the state of a first listener, and a sound control device (101) that includes control means controlling the sound of the first playback device (100A), the first playback device and the sound control device each being connected via a network. The control means of the sound control device (101) is configured to: receive, from the first playback device (100A), first listener data indicating the state of the first listener; determine, on the basis of the first listener data, first playback control content for the first playback device (100A); and instruct the first playback device with the first playback control content.
PCT/JP2018/001859 2017-03-27 2018-01-23 Audio system, sound control device, and associated playback device WO2018179722A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-061736 2017-03-27
JP2017061736A JP2020095064A (ja) 2017-03-27 2017-03-27 Acoustic system, and acoustic control device and playback device therefor

Publications (1)

Publication Number Publication Date
WO2018179722A1 (fr) 2018-10-04

Family

ID=63674992

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/001859 WO2018179722A1 (fr) 2018-01-23 Audio system, sound control device, and associated playback device

Country Status (2)

Country Link
JP (1) JP2020095064A (fr)
WO (1) WO2018179722A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014093204A * 2012-11-05 2014-05-19 Sony Corp Acoustic lighting device and program
JP2016005208A * 2014-06-18 2016-01-12 Toshiba Lighting & Technology Corp Sound transmission system and lighting system
JP2016051464A * 2014-08-29 2016-04-11 Panasonic Intellectual Property Corporation of America Presentation information control method and presentation information control device
JP2017046042A * 2015-08-24 2017-03-02 Sharp Corp Acoustic system, acoustic device, and terminal device


Also Published As

Publication number Publication date
JP2020095064A (ja) 2020-06-18

Similar Documents

Publication Publication Date Title
US10206030B2 (en) Microphone array system and microphone array control method
US20200077048A1 (en) Information Processing Apparatus, Information Processing Method, And Program
JP6668661B2 (ja) パラメータ制御装置およびパラメータ制御プログラム
EP2011366B1 (fr) Dispositif de capture sonore et appareil de conférence vocale
JP3915804B2 (ja) オーディオ再生装置
EP3226579B1 (fr) Dispositif de traitement d'informations, système de traitement d'informations, procédé de commande, et programme
US20170156012A1 (en) System, audio output device, and method for automatically adjusting firing direction of upward firing speaker
EP2816823A1 (fr) Système audio, appareil audio et son procédé de mise en correspondance de canaux
EP3179744B1 (fr) Procédé, dispositif et système pour commander une image sonore dans une zone audio
KR102516997B1 (ko) 음성 출력 제어 장치, 음성 출력 제어 방법, 그리고 프로그램
JP2007329633A (ja) コントロール機器、同期補正方法および同期補正プログラム
US10292000B1 (en) Frequency sweep for a unique portable speaker listening experience
US10616684B2 (en) Environmental sensing for a unique portable speaker listening experience
US10861465B1 (en) Automatic determination of speaker locations
WO2016080536A1 (fr) Dispositif de lecture de contenu, procédé de lecture de contenu et programme de lecture de contenu
JP2010252134A (ja) スピーカーシステム
CN106126182B (zh) 数据输出方法及电子设备
JP2012049663A (ja) 天井スピーカ装置
WO2018179722A1 (fr) Audio system, sound control device, and associated playback device
JP2006196940A (ja) 音像定位制御装置
US10979803B2 (en) Communication apparatus, communication method, program, and telepresence system
US11050579B2 (en) Distribution destination specifying device and distribution destination specifying method
JP2007329753A (ja) 音声通信装置および音声通信システム
US20070041598A1 (en) System for location-sensitive reproduction of audio signals
US20180088898A1 (en) Content Reproduction Device, Content Reproduction System, and Control Method for A Content Reproduction Device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18776450

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18776450

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP